Carmen Amo Alonso

Organisation: Stanford University

Why do you care about AI Existential Safety?

We are lucky to be living in a pivotal moment in history. We are witnessing, perhaps without fully realizing it, the unfolding of the fifth industrial revolution. I believe that AI will radically change our work, our lives, and even how we relate to one another. It is in our hands to make sure that this transition happens responsibly. However, current AI development is mostly driven by for-profit corporations with specific agendas. To make sure that these developments are also aligned with the common interest of everyday people, we need to strengthen the dialogue between AI engineers and policy-makers, and to raise awareness about the potential risks of AI technologies. We are all aware of the risks that planes, cars, and weapons carry. The risks of AI, despite being more subtle, are no less serious. Language, images, and videos are our everyday ways of consuming information. If such content is generated at scale to serve specific agendas, it can easily be used to spread misinformation, polarize society, and ultimately undermine democracy. For this reason, ethics and AI should go hand in hand, so that AI can bring about a safer society instead of becoming a threat.

Please give at least one example of your research interests related to AI existential safety:

I work at the intersection of control theory and natural language processing. Informally, control theory studies the behavior of dynamical systems, i.e., systems that trace out a trajectory over time. Using control theory, we design strategies to steer that trajectory in a desired direction. Moreover, we can often mathematically guarantee the safe behavior of the system: staying away from a given region, navigating within certain constraints, and so on. Although control theory was initially developed for the aerospace industry (we didn’t want our planes to crash!), I believe its principles and mathematical insights are well suited to studying a different kind of dynamical system: artificial intelligence systems, and in particular foundation models.

In my work, I pursue several research agendas focused on ensuring the safe and controllable behavior of AI systems. One research direction concerns how to interact with a robot in natural language in such a way that the robot’s behavior still satisfies safety constraints. Another direction treats large language models as dynamical systems (systems that build trajectories in “word” space) and uses classical control-theoretic techniques for tasks such as fine-tuning, i.e., steering the system away from undesired behavior (a view sketched informally below). Since control theory provides guarantees on behavior, our methods inherit theoretical guarantees as well.

In a third research direction, I use classical control-theoretic models to understand how we can create foundation models that learn more efficiently and are less data-hungry. This would mean that we can actually supervise the content we use for training, so we do not expose our systems to toxic data; that we need less power and energy, reducing the environmental footprint; and that these systems become accessible to more modest computational settings, such as universities, rather than belonging exclusively to big corporations with huge computing power.
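To make the dynamical-systems view of language models concrete, here is a minimal toy sketch in Python. It is purely illustrative and not my actual method: the linear dynamics, dimensions, and feedback gain are all invented stand-ins, with the state vector playing the role of a model’s internal activations and the control input playing the role of a steering signal.

import numpy as np

# Toy stand-in for an autoregressive model viewed as a dynamical system:
#   x_{t+1} = A x_t + B u_t
# where x_t is the hidden state (the "position" in word space) and u_t is
# a control input used to steer generation. All quantities are illustrative.
rng = np.random.default_rng(0)
d = 8                                                      # hidden-state dimension (toy)
A = 0.9 * np.eye(d) + 0.05 * rng.standard_normal((d, d))  # open-loop dynamics
B = np.eye(d)                                              # control enters additively

x = rng.standard_normal(d)      # initial hidden state
x_target = np.zeros(d)          # desired region of "word" space (toy target)
K = 0.5 * np.eye(d)             # simple proportional feedback gain

for t in range(20):
    u = K @ (x_target - x)      # steering input: push the state toward the target
    x = A @ x + B @ u           # one "generation" step of the closed-loop system
    print(t, float(np.linalg.norm(x - x_target)))  # distance to target shrinks

In a real system the dynamics would be the model’s forward pass and the control input a learned steering vector, but the same closed-loop picture, measuring the deviation and feeding back a correction, is what makes control-theoretic guarantees possible.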
