FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi

As we grapple with questions about AI safety and ethics, we’re implicitly asking something else: what type of future do we want, and how can AI help us get there?

In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let’s build our technology around our visions for the future.

Topics discussed in this episode include:

  • Hopes for the future of AI
  • AI-human collaboration
  • AI’s influence on art and creativity
  • The UN AI for Good Summit
  • Gaps in AI safety
  • Preparing AI for uncertainty
  • Holding AI accountable

Ariel: Hello and welcome to another episode of the FLI podcast. I’m your host Ariel Conn, and today we’ll be looking at how to address safety and ethical issues surrounding artificial intelligence, and how we can implement safe and ethical AIs both now and into the future. Joining us this month are Ashley Llorens and Francesca Rossi who will talk about what they’re seeing in academia, industry, and the military in terms of how AI safety is already being applied and where the gaps are that still need to be addressed.

Ashley is the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, where he directs research and development in machine learning, robotics, autonomous systems, and neuroscience, all towards addressing national and global challenges. He has served on the Defense Science Board, the Naval Studies Board of the National Academy of Sciences, and the Center for a New American Security’s AI task force. He is also a voting member of the Recording Academy, which is the organization that hosts the Grammy Awards, and I will definitely be asking him about that later in the show.

Francesca is the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab. She is an advisory board member for FLI, a founding board member for the Partnership on AI, a deputy academic director of the Leverhulme Centre for the Future of Intelligence, a fellow with AAAI and EurAI (that’s e-u-r-a-i), and she will be the general chair of AAAI in 2020. She was previously Professor of Computer Science at the University of Padova in Italy, and she’s been president of IJCAI and the editor-in-chief of the Journal of AI Research. She is currently joining us from the United Nations AI For Good Summit, which I will also ask about later in the show.

So Ashley and Francesca, thank you so much for joining us today.

Francesca: Thank you.

Ashley: Glad to be here.

Ariel: Alright. The first question that I have for both of you, and Ashley, maybe I’ll direct this towards you first: basically, as you look into the future and you see artificial intelligence playing more of a role in our everyday lives — before we look at how everything could go wrong, what are we striving for? What do you hope will happen with artificial intelligence and humanity?

Ashley: My perspective on AI is informed a lot by my research and experiences at the Johns Hopkins Applied Physics Lab, which I’ve been at for a number of years. My earliest explorations had to do with applications of artificial intelligence to robotics systems, in particular underwater robotics systems, systems where signal processing and machine learning are needed to give the system situational awareness. And of course, light doesn’t travel very well underwater, so it’s an interesting task to make a machine see with sound for all of its awareness and all of its perception.

And in that journey, I realized how hard it is to have AI-enabled systems capable of functioning in the real world. That’s really been a personal research journey that’s turned into an institution-wide research journey for Johns Hopkins APL writ large. We’re a large not-for-profit R & D organization that does national security, space exploration, and health. We’re about 7,000 folks or so across many different disciplines, with many scientists and engineers working on those kinds of problems — we say critical contributions to critical challenges.

So as I look forward, I’m really looking at AI-enabled systems, whether they’re algorithmic in cyberspace or they’re real-world systems that are really able to act with greater autonomy in the context of these important national and global challenges. So for national security: to have robotic systems that can be where people don’t want to be, in terms of being under the sea or even having a robot go into a situation that could be dangerous so a person doesn’t have to. And to have that system be able to deal with all the uncertainty associated with that.

You look at future space exploration missions where — in terms of AI for scientific discovery, we talk a lot about that — imagine a system that can perform science with greater degrees of autonomy and figure out novel ways of using its instruments to form and interrogate hypotheses when billions of miles away. Or in health applications where we can have systems more ubiquitously interpreting data and helping us to make decisions about our health to increase our lifespan, or health span as they say.

I’ve been accused of being a techno-optimist, I guess. I don’t think technology is the solution to everything, but it is my personal fascination. And in general, just having this AI capable of adding value for humanity in a real world that’s messy and sloppy and uncertain.

Ariel: Alright. Francesca, you and I have talked a bit in the past, and so I know you do a lot of work with AI safety and ethics. But I know you’re also incredibly hopeful about where we can go with AI. So if you could start by talking about some of the things that you’re most looking forward to.

Francesca: Sure. Ashley partially focused on the need to develop autonomous AI systems that can act where humans cannot go, for example, and that’s definitely very, very important. I would like to focus more on the need for AI systems that can actually work together with humans, augmenting our own capabilities to make decisions or to function in our work environment or in our private environment. That’s the focus and the purpose of the AI that I see and work on, and I focus on the challenges in making these systems really work well with humans.

This means, of course, that while it may seem easier in some sense to develop an AI system that works together with humans — because there is complementarity: some things are done by the human, some things are done by the machine — there are actually several additional challenges. You want these two entities, the human and the machine, to become a real team that works and collaborates to achieve a certain goal. You want these machines to be able to communicate and interact in a very natural way with human beings, and you want these machines to be not just reactive to commands, but also proactive in trying to understand what the human being needs in that moment and in that context, in order to provide all the information and knowledge needed from the data that surrounds whatever task is being addressed.

That’s also the focus of IBM’s business model, because of course IBM releases AI to be used in other companies so that their professionals can do their jobs better. And it has many, many different interesting research directions. The one that I’m mostly focused on is around value alignment. How do you make sure that these systems know and are aware of the values and the ethical principles that they should follow while trying to help human beings do whatever they need to do? And there are many ways to do that, and many ways to model and reason with these ethical principles, and so on.

Being here in Geneva at AI For Good, I think the emphasis — and rightly so — is on the sustainable development goals of the UN: these 17 goals that define a vision of the future, the future that we want. And we’re trying to understand how we can leverage technologies such as AI to achieve that vision. The vision can be slightly nuanced and different, but to me, the development of advanced AI is not the end goal, but only a way to get to the vision of the future that I have. And so, to me, this AI For Good Summit and the 17 sustainable development goals define a vision of the future that is important to have in mind when one thinks about how to improve technology.

Ariel: For listeners who aren’t as familiar with the sustainable development goals, we can include links to what all of those are in the podcast description.

Francesca: I was impressed at this AI For Good Summit. This Summit started three years ago with around 400 people. Then last year it was about 500 people, and this year there are 3,200 registered participants. That really gives you an idea of how more and more people are interested in these subjects.

Ariel: Have you also been equally impressed by the topics that are covered?

Francesca: Well, I mean, it started today. So far I just saw that in the morning there are five different parallel sessions that will run throughout the following two days. One is AI education and learning. One is health and wellbeing. One is AI, human dignity, and inclusive society. One is scaling AI for good. And one is AI for space. These five themes will run throughout the two days, together with many other smaller ones. But from what I’ve seen this morning, the level of discussion is really very high. It’s going to be very impactful. Each event is unique and has its own specificity, but this event is unique because it’s focused on a vision of the future, which in this case is the sustainable development goals.

Ariel: Well, I’m really glad that you’re there. We’re excited to have you there. And so, you’re talking about moving towards futures where we have AIs that can do things that humans can’t do, don’t want to do, or that aren’t safe for humans to do — visions where we can achieve more because we’re working with AI systems as opposed to just humans trying to do things alone. But we still have to get to the point where this is being implemented safely and ethically.

I’ll come back to the question of what we’re doing right so far, but first, what do you see as the biggest gaps in AI safety and ethics? And this is a super broad question, but looking at it with respect to, say, the military or industry or academia. What are some of the biggest problems you see in terms of us safely applying AI to solve problems?

Ashley: It’s a really important question. My answer is going to center around uncertainty and dealing with that in the context of the operation of the system, and let’s say the implementation or the execution of the ethics of the system as well. But first, backing up to Francesca’s comment, I just want to emphasize this notion of teaming and really embrace this narrative in my remarks here.

I’ve heard it said before that every machine is part of some human workflow. I think a colleague Matt Johnson at the Florida Institute for Human and Machine Cognition says that, which I really like. And so, just to make clear, whether we’re talking about the cognitive enhancements, an application of AI where maybe you’re doing information retrieval, or even a space exploration example, it’s always part of a human-machine team. In the space exploration example, the scientists and the engineers are on the earth, maybe many light hours away, but the machines are helping them do science. But at the end of the day, the scientific discovery is really happening on earth with the scientists. And so, whether it’s a machine operating remotely or by cognitive assistance, it’s always part of a human-machine team. That’s just something I wanted to amplify that Francesca said.

But coming back to the gaps, a lot of times I think what we’re missing in our conversations is getting some structure around the role of uncertainty in these agents that we’re trying to create that are going to help achieve that bright future that Francesca was referring to. To help us think about this at APL, we think about agents as needing to perceive, decide, act in teams. This is a framework that just helps us understand these general capabilities that we’ll need and to start thinking about the role of uncertainty, and then combinations of learning and reasoning that would help agents to deal with that. And so, if you think about an agent pursuing goals, the first thing it has to do is get an understanding of the world states. This is this task of perception.

We often talk about, well, if an agent sees this or that, or if an agent finds itself in this situation, we want it to behave this way. Obviously, the trolley problem is an example we revisit often. I won’t go into the details there, but the question is, I think, given some imperfect observation of the world, how does the structure of that uncertainty factor into the correct functioning of the agent in that situation? And then, how does that factor into the ethical, I’ll say, choices or data-driven responses that an agent might have to that situation?

Then we talk about decision making. An agent has goals. In order to act on its goals, it has to decide about how certain sequences of actions would affect future states of the world. And then again how, in the context of an uncertain world, is the agent going to go about accurately evaluating possible future actions when it’s outside of a gaming environment, for example. How does uncertainty play into that and its evaluation of possible actions? And then in the carrying out of those actions, there may be physical reasoning, geometric reasoning that has to happen. For example, if an agent is going to act in a physical space, or reasoning about a cyber-physical environment where there’s critical infrastructure that needs to be protected or something like that.

And then finally, to Francesca’s point, there are the interactions, the teaming with other agents that may be teammates or may actually be adversarial. How do we reason about what our teammates might be intending to do, what state our teammates might be in — in terms of cognitive load, if it’s a human teammate — and what the intent of adversarial agents might be in confounding or interfering with the goals of the human-machine team?

And so, to recap a little bit, I think this notion of machines dealing with uncertainty in real world situations is one of the key challenges that we need to deal with over the coming decades. Having more explicit conversations about how uncertainty manifests in these situations, how you deal with it in the context of the real world operation of an AI-enabled system, and how we give structure to the uncertainty in a way that informs our ethical reasoning about the operation of these systems — I think that’s a very worthy area of focus for us.

Ariel: Could you walk us through a specific example of how an AI system might be applied and what sort of uncertainties it might come across?

Ashley: Yeah, sure. So think about the situation where there’s a dangerous environment, let’s say, in a policing action or in a terrorist situation. Hey, there might be hostiles in this building, and right now a human being might have to go into that building to investigate it. We’ll send a team of robots in there to do the investigation of the building to see if it’s safe, and you can think about that situation as analogous for a number of possible different situations.

And now, let’s think about the state of computer vision technology, where straight pattern recognition is hopefully a fair characterization of the state of the art — where we know we can very accurately recognize objects from a given universe of objects in a computer vision feed, for example. Well, what happens if these agents encounter objects from outside of that universe of training classes? How can we start to bound the performance of the computer vision algorithm with respect to objects from unknown classes? You can start to get a sense of the progression, just from the perception part of that problem: from “out of these 200 possible objects, tell me which class this comes from,” to having to do vision-type tasks in environments that present many new and novel objects the system may have to perceive and reason about.

You can think about that perception task as now extending to agents that might be in that environment: trying to ascertain, from partial observations of what the agents look like and partial observations of the things they might be doing, some assessment of whether this is a friendly agent or an unfriendly agent; and reasoning about affordances of objects in the environment that might present our systems with ways of dealing with those agents that conform to ethical principles.

That was not a very, very concrete example, but hopefully starts to get one level deeper into the kinds of situations we want to put systems into and the kinds of uncertainty that might arise.
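(To make the unknown-classes gap Ashley describes a bit more concrete for readers, here is a minimal, hypothetical sketch — in Python, and not drawn from any APL system — of confidence-based rejection, one simple way a perception pipeline can decline to classify inputs that may fall outside its training classes. The class names and threshold are invented for illustration, and raw softmax confidence is known to be a weak novelty signal, which is exactly the research gap under discussion.)

```python
import numpy as np

def softmax(logits):
    # Convert raw classifier scores into a probability distribution.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def classify_with_rejection(logits, known_classes, threshold=0.85):
    # Return a known-class label only when the model is confident;
    # otherwise flag the input as possibly outside the training classes.
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "unknown object: defer to a human teammate"
    return known_classes[best]

# Toy example: a 3-class perception model facing an ambiguous observation.
classes = ["person", "vehicle", "animal"]
print(classify_with_rejection(np.array([2.1, 1.9, 2.0]), classes))  # low confidence -> unknown
print(classify_with_rejection(np.array([9.0, 1.0, 0.5]), classes))  # confident -> "person"
```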

Francesca: To tie to what Ashley just said, we definitely need a lot more ways to have realistic simulations of what can happen in real life. So testbeds, sandboxes, that is definitely needed. But related to that, there is also this ongoing effort — which has already resulted in tools and mechanisms, but many people are still working on it — which is to understand better the error landscape that the machine learning approach may have. We know machine learning always has a small percentage of error in any given situation and that’s okay, but we need to understand what’s the robustness of the system in terms of that error, and also we need to understand the structure of that error space because this information can inform us on what are the most appropriate or less appropriate use cases for the system.

Of course, going from there, this understanding of the error landscape is just one aspect of the need for transparency about the capabilities and limitations of AI systems when they are deployed. It’s a challenge that spans from academia and research centers to, of course, the business units and the companies developing and delivering AI systems. That’s why at IBM we are working a lot on collecting information during the development and design phases about the properties of the systems, because we think that understanding these properties is very important to really understand what should or should not be done with the system.

And then, of course, there is, as you know, a lot of work around understanding other properties of the system. Like, fairness is one of the values that we may want to inject, but of course it’s not as simple as it looks because there are many, many definitions of fairness and each one is more appropriate or less appropriate in certain scenarios and certain tasks. It is important to identify the right one at the beginning of the design and the development process, and then to inject mechanisms to detect and mitigate bias according to that notion of fairness that we have decided is the correct one for that product.

And so, this brings us also to the other big challenge, which is to help developers understand how to define these notions, these values like fairness, that they need to use in developing the system — how to define them not just by themselves within the tech company, but also by communicating with the communities that are going to be impacted by these AI products, and that may have something to say about the right definition of fairness that they care about. That’s why, for example, another thing that we did, besides developing research and products, is to invest a lot in educating developers, trying to help them understand how to think about these issues in their everyday jobs, whether it’s fairness, robustness, transparency, and so on.

And so, we built this very small booklet — we call it the Everyday AI Ethics Guide for Designers and Developers — that raises a lot of questions that should be on their minds in their everyday job. Because, for example, if you don’t think about bias or fairness during the development phases and you just check whether your product is fair when it’s ready to be deployed, then you may discover that it doesn’t embody the right notion of fairness and you actually need to start from scratch again.
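(Francesca’s point that fairness has many competing definitions is easy to make concrete. Below is a dependency-light sketch of one common definition, statistical parity difference — open-source toolkits such as IBM’s AIF360 implement this and many others; the loan-approval numbers are invented for illustration. A system can score well on this metric and still fail other definitions, such as equalized odds, which is why the appropriate notion must be chosen at design time.)

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    # Gap in favorable-outcome rates between an unprivileged group (0)
    # and a privileged group (1). Near zero suggests parity -- under
    # this one definition of fairness only.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

# Invented loan-approval predictions (1 = approved) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 1, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(preds, groups))  # 0.75 - 0.75 = 0.0
```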

Another effort that we really care a lot about, in this work to build teams of humans and machines, is explainability: making sure that it is possible to understand why these systems are recommending certain decisions. Explainability, especially in this environment of human-AI teaming, is very important, because without the capability of AI systems to explain why they are recommending certain decisions, the human members of the team will not trust the AI system in the long run, and so may not adopt it. And then we would also lose the positive and beneficial effects of the AI system.

The last thing that I want to say is that this education extends well beyond developers to policy makers as well. That’s why it’s important to have a lot of interaction with policy makers, who really need to be educated about the state of the art, about the challenges, and about the limits of current AI, in order to understand how best to drive the technology to be more and more advanced, but also beneficial. And what are the right mechanisms to drive the technology in the direction that we want? That still needs a lot more multi-stakeholder discussion to really achieve the best results, I think.

Ashley: Just picking up on a couple of those themes that Francesca raised: first, I just want to touch on simulations. At the Applied Physics Laboratory, one of the core things we do is develop systems for the real world. And so, as the tools of artificial intelligence are evolving, the art and the science of systems engineering is starting to morph into an AI systems engineering regime. And we see simulation as key — more key than it’s ever been — to developing real world systems that are enabled by AI.

One of the things we’re really looking into now is what we call live virtual constructive simulations. These are simulations where you can do distributed learning for agents in a constructive mode with highly parallelized learning, but where you actually have links and hooks for live interactions with humans to get the human-machine teaming. And then finally, they bridge the gap between simulation and the real world, where some of the agents represented in the context of the human-machine teaming functionality can be virtual and some can actually be represented by real systems in the real world. And so, we think that these kinds of environments, these live virtual constructive environments, will be important for bridging the gap from simulation to reality.

Now, in the context of that is this notion of sharing information. If you think about the complexity of the systems that we’re building, and the complexity and uncertainty of the real world conditions — whether physical or cyber or what have you — it’s going to be more and more challenging for a single development team to analytically characterize the performance of a system in the context of real-world environments. And so, I think as a community we’re really doing science — performing science by fielding these complex systems in these real-world environments. The more we can make that a collective scientific exploration, where we’re setting hypotheses and performing these experiments — these experiments of deploying AI in real world situations — the more quickly we’ll make progress.

And then, finally, I just wanted to talk about accountability, which I think builds on this notion of transparency and explainability. And this is something we don’t talk about enough, I think: we need to change our notion of accountability when it comes to AI-enabled systems. It’s human nature to want individual accountability for individual decisions and individual actions. If an accident happens, our whole legal system, our whole accountability framework says, “Well, tell me exactly what happened that time,” and I want to get some accountability based on that and I want to see something improve based on that. Whether it’s a plane crash or a car crash, or let’s say there’s corruption in a Fortune 500 company — we want to see the CFO fired and we want to see a new person hired.

I think when you look at these algorithms, they’re driven by statistics, and the statistics that drive these models are really not well suited for individual accountability. It’s very hard to establish the validity of a particular answer or classification or something that comes out of the algorithm. Rather, we’re really starting to look at the performance of these algorithms over a period of time. It’s hard to say, “Okay, this AI-enabled system: tell me what happened on Wednesday,” or, “Let me hold you accountable for what happened on Wednesday.” It’s more, “Let me hold you accountable for everything that you did during the month of April that resulted in this performance.”

And so, I think our notion of accountability is going to have to embrace this notion of ensemble validity, validity over a collection of activities, actions, decisions. Because right now, I think if you look at the underlying mathematical frameworks for these algorithms, they’re not well supported for this notion of individual accountability for decisions.
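(One hypothetical way to operationalize the “ensemble validity” Ashley describes — a sketch, not a proposal from the podcast — is to audit an AI system’s decisions in aggregate over a reporting period, holding it accountable for statistical performance against an agreed target rather than for any single output. The target rate, review process, and numbers below are invented for illustration.)

```python
import math

def ensemble_accountability(outcomes, target_rate=0.95, z=1.96):
    # Judge the system over a period's worth of reviewed decisions rather
    # than one: estimate its success rate with a normal-approximation
    # confidence interval and check it against an agreed performance target.
    n = len(outcomes)
    rate = sum(outcomes) / n
    half_width = z * math.sqrt(rate * (1 - rate) / n)
    interval = (rate - half_width, rate + half_width)
    meets_target = interval[0] >= target_rate
    return rate, interval, meets_target

# Invented month of 200 reviewed decisions, 194 judged correct.
outcomes = [1] * 194 + [0] * 6
print(ensemble_accountability(outcomes))
```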

Francesca: Accountability is very important, and it needs a lot more discussion. This is one of the topics we have also been discussing in the European Commission initiative defining the AI Ethics Guidelines for Europe, where accountability is one of the seven requirements. But it’s not easy to define what it means. What Ashley said is one possibility: change our idea of accountability from one specific instance to performance over several instances. That’s one possibility, but I think it’s something that needs a lot more discussion with several stakeholders.

Ariel: You’ve both mentioned some things that sound like we’re starting to move in the right direction. Francesca, you talked about getting developers to think about some of the issues like fairness and bias before they start to develop things. You talked about trying to get policy makers more involved. Ashley, you mentioned the live virtual simulations. Looking at where we are today, what are some of the things that you think have been most successful in moving towards a world where we’re considering AI safety more regularly, or completely regularly?

Francesca: First of all, we’ve gone a really long way in a relatively short period of time, and the Future of Life Institute has been instrumental in building the community, and everybody understands that the only approach to address this issue is a multidisciplinary, multi-stakeholder approach. The Future of Life Institute, with the first Puerto Rico conference, showed very clearly that this is the approach to follow. So I think that in terms of building the community that discusses and identifies the issues, I think we have done a lot.

I think that at this point, what we need is greater coordination and also removal of redundancy among all these different initiatives. I think we have to find, as a community, the main issues and the main principles and guidelines that we think are needed for the development of more advanced forms of AI, starting from the current state of the art. If you look at the guidelines or lists of principles around AI ethics from the various initiatives, they are of course different from each other, but they have a lot in common. So we really were able to identify these issues, and this identification of the main issues is important as we move forward to more advanced versions of AI.

And then, I think another thing we are doing in a rather successful, though not complete, way is moving from research to practice — from high-level principles to concretely developing and deploying products that embed these principles and guidelines, not just in the scientific papers that are published, but also in the platforms, the services, and the toolkits that companies use with their clients. We needed an initial phase of high-level discussions about guidelines and principles, but now we are in a second phase where these percolate down to the business units and to how products are built and deployed.

Ashley: Yeah, just building on some of Francesca’s comments, I’ve been very inspired by the work of the Future of Life Institute and the burgeoning, I’ll say, emerging AI safety community. Similar to Francesca’s comment, I think that the real frontier here is now taking a lot of that energy, a lot of that academic exploration, research, and analysis and starting to find the intersections of a lot of those explorations with the real systems that we’re building.

You’re definitely seeing within IBM, as Francesca mentioned, within Microsoft, within more applied R & D organizations like Johns Hopkins APL, where I am, internal efforts to try to bridge the gap. And what I really want to try to work to catalyze in the coming years is a broader, more community-wide intersection between the academic research community looking out over the coming centuries and the applied research community that’s looking out over the coming decades, and find the intersection there. How do we start to pose a lot of these longer term challenge problems in the context of real systems that we’re developing?

And maybe we get to examples. Let’s say, for ethics, beyond the trolley problem and into posing problems that are more real-world or closer, better analogies to the kinds of systems we’re developing, the kinds of situations they will find themselves in, and start to give structure to some of the underlying uncertainty. Having our debates informed by those things.

Ariel: I think that transitions really nicely to the next question I want to ask you both, and that is, over the next 5 to 10 years, what do you want to see out of the AI community that you think will be most useful in implementing safety and ethics?

Ashley: I’ll probably sound repetitive, but I really think we should focus in on characterizing — I like the way Francesca put it — the error landscape of a system, as a function of both the complex internal states and workings of the system and the complex and uncertain real-world environments, whether cyber or physical, that the system will be operating in, and really get deeper there. It’s probably clear to anyone who works in this space that we really need to fundamentally advance the science and the technology of — I’ll start to introduce the word now — trust, as it pertains to AI-enabled systems operating in these complex and uncertain environments. And again, we should start to better ground some of our longer-term thinking about AI being beneficial for humanity, grounding those conversations in the realities of the technologies as they stand today and as we hope to develop and advance them over the next few decades.

Francesca: Trust means building trust in the technology itself — and so the things that we already mentioned like making sure that it’s fair, value aligned, robust, explainable — but also building trust in those that produce the technology. But then, I mean, this is the current topic: How do we build trust? Because without trust we’re not going to adopt the full potential of the beneficial effect of the technology. It makes sense to also think in parallel, and more in the long-term, what’s the right governance? What’s the right coordination of initiatives around AI and AI ethics? And this is already a discussion that is taking place.

And then, after governance and coordination, it’s also important with more and more advanced versions of AI, to think about our identity, to think about the control issues, to think in general about this vision of the future, the wellbeing of the people, of the society, of the planet. And how to reverse engineer, in some sense, from a vision of the future to what it means in terms of a behavior of the technology, behavior of those that produce the technology, and behavior of those that regulate the technology, and so on.

We need a lot more of this reverse engineering approach. One approach starts from the current state of the art of the technology and says, “Okay, these are the properties that I think I want in this technology — fairness, robustness, transparency, and so on — because otherwise I don’t want this technology to be deployed,” and then looks at the next, more advanced version of the technology and thinks about possibly new properties, and so on. That is one approach. But the other approach says, “Okay, this is the vision of life, I don’t know, 50 years from now. How do I go from that to the kind of technology, to the direction that I want to push the technology towards, to achieve that vision?”

Ariel: We are getting a little bit short on time, and I did want to follow up with Ashley about his other job. Basically, Ashley, as far as my understanding goes, you essentially have a side job as a hip hop artist. I think it would be fun to just talk a little bit, in the last couple of minutes that we have, about how both you and Francesca see artificial intelligence impacting these more creative fields. Is this something that you see as enhancing artists’ abilities to do more? Do you think there’s a reason for artists to be concerned that AI will soon be competition for them? What are your thoughts on the future of creativity and AI?

Ashley: Yeah. It’s interesting. As you point out, over the last decade or so, in addition to furthering my career as an engineer, I’ve also been a hip hop artist; I’ve toured around the world and put out some albums. I think where we see the biggest impact of technology on music and creativity is, one, in the democratization of access to creation. Technology is a lot cheaper. Having a microphone and a recording setup, from the standpoint of somebody who does vocals like me, is much more accessible to many more people. And then you see advances in access to the content — you know, when I started doing music I would print CDs and press vinyl. There was no iTunes. iTunes has revolutionized how music is accessed by people, and more generally how creative products are accessed through streaming, etc. So I think looking backward, we’ve seen most of the impact of technology on those two things: access to creation and access to content.

Looking forward, will those continue to be the dominant factors in how technology influences the creation of music, for example? Or will there be something more? Will AI start to become more of a creative partner? We’ll see that, and it will be evolutionary. I think we already see technology being a creative partner more and more over time. A lot of the things that I studied in school — digital signal processing, frequency-selective filtering — are baked into the tools already. And just as we see AI helping to interpret other kinds of signal processing products, like radiology scans, we’ll see more and more of that in the creation of music. For example, if I’m looking for samples from other music, an AI assistant could comb through a large library of music and find good samples for me. Just as AI suggests good filters for pictures I take on my iPhone, as with Instagram filters, you can imagine AI in music suggesting good audio filters or good mastering settings, given a song that I’m trying to produce or goals that I have for the feel and tone of the product.

And so, already, as an evolutionary step — not even a revolutionary step — AI is becoming more present in the creation of music. And maybe, as in other application areas, we’ll see AI being more of a teammate, not only in the creation of the music, but in the playing of the music. I heard a piece on NPR about a piano player who developed an AI accompaniment for himself. As he played in a live show, there would be an AI accompaniment, and you could dial back its settings in terms of how aggressive it was in rhythm and time, and where it sat with respect to the lead performer. Maybe in hip hop we’ll see AI hype men or AI DJs. It’s expensive to travel overseas, so when somebody like me goes overseas to do a show, instead of bringing a DJ with me, I might have an AI program that can select my tracks and add cuts at the right places and things like that. So that was a long-winded answer, but there’s a lot there. Hopefully that addressed your question.

Ariel: Yeah, absolutely. Francesca, did you have anything you wanted to add about what you think AI can do for creativity?

Francesca: Yeah. I mean, of course I’m less familiar with what AI is already doing right now, but I am aware of many systems from companies in the space of delivering content or music and so on — systems where the AI part is helping humans develop their own creativity even further. And as Ashley said, I hope that in the future AI can help us be more creative — even people that maybe are less able than Ashley to be creative themselves. I hope that this will enhance the creativity of everybody, not only in hip hop or in making songs, but in other things too. I think it will also help to solve some very fundamental problems, because a population which is more creative is, of course, more creative in everything.

So in general, I hope that AI will help us human beings be more creative in all aspects of our lives beyond entertainment — which is of course very, very important for all of us, for our wellbeing and so on. And this goes back to the beginning, where I said AI’s purpose should be to enhance our own capabilities. And of course, creativity is a very important capability that human beings have.

Ariel: Alright. Well, thank you both so much for joining us today. I really enjoyed the conversation.

Francesca: Thank you.

Ashley: Thanks for having me. I really enjoyed it.

Ariel: For all of our listeners, if you have been enjoying this podcast, please take a moment to like it or share it and maybe even give us a good review. And we will be back again next month.

AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

Consciousness is a concept at the forefront of much scientific and philosophical thinking. At the same time, there is wide disagreement over what consciousness exactly is and whether it can be fully captured by science or is best explained away by a reductionist understanding. Some believe consciousness to be the source of all value, and others take it to be a kind of delusion or confusion generated by algorithms in the brain. The Qualia Research Institute takes consciousness to be something substantial and real in the world, something they expect can be captured by the language and tools of science and mathematics. To understand this position, we will have to unpack the philosophical motivations which inform this view, the intuition pumps which lend themselves to these motivations, and the scientific process of investigation born of these considerations. Whether you take consciousness to be something real or illusory, these possibilities have tremendous moral and empirical implications for life’s purpose and role in the universe. Is existence without consciousness meaningful?

In this podcast, Lucas spoke with Mike Johnson and Andrés Gómez Emilsson of the Qualia Research Institute. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder. Mike is interested in neuroscience, philosophy of mind, and complexity theory.

Topics discussed in this episode include:

  • Functionalism and qualia realism
  • Views that are skeptical of consciousness
  • What we mean by consciousness
  • Consciousness and causality
  • Marr’s levels of analysis
  • Core problem areas in thinking about consciousness
  • The Symmetry Theory of Valence
  • AI alignment and consciousness

You can take a short (3 minute) survey to share your feedback about the podcast here.

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can learn more about consciousness research at the Qualia Research Institute, Mike’s blog, and Andrés’ blog. You can listen to the podcast above or read the transcript below. Thanks to Ian Rusconi for production and edits, as well as Scott Hirsh for feedback.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Andrés Gómez Emilsson and Mike Johnson from the Qualia Research Institute. In this episode, we discuss the Qualia Research Institute’s mission and core philosophy. We get into the differences between, and arguments for and against, functionalism and qualia realism. We discuss definitions of consciousness and how consciousness might be causal, we explore Marr’s levels of analysis, and we discuss the Symmetry Theory of Valence. We also get into identity and consciousness, the is-ought problem, and what this all means for AI alignment and building beautiful futures.

And then we end on some fun bits, exploring the potentially large amounts of qualia hidden away in cosmological events, and whether or not our universe is something more like heaven or hell. And remember, if you find this podcast interesting or useful, remember to like, comment, subscribe, and follow us on your preferred listening platform. You can continue to help make this podcast better by participating in a very short survey linked in the description of wherever you might find this podcast. It really helps.

Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder. He is interested in neuroscience, philosophy of mind, and complexity theory. And so, without further ado, I give you Mike Johnson and Andrés Gómez Emilsson.

So, Mike and Andrés, thank you so much for coming on. Really excited about this conversation, and there’s definitely a ton for us to get into here.

Andrés: Thank you so much for having us. It’s a pleasure.

Mike: Yeah, glad to be here.

Lucas: Let’s start off by providing some background about the Qualia Research Institute. If you guys could explain a little bit about the mission and the base philosophy and vision that you have at QRI, that would be great.

Andrés: Yeah, for sure. I think one important point is there’s some people that think that really what matters might have to do with performing particular types of algorithms, or achieving external goals in the world. Broadly speaking, we tend to focus on experience as the source of value, and if you assume that experience is a source of value, then really mapping out what is the set of possible experiences, what are their computational properties, and above all, how good or bad they feel seems like an ethical and theoretical priority to actually make progress on how to systematically figure out what it is that we should be doing.

Mike: I’ll just add to that, this thing called consciousness seems pretty confusing and strange. We think of it as pre-paradigmatic, much like alchemy. Our vision for what we’re doing is to systematize it and to do to consciousness research what chemistry did to alchemy.

Lucas: To sort of summarize this, you guys are attempting to be very clear about phenomenology. You want to provide a formal structure for understanding and also being able to infer phenomenological states in people. So you guys are realists about consciousness?

Mike: Yes, absolutely.

Lucas: Let’s go ahead and lay some conceptual foundations. On your website, you guys describe QRI’s full stack, so the kinds of metaphysical and philosophical assumptions that you guys are holding to while you’re on this endeavor to mathematically capture consciousness.

Mike: I would say ‘full stack’ refers to how we do philosophy of mind, we do neuroscience, and we’re just getting into neurotechnology, with the thought that if you have a better theory of consciousness, you should be able to have a better theory of the brain. And if you have a better theory of the brain, you should be able to build cooler stuff than you could otherwise. But starting with the philosophy, there’s this conception of qualia formalism: the idea that phenomenology can be precisely represented mathematically. We borrow this goal from Giulio Tononi’s IIT. We don’t necessarily agree with the specific math involved, but the goal of constructing a mathematical object that is isomorphic to a system’s phenomenology would be the correct approach if you want to formalize phenomenology.

And then from there, one of the big questions in how you even start is: what’s the simplest starting point? And here, I think one of our big innovations — one not seen at any other research group — is that we’ve started with emotional valence and pleasure. We think these are not only very ethically important, but also just literally the easiest place to start reverse engineering.

Lucas: Right, and so this view is also colored by physicalism, qualia structuralism, and valence realism. Could you explain some of those things in a non-jargony way?

Mike: Sure. Qualia formalism is this idea that math is the right language to talk about qualia in, and that we can get a precise answer. This is another way of saying that we’re realists about consciousness, much as people can be realists about electromagnetism. We’re also valence realists. This refers to how we believe emotional valence — pain and pleasure, the goodness or badness of an experience — is a natural kind: this concept carves reality at the joints. We have some further thoughts on how to define this mathematically as well.

Lucas: So you guys are physicalists, so you think that basically the causal structure of the world is best understood by physics and that consciousness was always part of the game engine of the universe from the beginning. Ontologically, it was basic and always there in the same sense that the other forces of nature were already in the game engine since the beginning?

Mike: Yeah, I would say so. I personally like the frame of dual aspect monism, but I would also step back a little bit and say there’s two attractors in this discussion. One is the physicalist attractor, and that’s QRI. Another would be the functionalist/computationalist attractor. I think a lot of AI researchers are in this attractor and this is a pretty deep question of, if we want to try to understand what value is, or what’s really going on, or if we want to try to reverse engineer phenomenology, do we pay attention to bits or atoms? What’s more real; bits or atoms?

Lucas: That’s an excellent question. Scientific reductionism here I think is very interesting. Could you guys go ahead and unpack the skeptic’s position on your view and broadly adjudicate the merits of each view?

Andrés: Maybe a really important frame here is Marr’s levels of analysis. David Marr was a cognitive scientist who wrote a really influential book in the ’80s called Vision, where he basically creates a schema for how to understand knowledge about — in this particular case — how you actually make sense of the world visually. The framework goes as follows: there are three ways in which you can describe an information processing system. First of all, the computational/behavioral level. That level is about understanding the input-output mapping of an information processing system. Part of it is also understanding the run-time complexity of the system and under what conditions it’s able to perform its actions. An analogy here would be an abacus, for example.

On the computational/behavioral level, what an abacus can do is add, subtract, multiply, and divide, and if you’re really creative you can also exponentiate and do other interesting things. Then you have the algorithmic level of analysis, which is a little more detailed and, in a sense, more constrained. The algorithmic level of analysis is about figuring out what the internal representations are, and the possible manipulations of those representations, such that you get the input-output mapping described by the first level. Here you have an interesting relationship where understanding the first level doesn’t fully constrain the second one. That is to say, there are many systems that have the same input-output mapping but that under the hood use different algorithms.

In the case of the abacus, an algorithm might be: whenever you want to add a number, you just push a bead; whenever you’re done with a row, you push all of the beads back and then add a bead in the row underneath. And finally, you have the implementation level of analysis: what is the system actually made of? How is it constructed? All of these different levels ultimately also map onto different theories of consciousness — that is, where in the stack you associate consciousness, or being, or “what matters.” So, for example, behaviorists in the ’50s may have associated consciousness, if they gave any credibility to that term, with the behavioral level. They don’t really care what’s happening inside as long as you have an extended pattern of reinforcement learning over many iterations.

What matters is basically how you’re behaving, and that’s the crux of who you are. A functionalist will actually care about what algorithms you’re running, how it is that you’re transforming the input into the output. Functionalists generally do care about, for example, brain imaging; they do care about the high-level algorithms that the brain is running, and will generally be very interested in figuring out these algorithms and generalizing them in fields like machine learning and digital neural networks and so on. A physicalist associates consciousness with the implementation level of analysis: how the system is physically constructed has bearing on what it is like to be that system.

Lucas: So, you guys haven’t said that this was your favorite approach, but if people are familiar with David Chalmers, these seem to be the easy problems, right? And functionalists are interested in just the easy problems and some of them will actually just try to explain consciousness away, right?

Mike: Yeah, I would say so. And I think to try to condense some of the criticism we have of functionalism, I would claim that it looks like a theory of consciousness and can feel like a theory of consciousness, but it may not actually do what we need a theory of consciousness to do; specify which exact phenomenological states are present.

Lucas: Is there not some conceptual partitioning that we need to do between functionalists who believe in qualia or consciousness, and those that are illusionists or want to explain it away or think that it’s a myth?

Mike: I think that there is that partition, and I guess there is a question of how principled the partition can be, or whether, if you chase the ideas down as far as you can, the partition collapses. Either consciousness is a thing that is real in some fundamental sense — and I think you can get there with physicalism — or consciousness is more of a process, a leaky abstraction. I think functionalism naturally tugs in that direction. For example, Brian Tomasik has followed this line of reasoning and come to the conclusion of analytic functionalism, which tries to explain away consciousness.

Lucas: What is you guys’ working definition of consciousness, and what does it mean to say that consciousness is real?

Mike: It is a word that’s overloaded. It’s used in many contexts. I would frame it as what it feels like to be something, and something is conscious if there is something it feels like to be that thing.

Andrés: It’s important also to highlight some of its properties. As Mike pointed out, consciousness is used in many different ways. There are something like eight definitions for the word consciousness, and honestly, all of them are really interesting. Some of them are more fundamental than others, and we tend to focus on the more fundamental side of the spectrum for the word. A sense that would be very non-fundamental would be consciousness in the sense of social awareness or something like that. We actually think of consciousness much more in terms of qualia: what is it like to be something? What is it like to exist? Some of the key properties of consciousness are as follows. First of all, we do think it exists.

Second, in some sense it has causal power: the fact that we are conscious matters for evolution; evolution made us conscious for a reason — it’s actually doing some computational legwork that would maybe be possible to do otherwise, but just not as efficiently or as conveniently as it is with consciousness. Then you have the property of qualia: the fact that we can experience sights, and colors, and tactile sensations, and thoughts, and emotions, and so on. All of these are, in a sense, completely different worlds, but they have the property that they can be part of a unified experience — one that can experience color at the same time as sound. In that sense, we describe those different types of sensations under the category of consciousness, because they can be experienced together.

And finally, you have unity: the fact that you have the capability of experiencing many qualia simultaneously. That’s generally a very strong claim to make, but we think you need to acknowledge unity and take it seriously.

Lucas: What are you guys’ intuition pumps for thinking about why consciousness exists as a thing? Why is there qualia?

Andrés: There’s the metaphysical question of why consciousness exists to begin with. That’s something I would like to punt on for the time being. There’s also the question of why it was recruited for information processing purposes in animals. The intuition here is that the various contrasts you can have within experience can serve a computational role. So, there may be a very deep reason why color qualia is used for information processing associated with sight, and why tactile qualia is associated with information processing useful for touching and making haptic representations — and that might have to do with the actual map of how all the qualia values are related to each other. Obviously, you have all of these edge cases: people who are synesthetic.

They may open their eyes and experience sounds associated with colors, and people tend to think of that as abnormal. I would flip it around and say that we are all synesthetic; it’s just that the synesthesia we have in general is very evolutionarily adaptive. The reason you experience colors when you open your eyes is that that type of qualia is really well suited to geometrically represent a projective space, which is what naturally comes out of representing the world with a sensory apparatus like eyes. That doesn’t mean there aren’t other ways of doing it. It’s possible that you could have an offshoot of humans who, whenever they opened their eyes, experienced sound, and used that perfectly well to represent the visual world.

But we may very well be in a local maximum of how different types of qualia are used to represent and do certain types of computations in a well-suited way. The intuition behind why we’re conscious is that all of these different contrasts in the structure of the relationships between possible qualia values have computational implications, and there are ways of using these contrasts in very computationally effective ways.

Lucas: So, just to channel the functionalist here, wouldn’t he just say that everything you just said about qualia could be fully reducible to input-output and algorithmic information processing? So, why do we need this extra property of qualia?

Andrés: There’s this article, I believe by Brian Tomasik, that basically says flavors of consciousness are flavors of computation. It might be very useful to do that exercise, where basically you identify color qualia as just a certain type of computation, and it may very well be that the geometric structure of color is actually just a particular algorithmic structure: whenever you have a particular type of algorithmic information processing, you get this geometric state-space. In the case of color, that’s a Euclidean three-dimensional space. In the case of touch or smell, it might be a much more complicated space, but then it’s in a sense implied by the algorithms that we run. There are a number of good arguments there.

The general approach to tackling them is that when it comes down to actually defining what algorithms a given system is running, you will hit a wall when you try to formalize exactly how to do it. One example is: how do you determine the scope of an algorithm? When you’re analyzing a physical system and trying to identify what algorithm it is running, are you allowed to contemplate 1,000 atoms? Are you allowed to contemplate a million atoms? Where is a natural boundary for you to say, “Whatever is inside here can be part of the same algorithm, but whatever is outside of it can’t”? There really isn’t a frame-invariant way of making those decisions. On the other hand, if you identify qualia with actual physical states, there is a frame-invariant way of describing what the system is.

Mike: So, a couple of years ago I posted a piece giving a critique of functionalism, and one of the examples that I brought up was: if I have a bag of popcorn and I shake the bag of popcorn, did I just torture someone? Did I just run a whole brain emulation of some horrible experience, or did I not? There’s not really an objective way to determine which algorithms a physical system is objectively running. So this is kind of an unanswerable question from the perspective of functionalism, whereas with a physical theory of consciousness, it would have a clear answer.

Andrés: Another metaphor here is: let’s say you’re at a park enjoying an ice cream. Now take a system that has, let’s say, algorithms isomorphic to whatever is going on in your brain. Within a functionalist paradigm, the particular algorithms that your brain is running at that precise moment map onto a metal ball rolling down one of the paths within this machine in a straight line, not touching anything else. So there’s actually not much going on. According to functionalism, that would have to be equivalent, and it would actually be generating your experience. Now the weird thing there is that you could break the machine, you could do a lot of things, and the behavior of the ball would not change.

Meaning that within functionalism, to actually understand what a system is doing, you need to understand the counterfactuals of the system. You need to understand: what would the system be doing if the input had been different? And all of a sudden, you end up with this very, very gnarly problem of defining, well, how do you actually objectively decide what the boundary of the system is? Even for some of these states that are allegedly very complicated, the system looks extremely simple, and you can remove a lot of parts without actually modifying its behavior. That casts into question whether there is an objective boundary, any non-arbitrary boundary, that you can draw around the system and say, “Yeah, this is equivalent to what’s going on in your brain right now.”

This has a very heavy bearing on the binding problem. The binding problem, for those who haven’t heard of it, is basically: how is it possible that 100 billion neurons, just because they’re skull-bound and spatially distributed, simultaneously contribute to a unified experience, as opposed to, for example, neurons in your brain and neurons in my brain contributing to a unified experience? You hit a lot of problems, like what is the speed of propagation of information for different states within the brain? I’ll leave it at that for the time being.

Lucas: I would just like to be careful about this intuition here that experience is unified. I think the intuition pump for that is direct phenomenological experience: experience seems unified. But experience also seems a lot of different ways that aren’t necessarily descriptive of reality, right?

Andrés: You can think of it as different levels of sophistication, where you may start out with a very naive understanding of the world, where you confuse your experience for the world itself. A very large percentage of people perceive the world and in a sense think that they are experiencing the world directly, whereas all the evidence indicates that actually you’re experiencing an internal representation. You can go and dream, you can hallucinate, you can enter interesting meditative states, and those don’t map to external states of the world.

There’s this transition that happens when you realize that in some sense you’re experiencing a world simulation created by your brain. And of course, you’re fooled by it in countless ways, especially when it comes to emotional things. We look at a person and we might have an intuition about what type of person they are, and if we’re not careful, we can confuse our intuitions, we can confuse our feelings, with truth, as if we were actually able to sense their souls, so to speak, rather than, “Hey, I’m running some complicated models on people-space and trying to carve out who they are.” There are definitely a lot of ways in which experience is very deceptive, but here I would actually make an important distinction.

When it comes to intentional content, and intentional content is basically what the experience is about, for example, if you’re looking at a chair, there’s the quality of chairness, the fact that you understand the meaning of chair and so on. That is usually a very deceptive part of experience. There’s another way of looking at experience that I would say is not deceptive, which is the phenomenal character of experience; how it presents itself. You can be deceived about basically what the experience is about, but you cannot be deceived about how you’re having the experience, how you’re experiencing it. You can infer based on a number of experiences that the only way for you to even actually experience a given phenomenal object is to incorporate a lot of that information into a unified representation.

But also, if you just pay attention to your experience, you can notice that you can simultaneously place your attention on two spots of your visual field and experience them together as harmonized. That’s phenomenal character, and I would say that there’s a strong case to be made for not doubting that property.

Lucas: I’m trying to do my best to channel the functionalist. I think he or she would say, “Okay, so what? That’s just more information processing, and I’ll bite the bullet on the binding problem. I still need some more time to figure that out. So what? It seems like these people who believe in qualia have an even tougher job, trying to explain this extra spooky quality in the world that’s different from all the other physical phenomena that science has gone into.” It also seems to violate Occam’s razor, or a principle of lightness, where one’s metaphysics or ontology would want to assume the fewest extra properties or entities in order to explain the world. I’m just really trying to tease out your best arguments here for qualia realism, as we do have this current state of things in AI alignment where most people, it seems, would either try to explain away consciousness, say it’s an illusion, or be anti-realist about qualia.

Mike: That’s a really good question, a really good frame. And I would say our strongest argument revolves around predictive power. Just like centuries ago, you could absolutely be a skeptic about, shall we say, electromagnetism realism. You could say, “Yeah, there is this thing we call static, and there’s this thing we call lightning, and there’s this thing we call lodestones or magnets, but all these things are distinct. And to think that there’s some unifying frame, some deep structure of the universe, that would tie all these things together and highly compress these phenomena, that’s crazy talk.” And so it’s a viable position today to say that about consciousness: it’s not yet clear whether consciousness has deep structure. But we’re assuming it does, and we think that unlocks a lot of predictive power.

We should be able to make predictions that are both more concise and compressed and crisp than others, and we should be able to make predictions that no one else can.

Lucas: So what is most powerful here about what you guys are doing? Is it that the specific theories and assumptions you adopt are falsifiable?

Mike: Yeah.

Lucas: If we can make predictive assessments of these things, which are either leaky abstractions or are qualia, how would we even then be able to arrive at a realist or anti-realist view about qualia?

Mike: So, one frame on this is: it could be that one could explain a lot of things about observed behavior and implicit phenomenology through a purely functionalist or computationalist lens, but maybe for a given system it might take 10 terabytes. And if you can get there in a much simpler way, if you can explain it in terms of three elegant equations instead of 10 terabytes, then it wouldn’t be proof that there exists some crystal-clear deep structure at work, but it would be very suggestive. Marr’s levels of analysis are pretty helpful here, where a functionalist might actually be very skeptical of consciousness mattering at all, because they would say, “Hey, if you’re identifying consciousness at the implementation level of analysis, how could that have any bearing on how we talk about the world, how we understand it, how we behave?

Since the implementation level is kind of epiphenomenal from the point of view of the algorithm. How can an algorithm know its own implementation? All it can maybe figure out is its own algorithm, and its identity would be constrained to its own algorithmic structure.” But that’s not quite true. In fact, one level of analysis can have bearing on another, meaning in some cases the implementation level of analysis doesn’t actually matter for the algorithm, but in some cases it does. So, if you were implementing a computer, let’s say with water, you have the option of implementing a Turing machine with water buckets, and in that case, okay, the implementation level of analysis goes out the window in terms of helping you understand the algorithm.

But if the way you’re using water to implement algorithms is by basically creating a system of adding waves in buckets of different shapes, with different resonant modes, then the implementation level of analysis actually matters a whole lot for which algorithms are … finely tuned to be very effective in that substrate. In the case of consciousness and how we behave, we do think properties of the substrate have a lot of bearing on what algorithms we actually run. A functionalist should actually start caring about consciousness if the properties of consciousness make the algorithms more efficient, more powerful.

Lucas: But what if qualia and consciousness are substantive, real things, and yet epiphenomenalism is true? What if consciousness is like smoke rising from computation and doesn’t have any causal efficacy?

Mike: To offer a re-frame on this, I like this frame of dual aspect monism better. There seems to be an implicit value judgment on epiphenomenalism. It’s seen as this very bad thing if a theory implies qualia as epiphenomenal. Just to put cards on the table, I think Andrés and I differ a little bit on how we see these things, although I think our ideas also mesh up well. But I would say that under the frame of something like dual aspect monism, that there’s actually one thing that exists, and it has two projections or shadows. And one projection is the physical world such as we can tell, and then the other projection is phenomenology, subjective experience. These are just two sides of the same coin and neither is epiphenomenal to the other. It’s literally just two different angles on the same thing.

And in that sense, qualia values and physical values are really talking about the same thing when you get down to it.

Lucas: Okay. So does this all begin with this move that Descartes makes, where he tries to produce a perfectly rational philosophy or worldview by making no assumptions and then starting with experience? Is this the kind of thing that you guys are doing in taking consciousness or qualia to be something real or serious?

Mike: I can just speak for myself here, but I would say my intuition comes from two places. One is staring deep into the beast of functionalism and realizing that it doesn’t lead to a clear answer. My model is that it just is this thing that looks like an answer but can never even in theory be an answer to how consciousness works. And if we deny consciousness, then we’re left in a tricky place with ethics and moral value. It also seems to leave value on the table in terms of predictions, that if we can assume consciousness as real and make better predictions, then that’s evidence that we should do that.

Lucas: Isn’t that just an argument that it would be potentially epistemically useful for ethics if we could have predictive power about consciousness?

Mike: Yeah. So, let’s assume that it’s 100 years, or 500 years, or 1,000 years in the future, and we’ve finally cracked consciousness. We’ve finally solved it. My open question is, what does the solution look like? If we’re functionalists, what does the solution look like? If we’re physicalists, what does the solution look like? And we can expand this to ethics as well.

Lucas: Just as a conceptual clarification, the functionalists are also physicalists though, right?

Andrés: There are two senses of the word physicalism here. There’s physicalism in the sense of a theory of the universe: that the behavior of matter and energy, what happens in the universe, is exhaustively described by the laws of physics, or future physics. There is also physicalism in the sense of an approach to understanding consciousness, in contrast to functionalism. David Pearce, I think, would describe his view as non-materialist physicalist idealism. There’s definitely a very close relationship between that phrasing and dual aspect monism. I can briefly unpack it. Basically, non-materialist means it’s not saying that the stuff of the world is fundamentally unconscious. That’s something materialism claims: that what the world is made of is not conscious, that it’s raw matter, so to speak.

Physicalist, again, in the sense that the laws of physics exhaustively describe behavior; and idealist in the sense that what makes up the world is qualia, or consciousness. The big-picture view is that the actual substrate of the universe, quantum fields, are fields of qualia.

Lucas: So Mike, you were saying that in the future when we potentially have a solution to the problem of consciousness, that in the end, the functionalists with algorithms and explanations of say all of the easy problems, all of the mechanisms behind the things that we call consciousness, you think that that project will ultimately fail?

Mike: I do believe that, and I guess my gentle challenge to functionalists would be to sketch out a vision of what a satisfying answer to consciousness would be, whether it’s completely explaining it away or completely explaining it. If in 500 years you go to the local bookstore and you check out Consciousness 101, and you just flip through it, you look at the headlines and the chapter list and the pictures, what do you see? I think we have an answer as formalists, but I would be very interested in getting the functionalist take on this.

Lucas: All right, so you guys have this belief in the ability to formalize our understanding of consciousness. Is this actually contingent on realism or anti-realism?

Mike: It is implicitly dependent on realism, that consciousness is real enough to be describable mathematically in a precise sense. And actually that would be my definition of realism, that something is real if we can describe it exactly with mathematics and it is instantiated in the universe. I think the idea of connecting math and consciousness is very core to formalism.

Lucas: What’s particularly interesting here is that you’re making falsifiable claims about phenomenological states. It’s good and exciting that your Symmetry Theory of Valence, which we can get into now, has falsifiable aspects. So do you guys want to describe your Symmetry Theory of Valence, how it fits in, and how it follows from your valence realism?

Andrés: Sure, yeah. One of the key places where this has bearing is in understanding what it is that we actually want, and what it is that we actually like and enjoy. That will usually be answered in an agentive way. So basically you think of agents as entities who spin out possibilities for what actions to take, and then they have a way of sorting them by expected utility and carrying them out. A lot of people may locate what we want, or what we like, or what we care about at that level, the agent level, whereas we think the true source of value is more low-level than that: there’s something else that we’re actually using in order to implement agentive behavior. There are ways of experiencing value that are completely separated from agents. You don’t actually need to be generating possible actions, evaluating them, and enacting them for there to be value, or for you to actually be able to enjoy something.

So what we’re examining here is: what is the lower-level property that gives rise even to agentive behavior, that underlies every other aspect of experience? That would be valence and, specifically, valence gradients. The general claim is that we are set up in such a way that we are basically climbing the valence gradient. This is not true in every situation, but it’s mostly true, and it’s definitely mostly true in animals. And then the question becomes: what implements valence gradients? Perhaps the best intuition pump here is the extraordinary fact that things that have nothing to do with our evolutionary past can nonetheless feel good or bad. It’s understandable that if you hear somebody scream, you may get nervous or anxious or fearful, or if you hear somebody laugh, you may feel happy.

That makes sense from an evolutionary point of view. But why would the sound of the Bay Area Rapid Transit, BART, which creates these very intense screeching sounds that are not even within the vocal range of humans, sounds that are just really bizarre and never encountered before in our evolutionary past, nonetheless have an extraordinarily negative valence? That’s a hint that valence has to do with patterns: it’s not just goals and actions and utility functions; the actual pattern of your experience may determine valence. The same goes for the SubPac, a technology that basically renders sounds between 10 and 100 hertz. Some of them feel really good, some of them feel pretty unnerving, some of them are anxiety-producing, and it’s like, why would that be the case, especially when you’re getting types of input that have nothing to do with our evolutionary past?

It seems that there are ways of triggering high- and low-valence states just based on the structure of your experience. The last example I’ll give is very weird states of consciousness, like meditation or psychedelics, that seem to come with extraordinarily intense and novel forms of experiencing significance, or a sense of bliss, or pain. And again, they don’t seem to have much semantic content per se, or rather, the semantic content is not the core reason why they feel the way they feel. It has more to do with the particular structure that they induce in experience.

Mike: There are many ways to talk about where pain and pleasure come from. We can talk about it in terms of neurochemicals, opioids, dopamine. We can talk about it in terms of pleasure centers in the brain, in terms of goals and preferences and getting what you want. But all of these have counterexamples; all of these have some points you can follow the thread back to which will beg the question. I think the only way to explain emotional valence, pain and pleasure, that doesn’t beg the question is to explain it in terms of patterns within phenomenology: some just intrinsically feel good and some intrinsically feel bad. To touch back on the formalism frame, this would be saying that if we have a mathematical object that is isomorphic to your phenomenology, to what it feels like to be you, then some pattern or property of this object will refer to, or will sort of intrinsically encode, your emotional valence: how pleasant or unpleasant the experience is.

That’s the valence formalism aspect that we’ve come to.

Lucas: So given valence realism, the view is that there’s this intrinsic pleasure-pain axis of the world, and this is, I guess, sort of channeling David Pearce’s view. There are things in experience which are just clearly good-seeming or bad-seeming. Will MacAskill called these pre-theoretic properties we might ascribe to certain kinds of experiential aspects: they’re just good or bad. So with this valence realism view, this goodness or badness, whose nature is sort of self-intimatingly disclosed, has been in the physics and in the world since the beginning, and now it’s unfolding and expressing itself more, and the universe is sort of coming to life. Embedded somewhere deep within the universe’s structure are these intrinsically good or intrinsically bad valences, which complex computational systems, and maybe other stuff, have access to.

Andrés: Yeah, yeah, that’s right. And I would perhaps emphasize that it’s not only pre-theoretical, it’s pre-agentive, you don’t even need an agent for there to be valence.

Lucas: Right. Okay. This is going to be a good point, I think, for getting into these other more specific, hairy philosophical problems. Could you go ahead and unpack a little bit more this view that pleasure or pain is self-intimatingly good or bad, that just by existing in experiential relation with the thing, its nature is disclosed? Brian Tomasik, and I think functionalists generally, would say there’s just another reinforcement learning algorithm somewhere that is evaluating these phenomenological states. They’re not intrinsically good or bad; that’s just what it feels like to be the kind of agent who has that belief.

Andrés: Sure. There are definitely many angles from which to see this. One of them is by realizing that liking, wanting, and learning are possible to dissociate. In particular, you can have reinforcement without an associated positive valence. You can also have positive valence without reinforcement or learning. Generally they are correlated, but they are different things. My understanding is that a lot of people think of valence as something that matters because you are the type of agent that has a utility function and a reinforcement function. If that were the case, we would expect valence to melt away in states that are non-agentive; we wouldn’t necessarily see it. And also, it would be intrinsically tied to intentional content, the aboutness of experience. A very strong counterexample: somebody may claim that what they truly want is to be academically successful or something like that.

They think of their reward function as intrinsically tied to getting a degree or something like that. I would call that to some extent illusory: if you actually look at how those preferences are being implemented, deep down there would be valence gradients happening there. One way to show this would be, let’s say on graduation day, you give the person an opioid antagonist. The person will subjectively feel that the day is meaningless; you’ve removed the pleasant gloss of the experience that they were actually looking for, that they thought all along was tied in with the intentional content of graduating, when in fact it was the hedonic gloss they were after. That’s one intuition pump there.

Lucas: These core problem areas that you’ve identified in Principia Qualia, would you just like to briefly touch on those?

Mike: Yeah, we’re trying to break the problem down into modular pieces, with the idea that if we can decompose the problem correctly, then the sub-problems become much easier than the overall problem, and if you collect all the solutions to the sub-problems, then in aggregate you get a full solution to the problem of consciousness. So I’ve split things up into the metaphysics, the math, and the interpretation. The first question is: what metaphysics do you even start with? With what ontology do you even approach the problem? We’ve chosen the ontology of physics, which can objectively map onto reality in a way that computation cannot. Then there’s the question of, okay, you have your core ontology, in this case physics, and then there’s the question of what counts, what actively contributes to consciousness. Do we look at electrons, electromagnetic fields, quarks?

This is an unanswered question. We have hypotheses, but we don’t have an answer. Moving into the math: conscious systems seem to have boundaries. If something’s happening inside my head, it can directly contribute to my conscious experience. But even if we put our heads together, literally speaking, your consciousness doesn’t bleed over into mine; there seems to be a boundary. So one way of framing this is the boundary problem, and another way of framing it is the binding problem, and these are just two sides of the same coin. There’s this big puzzle of how you draw the boundaries of a subjective experience. IIT is set up to approach consciousness through this lens, with a certain style of answer, a certain style of approach. We don’t necessarily need to take that approach, but it’s an intellectual landmark. Then we get into things like the state-space problem and the topology-of-information problem.

Say we’ve figured out our basic ontology, what we think is a good starting point, and, of that stuff, what actively contributes to consciousness, and we can figure out some principled way to draw a boundary and say, okay, this is conscious experience A and this is conscious experience B, and they don’t overlap. So you have a bunch of information inside the boundary. Then there’s the math question of how you rearrange it into a mathematical object that is isomorphic to what that stuff feels like. Again, IIT has an approach to this; we don’t necessarily subscribe to the exact approach, but it’s good to be aware of. There’s also the interpretation problem, which is very near and dear to what QRI is working on, and this is the question of: if you had a mathematical object that represented what it feels like to be you, how would we even start to figure out what it meant?

Lucas: This is also where the falsifiability comes in, right? If we have the mathematical object and we’re able to formally translate that into phenomenological states, then people can self report on predictions, right?

Mike: Yes. I don’t necessarily fully trust self reports as being the gold standard. I think maybe evolution is tricky sometimes and can lead to inaccurate self report, but at the same time it’s probably pretty good, and it’s the best we have for validating predictions.

Andrés: A lot of this gets easier if we assume that maybe we can be wrong in an absolute sense, but we’re often pretty well calibrated to judge relative differences. Maybe you ask me how I’m doing on a scale of one to ten and I say seven, when the reality is a five. Maybe that’s a problem. But at the same time, I like chocolate, and if you give me some chocolate and I eat it, that improves my subjective experience, and I would expect us to be well calibrated in terms of evaluating whether something is better or worse.

Lucas: There’s this view here though that the brain is not like a classical computer, that it is more like a resonant instrument.

Mike: Yeah. Maybe an analogy here could be pretty useful. There’s this researcher William Sethares who basically figured out a way to quantify the mutual dissonance between pairs of notes. It turns out that it’s not very hard: all you need to do is add up the pairwise dissonance between every harmonic of the notes. And what that gives you is that if you take, for example, a major key and you compute the average dissonance between pairs of notes within that major key, it’s going to be pretty low on average. And if you take the average dissonance of a minor key, it’s going to be higher. So in a sense, what distinguishes a minor key from a major key is, in the combinatorial space of possible permutations of notes, how frequently they are dissonant versus consonant.

That’s a very ground-truth mathematical feature of a musical instrument, and it’s going to be different from one instrument to the next. With that as a backdrop, we think of the brain, and in particular valence, in a very similar light: the brain has natural resonant modes, and emotions may seem externally complicated. When you’re having a very complicated emotion and we ask you to describe it, it’s almost like trying to describe a moment in a symphony, this very complicated composition; how do you even go about it? But deep down, the reason why a particular passage sounds pleasant or unpleasant within music is ultimately traceable to the additive pairwise dissonance of all of those harmonics. And likewise, for a given state of consciousness, we suspect that, very much as in music, the average pairwise dissonance between the harmonics present at a given point in time will be strongly related to how unpleasant the experience is.
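To make the computation Mike describes concrete, here is a minimal sketch of Sethares-style pairwise dissonance. The model constants and the 1/k amplitude roll-off are assumptions taken from the standard presentation of Sethares’ dissonance curve, not figures given in this conversation:

```python
import itertools
import math

def partials(note_freq, n=6):
    """First n harmonics of a note, with amplitudes assumed to fall off as 1/k."""
    return [(k * note_freq, 1.0 / k) for k in range(1, n + 1)]

def dissonance(freq_amp_pairs):
    """Total sensory dissonance: summed pairwise dissonance over all partials,
    using Sethares' curve with his commonly cited constants."""
    b1, b2, x_star, s1, s2 = 3.5, 5.75, 0.24, 0.021, 19.0
    total = 0.0
    for (f1, a1), (f2, a2) in itertools.combinations(freq_amp_pairs, 2):
        s = x_star / (s1 * min(f1, f2) + s2)  # roughness interval scales with pitch
        d = abs(f1 - f2)
        total += a1 * a2 * (math.exp(-b1 * s * d) - math.exp(-b2 * s * d))
    return total

# Compare a C major triad against a C minor triad (equal temperament, C4 root).
major = [p for f in (261.63, 329.63, 392.00) for p in partials(f)]
minor = [p for f in (261.63, 311.13, 392.00) for p in partials(f)]
print(dissonance(major), dissonance(minor))  # expect the minor triad to score somewhat higher
```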

These are electromagnetic waves, and it’s not exactly static, and it’s not exactly a standing wave either, but it gets really close to it. So basically what this is saying is that there’s this excitation-inhibition wave function, and that happens statistically across macroscopic regions of the brain. There’s only a discrete number of ways in which that wave can fit an integer number of times in the brain. We’ll give you a link to the actual visualizations of what this looks like. As a concrete example, one of the harmonics with the lowest frequency is a very simple one, where the two hemispheres are alternatingly more excited versus inhibited. That’s a low-frequency harmonic because it’s a very spatially large wave, an alternating pattern of excitation. Much higher-frequency harmonics are much more detailed and obviously hard to describe, but visually, generally speaking, the spatial regions that are activated versus inhibited form very thin wave fronts.

It’s not a mechanical wave as such; it’s an electromagnetic wave. So it’s actually the electric potential in each of these regions of the brain that fluctuates. And within this paradigm, at any given point in time you can describe a brain state as a weighted sum of all of its harmonics, and what that weighted sum looks like depends on your state of consciousness.
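As a toy illustration of the “weighted sum of harmonics” claim: if the harmonics form an orthonormal basis (as the Laplacian eigenmodes in the connectome-specific harmonic wave literature do), the weights for a given brain state are just projections onto that basis. Everything below is fabricated stand-in data, not real connectome harmonics:

```python
import numpy as np

n_regions, n_harmonics = 1000, 20
rng = np.random.default_rng(0)

# Stand-in orthonormal "harmonics": in the real model these would be
# eigenmodes of the connectome Laplacian, one column per harmonic.
harmonics, _ = np.linalg.qr(rng.normal(size=(n_regions, n_harmonics)))

# A brain state (one activity value per region) built from hidden weights...
true_weights = rng.normal(size=n_harmonics)
state = harmonics @ true_weights

# ...can be re-expressed as a weighted sum of harmonics: the weights are
# recovered by projecting the state onto each harmonic.
recovered_weights = harmonics.T @ state
assert np.allclose(recovered_weights, true_weights)
```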

Lucas: Sorry, I’m getting a little caught up here on enjoying resonant sounds and also on the valence realism. The view isn’t that all minds will enjoy resonant things because happiness is like a fundamental valence feature of the world, and all brains that come out of evolution should probably enjoy resonance?

Mike: It’s less about the stimulus, it’s less about the exact signal and it’s more about the effect of the signal on our brains. The resonance that matters, the resonance that counts, or the harmony that counts we’d say, or in a precisely technical term, the consonance that counts is the stuff that happens inside our brains. Empirically speaking most signals that involve a lot of harmony create more internal consonance in these natural brain harmonics than for example, dissonant stimuli. But the stuff that counts is inside the head, not the stuff that is going in our ears.

Just to be clear about QRI’s move here: Selen Atasoy has put forth this connectome-specific harmonic wave model, and what we’ve done is combined it with our Symmetry Theory of Valence. It’s sort of a way of getting a Fourier transform of where the energy is, in terms of frequencies of brainwaves, in a much cleaner way than has been available through EEG. Basically, we can evaluate this data set for harmony. How much harmony is there in a brain? The link to the Symmetry Theory of Valence is that this should be a very good proxy for how pleasant it is to be that brain.
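A cartoon of what “evaluating the data set for harmony” could look like, reusing the Sethares-style curve from the earlier sketch. To be clear, this is an illustrative stand-in, not QRI’s published metric from Quantifying Bliss, and Sethares’ constants were fitted to audible frequencies, so a real version would need a roughness kernel suited to brain-harmonic frequencies:

```python
import itertools
import math

def sethares_dissonance(f1, f2):
    """Roughness of two pure frequencies f1, f2 (Sethares' curve, unit amplitudes)."""
    b1, b2, x_star, s1, s2 = 3.5, 5.75, 0.24, 0.021, 19.0
    s = x_star / (s1 * min(f1, f2) + s2)
    d = abs(f1 - f2)
    return math.exp(-b1 * s * d) - math.exp(-b2 * s * d)

def valence_proxy(harmonic_freqs, energies):
    """Hypothetical proxy: energy-weighted pairwise dissonance, negated,
    so that a more 'harmonious' weighted spectrum yields a higher score."""
    total = 0.0
    for (f1, e1), (f2, e2) in itertools.combinations(zip(harmonic_freqs, energies), 2):
        total -= e1 * e2 * sethares_dissonance(f1, f2)
    return total

# A made-up spectrum of harmonic "characteristic frequencies" (Hz) and energies.
freqs = [2.0, 4.1, 8.3, 16.2, 33.0]
energies = [1.0, 0.7, 0.5, 0.3, 0.2]
print(valence_proxy(freqs, energies))
```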

Lucas: Wonderful.

Andrés: In this context, yeah, the Symmetry Theory of Valence would be much more fundamental. There’s probably many ways of generating states of consciousness that are in a sense completely unnatural that are not based on the harmonics of the brain, but we suspect the bulk of the differences in states of consciousness would cash out in differences in brain harmonics because that’s a very efficient way of modulating the symmetry of the state.

Mike: Basically, music can be thought of as a very sophisticated way to hack our brains into a state of greater consonance, greater harmony.

Lucas: All right. People should check out your Principia Qualia, which is the work that you’ve done that captures a lot of this well. Is there anywhere else that you’d like to refer people to for the specifics?

Mike: Principia Qualia covers the philosophical framework and the Symmetry Theory of Valence. Andrés has written deeply about this connectome-specific harmonic wave frame, and the name of that piece is Quantifying Bliss.

Lucas: Great. I would love to be able to quantify bliss and instantiate it everywhere. Let’s jump here into a few problems and framings of consciousness. I’m just curious to see if you guys have any comments on them. The first is what you call the real problem of consciousness, and the second is what David Chalmers calls the meta-problem of consciousness. Would you like to go ahead and start off with the real problem of consciousness?

Mike: Yeah. So this gets to something we were talking about previously: is consciousness real or is it not? Is it something to be explained, or to be explained away? This cashes out in terms of whether it’s something that can be formalized or whether it’s intrinsically fuzzy. I’m calling this the real problem of consciousness, and a lot depends on the answer to it. There are so many different ways to approach consciousness, and hundreds, perhaps thousands, of different carvings of the problem: panpsychism, dualism, non-materialist physicalism, and so on. I think the core distinction is that all of these theories sort themselves into two buckets: is consciousness real enough to formalize exactly, or not? This frame is perhaps the most useful one for evaluating theories of consciousness.

Lucas: And then there’s the meta-problem of consciousness, which is quite funny. It’s basically: why have we been talking about consciousness for the past hour, and what’s all this stuff about qualia and happiness and sadness? Why do people make claims about consciousness? Why does it seem to us that there is maybe something like a hard problem of consciousness? Why is it that we experience phenomenological states? Why isn’t everything going on with the lights off?

Mike: I think this is a very clever move by David Chalmers. It’s a way to try to unify the field and get people to talk to each other, which is not so easy in this field. The meta-problem of consciousness doesn’t necessarily solve anything, but it tries to inclusively start the conversation.

Andrés: The common move people make here is to say that all of these crazy things we think and say about consciousness are just an information-processing system modeling its own attentional dynamics. That’s one illusionist frame. But even within a qualia realist, qualia formalist paradigm, you still have the question of why we even think or self-reflect about consciousness. You could very well think of consciousness as being computationally relevant, something you need to have, and so on, while still lacking introspective access to it. You could have these complicated conscious information-processing systems that don’t self-reflect on the quality of their own consciousness. That property is important to model and make sense of.

We have a few formalisms that may give some insight into how self-reflectivity happens, and in particular, how it is possible to model the entirety of your state of consciousness in a given phenomenal object. This ties in with the notion of a homunculus: if the overall valence of your consciousness is actually a signal traditionally used for fitness evaluation, detecting when you’re at existential risk or when there are reproductive opportunities you may be missing out on, then it makes sense for there to be a general thermostat of the overall experience, where you can just look at it and get a sense of the overall well-being of the entire experience, added together in such a way that you experience it all at once.

I think like a lot of the puzzlement has to do with that internal self model of the overall well being of the experience, which is something that we are evolutionarily incentivized to actually summarize and be able to see at a glance.

Lucas: So, some people have a view where human beings are conscious, they assume everyone else is conscious, and they think that the only place for value to reside is within consciousness, so that a world without consciousness is actually a world without any meaning or value. If we think that philosophical zombies, people who are functionally identical to us but with no qualia or phenomenological or experiential states, are conceivable, then it would seem that there would be no value in a world of p-zombies. So I guess my question is: why does phenomenology matter? Why does the phenomenological modality of pain and pleasure, or valence, have some sort of special ethical or experiential status, unlike qualia like red or blue?

Why does red or blue not disclose some sort of intrinsic value in the same way that my suffering does or my bliss does or the suffering or bliss of other people?

Mike: My intuition is also that consciousness is necessary for value. Nick Bostrom has this wonderful quote in Superintelligence, that we should be wary of building a Disneyland with no children: some technological wonderland that is filled with marvels of function but doesn’t have any subjective experience, doesn’t have anyone to enjoy it, basically. I would say that most AI safety research is focused on making sure there is a Disneyland, making sure, for example, that we don’t just get turned into something like paperclips. But there’s this other problem: making sure there are children, making sure there are subjective experiences around to enjoy the future. I would say that there aren’t many live research threads on this problem, and I see QRI as a live research thread on how to make sure there is subjective experience in the future.

Probably a can of worms there, but as to your question about pain and pleasure, I may pass that to my colleague Andrés.

Andrés: Nothing terribly satisfying here. I would go with David Pearce’s view that these properties of experience are self-intimating, and, to the extent that you do believe in value, they will come up as the natural focal points for value, especially if you’re allowed to probe the quality of your experience. In many states you believe that the reason you like something is its intentional content. Again, it could be the case of graduating, or of getting a promotion, one of those things that a lot of people associate with feeling great. But if you actually probe the quality of the experience, you will realize that there is this component of it which is its hedonic gloss, and you can manipulate it directly, again with things like opioid antagonists, and, if the Symmetry Theory of Valence is true, potentially also by directly modulating the consonance and dissonance of the brain harmonics, in which case the hedonic gloss would change in peculiar ways.

When it comes to consilience, when it comes to many different points of view agreeing on what aspect of the experience is the one that brings value to it, it seems to be the hedonic gloss.

Lucas: So in terms of qualia and valence realism, would the causal properties of qualia be the thing that would show any arbitrary mind the self-intimating nature of how good or bad an experience is? And in the space of all possible minds, what is the correct epistemological mechanism for evaluating the moral status of experiential or qualitative states?

Mike: So first of all, I would say that my focus so far has mostly been on describing what is, not what ought to be. I think that we can talk about valence without necessarily talking about ethics, but if we can talk about valence clearly, that certainly makes some questions and some frameworks in ethics make much more, or less, sense. So the better we can clearly and purely descriptively talk about consciousness, the easier I think a lot of these ethical questions get. I’m trying hard not to privilege any ethical theory. I want to talk about reality, about what exists, what’s real, and what the structure of what exists is, and I think if we succeed at that, then all these other questions about ethics and morality get much, much easier. I do think that there is an implicit “should” wrapped up in questions about valence, but that’s another leap.

You can accept that valence is real without necessarily accepting that optimizing valence is an ethical imperative. I personally think, yes, it is very ethically important, but it is possible to take a purely descriptive frame on valence. Whether or not this also discloses, as David Pearce said, the utility function of the universe, that is another question, and one that can be decomposed.

Andrés: One framing here, too, is that we do suspect valence is going to be the thing that matters to any mind, if you probe it in the right way in order to achieve reflective equilibrium. The best example is a talk a neuroscientist was giving at some point: there was something off, and everybody seemed to be a little bit anxious or irritated, and nobody knew why. Then one of the conference organizers suddenly came up to the presenter and did something to the microphone, and then everything sounded way better and everybody was way happier. There was this very harsh hissing pattern caused by some malfunction of the microphone, and it was making everybody irritated; they just didn’t realize that that was the source of the irritation. And when it got fixed, everybody was like, “Oh, that’s why I was feeling upset.”

We will find that to be the case over and over when it comes to improving valence. Somebody in the year 2050 might come to one of the connectome-specific harmonic wave clinics saying, “I don’t know what’s wrong with me.” You put them through the scanner and identify that their 17th and 19th harmonics are in a state of dissonance. You cancel the 17th to make it cleaner, and then the person will all of a sudden say, “Yeah, my problem is fixed. How did you do that?” So I think it’s going to be a lot like that: the things that puzzle us about why we prefer this, or why we think that is worse, will all of a sudden become crystal clear from the point of view of objectively measured valence gradients.

Mike: One of my favorite phrases in this context is what you can measure you can manage and if we can actually find the source of dissonance in a brain, then yeah, we can resolve it, and this could open the door for maybe honestly a lot of amazing things, making the human condition just intrinsically better. Also maybe a lot of worrying things, being able to directly manipulate emotions may not necessarily be socially positive on all fronts.

Lucas: So I guess here we can begin to jump into AI alignment and qualia. We’re building AI systems, and they’re getting pretty strong and are going to keep getting stronger, potentially creating a superintelligence by the end of the century, and consciousness and qualia seem to be along for the ride, for now. So I’d like to discuss a little bit here the more specific places in AI alignment where these views might inform and direct it.

Mike: Yeah, I would share three problems of AI safety. There’s the technical problem: how do you make a self-improving agent that is also predictable and safe? This is a very difficult technical problem, first of all to even make the agent, but second of all to make it safe, especially if it becomes smarter than we are. There’s also the political problem: even if you have the best technical solution in the world, a sufficiently good technical solution doesn’t mean that it will be put into action in a sane way if we’re not in a reasonable political system. But I would say the third problem is what QRI is most focused on, and that’s the philosophical problem: what are we even trying to do here? What is the optimal relationship between AI and humanity? And there are a couple of specific details here. First of all, I think nihilism is absolutely an existential threat, and if we can find some antidotes to nihilism through some advanced valence technology, that could be enormously helpful for reducing x-risk.

Lucas: What kind of nihilism are you talking about here? Nihilism about morality and meaning?

Mike: Yes, I would say so, and just personal nihilism that it feels like nothing matters, so why not do risky things?

Lucas: Whose quote is it, the philosopher’s question of whether you should just kill yourself? That’s the yawning abyss of nihilism inviting you in.

Andrés: Albert Camus. The only real philosophical question is whether to commit suicide. Whereas, the way I think of it, the real philosophical question is how to make love last, how to bring value to existence. And if you have value on tap, then the question of whether to kill yourself or not seems really nonsensical.

Lucas: For sure.

Mike: We could also say that right now there aren’t many good Schelling points for global coordination. People talk about how global coordination around building AGI would be a great thing, but we’re a little light on the details of how to do that. If a clear, comprehensive, useful, practical understanding of consciousness can be built, then this may embody or generate new Schelling points that the larger world could self-organize around. If we can give people a clear understanding of what is and what could be, then I think we will get a better future that actually gets built.

Lucas: Yeah. Showing what is and what could be is immensely important and powerful. So moving forward with AI alignment as we’re building these more and more complex systems, there’s this needed distinction between unconscious and conscious information processing, if we’re interested in the morality and ethics of suffering and joy and other conscious states. How do you guys see the science of consciousness here, actually being able to distinguish between unconscious and conscious information processing systems?

Mike: There are a few frames here. One is that, yeah, it does seem like the brain does some processing in consciousness and some processing outside of consciousness. And what’s up with that? This could be an interesting frame to explore for avoiding things like mind crime in the AGI or AI space: if there are certain computations which are painful, then don’t do them in a way that would be associated with consciousness. It would be very good to have rules of thumb here for how to do that. One interesting possibility is that in the future we might not just have compilers that optimize for speed of processing or minimization of dependent libraries and so on, but ones that could optimize for the valence of the computation on certain hardware. This of course gets into complex questions about computationalism, how hardware-dependent this compiler would be, and so on.

I think it’s an interesting and important long-term frame.

Lucas: So just to illustrate here, I think: the ways in which solving or better understanding consciousness will inform AI alignment, from the present day until superintelligence and beyond.

Mike: I think there’s a lot of confusion about consciousness and a lot of confusion about what kind of thing the value problem is in AI safety, and there are some novel approaches on the horizon. I was speaking with Stuart Armstrong at the last EA Global, and he had some great things to share about his model fragments paradigm. I think this is the right direction. It’s about understanding that, yeah, human preferences are insane; they’re just not a consistent formal system.

Lucas: Yeah, we contain multitudes.

Mike: Yes, yes. So first of all, understanding what generates them seems valuable. There’s this frame in AI safety called the complexity of value thesis; I believe Eliezer came up with it in a post on LessWrong. It’s this frame where human value is very fragile, in that it can be thought of as a small area, perhaps even almost a point, in a very high-dimensional space, say a thousand dimensions. If we go any distance in any direction from this tiny point in this high-dimensional space, then we quickly get to something that we wouldn’t think of as very valuable: maybe we leave everything the same but take away freedom, for example. This paints a pretty sobering picture of how difficult AI alignment will be.
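A toy way to picture that fragility claim is to model value as a narrow Gaussian bump around a single target point in a 1000-dimensional space; the specific functional form and numbers below are illustrative assumptions, not anything from the thesis itself:

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.normal(size=1000)  # the "point" encoding human values

def value(x, width=1.0):
    """Value falls off as a Gaussian with squared distance from the target."""
    return np.exp(-np.sum((x - target) ** 2) / (2 * width ** 2))

# A perturbation that is tiny along each axis still moves you far in 1000-D:
step = 0.1 * rng.normal(size=1000)  # expected squared length ~ 1000 * 0.01 = 10
print(value(target))                # 1.0 at the target
print(value(target + step))         # roughly exp(-5): most value is gone
```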

I think this is perhaps arguably the source of a lot of worry in the community: not only do we need to make machines that won’t just immediately kill us, but machines that will preserve our position in this very, very high-dimensional space well enough that we keep the same trajectory, because possibly, if we move at all, we may enter a totally different trajectory, one that we in 2019 wouldn’t think of as having any value. So this problem becomes very, very intractable. I would just say that there is an alternative frame. The phrasing that I’m playing around with here is, instead of the complexity of value thesis, the unity of value thesis: it could be that many of the things that we find valuable, eating ice cream, living in a just society, having a wonderful interaction with a loved one, all have the same underlying neural substrate, and empirically this is what affective neuroscience is finding.

Eating a chocolate bar activates the same brain regions as a transcendental religious experience. So maybe there’s some sort of elegant compression that can be made, and actually things aren’t so stark. Maybe we’re not this point in a super high-dimensional space where, if we leave the point, everything of value is trashed forever; maybe there’s some sort of convergent process that we can follow, that we can essentialize. We could make a list of 100 things that humanity values, and maybe they all have positive valence in common, and positive valence can sort of be reverse-engineered. To some people this feels like a very scary, dystopic scenario (don’t knock it until you’ve tried it), but at the same time there’s a lot of complexity here.

One core frame that qualia formalism and valence realism offer AI safety is that maybe the actual goal is somewhat different than what the complexity of value thesis puts forward. Maybe the actual goal is different and in fact easier. I think this could directly inform how we spend our resources on the problem space.

Lucas: Yeah, I was going to say that there’s a standing tension between this view of the complexity of all the preferences and values that human beings have and the valence realist view, which says that what’s ultimately good are certain experiential or hedonic states. I’m interested and curious about whether, if this valence view is true, it’s all just going to turn into hedonium in the end.

Mike: I’m personally a fan of continuity. I think that if we do things right, we’ll have plenty of time to get things right, and if we do things wrong, then we’ll have plenty of time for things to be wrong. So I’m personally not a fan of big unilateral moves. It just gets back to this question of whether understanding what is can help us, and clearly, yes.

Andrés: Yeah. I guess one view is we could say preserve optionality and learn what is, and then from there hopefully we’ll be able to better inform oughts and with maintained optionality we’ll be able to choose the right thing. But that will require a cosmic level of coordination.

Mike: Sure. An interesting frame here is whole brain emulation. Whole brain emulation is a frame built around functionalism, and it’s a seductive frame, I would say. If whole brain emulations wouldn’t necessarily have the same qualia as the original humans, based on hardware considerations, there could be some weird lock-in effects where, if the majority of society turned themselves into p-zombies, it may be hard to go back on that.

Lucas: Yeah. All right. We’re just getting to the end here, I appreciate all of this. You guys have been tremendous and I really enjoyed this. I want to talk about identity in AI alignment. This sort of taxonomy that you’ve developed about open individualism and closed individualism and all of these other things. Would you like to touch on that and talk about implications here in AI alignment as you see it?

Andrés: Yeah. Yeah, for sure. The taxonomy comes from Daniel Kolak, a philosopher and mathematician. It’s a pretty good taxonomy. Basically there’s open individualism, the view that a lot of meditators and mystics and people who take psychedelics often subscribe to, which is that we’re all one consciousness. Another frame for it is that our true identity is the light of consciousness, so to speak, so it doesn’t matter in what form it manifests; it’s always the same fundamental ground of being. Then you have the common-sense view, which is called closed individualism: you start existing when you’re born, you stop existing when you die, you’re just this segment. Some religions actually extend that into the future or the past, with reincarnation or maybe with heaven.

There’s a sense of ontological distinction between you and others, while at the same time ontological continuity from one moment to the next within you. Finally, you have this view called empty individualism, which is that you’re just a moment of experience. That’s fairly common among physicists and a lot of people who’ve tried to formalize consciousness; often they converge on empty individualism. I think a lot of theories of ethics and rationality, like the veil of ignorance as a guide, or defining rational decision-making as maximizing the expected utility of yourself as an agent, all seem to be implicitly based on closed individualism, and they’re not necessarily questioning it very much.

On the other hand, if the sense of individual identity of closed individualism doesn’t actually carve nature at its joints, as a Buddhist might say, if the feeling of continuity of being a separate, unique entity is an illusory construction of your phenomenology, that casts in a completely different light how to approach rationality itself, and even self-interest, right? If you start identifying with the light of consciousness rather than your particular instantiation, you will probably care a lot more about what happens to pigs in factory farms, because … insofar as they are conscious, they are you in a fundamental way. It matters a lot in terms of how to carve out different possible futures, especially when you get into tricky situations like: what if there is mind-melding, or what if there is the possibility of making perfect copies of yourself?

All of these edge cases are really problematic from the common sense view of identity, but they're not really a problem from an open individualist or empty individualist point of view. With all of this said, I do personally think there's probably a way of combining open individualism with valence realism that gives rise to the next step in human rationality, where we're actually trying to really understand what the universe wants, so to speak. But I would say that there is a very tricky aspect here that has to do with game theory. We evolved to believe in closed individualism. The fact that it's evolutionarily adaptive is obviously not an argument for it being fundamentally true, but believing that you are who you can affect most directly in a causal way, if you define your boundary that way, does seem to be some kind of evolutionarily stable point.

That basically gives you focus on the actual degrees of freedom that you do have. And if you think of a society of open individualists, where everybody is altruistically, maximally contributing to the universal consciousness, and then you have one closed individualist who is just selfishly trying to acquire power for itself, you can imagine that the latter view would have a tremendous evolutionary advantage in that context. So I'm not one who just advocates for open individualism unreflectively. I think we still have to work out the game theory of it, how to make it evolutionarily stable and also how to make it ethical. It's an open question, but I do think it's important to think about, and if you take consciousness very seriously, especially within physicalism, that usually casts huge doubts on the common sense view of identity.

It doesn't seem like a very plausible view if you actually try to formalize consciousness.

Mike: The game theory aspect is very interesting. You can think of closed individualism as something evolution produced that allows an agent to coordinate very closely with its past and future selves. Maybe we can say a little bit about why we're not all empty individualists or open individualists by default. Empty individualism seems to have a problem where, if every slice of conscious experience is its own thing, then why should you even coordinate with your past and future self, since they're not the same as you? That leads to a problem of defection. And open individualism, where everything is the same being so to speak, as Andrés mentioned, allows free riders: if people are defecting, it doesn't allow altruistic punishment or any way to stop the free riding. There's interesting game theory here, and it also feeds into the question of how we define our identity in the age of AI, the age of cloning, the age of mind uploading.

This gets very, very tricky very quickly depending on one's theory of identity. People open themselves up to getting hacked in different ways, and so different theories of identity allow different forms of hacking.

Andrés: Yeah, and sometimes that's really good and sometimes really bad. I would make the prediction that a weaker sense of identity than closed individualism, not necessarily open individualism in its full-fledged form, is likely going to be highly adaptive in the future, as people gain the ability to modify their state of consciousness in much more radical ways. People who identify with a narrow sense of identity will just stay in their shells and not try to disturb the local attractor too much. That itself is not necessarily very advantageous if the things on offer are actually really good, both hedonically and intelligence-wise.

I do suspect that people who are somewhat more open to identifying with consciousness, or at least with a broader sense of identity, will be the people making more substantial progress, pushing the boundary and creating new cooperation and coordination technology.

Lucas: Wow, I love all that. Seeing closed individualism for what it was has had a tremendous impact on my life, and this whole question of identity I think is largely confused for a lot of people. At the beginning you said that open individualism says that we are all one consciousness, or something like this, right? For me, on identity, I'd like to move beyond all distinctions of sameness or difference. To say, oh, we're all one consciousness, to me seems like saying we're all one electromagnetism, which is really to say that consciousness is an independent feature or property of the world, a ground part of the world, and when the world produces agents, consciousness is just an empty, identityless property that comes along for the ride.

In the same way, it would be nonsense to say, "Oh, I am these specific atoms; I am just the forces of nature that are bounded within my skin and body." And in the same sense, with what we were discussing about consciousness, there's the binding problem of the person, the discreteness of the person: where does the person really begin or end? It seems like these different kinds of individualism have, as you said, epistemic and functional use, but they also, in my view, create a ton of epistemic problems and ethical issues. And in terms of valence theory, if qualia are actually something good or bad, then, as David Pearce says, it's really just an epistemological problem that you don't have access to other brain states in order to see the self-intimating nature of what it's like to be that thing in that moment.

There's a sense in which I want to reject all identity as arbitrary, and I want to do that in an ultimate way; but then in the conventional way, I agree with you guys that there are these functional and epistemic issues that closed individualism seems to remedy somewhat, which is why evolution, I guess, selected for it: it's good for gene propagation and being selfish. But once one sees AI as just a new method of instantiating bliss, it doesn't matter where the bliss is. Bliss is bliss, and there's no such thing as your bliss or anyone else's bliss. Bliss is its own independent feature or property, and you don't really begin or end anywhere. You are an expression of a 13.7 billion year old system that's playing out.

The universe is just peopling all of us at the same time, and when you get this view and see yourself as just a super thin slice of the evolution of consciousness and life, for me it's like: why do I really need to propagate my information into the future? I really don't think there's anything particularly special about the information of anyone who exists today. We want to preserve all of the good stuff and propagate that into the future, but for people who seek immortality through AI, or seek any kind of continuation of what they believe to be their self, I just see that as misguided, and I see it as wasting potentially better futures by trying to bring Windows 7 into the world of Windows 10.

Mike: This all gets very muddy when we try to merge human-level psychological drives, concepts, and adaptations with a fundamental-physics-level description of what is. I don't have a clear answer. I would say that it would be great to identify with consciousness itself, but at the same time, that's not necessarily super easy if you're suffering from depression or anxiety. So I just think this is going to be an ongoing negotiation within society, and hopefully we can figure out ways in which everyone can move forward.

Andrés: There's an article I wrote called "Consciousness vs. Replicators" that kind of gets to the heart of this issue. That sounds a little bit like good versus evil, but it really isn't. The true enemy here is replication for replication's sake. On the other hand, the only way in which we can ultimately benefit consciousness, at least in a plausible, evolutionarily stable way, is through replication. We need to find the balance between replication and benefit to consciousness that makes the whole system stable, good for consciousness, and resistant against defectors.

Mike: I would like to say that I really enjoy Max Tegmark's general frame of us living in a mathematical universe. One reframe of what we were just talking about, in those terms, is that there are patterns which have to do with identity, with valence, and with many other things. The grand goal is to understand what makes a pattern good or bad and to optimize our light cone for those sorts of patterns. This may have some counterintuitive implications: maybe closed individualism is actually a very adaptive thing that builds robust societies in the long term. It could be that that's not true, but I just think that taking the mathematical frame and the long-term frame is a very generative approach.

Lucas: Absolutely. Great. I just want to finish up here on two fun things. It seems like good and bad are real in your view. Do we live in heaven or hell?

Mike: Lots of quips come to mind here: hell is other people, or nothing is good or bad but thinking makes it so. My pet theory, I should say, is that we live in something that is perhaps as close to heaven as is physically possible. The best of all possible worlds.

Lucas: I don’t always feel that way but why do you think that?

Mike: This gets into the weeds of theories about consciousness. It's this idea that we tend to think of consciousness on the human scale: is the human condition good or bad, is the balance of human experience on the good end, the heavenly end, or the hellish end? If we do have an objective theory of consciousness, we should be able to point it at things that are not human, and even things that are not biological. It may seem like a type error to do this, but we should be able to point it at stars and black holes and quantum fuzz. My pet theory, which is totally not validated but is falsifiable, and this gets into Bostrom's simulation hypothesis, is that if we tally up the good valence and the bad valence in the universe, first of all, the human stuff might just be a rounding error.

Most of the value, counting both the positive and the negative valence, is found elsewhere, not in humanity. And second of all, I have a list in the last appendix of Principia Qualia of where massive amounts of consciousness could be hiding in the cosmological sense. I'm very suspicious that the big bang starts with a very symmetrical state; I'll just leave it there. In a utilitarian sense, if you want to know whether we live in a place closer to heaven or hell, we should actually get a good theory of consciousness and point it at things that are not human; cosmological-scale events or objects would be very interesting to point it at. That would give a much clearer answer than human intuition as to whether we live somewhere closer to heaven or hell.

Lucas: All right, great. You guys have been super generous with your time and I’ve really enjoyed this and learned a lot. Is there anything else you guys would like to wrap up on?

Mike: I would just like to say, yeah, thank you so much for the interview, and for reaching out and making this happen. It's been really fun on our side too.

Andrés: Yeah, these were wonderful questions, and it's very rare for an interviewer to have non-conventional views of identity to begin with, so it was really fun. Really appreciate it.

Lucas: Would you guys like to go ahead and plug anything? What’s the best place to follow you guys, Twitter, Facebook, blogs, website?

Mike: Our website is qualiaresearchinstitute.org, and we're working on getting a PayPal donate button up, but in the meantime you can send us some crypto. We're building out the organization, and if you want to read our stuff, a lot of it is linked from the website. You can also read my stuff at my blog, opentheory.net, and Andrés's at qualiacomputing.com.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

End of recorded material

FLI Podcast: The Unexpected Side Effects of Climate Change With Fran Moore and Nick Obradovich

It’s not just about the natural world. The side effects of climate change remain relatively unknown, but we can expect a warming world to impact every facet of our lives. In fact, as recent research shows, global warming is already affecting our mental and physical well-being, and this impact will only increase. Climate change could decrease the efficacy of our public safety institutions. It could damage our economies. It could even impact the way that we vote, potentially altering our democracies themselves. Yet even as these effects begin to appear, we’re already growing numb to the changing climate patterns behind them, and we’re failing to act.

In honor of Earth Day, this month’s podcast focuses on these side effects and what we can do about them. Ariel spoke with Dr. Nick Obradovich, a research scientist at the MIT Media Lab, and Dr. Fran Moore, an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. They study the social and economic impacts of climate change, and they shared some of their most remarkable findings.

Topics discussed in this episode include:

  • How getting used to climate change may make it harder for us to address the issue
  • The social cost of carbon
  • The effect of temperature on mood, exercise, and sleep
  • The effect of temperature on public safety and democratic processes
  • Why it’s hard to get people to act
  • What we can all do to make a difference
  • Why we should still be hopeful

Publications discussed in this episode include:

You can listen to the podcast above, or read the full transcript below.

Ariel: Hello, and a belated happy Earth Day to everyone. I'm Ariel Conn, your host of The Future of Life podcast. And in honor of Earth Day this month, I'm happy to have two climate-related scientists joining the show. We've all heard about the devastating extreme weather that climate change will trigger; we've heard about melting ice caps, rising ocean levels, warming oceans, flooding, wildfires, hurricanes, and so many other awful natural events.

And it’s not hard to imagine how people living in these regions will be negatively impacted. But climate change won’t just affect us directly. It will also impact the economy, agriculture, our mental health, our sleep patterns, how we exercise, food safety, the effectiveness of policing, and more.

So today, I have two scientists joining me to talk about some of those issues. Doctor Nick Obradovich is a research scientist at the MIT Media Lab. He studies the way that climate change is likely impacting humanity now and into the future. And Doctor Fran Moore is an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. Her work sits at the intersection of climate science and environmental economics and is focused on understanding how climate change will affect the social and natural systems that people value.

So Nick and Fran, thank you so much for joining us.

Nick: Thanks for having us.

Fran: Thank you.

Ariel: Now, before we get into some of the topics that I just listed, I want to first look at a paper you both published recently called “Rapidly Declining Remarkability of Temperature Anomalies May Obscure Public Perception of Climate Change.” And essentially, as you describe in the paper, we’re like frogs in boiling water. As long as the temperatures continue to increase, we forget that it used to be cooler and we recalibrate what we consider to be normal for weather. So what may have been considered extreme 15 years ago, we now think of as normal.

Among other things, this can make trying to address climate change more difficult. I want both of you now to talk more about what the study was and what it means for how we address climate change. But first, if you could just talk about what prompted this study.

Fran: So I've been interested for a long time in the question of: as the climate changes, and people are gradually exposed in their everyday life to weather that used to be very unusual but is, because of climate change, becoming more and more typical, how do we think about defining things like extreme events under those kinds of conditions?

I think researchers have this intuition that there's something about human perception and judgment that goes into that, or that there's some kind of limit to how humans understand the weather, that defines what we think of as normal and extreme, but no one had really been able to measure it. What I think is really cool in this study, working with Nick and our other coworkers, is that we were able to use data from Twitter to actually measure what people think of as remarkable, and then we can show that that changes quickly over time.

Ariel: I found this use of social media to be really interesting. Can you talk a little bit about how you used Twitter? And I was also curious: aside from being a new source of information, does it also present limitations in any way, or is it just exciting new information?

Nick: The crux of this insight was that we talk about the weather all the time. It's sort of the way to pass time in casual conversation, to say hi to people, to awkwardly change the topic if someone has said something a little awkward. And we realized that Twitter is a great source for what people are talking about, and I had been collecting billions of tweets over the last number of years. Fran and I met, and we got talking about this idea, and we were like, "Huh, you know, I bet you could use Twitter to measure how people are talking about the weather." And then Fran had the excellent insight that you could also use it to get a metric of how remarkable people find the weather, by how much more than usual they're talking about unusual weather. So that was the crux of the insight there.

And then what we did is we asked, "Okay, what terms exist in the English language that are likely to refer to weather when people talk about it?" We combed through the billions of tweets in my store, found all of the tweets plausibly about the weather, and used that for our analysis. Then we mapped that to the historical temperatures people had experienced, and also to the rates of warming over time in the locations where they lived.

Ariel: And what was the timeframe that you were looking at?

Fran: So it's about three years: from March of 2014 to the end of 2016. But then we're able to combine that with weather data that goes back to 1980. So we can match the tweeting behavior in this relatively recent period to that longer record, and look at how it's explained by the patterns of temperature change across these counties.

So what we found, firstly, is maybe exactly what you would expect, right, which is that the rate at which people tweet about particular temperatures depends on what is typical for that location for that time of year. So if you have very cold weather, but that very cold weather is basically what you should be expecting, you're going to tweet about it less than if that very cold weather is atypical.

But then what we were able to show is that what people think of as "usual," which defines this tweeting behavior, changes really quickly, so that if you have these unusual temperatures multiple years in a row, the tweeting response quickly starts to decline. What that indicates is that people are adjusting their ideas of normal weather very quickly. We're actually able to use the tweets to directly estimate the rate at which this updating happens, and, to our best estimate, we think that people are using approximately the last two to eight years as a baseline for establishing normal temperatures for that location for that time of year. When people look at the weather outside and evaluate whether it's hot or cold, the reference point they're using is set by the fairly recent past.
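To make the rolling-baseline idea concrete, here is a minimal sketch of how such a reference point could be computed. The file and column names are invented for illustration, and this is not the authors' actual pipeline; it simply measures each day's temperature against roughly the last five years for the same county and day of year.

```python
import pandas as pd

# Hypothetical county-day panel: a count of weather-related tweets plus the
# daily mean temperature. File and column names are invented for illustration.
df = pd.read_csv("county_weather_tweets.csv", parse_dates=["date"])
# expected columns: county, date, weather_tweets, temp_c

# "Normal" for this county and time of year, using roughly the last five
# years (the estimate quoted above is a window of about two to eight years).
df["doy"] = df["date"].dt.dayofyear
baseline = (
    df.sort_values("date")
      .groupby(["county", "doy"])["temp_c"]
      .transform(lambda s: s.rolling(window=5, min_periods=2).mean().shift(1))
)

# Anomaly relative to the recent baseline. A tweet-volume response to the
# same anomaly that shrinks over successive years is the normalization
# effect described above.
df["anomaly"] = df["temp_c"] - baseline
print(df[["county", "date", "anomaly", "weather_tweets"]].head())
```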

Ariel: What does this mean as we’re trying to figure out ways to address climate change?

Nick: When we saw this result, we were a bit troubled, because it was faster than we would perhaps have hoped. I'm a political scientist by training, and I saw this and I said, "This is not ideal," because you have people getting used to a climate that is changing on geologically rapid scales, but perhaps on human time scales somewhat slowly. If people get used to that as it changes, then we lose something that we know helps drive political action, policy, and political attention: simple awareness of a problem. If people's expectations adapt pretty quickly to climate change, then all of a sudden a hundred-degree day in North Dakota that would have been very unusual in 2000 is fairly normal in 2030. As a result, people aren't as aware of the signal that climate change is producing, and that could have some pretty troubling political implications.

Fran: My takeaway is that it certainly points to the risk that conditions that are geologically or even historically very, very unusual are not perceived as such. We're really limited by our human perception, and that's even within individuals, right: what we're estimating is something that happens within an individual's lifetime.

So what it means is that you can't just assume that as climate change gets worse, it's going to automatically rise to the top of the political agenda in terms of urgency. Like a lot of other chronic, serious social problems we have, it takes a lot of work on the part of activists and norm entrepreneurs to do something about climate change. Just because it's happening, and it's becoming, at least statistically or scientifically, increasingly clear that it's happening, that won't necessarily translate into people wanting to do something about it.

Ariel: And so you guys were looking more at what we might consider sort of abnormalities in relatively normal weather: if it’s colder in May than we’d expect or it’s hotter in January than we’d expect. But that’s not the same as some of the extreme weather events that we’ve also seen. I don’t know if this is sort of a speculative question, but do you think the extreme weather events could help counter our normalization of just changing temperatures or do you think we would eventually normalize the extreme weather events as well?

Nick: That's a great question. One of the things we didn't look at is, for example, giant hurricanes, big wildfires, and things like that, which are all likely to increase in frequency and severity in the future. So it could certainly be the case that the increase in frequency and intensity of those events offsets the adaptation, as you suggest. We actually are trying to think about ways to measure how people might adapt to other climate-driven phenomena aside from just regular, day-to-day temperature.

I hope that’s the case, right? Because if we’re also adapting to sea level rise pretty rapidly as it goes along and we’re also adapting to increased frequency of wildfires and things like that, a few things might happen; one being that if we’re getting used to semi-regular flooding, for example, we don’t move as quickly as we need to — up to the point where basically cities start getting inundated, and that could be very problematic. So I hope that what you suggest actually turns out to be the case.

Fran: I think that this is a question we get a lot, like, “Oh, well temperature is one thing, but really the thing that’s really going to spur people is these hurricanes or floods or these wildfires.” And I think that’s a hypothesis, but I would say it’s as yet untested. And sure, a hurricane is an extreme event, but when they start happening frequently, is that going to be subject to the same kind of normalization phenomenon that we show here? I would say I don’t know, and it’s possible it would look really different.

But I think it’s also possible that it wouldn’t, and that when you start seeing these happen on a very regular basis, that they become normalized in a very similar way to what you see here. And it might be that they spur some kind of adaptation or response policy, but the idea that they would automatically spur a lot of mitigation policy I think is something that people seem to think might be true, but I would say that we need some more empirical evidence.

Nick: I like to think of humans as an incredibly adaptable species. I think we’re a great species for that reason. We’re arguably the most successful ever. But our adaptability in this instance may perhaps prove to be part of our undoing, just in normalizing worsening conditions as they deteriorate around us. I hope that the hypothesis that Fran lays out ends up being the case: that, as the climate gets weirder and weirder, there is enough signal that people become concerned enough to do something about it. But it is just an empirical hypothesis at this point.

Fran: What I thought was a really neat thing that we were able to do in this paper was ask: are people just not talking about these conditions because they've normalized them and they're no longer interesting, or have people actually been able to take action to reduce the negative consequences of these conditions? To do that we used sentiment analysis, something that Nick and our other author Patrick Baylis have used before: just based on the words being used in the tweets, you can measure the overall mood being conveyed, the kind of emotional state of the people sending those tweets. Very hot and very cold temperatures have negative effects on sentiment, and we find that those effects persist even after people stop talking about these unusual temperatures.

What that's saying is that this is not a good-news story of effective adaptation, where people are able to reduce the negative consequences of these temperatures. Actually, they're still being very negatively affected by them; they're just not talking about them anymore. And that's kind of the worst of both worlds.
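As an illustration of the kind of measurement Fran describes, here is a toy sketch using an off-the-shelf sentiment scorer (VADER). The example tweets and temperature pairings are fabricated, and the published work used its own sentiment tooling; this only shows how text can be turned into a mood-versus-temperature signal.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Toy stand-ins for geolocated tweets; the temperature pairings are invented.
tweets = [
    ("Gorgeous mild afternoon, perfect for a walk", 68),
    ("It is 104 degrees in October and I am melting", 104),
    ("Third day of this brutal cold snap, so over it", 8),
]

for text, temp_f in tweets:
    # 'compound' is a normalized score in [-1, 1]; averaging it within
    # temperature bins traces out a mood-versus-temperature curve like the
    # one described above.
    score = analyzer.polarity_scores(text)["compound"]
    print(f"{temp_f:>4}F  sentiment={score:+.3f}")
```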

Ariel: So I want to actually follow up with that, because I had a question about that paper that you just referenced. If I was reading it correctly, it seemed like you're saying that we basically get crankier as the weather moves toward either extreme of our preferred comfort zone. Is that right? Are we just going to be crankier as the climate gets worse?

Nick: So that was the paper that Patrick Baylis and I had with a number of other co-authors, and the key point about that paper is that we were looking at historical contemporaneous weather; we weren't looking for adaptation over time with that analysis. What we found is that at certain levels of temperature, for example when it's really hot outside, people's sentiment goes down — their mood is worsened. When it's really cold outside, we also found that people's sentiment was worsened; and we found that, for example, lots of precipitation made people unhappy as well.

But what we didn't do in that paper was examine the degree to which people got used to changes in the weather over time. That's what we were able to do in this paper with Fran, and what we saw was, as Fran points out, troubling: people weren't substantially adapting to these temperature shocks over time, or to longer-term changes in climate; they just weren't talking about them as much.

So if you think though that there is no adaptation, then yeah, if the world becomes much hotter, on the hot end of things — so in the summer, in the northern hemisphere for example — people will probably be a bit grumpier. Importantly though, on the other side of things, in the wintertime, if you have warming, you might expect that people are in somewhat better moods because they’re able to enjoy nicer weather outside. So it is a little bit of a double-edged sword in that way, but again important that we don’t see that people are adapting, which is pretty critical.

Ariel: Okay. So we can potentially expect at least the possibility of a decrease in life satisfaction just because of weather, without us even really appreciating that it's the weather that's doing it to us?

Nick: Yes, during hotter periods. The converse is that during the wintertime, in the northern hemisphere, we would have to say that warming temperatures, people would probably enjoy for the most part. If it was supposed to be 35 degrees Fahrenheit outside and it’s now 45 Fahrenheit, that’s a bit more pleasant. Now you can go with a lighter jacket.

So there will be those small positive benefits — although, as Fran is probably going to talk about here in a little bit, there are other big countervailing negatives that we need to consider too.

Fran: What I like about this paper that Nick and Patrick wrote previously on sentiment is that they have these comparisons, to it being a Monday or to a home team loss. Sometimes it's hard to put these measures in perspective. Mondays on average make people miserable, and it being very, very hot out also makes people miserable, in kind of similar ways to it being a Monday.

Nick: Yeah. We found that particularly cold temperatures, for example, reduced positive sentiment by a magnitude equivalent to, say, a small earthquake in your location. So the magnitudes of the effects of the weather are much larger than we necessarily thought they would be, which we thought was interesting. There was also a whole big literature from psychology and economics and political science that had looked at weather and various outcomes, and found that sometimes the effect sizes were very large and sometimes they were effectively zero. So we tried to basically just provide the answer to that question in that paper: the weather matters.

Ariel: I want to go back to the idea of whether or not extreme events will be normalized, because I tend to be slightly cynical — and maybe this is hopeful for once — that the economic cost of the extreme events is not something we would normalize to, that we would not get used to having to spend billions of dollars a year, whatever it is, to rebuild cities.

And Fran, I think that touches on some of your work if I’m correct, in that you look at what some of these costs of climate change would be. So first, is that correct? Is that one of the things that you look at?

Fran: Yeah. A large component of my work has been on improving the representation of climate change damages, that is, what we know from the physical sciences about how climate change affects the things that we care about, and including that in the thing called the social cost of carbon, a measure that's very relevant for regulatory and policy analysis for climate change.

Ariel: Can you explain what the social cost of carbon is? What is being measured?

Fran: So think about when we emit a ton of CO2: that ton of CO2 goes off into the atmosphere and it's going to affect the climate; that change in the climate is going to have consequences around the world in many different sectors; and the CO2 is going to stay in the atmosphere for a long time, so those effects are going to persist far out into the future.

The social cost of carbon is really just an accounting exercise that tries to quantify all those impacts, add them up together, put them in common units, and assign that as the cost of the ton of CO2 you emitted. You can see from that description why this is an ambitious exercise: theoretically, it should cover all these climate change impacts around the world, for all time. And then there's another step, which is that in order to aggregate these, to add them up, you need to put everything into common units. The units we use are dollars, so there's a critical economic valuation step: these things happen in agriculture, or along coastlines, or they affect mortality risk, and you have to take all of them, put them into some kind of common unit, and value them all.

Depending on what type of impact you're talking about, that's more or less challenging. But it's an important number, because at least in the United States we have a requirement that all regulations have to pass a cost-benefit analysis. In order to do a cost-benefit analysis of climate regulation, you need to understand the benefits of not emitting CO2. So pretty much any policy that affects emissions needs to account for these damages in some way. That's why this is very directly relevant to policy.

Ariel: I want to keep looking at what this means. One of your papers has a sentence that reads, "impacts on agriculture increase from net benefits of $2.7 per ton of CO2 to net costs of $8.5 per ton of CO2." That seemed like a really good example for you to explain what these costs actually mean.

Fran: Yeah. This was an exercise I did a couple of years ago with coauthors Tom Hertel, Uris Baldos, and Delavane Diaz. The idea was that we now know a lot about how climate change affects crop yields; there's been an awful lot of work on that in economics and the agricultural sciences. But that was essentially not represented in the social cost of carbon, where our estimates of climate change damages came from studies done in the late 80s or early 90s, and our understanding of how climate change will affect agriculture has really changed since then.

What those numbers represent: the benefit of $2.7 per ton is what is currently represented in the models that calculate the social cost of carbon. The fact that it's a benefit indicates that these models were assuming that agriculture on net is going to gain from climate change, largely because of a combination of CO2 fertilization and a fair bit of assumption that crops in most of the world are going to benefit from higher temperatures. Now we know that's more or less not the case.

When we look at how we think temperature and CO2 are going to affect the major crops around the world, we use these estimates from the IPCC, and then we introduce those into an economic model. This is the valuation step. That economic model accounts for the fact that countries can shift what they grow, change their consumption patterns, and change their trading partners, a lot of the economic adjustments that we know can be made. We find a fairly large negative effect of climate change on agriculture, which amounts to about $9 per ton of CO2, and those are discounted damages: you emit a ton of CO2 today, and that's the dollar value today of all the future damages that ton of CO2 will do via the agricultural sector.
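To make the "dollar value today of all future damages" concrete, here is a minimal sketch of that discounting arithmetic. The damage path and discount rate below are made-up illustrative numbers, not values from the paper or from any of the SCC models.

```python
# Illustrative numbers only: neither the damage path nor the discount rate
# comes from the paper or from the SCC models discussed above.
discount_rate = 0.03      # assumed constant annual discount rate
horizon_years = 300       # damages persist while the CO2 stays in the atmosphere

# Hypothetical extra agricultural damage, in dollars per year, caused by
# emitting one additional ton of CO2 today (ramping up over 50 years).
marginal_damage = [0.5 * min(t / 50, 1.0) for t in range(horizon_years)]

# Present value: discount each year's damage back to today and sum.
present_value = sum(
    d / (1 + discount_rate) ** t for t, d in enumerate(marginal_damage)
)
print(f"Discounted damages: ${present_value:.2f} per ton of CO2")
```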

Ariel: As a reminder, how many tons of CO2 were emitted, say, last year, or the year before? Something that we know?

Fran: We do know that. I'm not sure I can tell you off the top of my head. But I would caution you that you don't want to take this number and just multiply it by the total tons emitted, because this is a marginal value; it's merely about whether we emit this one additional ton or not. It's really not a value that can be used for saying, "Okay, the total damages from climate change are X." There's a distinction between total damages and marginal damages, and the social cost of carbon is very much about marginal damages.

So it's about, at the margin, how much should we tax CO2? It's really not going to tell you whether we should be on a two-degree pathway, a four-degree pathway, or a 1.5-degree pathway. For that you need a different analysis.

Ariel: I want to ask one more follow-up question to this, and then I want to get onto some of the other papers of Nick’s. What are the cost estimates that we’re looking at right now? What are you comfortable saying that we’re, I don’t know, losing this much money, we’re going to pay this much money, we’re going to negatively be impacted by X number of dollars?

Fran: The Obama administration went through a fairly comprehensive exercise to take the existing models and standardize them in certain ways, to try to say what social cost of carbon value we should use. They came up with a number that's around $40 per ton of CO2. If you take that number as a benchmark, there's obviously a lot of uncertainty around it, and I think it's fair to say a lot of that uncertainty is on the high end rather than the low end. If you think about the probability distribution around that existing number, there are a lot of reasons why it might be higher than $40 per ton, and a few, but not a ton, of reasons why it might be lower.

Ariel: Nick, was there anything you wanted to add to what Fran has just been talking about?

Nick: Yeah. The only thing I would add is that I totally agree the uncertainty is on the upper bound of the estimate of the social cost of carbon, and possibly on the extreme upper bound. There are unknowns that we can't estimate from the historical data, in terms of figuring out what happens in the natural system and how that translates through to the social system and the social costs. Fran and I are basically just doing the best we can with the historical evidence that we can bring to bear on the question, but there are giant "unknown unknowns," to quote Donald Rumsfeld.

Ariel: I want to sort of quantify this ever so slightly. I Googled it, and it looks like we are emitting in the tens of billions of tons of carbon each year? Does that sound right?

Fran: Check that it’s carbon and not CO2. I think it’s eight to nine gigatons of carbon.

Ariel: Okay.

Nick: CO2 equivalence.

Ariel: Anyway, it’s a lot.

Nick: It’s a lot, yeah.

Ariel: That’s the point.

Nick: It's a lot, and it's increasing. I think 2018 was a blip upward in terms of the rate of emissions. We need to be decreasing, and we're still increasing. Not great.
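For readers trying to reconcile "tens of billions of tons" with "eight to nine gigatons of carbon": mass of carbon converts to mass of CO2 by the molar-mass ratio 44/12, so the two figures agree. A quick check:

```python
# Mass of carbon converts to mass of CO2 by the molar-mass ratio 44/12
# (one carbon atom plus two oxygen atoms). 9 GtC/year is Fran's rough figure.
gt_carbon = 9.0
c_to_co2 = 44.0 / 12.0  # approximately 3.67

print(f"{gt_carbon} GtC/year ~ {gt_carbon * c_to_co2:.0f} GtCO2/year")  # ~33
```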

Ariel: All right. We'll take a quick break from the economic side of things and what this will financially cost us, and look at some of the human impacts that we may not necessarily be thinking about, but which Nick has been looking into. I'm just going to go through a list of quick questions about a few papers that I looked at.

The first one I looked at found that, apparently — and this makes sense when I think about it — climate change is going to impact our physical activity, because it's too hot in places, or things like that. I was wondering if you could talk a little bit about the research you did into that and what you think the health implications are.

Nick: Yeah, totally. So I like to think about the climate impacts that are not necessarily easily and readily and immediately translated into dollar value because I think really we live in a pretty complex system, and when you turn up the temperature on that complex system, it’s probably going to affect basically everything. The question is what’s going to be affected and how much are the important things going to be affected? And so a lot of my work has focused on identifying things that we hadn’t yet thought about as social scientists in doing the social impact estimates in the cost of carbon and just raising questions about those areas.

Physical activity was one. The idea to look at that actually came from back in 2015 — there was a big heat wave in San Diego when I was living there, and I was in a regular running regimen. I would go running at 4:00 or 5:00 PM, but there were a number of weeks, definitely strings of days, where it was 100 degrees or more in October in San Diego, which is very unusual. At 4:00 PM it would be 100 degrees and kind of humid, so I just didn’t run as much for a couple of weeks, and that threw off my whole exercise schedule. I was like, “Huh, that’s an interesting impact of heat that I hadn’t really heard about.”

So I was like, "Well, I know this big data set that collects people's reported physical activity over time, and has a decade's worth of data on, I think, over a million randomly sampled US citizens." I had those data, and I was like, "Well, I wonder, if you look at the weather and the climate that these people are living in, does that influence their exercise patterns?" What we found was a little bit surprising to me, because I had thought about it on the hot end: "Oh, I stopped running because it was too hot." But the reality is that temperature, and also rainfall, impact our physical activity patterns across the full distribution.

When it's really cold outside, people don't report being very physically active, and one of the main reasons is that one of the primary ways Americans get physical activity is by going outside for a run or a jog or a walk. When it's very nasty outside, people report not being as physically active. We saw on the cold end of the distribution that as temperatures warmed up, people exercised more, up to a relatively high peak in that function. It was an inverted-U shape, and the peak was relatively high in terms of temperature: somewhere around 84 degrees Fahrenheit.

What we realized actually is that at least in the US, at least in some of the northern latitudes in the US, people might exercise more as temperatures warm up to a point. They might exercise more in the wintertime, for example. That was this small little silver lining in what is otherwise, from my research and from Fran’s research and most research on this topic, a cascade of negative news that is likely to result from climate change. But the health impacts of being more physically active are positive. It’s one of the most important things we can do for our health. So a small, positive impact of warming temperatures offset by all the other things that we’ve found.
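An inverted-U response like this is often summarized by fitting a quadratic and reading off its vertex. The sketch below uses fabricated activity data, not the study's, just to show how a peak in the low-to-mid 80s Fahrenheit falls out of such a fit.

```python
import numpy as np

# Fabricated activity data (arbitrary units) at various daily maximum
# temperatures in Fahrenheit; not the study's data.
temps = np.array([10, 25, 40, 55, 70, 84, 95, 105], dtype=float)
activity = np.array([0.20, 0.40, 0.60, 0.80, 0.95, 1.00, 0.85, 0.60])

# A quadratic captures an inverted U; its vertex is the activity-maximizing
# temperature (the study's estimate lands near 84F).
a, b, c = np.polyfit(temps, activity, deg=2)
peak_temp = -b / (2.0 * a)
print(f"Estimated activity peak: {peak_temp:.0f}F")
```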

Ariel: I know from personal experience I definitely don’t like to run in the winter. I don’t like ice, so that makes sense.

Nick: Ice, frostbite.

Ariel: Yeah.

Nick: All these things are … yeah. So just observationally, if I look out my window, and there’s a running path near me, I see dramatically more people on a sunny, mild day than I do during the middle of the winter. That’s how most people get their exercise. A lot of people, we know from the public health literature, if they’re not going out for a walk or a stroll, they’re not really getting any physical activity at all.

Ariel: Okay. So potential good news.

Nick: A little bit. Just a little bit.

Fran: Yeah. Nick moved from San Diego to Boston, so I think he’s got a better appreciation of the benefits of warmer wintertime temperatures.

Nick: I do! Although, and this is an important limitation of that study, we didn't really, again, look at adaptation over time. What I found moving to Boston was that I got used to the cold winters much faster than I thought I would, coming from San Diego, and I now do go running in the wintertime here, though I thought I would barely be able to go outside. So perhaps that's a positive thing in terms of our ability to adapt on the hotter end as well, and perhaps it undercuts a little bit the degree to which warming during the winter might increase physical activity.

This is a broader and more general point. A lot of these studies — it’s pretty hard to look at long-term adaptation over time because some of the data sets that we have just don’t give us enough span of time to really see people adapt their behaviors within person. So, many of the studies are kind of estimating the direct effect of temperature, for example, on physical activity, and not estimating how much long-term warming has changed people’s physical activity patterns. There are some studies that do that with respect to some outcomes — for example, agricultural yields. But it’s less common to do that with some of the public health-related outcomes and psychological-related outcomes.

Ariel: I want to ask about some of these other studies you’ve done as well, but do you think starting these studies now will help us get more research into this in the future?

Nick: Yeah. The more and better data we have, the better we're going to be able to answer some of these questions. For example, in the physical activity paper, and a sleep paper we also did, the self-report data that we used are indeed just self-report data. We're now able to get access to what are called actigraph data, data that come from monitors like Fitbit that actually track people's sleep and physical activity. We're working on those follow-up studies, and the more data we have, and the longer we have those data, the better we can identify potential adaptation over time.

Ariel: The sleep study was actually where I was going to go next. It seemed nicely connected to the physical activity one. Basically we’ve been told for years to get eight hours of sleep and to try to set the temperatures in our rooms to be cooler so that our quality of sleep is better. But it seems that increasing temperatures from climate change might affect that. So I was hoping you could weigh in on that too.

Nick: Yeah. I think you said it pretty well. The results in that paper basically indicate that higher nighttime temperatures outside, higher ambient temperatures outside, increase the frequency that people report a bad night of sleep. Basically what we say is absent adaptation, climate change might worsen human sleep in the future.

Now, one of the primary ways you adapt, as you just mentioned, is by turning the AC on, keeping it cooler in the room in the summertime, and trying to fight the fact that it’s — as it was in San Diego — it’s 90 degrees and humid at 12:00 AM. The problem with that is that a lot of our electricity grid is currently still on carbon. Until we decarbonize the grid, if we’re using more air conditioning to make it cooler and make it comfortable in our rooms in the summers, we are emitting more carbon.

That points to something else that Fran and I have talked about and others are starting to work on: the idea that it's not a one-way street. In other words, if the climate system is changing, and it's changing our behaviors, whether in order to adapt to it or just changing them outright, we are potentially altering the amount of carbon that we put back into the system, a positive feedback loop driven by humans this time, as opposed to permafrost and things like that. So it's a big, complex equation, and that makes estimating the social cost of carbon all the harder, because it's no longer just a one-way street. If emitting carbon has behavioral effects that cause the emission of more carbon, then you have a harder-to-estimate function.

Fran: Yeah, you’re right, and it is hard. I often get questions of like, “Oh, is this in the social cost of carbon? Is this?” And usually the answer is no.

Ariel: Yeah. I guess I’ve got another one sort of like that. I mean, I think studies indicate pretty well right now that if you don’t get enough sleep, you’re not as productive at work, and that’s going to cost the economy as well. Is stuff like that also being considered or taken into account?

Fran: I think in general, researchers' idea a few decades ago was very much that there was a very limited set of pathways by which a developed economy could be affected by climate. We could enumerate those, and they were things like agriculture, forestry, and coastlines affected by sea level rise. The newer work being done now, like Nick's papers that we just talked about and a lot of other work, is showing that we actually seem to be very sensitive to temperature on a number of fronts, and that has quite pervasive economic effects.

And so, yeah, the sleep question is a huge one, right? If you don't get a good night's sleep, that affects how much you can learn in school the next day; it affects your productivity at work the next day. We do see evidence that temperature affects labor productivity in developed countries, even in sectors that you'd think should be relatively well insulated, say because the work is being done inside. There's evidence too that high temperatures affect how well students can learn in school and their test scores, which potentially has a very long-term effect on their educational trajectory, their ability to accumulate human capital, and their earning potential in the future.

And so these newer findings, I think, are suggesting that even developed economies are sensitive to climate change in ways that we're only beginning to learn, and pretty much none of that is currently represented in our estimates of the social cost of carbon.

Nick: Yeah, that's a great point. To add an example to that, I did a study last year in which I looked at government productivity, so government workers' productivity. We had seen a number of these studies, as Fran mentioned, showing that private sector productivity declines, and I was wondering whether government workers who are tasked with overseeing our safety, especially in times of heat stress and other forms of stress, were themselves affected by heat stress and other forms of environmental stress.

We indeed found that they were. We found that police officers were less likely to stop people in traffic stops, even though the risk of traffic fatalities increases, and crime increases, with higher temperatures. We found that food safety inspectors were less likely to do inspections: the probability of an inspection declined as the temperature increased, even though the risk of a violation, conditional on an inspection happening, increased. So it's more likely that there's a food safety problem when it's hot out, but food safety inspectors were less likely to go out and do inspections.

That’s another thing that fits into, “Okay, we’re affected in really complex ways.” Maybe it’s the case that the food safety inspectors were less likely to go do their job because they were really tired because they didn’t sleep well the night before, or perhaps because they were grumpy because it was really hot outside. We don’t know exactly, but these systems are indeed really complicated and probably a lot of things are in play all at once.

Ariel: Another one that you have looked at, and that I think is also important to consider in this whole complex system being impacted by climate change, is democratic processes.

Nick: Yeah, yeah. I'm a political scientist by training, and what we political scientists do is think a lot about politics, the democratic process, voting, and turnout. One of the things that we know best in political science is this thing called retrospective voting, or economic voting: basically the idea that people vote largely based either on how well they individually are doing, or on how well they perceive their society is doing under the current incumbent. In the US, for example, if the economy is doing well, the incumbent faces better prospects than if the economy is doing poorly. If individuals perceive that they are doing well, the incumbent faces better prospects.

I basically just sat down and thought for a while, and realized that climate change, across all these dimensions, is likely to worsen both economic well-being and personal psychological and physiological well-being. I wondered whether it might somewhat disrupt the way democracies function, and the way elections function in democracies. For example, if you're exposed to hotter temperatures, there are lots of reasons to suspect that you might perceive yourself as being less well-off, and whoever's in office, you might just be a little bit less likely to vote for them in the next election.

So I put together a bunch of election results from a variety of countries and democratic institutions around the world, and looked at the effect of hotter temperatures on incumbent politicians' prospects in the upcoming elections: what were the effects of the temperatures prior to the election on the electoral success of the incumbent? What I found was that with unusual increases in temperature the year prior to an election, especially in hotter places, the incumbent's prospects declined in that election. Incumbent politicians were more likely to get thrown out of office when temperatures were unusually warm, especially in hotter places.

And that, as a political scientist, is a little bit troubling because it could be two things. It could be the case that politicians are being thrown out of office because they don’t respond well to the stressors associated with added temperature. So they could, for example, if there was a heatwave, and it caused some crop losses, maybe those politicians didn’t do a good enough job helping the people who lost those crops. But it also might just be the case that people are grumpier, and they’re not feeling as good, and there’s really no way the politician can respond, or the politician has limited resources and can only respond so much.

And if that's the driving function, then what you see is an exogenous shock leading to the ouster of a democratically elected politician, perhaps not directly related to the performance of that politician. And that can lead to added electoral churn: if politicians are losing office with increasing frequency, it can shorten the electoral time horizons that politicians have. If they think that every election they stand a real chance of losing office, they may be less likely to pursue policies whose benefits play out over two or three election cycles. That was the crux of that paper.
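For readers curious what "looking at the effect of hotter temperatures on incumbent prospects" can look like in practice, here is a minimal sketch of a fixed-effects regression of that shape. The file, column names, and specification are invented for illustration and are not the paper's actual model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-election panel; columns are invented for illustration.
df = pd.read_csv("elections.csv")
# expected columns: country, year, incumbent_share, temp_anomaly

# Country fixed effects absorb time-invariant differences between polities,
# so the temp_anomaly coefficient reflects within-country variation; a
# negative estimate matches the incumbent-punishment result described above.
model = smf.ols("incumbent_share ~ temp_anomaly + C(country)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]}
)
print(model.params["temp_anomaly"], model.bse["temp_anomaly"])
```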

Ariel: Fran, did you have anything you wanted to add to that?

Fran: I think it's a really, really fascinating question. We think of these as really fundamental institutions: when we go to the ballot box and cast our vote, there are a lot of factors that go into that, right? The very fact that you can pick up any kind of temperature signal in that is surprising to me, and I think it's a really important finding. And then trying to pin down the mechanisms is interesting for playing out the scenarios of how climate change proceeds, in terms of how it changes the political environment in which we're operating and has, like Nick said, these potentially long-term effects on the types of issues politicians are willing to work on. It's really important, and I think it's something that needs more work.

Nick: Fran makes an excellent point embedded in there, which is the understanding of what we call causal mediation. In other words, if you see that hot temperatures lead to a reduction in GDP growth, why is that? What exactly is causing that? GDP growth is this huge aggregate of all of these different things. Why might temperature be causing that? Or even, for example, if you see that temperature is affecting people’s sleep quality, why is that the case? Is it because it’s influencing the degree to which people are stressed out during the day because they’re grumpier, they’re having more negative interactions, and then they’re thinking about that before they fall asleep? Is it due to purely physiological reasons, circadian rhythm and sleep cascades?

The short of it is, we don't actually have very good answers to most of these questions for most of the climate impacts that we've looked at, and it's pretty critical to have better answers. If you want to adapt to coming climate changes, you'd like to spend your policy money on the things that matter most in those equations: the things reducing GDP growth, causing mental health problems, or worsening people's mood. You'd like to be able to tell people precisely what they can do to adapt, and also spend money precisely where it's needed, and it's just strictly difficult science to do that well.

Ariel: I want to go back real quick to something you said earlier: the idea that if politicians know they're unlikely to get elected in the next cycle, they're also unlikely to plan long term. Especially when we're looking at a situation like climate change, where we need politicians who can plan long term, could this actually exacerbate our short-term thinking?

Nick: Yeah. That’s what I was concerned about, and still something that I am concerned about. As you get more and more extremes that are occurring more and more regularly and politicians are either responding well or not responding well to those extremes it may be somewhat like our weather and expectations paper — similar underlying psychological dynamics — which is just that people become more and more focused on their recent past, and their recent experience in history, and what’s going on now.

And if that’s the case then if you’re a politician, and you’ve had a bunch of hurricanes, or you’re dealing with the aftermath of hurricanes in your district, really should you be spending your policy efforts on carbon mitigation, or should you be trying to make sure that all of your constituents right now are housed and fed? That’s a little bit of a false dichotomy there, but it isn’t fully a false dichotomy because politicians only have so many resources, and they only have so much time. So as their risk of losing an election goes up due to something that is more immediate, politicians will tend to focus on those risks as opposed to longer-term risks.

Ariel: I feel like in that example, too, in defense of the politicians, if you actually have to deal with people who are without homes and without food, that is sort of the higher priority.

Nick: Totally. I mean, I did a bunch of field work in Sub-Saharan Africa for my graduate studies and spent a lot of time in Malawi and South Africa, and talking to politicians there about how they felt about climate change, and specifically climate change mitigation policy. And half the time that I asked them they just looked at me as if I was crazy, and would explicitly say, like, “You must be crazy if you think that we have a time horizon that gives us 20 years to worry about how our people are doing 20 years from now when they can’t feed themselves, and don’t have running water, and don’t have electricity right now. We’re working on the day to day things, the long term perspective just gets thrown out the window.” I think to a lesser degree that operates in every democratic polity.

Fran: This gets back to that question that we were talking about earlier: Are extreme events kind of fundamentally different in motivating action to reduce emissions? And this is exactly the reason why I’m not convinced that it’s the case, in that when you have the repeated extreme events, yes, there’s a lot of focus on rebuilding or restoring or kind of recovering from those events — potentially at the detriment of longer-term, less immediate action that would affect the long-term probability of getting those events in the future, which is reducing emissions.

And so I think it’s a very complex causal argument to make in the face of a hurricane or a catastrophe that you need to be reducing emissions to address that, right, and that’s why I’m not convinced that just getting more and more disasters is going to automatically lead to more action on climate change. I think it’s actually almost this kind of orthogonal process that generates the political will to do something about climate change.

Having these disasters and operating in this very resource-constrained world — that’s a world in which action on climate change might be less likely, right? Doing things that are quite costly involves a lot of political will and political leadership, and doing that in an environment where people are feeling vulnerable and feeling kind of exposed to natural disasters I think is actually going to be more difficult.

Nick: Yeah. So that’s an excellent point, Fran. I think you could see both things operating, which is I think you could see that people aren’t necessarily adapting their expectations to giant wildfires every single summer, that they realize that something is off and weird about that, but that they just simply can’t direct that attention to doing something about climate change because literally their house just burnt down. So they’re not going to be out in the streets lobbying their politicians as directly because they have more things to worry about. That is troubling to me, too.

Ariel: So that, I think, is a super, super important point, and now I have something new to worry about. It makes sense that the local communities that are being directly impacted by these horrific events have to deal with what’s just happened to them, but do we see an increase in external communities looking at what’s happening and saying, “Oh, we’ve got to stop this, and because we weren’t directly impacted we actually can do something?”

Nick: Anecdotally, somewhat yes. I mean, for example, if you look at the last couple of summers and the wildfire season, when there are big wildfire outbreaks the news media does a better than average job at linking that extreme weather to climate change, and starting to talk about climate change.

So if it is the case that people consume that news media and are now thinking about climate change more, that is good. And I think actually from some of the more recent surveys we’ve actually seen an uptick in awareness about climate change, worry about climate change, and willingness to list it as a top priority. So there are some positive trends on that front.

The bigger question is still an empirical one, though, which is what happens when you have 10 years of wildfires every summer. Maybe people are now not talking about it as much as they did in the very beginning.

Ariel: So I have two final questions for both of you. The first is: is there something that you think is really important for people to know or understand that we didn’t touch on?

Nick: I would say this, and this is maybe more extreme than Fran would say, but we are in really big trouble. We are in really, really big trouble. We are emitting more and faster than we were previously. We are probably dramatically underestimating the social cost of carbon because of all the reasons that we noted here and for many more, and the one thing that I kind of always tell people is don’t be lulled by the relatively banal feeling of your sleep getting disrupted, because if your sleep is disrupted it’s because everything is being disrupted, and it’s going to get worse.

We’ve not seen even a small fraction of the likely total cost of climate change, and so yeah, be worried, and ideally use that worry in a productive way to lobby your politicians to do something about it.

Fran: I would say we talked about the social cost of carbon and the way it’s used, and I think sometimes it does get criticized because we know there’s a lot of things that it doesn’t capture, like what Nick’s been talking about, but I also know that we’re very confident that it’s greater than zero at this point, and substantially greater than zero, right? So the question of, should it be 40 dollars a ton, or should it be 100 dollars a ton, or should it be higher than that, is frankly quite irrelevant when right now we’re really not putting any price on carbon, we’re not doing any kind of ambitious climate policy.

Sometimes I think people get bogged down in these arguments of, is it bad, or is it catastrophic, and frankly either way we should be doing something to reduce our emissions, and they shouldn’t be going up, they should be going down, and we should be doing more than we’re doing right now. And arguing about where we end that process, or when we end that process of reducing our emissions is really not a relevant discussion to be having right now because right now everyone can agree that we need to start the process.

And so I think not getting too hung up on should it be two degrees, should it be 1.5, but just really focused on let’s do more, and let’s do it now, and let’s start that, and see where that gets us, and once we start that process and can begin to learn from it, that’s going to take us a long way to being where we want to be. I think these questions of, “Why aren’t we doing more than we’re doing now?” are the most important and some of the most interesting around climate change right now.

Nick: Yeah. Let’s do everything we can to avoid four or five degrees Celsius, and we can quibble over 1.5 or two later. Totally agree.

Ariel: Okay. So I’m going to actually add a question. So we’ve got two more questions for real this time I think. What do we do? What do you suggest we do? What can a listener right now do to help?

Fran: Vote. Make climate change your priority when you’re thinking about candidates, when you’re engaged in the democratic process, and when you’re talking to your elected representative — reach out to them, and make sure they know that this is the priority for you. And I would also say talk to your friends and family, right? Like these scientists or economists talking about this, that’s not something that’s going to reach everyone, right, but reaching out to your network of people who value your opinion, or just talking about this, and making sure people realize this is a critical issue for our generation, and the decisions we take now are going to shape the future of the planet in very real ways, and collectively we do have agency to do something about it.

Nick: Yes. I second all of that. I think the key is that no one can convince your friends and family that climate change is a threat perhaps better than you, the listener, can. Certainly Fran and I are not going to be able to convince your friends, and that’s just the way that humans work. We trust those that we are close to. So if we want to get a collective movement to start doing something about carbon, it’s going to have to happen via the political process, and it’s also just going to have to happen in our social networks, by actually going out there and talking to people about it. So let’s do that.

Ariel: All right. So final question, now that we’ve gone through all these awful things that are going to happen: what gives you hope?

Fran: If we think about a world that solves this problem, that is a world that has come together to work on a truly global problem. The reason why we’ll solve this problem is because we recognize that we value the future, that we value people living in other countries, people around the world, and that we value nature and nonhuman life on the planet, and that we’ve taken steps to incorporate those values into how we organize our life.

When we think about that, that is a very big ask, right? We shouldn’t underestimate just how difficult this is to do, but we should also recognize that it’s going to be a really amazing world to live in. It’s going to provide a kind of foundation for all kinds of cooperation and collective action I think on other issues to build a better world.

Recognizing that that’s what we’re working towards, these are the values that we want to reflect in our society, and that is a really positive place to be, and a place that is worth working towards — that’s what’s giving me hope.

Nick: That’s a beautiful answer, Fran. I agree with that. It would be a great world to live in. The thing that I would say is giving me hope is actually if I had looked forward in 2010 and said, “Okay, where do I think that renewables are going to be? Where do I think that the electrification of vehicles is going to be?” I would have guessed that we would not be anywhere close to where we are right now on those fronts.

We are making much more progress on getting certain aspects of the economy and our lives decarbonized than I thought we would have been, even without any real carbon policy on those fronts. So that’s pretty hopeful for me. I think that as long as we can continue that trend we won’t have everything go poorly, but I also hesitate to hinge too much of our fate on the hope that technological advances from the past will continue at the same rate into the future. At the end of the day we probably really do need some policy, and we need to get together and engage in collective action to try and solve this problem. I hope that we can.

Ariel: I hope that we can, too. So Nick and Fran, thank you both so much for joining us today.

Nick: Thanks for having me.

Fran: Thanks so much for the interesting conversation.

Ariel: Yeah. I enjoyed this, thank you.

As always, if you’ve been enjoying the show, please take a moment to like it, share it, and follow us on your preferred podcast platform.


AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 2)

The space of AI alignment research is highly dynamic, and it’s often difficult to get a bird’s eye view of the landscape. This podcast is the second of two parts attempting to partially remedy this by providing an overview of technical AI alignment efforts. In particular, this episode seeks to continue the discussion from Part 1 by going in more depth with regards to the specific approaches to AI alignment. In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.

Topics discussed in this episode include:

  • Embedded agency
  • The field of “getting AI systems to do what we want”
  • Ambitious value learning
  • Corrigibility, including iterated amplification, debate, and factored cognition
  • AI boxing and impact measures
  • Robustness through verification, adversarial ML, and adversarial examples
  • Interpretability research
  • Comprehensive AI Services
  • Rohin’s relative optimism about the state of AI alignment

You can take a short (3 minute) survey to share your feedback about the podcast here.

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Lucas: Hey everyone, welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today’s episode is the second part of our two part series with Rohin Shah, developing an overview of technical AI alignment efforts. If you haven’t listened to the first part, we highly recommend that you do, as it provides an introduction to the varying approaches discussed here. The second part is focused on exploring AI alignment methodologies in more depth, and nailing down the specifics of the approaches and lenses through which to view the problem.

In this episode, Rohin will begin by moving sequentially through the approaches discussed in the first episode. We’ll start with embedded agency, then discuss the field of getting AI systems to do what we want, and we’ll discuss ambitious value learning alongside this. Next, we’ll move to corrigibility, in particular, iterated amplification, debate, and factored cognition.

Next we’ll discuss placing limits on AI systems, things of this nature would be AI boxing and impact measures. After this we’ll get into robustness which consists of verification, adversarial machine learning, and adversarial examples to name a few.

Next we’ll discuss interpretability research, and finally comprehensive AI services. By listening to the first part of the series, you should have enough context for these materials in the second part. As a bit of an announcement, I’d love for this podcast to be particularly useful and interesting for its listeners. So I’ve gone ahead and drafted a short three minute survey that you can find linked on the FLI page for this podcast, or in the description of where you might find this podcast. As always, if you find this podcast interesting or useful, please make sure to like, subscribe and follow us on your preferred listening platform.

For those of you that aren’t already familiar with Rohin, he is a fifth year PhD student in computer science at UC Berkeley with the Center for Human Compatible AI working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter. With that, we’re going to start off by moving sequentially through the approaches just enumerated. All right. Then let’s go ahead and begin with the first one, which I believe was embedded agency.

Rohin: Yeah, so embedded agency. I kind of want to just defer to the embedded agency sequence, because I’m not going to do anywhere near as good a job as that does. But the basic idea is that we would like to have this sort of theory of intelligence, and one major blocker to this is the fact that all of our current theories, most notably reinforcement learning, make this assumption that there is a nice clean boundary between the environment and the agent. It’s sort of like the agent is playing a video game, and the video game is the environment. There’s no way for the environment to actually affect the agent. The agent has this defined input channel, takes actions, those actions get sent to the video game environment, the video game environment does stuff based on that and creates an observation, and that observation was then sent back to the agent who gets to look at it, and there’s this very nice, clean abstraction there. The agent could be bigger than the video game, in the same way that I’m bigger than tic tac toe.

I can actually simulate the entire game tree of tic tac toe and figure out what the optimal policy for tic tac toe is. There’s actually this cool XKCD that just shows you the entire game tree; it’s great.
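
As an illustrative aside (my own sketch, not something from the episode): this is what it looks like, in code, for an agent to be “bigger” than its environment. A few dozen lines of Python can hold the whole tic tac toe game tree and compute optimal play exactly, which is exactly the kind of perfect model you lose once the environment contains the agent.

```python
# Illustrative only: exhaustive minimax over the full tic tac toe game tree.
# The board is a 9-character string; 'X' maximizes, 'O' minimizes.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (game value, best move) for `player` on `board`."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    if ' ' not in board:
        return 0, None  # draw
    best = None
    for i, cell in enumerate(board):
        if cell != ' ':
            continue
        value, _ = minimax(board[:i] + player + board[i + 1:],
                           'O' if player == 'X' else 'X')
        if best is None:
            best = (value, i)
        elif player == 'X' and value > best[0]:
            best = (value, i)
        elif player == 'O' and value < best[0]:
            best = (value, i)
    return best

print(minimax(' ' * 9, 'X'))  # (0, 0): perfect play from both sides is a draw
```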

So in the same way in the video game setting, the agent can be bigger than the video game environment, in that it can have a perfectly accurate model of the environment and know exactly what its actions are going to do. So there are all of these nice assumptions that we get in video game environment land, but in real world land, these don’t work. If you consider me on the Earth, I cannot have an exact model of the entire environment because the environment contains me inside of it, and there is no way that I can have a perfect model of me inside of me. That’s just not a thing that can happen. Not to mention having a perfect model of the rest of the universe, but we’ll leave that aside even.

There’s the fact that it’s not super clear what exactly my action space is. Once there is now a laptop available to me, does the laptop start counting as part of my action space? Do we only talk about motor commands I can give to my limbs? But then what happens if I suddenly get uploaded and now I just don’t have any limbs anymore? What happened to my actions, are they gone? So Embedded Agency broadly factors this question out into four sub problems. I associate them with colors, because that’s what Scott and Abram do in their sequence. The red one is decision theory. Normally decision theory is: consider all possible actions, simulate their consequences, and choose the one that will lead to the highest expected utility. This is not a thing you can do when you’re an embedded agent, because the environment could depend on what policy you use.

The classic example of this is Newcomb’s problem, where part of the environment is an all-powerful being, Omega. Omega is able to predict you perfectly, so it knows exactly what you’re going to do, and Omega is 100% trustworthy, and all those nice simplifying assumptions. Omega provides you with the following game. He’s going to put two transparent boxes in front of you. The first box will always contain $1,000, and the second box will either contain a million dollars or nothing, and you can see this because they’re transparent. You’re given the option to either take one of the boxes or both of the boxes, and you just get whatever’s inside of them.

The catch is that Omega only puts the million dollars in the box if he predicts that you would take only the box with the million dollars in it, and not the other box. So now you see the two boxes, and you see that one box has a million dollars, and the other box has a thousand dollars. In that case, should you take both boxes? Or should you just take the box with the million dollars? So the way I’ve set it up right now, it’s logically impossible for you to do anything besides take the million dollars, so maybe you’d say okay, I’m logically required to do this, so maybe that’s not very interesting. But you can relax this to a problem where Omega is 99.999% likely to get the prediction right. Now in some sense you do have agency. You could choose both boxes and it would not be a logical impossibility, and you know, both boxes are there. You can’t change the amounts that are in the boxes now. Man, you should just take both boxes because it’s going to give you $1,000 more. Why would you not do that?

But I claim that the correct thing to do in this situation is to take only one box because the fact that you are the kind of agent who would only take one box is the reason that the one box has a million dollars in it anyway, and if you were the kind of agent that did not take one box, took two boxes instead, you just wouldn’t have seen the million dollars there. So that’s the sort of problem that comes up in embedded decision theory.
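
To make the asymmetry concrete, here is a quick expected-value sketch of the relaxed version of the problem (my own illustration, using the payoffs from the story and a 99.999% accurate Omega). This is the evidential style of reasoning that favors one-boxing; the causal “dominance” argument for two-boxing doesn’t show up in this arithmetic.

```python
# Illustrative expected values for the relaxed Newcomb's problem above,
# where Omega's prediction is correct with probability p.

def expected_value(one_box, p=0.99999):
    if one_box:
        # With probability p, Omega foresaw one-boxing and put in the million.
        return p * 1_000_000
    # With probability p, Omega foresaw two-boxing and left that box empty.
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

print(expected_value(True))   # ~999,990 for one-boxing
print(expected_value(False))  # ~1,010 for two-boxing
```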

Lucas: Even though it’s a thought experiment, there’s a sense in which the agent in the thought experiment is embedded in a world where he’s making the observation of boxes that have a million dollars in them, with a genie posing these situations?

Rohin: Yeah.

Lucas: I’m just seeking clarification on the embeddedness of the agent and Newcomb’s problem.

Rohin: The embeddedness is because the environment is able to predict exactly, or with close to perfect accuracy, what the agent will do.

Lucas: The genie being the environment?

Rohin: Yeah, Omega is part of the environment. You’ve got you, the agent, and everything else, the environment, and you have to make good decisions. We’ve only been talking about how the boundary between agent and environment isn’t actually all that clear. But to the extent that it’s sensible to talk about you being able to choose between actions, we want some sort of theory for how to do that when the environment can contain copies of you. So you could think of Omega as simulating a copy of you and seeing what you would do in this situation before actually presenting you with a choice.

So we’ve got the red decision theory, then we have yellow embedded world models. With embedded world models, the problem that you have is that, so normally in our nice video game environment, we can have an exact model of how the environment is going to respond to our actions, even if we don’t know it initially, we can learn it over time, and then once we have it, it’s pretty easy to see how you could plan in order to do the optimal thing. You can sort of trial your actions, simulate them all, and then see which one does the best and do that one. This is roughly how AIXI works. AIXI is the model of the optimally intelligent RL agent in these sorts of video-game-like settings.

Once you’re in embedded agency land, you cannot have an exact model of the environment because for one thing the environment contains you and you can’t have an exact model of you, but also the environment is large, and you can’t simulate it exactly. The big issue is that it contains you. So how you get any sort of sensible guarantees on what you can do, even though the environment can contain you, is the problem of embedded world models. You still need a world model. It can’t be exact because it contains you. Maybe you could do something hierarchical where things are fuzzy at the top, but then you can go focus in on each particular level of the hierarchy in order to get more and more precise about each particular thing. Maybe this is sufficient? Not clear.

Lucas: So in terms of human beings though, we’re embedded agents that are capable of creating robust world models that are able to think about AI alignment.

Rohin: Yup, but we don’t know how we do it.

Lucas: Okay. Are there any sorts of understandings that we can draw from our experience?

Rohin: Oh yeah, I’m sure there are. There’s a ton of work on this that I’m not that familiar with, probably in cognitive science or psychology or neuroscience; all of these fields I’m sure will have something to say about it. Hierarchical world models in particular are pretty commonly talked about as interesting. I know that there’s a whole field of hierarchical reinforcement learning in AI that’s motivated by this, but I believe it’s also talked about in other areas of academia, and I’m sure there are other insights to be getting from there as well.

Lucas: All right, let’s move on then from hierarchical world models.

Rohin: Okay. Next is blue robust delegation. So with robust delegation: we talked about Vingean reflection a little bit in the first podcast, and that is a problem that falls under robust delegation. The headline difficulty under robust delegation is that the agent is able to do self improvement; it can reason about itself and do things based on that. So one way you can think of this is that instead of thinking about it as self modification, you can think about it as the agent is constructing a new agent to act at future time steps. So then in that case your agent has the problem of how do I construct an agent for future time steps such that I am happy delegating my decision making to that future agent? That’s why it’s called robust delegation. Vingean reflection in particular is about how can you take an AI system that uses a particular logical theory in order to make inferences and have it move to a stronger logical theory, and actually trust the stronger logical theory to only make correct inferences?

Stated this way, the problem is impossible because of Löb’s theorem: it’s a well known result in logic that a weaker theory cannot prove the consistency of even itself, let alone any stronger theory, which follows as a corollary. Intuitively, in this pretty simple example, we don’t know how to get an agent that can trust a smarter version of itself. You should expect this problem to be hard, right? It’s in some sense dual to the problem that we have of AI alignment where we’re creating something smarter than us, and we need it to pursue the things we want it to pursue, but it’s a lot smarter than us, so it’s hard to tell what it’s going to do.

So I think of this as a version of the AI alignment problem, but applied to the case of some embedded agent reasoning about itself, and making a better version of itself in the future. So I guess we can move on to the green section, which is subsystem alignment. The tagline for subsystem alignment would be: the embedded agent is going to be made out of parts. It’s not this sort of unified coherent object. It’s got different pieces inside of it because it’s embedded in the environment, and the environment is made of pieces that make up the agent, and it seems likely that your AI system is going to be made up of different cognitive sub parts, and it’s not clear that those sub parts will integrate together into a unified whole such that the unified whole is pursuing a goal that you like.

It could be that each individual sub part has its own goal and they’re all competing with each other in order to further their own goals, and that the aggregate overall behavior is usually good for humans, at least in our current environment. But as the environment changes, which it will due to technological progression, one of the parts might just win out and be optimizing some goal that is not anywhere close to what we wanted. A more concrete example would be: one way that you could imagine building a powerful AI system is to have a world model that is rewarded for making accurate predictions about what the world will look like, and then you have a decision making model, which has a normal reward function that we program in, and tries to choose actions in order to maximize that reward. So now we have an agent that has two sub systems in it.

You might worry for example that once the world model gets sufficiently powerful, it starts realizing that the decision making thing is depending on my output in order to make decisions. I can trick it into making the world easier to predict. So maybe I give it some models of the world that say make everything look red, or make everything black, then you will get high reward somehow. Then if the agent actually then takes that action and makes everything black, and now everything looks black forever more, then the world model can very easily predict, yeah, no matter what action you take, the world is just going to look black. That’s what the world is now, and that gets the highest possible reward. That’s a somewhat weird story for what could happen. But there’s no real strong argument that says nope, this will definitely not happen.

Lucas: So in total, what is the work that has been done here on inner optimizers?

Rohin: Clarifying that they could exist. I’m not sure if there has been much work on it.

Lucas: Okay. So this is our fourth cornerstone here in this embedded agency framework, correct?

Rohin: Yup, and that is the last one.

Lucas: So summarizing these all together, where does that leave us?

Rohin: So I think my main takeaway is that I am much more strongly agreeing with MIRI that yup, we are confused about how intelligence works. That’s probably it, that we are confused about how intelligence works.

Lucas: What is this picture that I guess is conventionally held of what intelligence is that is wrong? Or confused?

Rohin: I don’t think there’s a thing that’s wrong about the conventional picture. So you could talk about a definition of intelligence, of being able to achieve arbitrary goals. I think Eliezer says something like cross domain optimization power, and I think that seems broadly fine. It’s more that we don’t know how intelligence is actually implemented, and I don’t think we ever claim to know that, but embedded agency is like, we really don’t know it. You might’ve thought that we were making progress on figuring out how intelligence might be implemented with classical decision theory, or the Von Neumann–Morgenstern utility theorem, or results like the value of perfect information always being non-negative.

You might’ve thought that we were making progress on it, even if we didn’t fully understand it yet, and then you read Embedded Agency and you’re like no, actually there are lots more conceptual problems that we have not even begun to touch yet. Well, MIRI has begun to touch them I would say, but we really don’t have good stories for how any of these things work. Classically we just don’t have a description of how intelligence works. MIRI’s like: even the small threads of things we thought about how intelligence could work are definitely not the full picture, and there are problems with them.

Lucas: Yeah, I mean just on simple reflection, it seems to me that in terms of the more confused conception of intelligence, it sort of models it more naively as we were discussing before, like the simple agent playing a computer game with these well defined channels going into the computer game environment.

Rohin: Yeah, you could think of AIXI for example as a model of how intelligence could work theoretically. The sequence is like, no, here is why that’s not a sufficient theoretical model.

Lucas: Yeah, I definitely think that it provides an important conceptual shift. So we have these four cornerstones, and it’s illuminating in this way. Are there any more conclusions or wrap up you’d like to do on embedded agency before we move on?

Rohin: Maybe I just want to add a disclaimer that MIRI is notoriously hard to understand and I don’t think this is different for me. It’s quite plausible that there is a lot of work that MIRI has done, and a lot of progress that MIRI has made, that I either don’t know about or know about but don’t properly understand. So I know I’ve been saying I want to defer to people a lot, or I want to be uncertain a lot, but on MIRI I especially want to do so.

Lucas: All right, so let’s move on to the next one within this list.

Rohin: The next one was doing what humans want. How do I summarize that? I read a whole sequence of posts on it. I guess the story for success, to the extent that we have one right now, is something like: use all of the techniques that we’re developing, or at least the insights from them, if not the particular algorithms, to create an AI system that behaves corrigibly, in the sense that it is trying to help us achieve our goals. You might be hopeful about this because we’re creating a bunch of algorithms for it to properly infer our goals and then pursue them, so this seems like a thing that could be done. Now, I don’t think we have a good story for how that happens. I think there are several open problems that show that our current algorithms are insufficient to do this. But it seems plausible that with more research we could get to something like that.

There’s not really a good overall summary of the field because it’s more like a bunch of people separately having a bunch of interesting ideas and insights, and I mentioned a bunch of them in the first part of the podcast already. Mostly because I’m excited about these and I’ve read about them recently, so I just sort of start talking about them whenever they seem even remotely relevant. But to reiterate them, there is the notion of analyzing the human AI system together as pursuing some sort of goal, or being collectively rational as opposed to having an individual AI system that is individually rational. So that’s been somewhat formalized in Cooperative Inverse Reinforcement Learning. Typically with inverse reinforcement learning, so not the cooperative kind, you have a human, the human is sort of exogenous, the AI doesn’t know that they exist, and the human creates a demonstration of the sort of behavior that they want the AI to do. If you’re thinking about robotics, it’s picking up a coffee cup, or something like this. Then the robot just sort of sees this demonstration come out of thin air; it’s just data that it gets.

The robot asks: let’s say that I had executed this demonstration, what reward function would I have been optimizing? It then figures out a reward function, and then it uses that reward function however it wants. Usually you would then use reinforcement learning to optimize that reward function and recreate the behavior. So that’s normal inverse reinforcement learning. Notable here is that you’re not considering the human and the robot together as a full collective system. The human is sort of exogenous to the problem, and also notable is that the robot is sort of taking the reward to be something that it has as opposed to something that the human has.

So CIRL basically says, no, no, no, let’s not model it this way. The correct thing to do is to have a two player game that’s cooperative between the human and the robot, and now the human knows the reward function and is going to take actions somehow. They don’t necessarily have to be demonstrations. But the human knows the reward function and will be taking actions. The robot on the other hand does not know the reward function, and it also gets to take actions, and the robot keeps a probability distribution over the reward that the human has, and updates this over time based on what the human does.

Once you have this, you get this sort of nice, interactive behavior where the human is taking actions that teach the robot about the reward function. The robot learns the reward function over time and then starts helping the human achieve his or her goals. This sort of teaching and learning behavior comes simply under the assumption that the human and the robot are both playing the game optimally, such that the reward function gets optimized as best as possible. So you get this sort of teaching and learning behavior from the normal notion of optimizing a particular objective, just from having the objective be a thing that the human knows, but not a thing that the robot knows. One thing that was a key aspect of CIRL, though I don’t know if CIRL introduced it, was having a probability distribution over the reward function, so you’re uncertain about what reward you’re optimizing.

This seems to give a bunch of nice properties. In particular, once the human starts taking actions like trying to shut down the robot, then the robot’s going to think: okay, if I knew the correct reward function, I would be helping the human, and given that the human is trying to turn me off, I must be wrong about the reward function, I’m not helping, so I should actually just let the human turn me off, because that’s what would achieve the most reward for the human. So you no longer have this incentive to disable your shutdown button in order to keep optimizing. Now this isn’t exactly right, because better than both of those options is to disable the shutdown button, stop doing whatever it is you were doing because it was clearly bad, and then just observe humans for a while until you can narrow down what their reward function actually is, and then you go and optimize that reward, and behave like a traditional goal directed agent. This sounds bad. It doesn’t actually seem that bad to me under the assumption that the true reward function is a possibility that the robot is considering and has a reasonable amount of support in the prior.

Because in that case, once the AI system eventually narrows down on the reward function, it will be either the true reward function, or a reward function that’s basically indistinguishable from it, because otherwise, there would be some other information that I could gather in order to distinguish between them. So you actually would get good outcomes. Now of course in practice it seems likely that we would not be able to specify the space of reward functions well enough for this to work. I’m not sure about that point. Regardless, it seems like there’s been some sort of conceptual advance here about when the AI’s trying to do something for the human, it doesn’t have the disabling the shutdown button, the survival incentive.

So while maybe reward uncertainty is not exactly the right way to do it, it seems like you could do something analogous that doesn’t have the problems that reward uncertainty does.
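
A toy sketch of just the belief-update half of this idea (my own illustration; the two candidate rewards, the softmax model of the human, and all the names are assumptions, not CIRL’s actual formalism): the robot watches a human action and shifts probability mass toward the reward function that better explains it.

```python
# Toy belief update over two candidate reward functions, based on an
# observed human action. Everything here (rewards, human model) is assumed.
import math

ACTIONS = ["make_coffee", "make_tea"]
REWARDS = {"likes_coffee": {"make_coffee": 1.0, "make_tea": 0.0},
           "likes_tea":    {"make_coffee": 0.0, "make_tea": 1.0}}

def human_likelihood(reward, action, beta=2.0):
    """Assumed human model: softmax-rational in their own reward."""
    exps = {a: math.exp(beta * reward[a]) for a in ACTIONS}
    return exps[action] / sum(exps.values())

def update(belief, observed_action):
    """Bayes rule: reweight each reward hypothesis by how well it
    explains the human's action, then renormalize."""
    posterior = {h: p * human_likelihood(REWARDS[h], observed_action)
                 for h, p in belief.items()}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

belief = {"likes_coffee": 0.5, "likes_tea": 0.5}
belief = update(belief, "make_tea")
print(belief)  # probability mass shifts toward "likes_tea"
```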

One other thing that’s kind of in this vein, but a little bit different, is the idea of an AI system that infers and follows human norms, and the reason we might be optimistic about this is because humans seem to be able to infer and follow norms pretty well. I don’t think humans can infer the values that some other human is trying to pursue and then optimize them to lead to good outcomes. We can do that to some extent. Like I can infer that someone is trying to move a cabinet, and then I can go help them move that cabinet. But in terms of their long term values or something, it seems pretty hard to infer and help with those. But norms we do in fact infer and follow all the time. So we might think that’s an easier problem, and that our AI systems could do it as well.

Then the story for success is basically that with these AI systems, we are able to accelerate technological progress as before, but the AI systems behave in a relatively human like manner. They don’t do really crazy things that a human wouldn’t do, because that would be against our norms. With the accelerating technological progress, we get to the point where we can colonize space, or whatever else it is you want to do with the future. Perhaps even along the way we do enough AI alignment research to build an actual aligned superintelligence.

There are problems with this idea. Most notably if you accelerate technological progress, bad things can happen from that, and norm following AI systems would not necessarily stop that from happening. Also to the extent that if you think human society, if left to its own devices would lead to something bad happening in the future, or something catastrophic, then a norm following AI system would probably just make that worse, in that it would accelerate that disaster scenario, without really making it any better.

Lucas: AI systems in a vacuum that are simply norm following seem to have some issues, but it seems like an important tool in the toolkit of AI alignment to have AIs which are capable of modeling and following norms.

Rohin: Yup. That seems right. Definitely agree with that. I don’t think I had mentioned the reference on this. So for this one I would recommend people look at Incomplete Contracting and AI Alignment, I believe is the name of the paper, by Dylan Hadfield-Menell and Gillian Hadfield, or also my post about it in the Value Learning Sequence.

So far I’ve been talking about sort of high level conceptual things within the, ‘get AI systems to do what we want.’ There are also a bunch of more concrete technical approaches, like inverse reinforcement learning and deep reinforcement learning from human preferences, where you basically get a bunch of comparisons of behavior from humans, and use that to infer a reward function that your agent can optimize. There’s recursive reward modeling where you take the task that you are trying to do, and then you consider a new auxiliary task of evaluating your original task. So maybe if you wanted to train an AI system to write fantasy books, well if you were to give human feedback on that, it would be quite expensive because you’d have to read the entire fantasy book and then give feedback. But maybe you could instead outsource the task of evaluating fantasy books: you could recursively apply this technique and train a bunch of agents that can summarize the plot of a book or comment on the prose of the book, or give a one page summary of the character development.

Then you can use all of these AI systems to help you give feedback on the original AI system that’s trying to write a fantasy book. So that’s recursive reward modeling. I guess going a bit back into the conceptual territory, I wrote a paper recently on learning preferences from the state of the world. So the intuition there is that the AI systems that we create aren’t just being created into a brand new world. They’re being instantiated in a world where we have already been acting for a long time. So the world is already optimized for our preferences, and as a result, our AI systems can just look at the world and infer quite a lot about our preferences. So we gave an algorithm that did this in some toy environments.
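
A minimal sketch of the comparison-based approach in the spirit of deep reinforcement learning from human preferences (my own illustration, not the paper’s actual algorithm): fit a reward model to pairwise preferences under a Bradley-Terry model, here with a simulated human and a simple linear reward, both of which are assumptions.

```python
# Toy reward learning from pairwise comparisons (Bradley-Terry model).
# The simulated human, linear reward, and all constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -0.5])  # hidden "human" reward weights

def prefer_first(fa, fb):
    """Simulated human: prefers the behavior with higher true reward."""
    return fa @ true_w > fb @ true_w

w = np.zeros(2)  # learned reward weights
for _ in range(2000):
    fa, fb = rng.normal(size=2), rng.normal(size=2)  # two candidate behaviors
    if not prefer_first(fa, fb):
        fa, fb = fb, fa  # put the preferred one first
    # Bradley-Terry: P(first preferred) = sigmoid(w . (fa - fb));
    # take a gradient-ascent step on the log-likelihood.
    diff = fa - fb
    p = 1.0 / (1.0 + np.exp(-w @ diff))
    w += 0.05 * (1.0 - p) * diff

print(w / np.linalg.norm(w))  # approximately true_w's direction [0.89, -0.45]
```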

Lucas: Right, so again, this covers the conceptual category of methodologies of AI alignment where we’re trying to get AI systems to do what we want?

Rohin: Yeah, current AI systems in a sort of incremental way, without assuming general intelligence.

Lucas: And there’s all these different methodologies which exist in this context. But again, this is all sort of within this other umbrella of just getting AI to do things we want them to do?

Rohin: Yeah, and you can actually compare across all of these methods on particular environments. This hasn’t really been done so far, but in theory it can be done, and I’m hoping to do it at some point in the future.

Lucas: Okay. So we’ve discussed embedded agency, we’ve discussed this other category of getting AIs to do what we want them to do. Just moving forward here through diving deep on these approaches.

Rohin: I think the next one I wanted to talk about was ambitious value learning. So here the basic idea is that we’re going to build a superintelligent AI system, and it’s going to have goals, because that’s what the Von Neumann–Morgenstern theorem tells us: anything with preferences, if they’re consistent and coherent, which they should be for a superintelligent system, or at least as far as we can tell they should be consistent, has a utility function. So, natural thought: why don’t we just figure out what the right utility function is, and put it into the AI system?

So there’s a lot of good arguments that you’re not going to be able to get the one correct utility function, but I think Stuart’s hope is that you can find one that is sufficiently good or adequate, and put that inside of the AI system. In order to do this, I believe the goal is to learn the utility function by looking at both human behavior as well as the algorithm that human brains are implementing. So if you see that the human brain, when it knows that something is going to be sweet, tends to eat more of it, then you can infer that humans like to eat sweet things, as opposed to: humans really dislike eating sweet things, but they’re really bad at optimizing their utility function. In this project of ambitious value learning, you also need to deal with the fact that human preferences can be inconsistent, and that the AI system can manipulate the human preferences. The classic example of that would be the AI system could give you a shot of heroin, and that would probably change your preferences from I do not want heroin to I do want heroin. So what does it even mean to optimize for human preferences when they can just be changed like that?
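
The sweet-tooth ambiguity can be made precise with a tiny example (my own illustration): a rational agent that likes sweets and an anti-rational agent that hates them can produce exactly the same behavior, so behavior alone cannot separate the planner from the reward.

```python
# Two (planner, reward) pairs that produce identical behavior.
import math

ACTIONS = ["eat_sweet", "skip_sweet"]

def softmax_policy(reward, beta):
    """beta > 0 optimizes the reward; beta < 0 anti-optimizes it."""
    exps = {a: math.exp(beta * reward[a]) for a in ACTIONS}
    z = sum(exps.values())
    return {a: v / z for a, v in exps.items()}

likes_sweets_rational = softmax_policy(
    {"eat_sweet": 1.0, "skip_sweet": 0.0}, beta=2.0)
hates_sweets_antirational = softmax_policy(
    {"eat_sweet": 0.0, "skip_sweet": 1.0}, beta=-2.0)

print(likes_sweets_rational)      # same distribution over actions ...
print(hates_sweets_antirational)  # ... so the data can't tell them apart
```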

So I think the next one was corrigibility and the associated iterated amplification and debate basically. I guess factored cognition as well. To give a very quick recap, the idea with corrigibility is that we would like to build an AI system that is trying to help us, and that’s the property that we should aim for as opposed to an AI system that actually helps us.

One motivation for focusing on this weaker criterion is that it seems quite difficult to create a system that knowably actually helps us, because that means that you need to have confidence that your AI system is never going to make mistakes. It seems like quite a difficult property to guarantee. In addition, if you don’t make some assumption on the environment, then there’s a no free lunch theorem that says this is impossible. Now it’s probably reasonable to put some assumption on the environment, but it’s still true that your AI system could have reasonable beliefs based on past experience, and nature still throws it a curve ball, and that leads to some sort of bad outcome happening.

While we would like this to not happen, it also seems hard to avoid, and also probably not that bad. It seems like the worst outcomes come when your superintelligent system is applying all of its intelligence in pursuit of its own goal. That’s the thing that we should really focus on. That conception of what we want to enforce is probably the thing that I’m most excited about. Then there are particular algorithms that are meant to create corrigible agents, assuming we have the capabilities to get general intelligence. So one of these is iterated amplification.

Iterated amplification is really more of a framework to describe particular methods of training systems. In particular, you alternate between amplification and distillation steps. You start off with an agent that we’re going to assume is already aligned. So this could be a human. A human is a pretty slow agent. So the first thing we’re going to do is distill the human down into a fast agent. So we could use something like imitation learning, or maybe inverse reinforcement learning followed by reinforcement learning, or something like that, in order to train a neural net or some other AI system that mostly replicates the behavior of our human, and remains aligned. By aligned maybe I mean corrigible actually. We start with a corrigible agent, and then we produce agents that continue to be corrigible.

Probably the resulting agent is going to be a little less capable than the one that you started out with, just because if the best you can do is to mimic the agent that you started with, that gives you exactly as much capability as that agent. So if you don’t succeed at properly mimicking, then you’re going to be a little less capable. Then you take this fast agent and you amplify it, such that it becomes a lot more capable, at perhaps the cost of being a lot slower to compute.

One way that you could imagine doing amplification would be to have a human get a top level task, and for now we’ll assume that the task is question answering, so they get this top level question and they say: okay, I could answer this question directly, but let me make use of this fast agent that we have from the last turn. We’ll make a bunch of sub questions that seem relevant for answering the overall question, and ask our distilled agent to answer all of those sub questions, and then using those answers, the human can then make a decision for their top level question. It doesn’t have to be the human. You could also have a distilled agent at the top level if you want.

I think having the human there seems more likely. So with this amplification you’re basically using the agent multiple times, letting it reason for longer in order to get a better result. So the resulting human-plus-many-copies-of-the-agent system is more capable than the original distilled agent, but also slower. So we started off with something, let’s call it capability level five, and then we distilled it and it became capability level four, but it was a lot faster. Then we amplified it and maybe now it’s capability level eight. But it’s a lot slower. So we can distill it again and get something at capability level seven that’s pretty fast, and then amplify it again and so on and so forth. So the hope is that this would allow us to continually train an agent that can reach arbitrary levels of capability that are actually physically possible, while remaining aligned or corrigible the entire time.
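
Schematically (my own sketch, not Paul Christiano’s actual training procedure; `decompose`, `human_combine`, and `train_to_imitate` are placeholders for whatever decomposition, combination, and distillation methods are used), the amplify/distill loop looks something like this:

```python
# Schematic amplify/distill loop; the three callables are placeholders.

def amplify(agent, question, decompose, human_combine):
    """Human-like top level: split the question into sub-questions, delegate
    each to the fast agent from the previous round, combine the answers."""
    subanswers = [agent(q) for q in decompose(question)]
    return human_combine(question, subanswers)

def iterated_amplification(human, rounds, decompose, human_combine,
                           train_to_imitate):
    agent = human  # round 0: the slow but (assumed) aligned human policy
    for _ in range(rounds):
        # Amplification: a slower, more capable composite of human + agents.
        amplified = lambda q, a=agent: amplify(a, q, decompose, human_combine)
        # Distillation: train a fast model to approximate the amplified
        # system, hopefully preserving alignment at a small capability cost.
        agent = train_to_imitate(amplified)
    return agent
```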

Lucas: Yeah, that’s quite a great picture. I’m happy to jump into debate next, and then I’d like to talk about the potential relationship between the two.

Rohin: Sure. So with debate, the key hope it is based on is that it is easier to point out a lie than it is to lie in the first place. This is under the assumption that if you’ve got two agents, one agent trying to lie and the other agent trying to detect when the first agent is lying, the second agent has access to all of the information that the first agent is using to make its decision. If that holds, then we would hope that it is easier to detect lies than it is to actually lie. If this is actually true, we can imagine the following training process. We give a top level question, and then we have two AI systems debate what the correct answer to that top level question is. One of the agents initially says the answer is 12, then the other agent says no, the answer is 15, then they start arguing about why the answer should be 12 versus 15.

Eventually, after some amount of time, a human is asked to judge this debate and say which of the AI systems provided more true and useful information. Whichever agent the human decided was correct gets a reward of plus one, and the other one gets a reward of minus one, and is then updated, presumably by gradient descent or something. So the hope here is basically that the two AI systems that are debating each other are about equally powerful, such that if either one of them tries to lie or manipulate the user in some way, the other AI system would be able to point that out, and then the human would give the reward to the AI system that pointed it out. So at equilibrium you should expect that both of the agents are just trying to give you true and useful information, and they’re not trying to lie, because if they do try to lie, then the other one will point it out.
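
Schematically, the training setup looks something like the following (my own sketch; `Agent`, `human_judge`, and the `update` rule are placeholders, not the implementation from the debate paper):

```python
# Schematic debate episode; the agents and the judge are placeholders.

def debate_episode(question, agent_a, agent_b, human_judge, n_turns=6):
    transcript = [("question", question)]
    for turn in range(n_turns):
        speaker = agent_a if turn % 2 == 0 else agent_b
        # Each agent extends the debate with an argument or a rebuttal,
        # e.g. pointing out a lie in the opponent's last statement.
        transcript.append((speaker.name, speaker.argue(transcript)))
    # The human judges which agent gave more true and useful information.
    winner = human_judge(transcript)
    reward_a = 1.0 if winner == agent_a.name else -1.0
    agent_a.update(transcript, reward_a)   # zero-sum: +1 / -1
    agent_b.update(transcript, -reward_a)
```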

Lucas: Right. So there’s this important feature here where, as you mentioned, the claim is that it’s easier to spot lies than to make lies, and this sort of asymmetry is one of the motivations which says that generally it’ll be easier to tell when agents are telling the truth rather than making up a lie.

Rohin: As long as you have another AI system that can point this out. Certainly a superintelligent AI system could lie to me and I wouldn’t be able to tell, probably, but it’s a lot harder for a superintelligent AI system to lie to me when I have another superintelligent AI system that’s trying to point out lies that the first one makes.

Lucas: Right. So now I think we can go ahead and cover its relationship to iterated amplification?

Rohin: Sure. There is actually quite a close relationship between the two, even though it doesn’t seem like it at first sight. The hope with both of them is that your AI systems will learn to do human like reasoning, but on a much larger scale than humans can do. In particular, consider the following kind of agent. You have a human who is given a top level question that they have to answer, and that human can create a bunch of sub questions and then delegate each of those sub questions to another copy of the same human, initialized from scratch or something like that so they don’t know what the top level human has thought.

Then they now have to answer the sub question, but they too can delegate to another human further down the line. And so on, you can just keep delegating down until the questions are so easy that the human can just straight up answer them. So I’m going to call this structure a deliberation tree, because it’s a sort of tree of considerations such that at every node, the answer to that node can be computed from the answers to the children nodes, plus a short bit of human reasoning that happened at that node.

In iterated amplification, what’s basically happening is you start with leaf nodes, the human agent. There’s just a human agent, and they can answer questions quickly. Then when you amplify it the first time, you get trees of depth one, where at the top level there’s a human who can then delegate sub questions out, but then those sub questions have to be answered by an agent that was trained to be like a human. So you’ve got something that approximates depth one human deliberation trees. Then after another round of distillation and amplification, you’ve got a human delegating to agents that were trained to mimic humans that could delegate to agents that were trained to mimic humans: an approximate version of a depth two deliberation tree.

So iterated amplification is basically just building up the depth of the tree that the agent is approximating. But we hope that these deliberation trees are always just basically implementing corrigible reasoning, and that eventually once they get deep enough, you get arbitrarily strong capabilities.
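
A deliberation tree is easy to sketch as a recursion (my own illustration; `is_easy`, `answer_directly`, `decompose`, and `combine` stand in for the small amount of work a human does at each node):

```python
# Schematic deliberation tree: recursive decomposition to easy questions.

def deliberate(question, depth, is_easy, answer_directly, decompose, combine):
    """Answer `question` using at most `depth` levels of delegation."""
    if depth == 0 or is_easy(question):
        return answer_directly(question)  # a leaf: ten minutes of human work
    subanswers = [deliberate(q, depth - 1, is_easy, answer_directly,
                             decompose, combine)
                  for q in decompose(question)]
    return combine(question, subanswers)  # one node's worth of reasoning
```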

Lucas: Can you offer some clarification as to why one might expect a group of copies of an AI, plus the human, to be able to make sure that alignment is retained during distillation?

Rohin: That’s an open problem. Whether you can do a distillation step that preserves alignment/corrigibility is a thing that Paul, in a few recent posts in the Iterated Amplification Sequence, calls the reward engineering problem. The hope is that if you believe that the amplified agent is corrigible, then it is going to be smarter than the agent that it is trying to train via distillation. So you can actually just use the amplified agent to create a reward signal in order to train an agent during distillation. Since the amplified agent is smarter than the agent you’re distilling, you could plausibly actually create a reward function that’s not easily gameable, and actually gets the AI system to do what you want. I think that’s the concise, nuance-lacking story of why you might be optimistic about this.

Lucas: All right.

Rohin: So I’ll move on to how debate is also related to this. So we talked about how iterated amplification is basically like growing the depth of the deliberation trees that the agent is approximating. The human part of this is judging any one node and its children. In debate on the other hand, you can imagine the same sort of deliberation tree, although now they’re more like arguments and counter arguments as opposed to considerations and counter considerations. But broadly the same thing. So imagine there’s this actual debate tree of ways the debate could possibly go.

Then you could think of the AI systems as choosing a particular path in the debate tree that makes them most likely to win. The key point is that, given that the entire question can be resolved by an exponentially sized deliberation tree, if the two AI systems are capable of computing this exponential deliberation tree, then optimal play in the debate game is to go along the path that is going to lead to your victory, even given that the other player is trying to win themselves. The relation between iterated amplification and debate is that they both want the agents to implicitly be able to compute this exponentially sized deliberation tree that humans could not compute, and then use humans to check a particular part of that tree. In iterated amplification you check a parent and its children: you look at that one section of the tree, and you make sure that it looks good. In debate you look at a particular path on the debate tree and judge whether that path is good. One critique about these methods is that it’s not actually clear that an exponentially sized deliberation tree is able to solve all problems that we might care about, especially if the amount of work done at each node is pretty short, like a ten minute stint from a normal human.

One question that you would care about, if you wanted to see whether iterated amplification could work, is: can these exponentially sized deliberation trees actually solve hard problems? This is the factored cognition hypothesis: that these deliberation trees can in fact solve arbitrarily complex tasks. And Ought is basically working on testing this hypothesis to see whether or not it's true: finding the tasks which seem hardest to do in this decompositional way, and then seeing if teams of humans can actually figure out how to do them.

Lucas: Do you have an example of what would be one of these tasks that are difficult to decompose?

Rohin: Yeah. Take a bunch of humans who don't know differential geometry or something, and have them solve the last problem in a textbook on differential geometry. They each only get ten minutes in order to do anything, and none of them can read the entire textbook, because that takes way more than ten minutes. I believe Ought is maybe not looking into that one in particular, since that one sounds extremely hard, but they might be doing similar things with works of literature, like trying to answer questions about a book that no one has actually read.

But I remember that Andreas was actually talking about this particular problem that I mentioned as well. I don’t know if they actually decided to do it.

Lucas: Right. So, just generally in this area, it seems like there are these interesting open questions and considerations about, I guess, the general epistemic efficacy of debate, how good AI and human systems will be at debate, and, as you just pointed out, whether or not arbitrarily difficult tasks can be solved through this decompositional process. I mean, obviously we do have proofs for much simpler things. Why is there a question as to whether or not it would scale? How would it eventually break?

Rohin: With iterated amplification in particular, if you’re starting with humans who have only ten minutes to look at resources and come up with an answer, the particular thing I would say they might not be able to do is take a math textbook that the human did not know already and solve the final problem in it. Iterated amplification, to the extent that it starts with a human who’s limited to ten minutes, and whenever it does a decomposition, does it with a human who’s limited to ten minutes, would inherit all of the same problems if those problems are real problems.

Lucas: Cool, that’s interesting. I don’t have anything to add here then. So happy to keep on moving.

Rohin: I think I want to talk about limited AI systems next. So with AI boxing, the hope is to create a box around an AI system that prevents anything from inside the box from going out, and anything from outside the box from going in, except particular input/output channels that we explicitly allow. The hope here is that this lets us make assumptions about the input/output channels that will actually hold in practice, and can be used to ensure that AIs function more like tool AIs, rather than agential AIs that will do bad things.

I think most literature on boxing has had an AI system in a box, and a human operator outside of it. One interesting proposal I've heard recently is to have the human also be inside the box, because if you've got information flowing from an AI to a human, that's a pretty big hole in your box. But if the human is actually inside the box, then there's not as much that can be done; there's not as much information flowing into or out of the box.

Lucas: What does it actually mean to have the human locked in the box?

Rohin: One proposal would be to have a human go inside a secured room from which no information can enter or leave, apart from the fact of whether or not the human has entered or left the room. The human can then interact with the AI as normal. As soon as the human leaves the room, the AI's reward function is zero; it basically just doesn't care about anything that happens in the future after that. So in this setting you would hope that the AI system has no incentive to affect the outside world, because the only way it can affect the outside world is by sending the human outside. But as soon as it sends the human outside, its reward gets set to zero forever, so it doesn't care about any of that.
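
A minimal sketch of the reward gating described here, with invented names; this is an illustration of the proposal's logic, not a concrete published design:

```python
# Once the human exits the box, reward is latched to zero forever, so
# events after the exit carry no value for the agent.

class BoxedReward:
    def __init__(self, base_reward_fn):
        self.base_reward_fn = base_reward_fn
        self.human_has_left = False

    def reward(self, state):
        if state["human_left_box"]:
            self.human_has_left = True
        # Everything after the exit is worth exactly zero to the agent.
        return 0.0 if self.human_has_left else self.base_reward_fn(state)

r = BoxedReward(lambda s: s["task_progress"])
print(r.reward({"human_left_box": False, "task_progress": 1.0}))  # 1.0
print(r.reward({"human_left_box": True,  "task_progress": 5.0}))  # 0.0
print(r.reward({"human_left_box": False, "task_progress": 2.0}))  # still 0.0
```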

Lucas: But wouldn’t it care before the human went out?

Rohin: Yes, but before the human goes out, there is nothing it can do to influence the outside world because both the human and the AI are in the box. It could influence the human while the human is in the box. But the human also can’t affect the outside world while it’s in the box.

Lucas: Yeah, but the human leaves the box and then can affect the world.

Rohin: Right. But any change to the world that happens after the human leaves, doesn’t make it into the reward function.

Lucas: But it does because the AI system would be planning over time from its initial moment where the reward isn’t set to zero.

Rohin: But your reward can be limited to the next K time steps, or something like this. Basically you're limited to the point at which the human leaves. That's the proposal here.

Lucas: I see, okay. That makes sense.

Rohin: The only information leaving the box, assuming you secure the box in all the normal ways, is whether or not the human has chosen to leave. That's pretty low information; it doesn't seem like an AI system would be able to use that to influence the outside world in some ridiculous way. So that's boxing.

The next thing on my list for limited AGI is impact measures for side effects. There are also mild optimization and conservative concepts, but let's start with impact measures. The basic hope is to create some quantification of how much impact a particular action that the AI chooses has on the world, and to then penalize the AI for having a lot of impact, so that it only does low impact things, which presumably will not cause catastrophe. One approach to this is relative reachability. With relative reachability, you're basically trying to not decrease the number of states that you can reach from the current state. So you're trying to preserve option value; you're trying to keep the same states reachable.

It’s not okay for you to make one state unreachable as long as you make a different state reachable. You need all of the states that were previously reachable to continue being reachable. The relative part is that the penalty is calculated relative to a baseline that measures what would’ve happened if the AI had done nothing, although there are other possible baselines you could use. The reason you do this is so that we don’t penalize the agent for side affects that happen in the environment. Like maybe I eat a sandwich, and now these states where there’s a sandwich in front of me are no longer accessible because I can’t un-eat a sandwich. We don’t want to penalize our AI system for that impact, because then it’ll try to stop me from eating a sandwich. We want to isolate the impact of the agent as opposed to impact that were happening in the environment anyway. So that’s what we need the relative part.

There is also attainable utility preservation from Alex Turner, which makes two major changes from relative reachability. First, instead of talking about reachability of states, it talks about how well you can achieve different utility functions. So if previously you were able to make lots of paperclips, then you want to make sure that you can still make lots of paperclips. If previously you were able to travel across the world within a day, then you want to still be able to travel across the world in a day. So that's the first change it makes.

The second change is that not only does it penalize decreases in attainable utility, it also penalizes increases in attainable utility. So if previously you could not mine asteroids in order to get their natural resources, you should still not be able to mine asteroids and get their resources. This seems kind of crazy when you first hear it, but the rationale for it is that all of the convergent instrumental subgoals are about increases in the power of your AI system. For example, for a broad range of utility functions, it is useful to get a lot of resources and a lot of power in order to achieve those utility functions. Well, if you penalize increases in attainable utility, then you're going to penalize actions that just broadly get more resources, because those are helpful for many, many, many different utility functions.

Similarly, if you were going to be shut down, but then you disable the shutdown button, well, that just makes it much more possible for you to achieve pretty much every utility function, because instead of being off, you are still on and can take actions. So that also will get heavily penalized, because it led to such a large increase in attainable utilities. So those are, I think, the two main impact measures that I know of.
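
Here is an analogous toy sketch of the attainable utility preservation penalty; the absolute value is what makes both increases and decreases costly. The Q-value arrays are invented, and this omits most of the machinery of Turner's actual method:

```python
import numpy as np

# Toy attainable utility preservation penalty: penalize the *absolute*
# change, up or down, in how well a set of auxiliary utility functions
# could be achieved, compared to a no-op action.

def aup_penalty(q_action, q_noop):
    """q_action[i] and q_noop[i]: attainable value of auxiliary utility i
    after the candidate action vs. after doing nothing."""
    return np.abs(q_action - q_noop).mean()

q_noop   = np.array([1.0, 2.0, 3.0])
q_gain   = np.array([5.0, 6.0, 7.0])   # action that grabs power/resources
q_benign = np.array([1.0, 2.1, 3.0])   # mostly leaves abilities unchanged

print(aup_penalty(q_gain, q_noop))     # 4.0   -> heavily penalized
print(aup_penalty(q_benign, q_noop))   # ~0.03 -> barely penalized
```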

Okay, we’re getting to the things where I have less things to say about them, but now we’re at robustness. I mentioned this before, but there are two main challenges with verification. There’s the specification problem, making it computationally efficient, and all of the work is on the computationally efficient side, but I think the hardest part is the specification side, and I’d like to see more people do work on that.

I don’t think anyone is really working on verification with an eye to how to apply it to powerful AI systems. I might be wrong about that. Like I know something people who do care about AI safety who are working on verification, and it’s possible that they have thoughts about this that aren’t published and that I haven’t talked to them about. But the main thing I would want to see is what specifications can we actually give to our verification sub routines. At first glance, this is just the full problem of AI safety. We can’t just give a specification for what we want to an AGI.

What specifications can we give to verification that are going to increase our trust in the AI system? For adversarial training, again, all of the work done so far is in the adversarial example space, where you try to train an image classifier to be more robust to adversarial examples, and this kind of works sometimes, but doesn't work great. For both verification and adversarial training, Paul Christiano has written a few blog posts about how you can apply these to advanced AI systems, but I don't know if anyone is actively working on them with AGI in mind. With adversarial examples, there is too much work for me to summarize.

The thing that I find interesting about adversarial examples is that they show that we are not able to create image classifiers that have learned human preferences. Humans have preferences over how we classify images, and we didn't succeed at that.

Lucas: That’s funny.

Rohin: I can’t take credit for that framing, that one was due to Ian Goodfellow. But yeah, I see adversarial examples as contributing to a theory of deep learning that tells us how do we get deep learning systems to be closer to what we want them to be rather than these weird things that classify pandas as givens, even when they’re very clearly still pandas.

Lucas: Yeah, the framing’s pretty funny, and makes me feel kind of pessimistic.

Rohin: Maybe if I wanted to inject some optimism back in: there's a frame under which adversarial examples happen because our data sets are too small, or something. We have some pretty large data sets, but humans do see more and get far richer information than just pixel inputs. We can go feel a chair and build 3D models of a chair through touch, in addition to sight. There is actually a lot more information that humans have, and it's possible that what we need for AI systems is just way more information, in order to narrow down on the right model.
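
For readers who want the panda-to-gibbon example in code form, here is the standard fast gradient sign method from Goodfellow et al., sketched in PyTorch; `model`, `images`, and `labels` are assumed to be supplied by the reader:

```python
import torch

# Fast gradient sign method (FGSM): perturb the input in the direction
# that increases the loss, keeping the perturbation imperceptibly small.

def fgsm(model, x, y, epsilon=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step is enough to flip many classifiers' labels.
    return (x + epsilon * x.grad.sign()).detach()

# Usage with any image classifier `model`, batch `images`, labels `labels`:
#   adversarial = fgsm(model, images, labels)
#   model(adversarial).argmax(1)   # often no longer equals `labels`
```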

Rohin: So let us move on to, I think, the next thing, which is interpretability, which I also do not have much to say about, mostly because there is tons and tons of technical research on interpretability, and there is not much on interpretability from an AI alignment perspective. One thing to note with interpretability is that you do want to be very careful about how you apply it. Suppose you have a feedback cycle where you're like, "I built an AI system, I'm going to use interpretability to check whether it's good," and then, "oh shit, this AI system was bad, it was not making decisions for the right reasons," and then you go and fix your AI system, and then you throw interpretability at it again, and then, "oh no, it's still bad because of this other reason." If you do this often enough, basically what's happening is you're training your AI system to no longer have failures that are obvious to interpretability, and instead to have failures that are not obvious to interpretability, which will probably exist, because your AI system seems to have been full of failures anyway.

So I would be pretty pessimistic about a system that interpretability found 10 or 20 different errors in. I would just expect that the resulting AI system has other failure modes that we were not able to uncover with interpretability, and those will at some point trigger and cause bad outcomes.

Lucas: Right. So interpretability would cover things such as the interpretability of superhuman intelligence, but also more mundane examples in present day systems, correct? Where the interpretability of, say, neural networks is basically, my understanding is, nowhere right now.

Rohin: Yeah, that’s basically right. There have been some techniques developed like sailiency maps, feature visualization, neural net models that hallucinate explanations post hoc, people have tried a bunch of things. None of them seem especially good, though some of them definitely are giving you more insight than you had before.

So I think that only leaves CAIS. Comprehensive AI services is like a forecast for how AI will develop in the future. It also has some prescriptive aspects to it, like: yeah, we should probably not do these things, because they don't seem very safe, and we can do these other things instead. In particular, CAIS takes a strong stance against AGI agents that are God-like, fully integrated systems optimizing some utility function over the long term future.

It should be noted that it's arguing against a very specific kind of AGI agent: this sort of long term expected utility maximizer that's fully integrated, is a black box, and can't be broken down into modular components. That entire cluster of features is what CAIS is talking about when it says "AGI agent." So it takes a strong stance against that, saying A, it's not likely that this is the first superintelligent thing that we build, and B, it's clearly dangerous; that's what we've been saying the entire time. So here's a solution: why don't we just not build it, and build these other things instead? As for what the other things are, the basic intuition pump here is that if you look at how AI is developed today, there are a bunch of research and development practices that we follow. We try out a bunch of models, we try some different ways to clean our data, we try different ways of collecting data sets, we try different algorithms, and so on and so forth, and these research and development practices allow us to create better and better AI systems.

Now, our AI systems currently are also very bounded in the tasks that they do. There are specific tasks, and they do that task and that task alone, and they do it in episodic ways. They are only trying to optimize over a bounded amount of time, and they use a bounded amount of computation and other resources. So that's what we're going to call a service: an AI system that does a bounded task, in bounded time, with bounded computation. Everything is bounded. Now, our research and development practices are themselves bounded tasks, and AI has shown itself to be quite good at automating bounded tasks. We've definitely not automated all bounded tasks yet, but it does seem like we are in general pretty good at automating bounded tasks with enough effort. So probably we will also automate research and development tasks.

We’re seeing some of this already with neural architecture search for example, and once AI R and D processes have been sufficiently automated, then we get this cycle where AI systems are doing the research and development needed to improve AI systems, and so we get to this point of recursive improvement that’s not self improvement anymore, because there’s not really an agentic itself to improve, but you do have recursive AI improving AI. So this can lead to the sort of very quick improvement and capabilities that we often associate with superintelligence. With that we can eventually get to a situation where any task that we care about, we could have a service that breaks that task down into a bunch of simple, automatable bounded tasks, and then we can create services that do each of those bounded tasks and interact with each other in order to in tandem complete the long term task.

This is how humans do engineering and build things. We have these research and development processes, we have these modular systems that are interacting with each other via well defined channels, so this seems more likely to be the first thing that we build that's capable of superintelligent reasoning, rather than an AGI agent that's optimizing the utility function of a long term, yada, yada, yada.

Lucas: Is there no risk, though? Because the superintelligence here is the distributed network collaborating. So is there no risk of the collective distributed network creating some sort of epiphenomenal optimization effects?

Rohin: Yup, that’s definitely a thing that you should worry about. I know that Erik agrees with me on this because he explicitly lists this out in the tech report as a thing that needs more research and that we should be worried about. But the hope is that there are other things that you can do that normally we wouldn’t think about with technical AI safety research that would make more sense in this context. For example, we could train a predictive model of human approval. Given any scenario, the AI system should predict how much humans are going to like it or approve of it, and then that service can be used in order to check that other services are doing reasonable things.

Similarly, we might look at each individual service and see which of the other services it's accessing, and then make sure that those are reasonable services. If we see the CEO-of-a-paperclip-company service going and talking to the synthetic biology service, we might be a bit suspicious and be like, why is this happening? And then we can go and check to see why exactly that has happened. So there are all of these other things that we could do in this world, which aren't really options in the AGI agent world.
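
A toy sketch of how an approval-predictor service might gate other services; the functions here are my invented placeholders for the idea, not a design from the CAIS technical report:

```python
# A learned model of human approval gates the outputs of other services.

def approval_model(proposed_plan):
    """Stand-in for a learned predictor of human approval, in [0, 1]."""
    return 0.05 if "synthesize pathogen" in proposed_plan else 0.9

def gated_service(service, threshold=0.5):
    def run(task):
        plan = service(task)
        if approval_model(plan) < threshold:
            raise PermissionError(f"flagged for human review: {plan}")
        return plan
    return run

paperclip_ceo = gated_service(lambda task: f"plan for {task}")
print(paperclip_ceo("buy more wire"))       # allowed
# paperclip_ceo("synthesize pathogen")      # would be flagged for review
```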

Lucas: Aren’t they options in the AGI agential world where the architectures are done such that these important decision points are analyzable to the same degree as they would be in a CAIS framework?

Rohin: Not to my knowledge. As far as I can tell, with most end-to-end trained things, you might have the architecture be such that there are points at which you expect that certain kinds of information will be flowing, but you can't easily look at the information that's actually there and deduce what the system is doing. It's just not interpretable enough to do that.

Lucas: Okay. I don’t think that I have any other questions or interesting points with regards to CAIS. It’s a very different and interesting conception of the kind of AI world that we can create. It seems to require its own new coordination challenge as if your hypothesis is true and that the agential AIs will be afforded more causal power in the world, and more efficiency than sort of the CAIS systems, that’ll give them a competitive advantage that will potentially bias civilization away from CAIS systems.

Rohin: I do want to note that I think the agential AI systems will be more expensive and take longer to develop than CAIS. So I do think CAIS will come first. Again, this is all in a particular world view.

Lucas: Maybe this is abstracting too much, but does CAIS claim to function as an AI alignment methodology to be used over the long term? Do we retain the sort of CAIS architecture path, with CAIS creating superintelligence as some sort of distributed task force?

Rohin: I’m not actually sure. There’s definitely a few chapters in the technical report that are like okay, what if we build AGI agents? How could we make sure that goes well? As long as CAIS comes before AGI systems, here’s what we can do in that setting.

But I feel like I personally think that AGI systems will come. My guess is that Erik does not think that this is necessary, and that we could actually just have CAIS systems forever. I don't really have a model for when to expect AGI separately from the CAIS world. I guess I have a few different potential scenarios that I can consider, and I can compare it to each of those, but it's not like it's CAIS or not CAIS. It's more like it's CAIS and a whole bunch of other potential scenarios, and in reality it'll be some mixture of all of them.

Lucas: Okay, that makes more sense. So, there’s sort of an overload here, or just a ton of awesome information with regards to all of these different methodologies and conceptions here. So just looking at all of it, how do you feel about all of these different methodologies in general, and how does AI alignment look to you right now?

Rohin: Pretty optimistic about AI alignment, but I don't think that's so much from the particular technical safety research that we have. That's some of it; I do think that there are promising approaches, and the fact that there are promising approaches makes me more optimistic. But I think my optimism comes more from the strategic picture. A belief, A, that we will be able to convince people that this is important, such that people start actually focusing on this problem more broadly; B, that we will be able to get a bunch of people to coordinate such that they're more likely to invest in safety; and C, that I don't place as much weight on the view that AI systems will be long term utility maximizers and therefore we're basically all screwed, which seems to be the position of many other people in the field.

I say optimistic. I mean optimistic relative to them. I’m probably pessimistic relative to the average person.

Lucas: A lot of these methodologies are new. Do you have any sort of broad view about how the field is progressing?

Rohin: Not a great one, mostly because I would consider myself, well, maybe I've just recently stopped being new to the field, so I didn't really get to observe the field very much in the past. But it seems like there's been more of a shift towards figuring out how all of the things people were thinking about apply to real machine learning systems, which seems nice. The fact that it does connect is good. I don't think the connections were super natural, or that they just sort of clicked, but they did mostly work out, I'd say, in many cases, and that seems pretty good. So yeah, the fact that we're now doing a combination of theoretical, experimental, and conceptual work seems good.

It’s no longer the case that we’re mostly doing theory. That seems probably good.

Lucas: You’ve mentioned already a lot of really great links in this podcast, places people can go to learn more about these specific approaches and papers and strategies. And one place that is just generally great for people to go is to the Alignment Forum, where a lot of this information already exists. So are there just generally in other places that you recommend people check out if they’re interested in taking more technical deep dives?

Rohin: Probably, at this point, one of the best places for a technical deep dive is the Alignment Newsletter database. I write a newsletter every week about AI alignment, all the stuff that's happened in the past week; that's the Alignment Newsletter, not the database, which people can also sign up for, but that's not really a thing for technical deep dives. It's more a thing for keeping pace with developments in the field. But in addition, everything that ever goes into the newsletter is also kept in a separate database. I say database; it's basically a Google Sheets spreadsheet. So if you want to do a technical deep dive on any particular area, you can just go, look for the right category on the spreadsheet, and then look at all the papers there, and read some or all of them.

Lucas: Yeah, so thanks so much for coming on the podcast Rohin, it was a pleasure to have you, and I really learned a lot and found it to be super valuable. So yeah, thanks again.

Rohin: Yeah, thanks for having me. It was great to be on here.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI alignment series.

End of recorded material

AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 1)

The space of AI alignment research is highly dynamic, and it's often difficult to get a bird's eye view of the landscape. This podcast is the first of two parts attempting to partially remedy this by providing an overview of the organizations participating in technical AI alignment research, their specific research directions, and how these approaches all come together to make up the state of technical AI alignment efforts. In this first part, Rohin moves sequentially through the technical research organizations in this space and carves through the field by its varying research philosophies. We also dive into the specifics of many different approaches to AI safety, explore where they disagree, discuss what properties varying approaches attempt to develop/preserve, and hear Rohin's take on these different approaches.

You can take a short (3 minute) survey to share your feedback about the podcast here.

In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Topics discussed in this episode include:

  • The perspectives of CHAI, MIRI, OpenAI, DeepMind, FHI, and others
  • Where and why they disagree on technical alignment
  • The kinds of properties and features we are trying to ensure in our AI systems
  • What Rohin is excited and optimistic about
  • Rohin’s recommended reading and advice for improving at AI alignment research

Lucas: Hey everyone, welcome back to the AI Alignment podcast. I’m Lucas Perry, and today we’ll be speaking with Rohin Shah. This episode is the first episode of two parts that both seek to provide an overview of the state of AI alignment. In this episode, we cover technical research organizations in the space of AI alignment, their research methodologies and philosophies, how these all come together on our path to beneficial AGI, and Rohin’s take on the state of the field.

As a general bit of announcement, I would love for this podcast to be particularly useful and informative for its listeners, so I’ve gone ahead and drafted a short survey to get a better sense of what can be improved. You can find a link to that survey in the description of wherever you might find this podcast, or on the page for this podcast on the FLI website.

Many of you will already be familiar with Rohin, he is a fourth year PhD student in Computer Science at UC Berkeley with the Center For Human-Compatible AI, working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter. And so, without further ado, I give you Rohin Shah.

Thanks so much for coming on the podcast, Rohin, it’s really a pleasure to have you.

Rohin: Thanks so much for having me on again, I’m excited to be back.

Lucas: Yeah, long time, no see since Puerto Rico Beneficial AGI. And so speaking of Beneficial AGI, you gave quite a good talk there which summarized technical alignment methodologies, approaches, and broad views at this time; and that is the subject of this podcast today.

People can go and find that video on YouTube, and I suggest that you watch that; that should be coming out on the FLI YouTube channel in the coming weeks. But for right now, we’re going to be going in more depth, and with more granularity into a lot of these different technical approaches.

So, just to start off, it would be good if you could contextualize this list of technical approaches to AI alignment that we’re going to get into within the different organizations that they exist at, and the different philosophies and approaches that exist at these varying organizations.

Rohin: Okay, so disclaimer, I don’t know all of the organizations that well. I know that people tend to fit CHAI in a particular mold, for example; CHAI’s the place that I work at. And I mostly disagree with that being the mold for CHAI, so probably anything I say about other organizations is also going to be somewhat wrong; but I’ll give it a shot anyway.

So I guess I’ll start with CHAI. And I think our public output mostly comes from this perspective of how do we get AI systems to do what we want? So this is focusing on the alignment problem, how do we actually point them towards a goal that we actually want, align them with our values. Not everyone at CHAI takes this perspective, but I think that’s the one most commonly associated with us and it’s probably the perspective on which we publish the most. It’s also the perspective I, usually, but not always, take.

MIRI, on the other hand, takes a perspective of, “We don’t even know what’s going on with intelligence. Let’s try and figure out what we even mean by intelligence, what it means for there to be a super-intelligent AI system, what would it even do or how would we even understand it; can we have a theory of what all of this means? We’re confused, let’s be less confused, once we’re less confused, then we can think about how to actually get AI systems to do good things.” That’s one of the perspectives they take.

Another perspective they take is that there’s a particular problem with AI safety, which is that, “Even if we knew what goals we wanted to put into an AI system, we don’t know how to actually build an AI system that would, reliably, pursue those goals as opposed to something else.” That problem, even if you know what you want to do, how do you get an AI system to do it, is a problem that they focus on. And the difference from the thing I associated with CHAI before is that, with the CHAI perspective, you’re interested both in how do you get the AI system to actually pursue the goal that you want, but also how do you figure out what goal that you want, or what is the goal that you want. Though, I think most of the work so far has been on supposing you know the goal, how do you get your AI system to properly pursue it?

I think the DeepMind safety team, at least, is pretty split across many different ways of looking at the problem. I think Jan Leike, for example, has done a lot of work on reward modeling, and this sort of fits in with "how do we get our AI systems to be focused on the right task, the right goal." Whereas Vika has done a lot of work on side effects or impact measures. I don't know if Vika would say this, but the way I interpret it is: how do we impose a constraint upon the AI system such that it never does anything catastrophic? It's not trying to get the AI system to do what we want, just to not do what we don't want, or what we think would be catastrophically bad.

OpenAI safety also seems to be about: okay, how do we get deep reinforcement learning to do good things, to do what we want, to be a bit more robust? Then there's also the iterated amplification / debate / factored cognition area of research, which is more along the lines of: can we write down a system that could plausibly lead to us building an aligned AGI, or an aligned powerful AI system?

FHI, no coherent direction; that's all of FHI. Eric Drexler is also trying to understand how AI will develop in the future. It's somewhat different from what MIRI's doing, but it's the same general theme of trying to figure out what is going on. He just recently published a long technical report on comprehensive AI services, which is a general worldview for predicting what AI development will look like in the future. If we believed that that was, in fact, the way AI would happen, we would probably change what we work on from the technical safety point of view.

And Owain Evans does a lot of stuff, so maybe I’m just not going to try to categorize him. And then Stuart Armstrong works on this, “Okay, how do we get value learning to work such that we actually infer a utility function that we would be happy for an AGI system to optimize, or a super-intelligent AI system to optimize?”

And then Ought works on factored cognition, so it's very adjacent to the iterated amplification and debate research agendas. Then there are a few individual researchers scattered, for example, at Toronto, Montreal, AMU, and EPFL; maybe I won't get into all of them because, yeah, that's a lot, but we can delve into that later.

Lucas: Maybe a more helpful approach, then, would be if you could start by demystifying some of the MIRI stuff a little bit, which may seem most unusual.

Rohin: I guess, strategically, the point would be that you’re trying to build this AI system that’s going to be, hopefully, at some point in the future vastly more intelligent than humans, because we want them to help us colonize the universe or something like that, and lead to lots and lots of technological progress, etc., etc.

But this, basically, means that humans will not be in control unless we very, very specifically arrange it such that we are in control; we have to thread the needle perfectly in order to get this to work out. In the same way that, by default, you would expect that the most intelligent creatures, or beings, are the ones that are going to decide what happens. And so we really need to make sure, and it's also probably hard to ensure, that these vastly more intelligent beings are actually doing what we want.

Given that, it seems like what we want is a good theory that allows us to understand and predict what these AI systems are going to do. Maybe not in the fine nitty, gritty details, because if we could predict what they would do, then we could do it ourselves and be just as intelligent as they are. But, at least, in broad strokes what sorts of universes are they going to create?

But given that they can apply so much more intelligence that we can, we need our guarantees to be really, really strong; like almost proof level. Maybe actual proofs are a little too much to expect, but we want to get as close to it as possible. Now, if we want to do something like that, we need a theory of intelligence; we can’t just sort of do a bunch of experiments, look at the results, and then try to extrapolate from there. Extrapolation does not give you the level of confidence that we would need for a problem this difficult.

And so rather, they would like to instead understand intelligence deeply, deconfuse themselves about it. Once you understand how intelligence works at a theoretical level, then you can start applying that theory to actual AI systems and seeing how they approximate the theory, or make predictions about what different AI systems will do. And, hopefully, then we could say, “Yeah, this system does look like it’s going to be very powerful as approximating this particular idea, this particular part of theory of intelligence. And we can see that with this particular theory of intelligence, we can align it with humans somehow, and you’d expect that this was going to work out.” Something like that.

Now, that sounded kind of dumb even to me as I was saying it, but that's because we don't have the theory yet; it's very fun to speculate how you would use the theory before you actually have the theory. So that's the reason they're doing this. The actual thing that they're focusing on is centered around problems of embedded agency. And I should say this is one of their, I think, two main strands of research; the other strand of research, I do not know anything about, because they have not published anything about it.

But one of their strands of research is about embedded agency. And here the main point is that in the real world, any agent, any AI system, or a human is a part of their environment. They are smaller than the environment and the distinction between agent and environment is not crisp. Maybe I think of my body as being part of me but, I don’t know, to some extent, my laptop is also an extension of my agency; there’s a lot of stuff I can do with it.

Or, on the other hand, you could think maybe my arms and limbs aren’t actually a part of me, I could maybe get myself uploaded at some point in the future, and then I will no longer have arms or legs; but in some sense I am still me, I’m still an agent. So, this distinction is not actually crisp, and we always pretend that it is in AI, so far. And it turns out that once you stop making this crisp distinction and start allowing the boundary to be fuzzy, there are a lot of weird, interesting problems that show up and we don’t know how to deal with any of them, even in theory, so that’s what they focused on.

Lucas: And can you unpack, given that AI researchers control the input/output channels for AI systems, why is it that there is this fuzziness? It seems like you could abstract away the fuzziness, given that there are these sort of rigid and selected I/O channels.

Rohin: Yeah, I agree that seems like the right thing for today’s AI systems; but I don’t know. If I think about, “Okay, this AGI is a generally intelligent AI system.” I kind of expect it to recognize that when we feed it inputs which, let’s say, we’re imagining a money maximizing AI system that’s taking in inputs like stock prices, and it outputs which stocks to buy. And maybe it can also read the news that lets it get newspaper articles in order to make better decisions about which stocks to buy.

At some point, I expect this AI system to read about AI and humans, and realize that, hey, it must be an AI system, it must be getting inputs and outputs. Its reward function must be to make this particular number in a bank account be as high as possible and then once it realizes this, there’s this part of the world, which is this number in the bank account, or it could be this particular value, this particular memory block in its own CPU, and its goal is now make that number as high as possible.

In some sense, it’s now modifying itself, especially if you’re thinking of the memory block inside the CPU. If it goes and edits that and sets that to a million, a billion, the highest number possible in that memory block, then it seems like it has, in some sense, done some self editing; it’s changed the agent part of it. It could also go and be like, “Okay actually what I care about is this particular award function box is supposed to output as high a number as possible. So what if I go and change my input channels such that it feeds me things that caused me to believe that I’ve made tons and tons of profit?” So this is a delusion backs consideration.

While it is true that I don’t see a clear, concrete way that an AI system ends up doing this, it does feel like an intelligent system should be capable of this sort of reasoning, even if it initially had these sort of fixed inputs and outputs. The idea here is that its outputs can be used to affect the inputs or future outputs.

Lucas: Right, so I think that that point is the clearest summation of this: it can affect its own inputs and outputs later. If you take human beings, who are, by definition, human level intelligences, and think of us in a classic computer science sense, you'd say we strictly have five input channels: hearing, seeing, touch, smell, etc.

Human beings have a fixed number of input/output channels but, obviously, human beings are capable of self modifying on those. And our agency is sort of squishy and dynamic in ways that would be very unpredictable, and I think that that unpredictability and the sort of almost seeming ephemerality of being an agent seems to be the crux of a lot of the problem.

Rohin: I agree that that’s a good intuition pump, I’m not sure that I agree it’s the crux. The crux, to me, it feels more like you specify some sort of behavior that you want which, in this case, was make a lot of money or make this number in a bank account go higher, or make this memory cell go as high as possible.

And when you were thinking about the specification, you assumed that the inputs and outputs fell within some strict parameters, like the inputs are always going to be news articles that are real and produced by human journalists, as opposed to a fake news article that was created by the AI in order to convince the reward function that actually it’s made a lot of money. And then the problem is that since the AI’s outputs can affect the inputs, the AI could cause the inputs to go outside of the space of possibilities that you imagine the inputs could be in. And this then allows the AI to game the specification that you had for it.

Lucas: Right. So, all the parts which constitute some AI system are all, potentially, modified by other parts. And so you have something that is fundamentally and completely dynamic, which you’re trying to make predictions about, but whose future structure is potentially very different and hard to predict based off of the current structure?

Rohin: Yeah, basically.

Lucas: And that in order to get past this, we must, again, tunnel down on these decision theoretic and rational agency type issues at the bottom of intelligence, to sort of have a more fundamental theory which can be applied to these highly dynamic and difficult to understand situations?

Rohin: Yeah, I think the MIRI perspective is something like that. And in particular, it would be like trying to find a theory that allows you to put in something that stays stable even while the system, itself, is very dynamic.

Lucas: Right, even while your system, whose parts are all completely dynamic and able to be changed by other parts, how do you maintain a degree of alignment amongst that?

Rohin: One answer to this is to give the AI a utility function. If there is a utility function that it's explicitly trying to maximize, then it probably has an incentive to protect that utility function, because if the utility function gets changed, it's not going to maximize that utility function anymore; it'll maximize something else, which will lead to worse behavior by the lights of the original utility function. That's a thing you could hope to do with a better theory of intelligence: how do you create a utility function in an AI system that stays stable, even as everything else is dynamically changing?

Lucas: Right, and without even getting into the issues of implementing one single stable utility function.

Rohin: Well, I think they’re looking into those issues. So, for example, Vingean Reflection is a problem that is entirely about how you create better, more improved version of yourself without having any value drift, or a change to the utility function.

Lucas: Is your utility function not self-modifying?

Rohin: So in theory, it could be. The hope would be that we could design an AI system that does not self-modify its utility function under almost all circumstances. Because if you change your utility function, then you're going to start maximizing that new utility function which, by the original utility function's evaluation, is worse. If I told you, "Lucas, you have got to go fetch coffee," that's the only thing in life you're concerned about; you must take whatever actions are necessary in order to get the coffee.

And then someone goes like, "Hey Lucas, I'm going to change your utility function so that you want to fetch tea instead." And then all of your decision making is going to be in service of getting tea. You would probably say, "No, don't do that, I want to fetch coffee right now. If you change my utility function to be 'fetch tea', then I'm going to fetch tea, which is bad because I want to fetch coffee." And so, hopefully, you don't change your utility function, because of this effect.
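
The coffee-and-tea argument can be written out as a tiny toy calculation: an expected utility maximizer scores the option of accepting a new utility function using its current utility function, so the modification always looks bad. All names and numbers here are illustrative:

```python
# Why a utility maximizer resists modification of its own utility:
# futures are evaluated by the CURRENT utility function.

def expected_value(utility, future_goal):
    # The agent will competently pursue whatever goal it ends up with.
    return 10.0 if future_goal == utility else 0.0

current_utility = "fetch coffee"

def value_of(option):
    future_goal = "fetch tea" if option == "accept modification" else "fetch coffee"
    return expected_value(current_utility, future_goal)

print(value_of("refuse modification"))   # 10.0
print(value_of("accept modification"))   # 0.0 -> so it refuses
```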

Lucas: Right. But isn’t this where corrigibility comes in, and where we admit that as we sort of understand more about the world and our own values, we want to be able to update utility functions?

Rohin: Yeah, so that is a different perspective; I’m not trying to describe that perspective right now. It’s a perspective for how you could get something stable in an AI system. And I associate it most with Eliezer, though I’m not actually sure if he holds this opinion.

Lucas: Okay, so I think this was very helpful for the MIRI case. So why don’t we go ahead and zoom in, I think, a bit on CHAI, which is the Center For Human-Compatible AI.

Rohin: So I think rather than talking about CHAI, I'm going to talk about the general field of trying to get AI systems to do what we want; a lot of people at CHAI work on that, but not everyone. And also a lot of people outside of CHAI work on that, because that seems to be a more useful carving of the field. So there's this broad argument for AI safety which is: we're going to have very intelligent things and, based on the orthogonality thesis, we can't really say anything about their goals. So the really important thing is to make sure that the intelligence is pointed at the right goals, that it's pointed at doing what we actually want.

And so then the natural approach is: how do we get our AI systems to infer what we want them to do, and then actually pursue that? And I think, in some sense, it's one of the most obvious approaches to AI safety. This is a clear enough problem, even with narrow current systems, that there are plenty of people outside of AI safety working on it as well. So this incorporates things like inverse reinforcement learning, preference learning, and reward modeling; the CIRL (cooperative IRL) paper also fits into all of this. So yeah, I can go into those in more depth.

Lucas: Why don’t you start off by talking about the people who exist within the field of AI safety, give sort of a brief characterization of what’s going on outside of the field, but primarily focusing on those within the field. How this approach, in practice, I think generally is, say, different from MIRI to start off with, because we have a clear picture of them painted right next to what we’re delving into now.

Rohin: So I think the difference from MIRI is that this is more targeted directly at the problem right now, in that you're actually trying to figure out how you build an AI system that does what you want. Now, admittedly, most of the techniques that people have come up with are not likely to scale up to superintelligent AI; they're not meant to, and no one claims that they're going to scale up to superintelligent AI. They're more like some incremental progress on figuring out how to get AI systems to do what we want and, hopefully, with enough incremental progress, we'll get to a point where we can go, "Yes, this is what we need to do."

Probably the most well known person here would be Dylan Hadfield-Menell, who you had on your podcast. And so he talked about CIRL and associated things quite a bit there, there’s not really that much I would say in addition to it. Maybe a quick summary of Dylan’s position is something like, “Instead of having AI systems that are optimizing for their own goals, we need to have AI systems that are optimizing for our goals, and try to infer our goals in order to do that.”

So rather than having an AI system that is individually rational with respect to its own goals, you instead want to have a human-AI system such that the entire system is rationally optimizing for the human's goals. This is sort of the point made by CIRL, where you have an AI system and a human, they're playing this two player game, the human is the only one who knows the reward function, and the robot is uncertain about what the reward function is and has to learn it by observing what the human does.

And so, now you see that the robot does not have a utility function that it is trying to optimize; instead, it is learning about a utility function that the human has, and then helping the human optimize that reward function. So, in summary: try to build human-AI systems that are group rational, as opposed to an AI system that is individually rational. So that's Dylan's view. Then there's Jan Leike at DeepMind, and a few people at OpenAI.
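
The core inference step in this kind of setup can be sketched as a small Bayesian update: the robot watches the human act, assumes the human is approximately rational (a softmax model here), and shifts its belief over candidate reward functions. This is a toy illustration rather than the actual CIRL formulation, and the rewards and actions are invented:

```python
import numpy as np

# The robot's belief over which reward function the human has, updated
# from an observed human action under a Boltzmann-rational human model.

rewards = {"coffee": np.array([1.0, 0.0]),   # reward of actions [A, B]
           "tea":    np.array([0.0, 1.0])}
belief = {"coffee": 0.5, "tea": 0.5}

def update(belief, observed_action, beta=2.0):
    posterior = {}
    for theta, r in rewards.items():
        # P(action | theta): softmax over action rewards.
        likelihood = np.exp(beta * r)[observed_action] / np.exp(beta * r).sum()
        posterior[theta] = belief[theta] * likelihood
    z = sum(posterior.values())
    return {k: v / z for k, v in posterior.items()}

belief = update(belief, observed_action=0)   # human chose action A
print(belief)   # belief shifts toward "coffee"; the robot helps with that
```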

Lucas: Before we pivot into OpenAI and DeepMind, just sort of focusing here on the CHAI end of things and this broad view, help me understand how you would characterize it. It's a view focused on present day issues in alignment, seeking to make incremental progress there. This view, you see as subsuming multiple organizations?

Rohin: Yes, I do.

Lucas: Okay. Is there a specific name you would, again, use to characterize this view?

Rohin: Oh, getting AI systems to do what we want. Let’s see, do I have a pithy name for this? Helpful AI systems or something.

Lucas: Right which, again, is focused on current day things, is seeking to make incremental progress, and which subsumes many different organizations?

Rohin: Yeah, that seems broadly true. I do think there are people who are doing more conceptual work, thinking about how this will scale to AGI and stuff like that; but it’s a minority of work in the space.

Lucas: Right. And so the question of how do we get AI systems to do what we want them to do, also includes these views of, say, Vingean Reflection or how we become idealized versions of ourselves, or how we build on value over time, right?

Rohin: Yeah. So, those are definitely questions that you would need to answer at some point. I'm not sure that you would need to answer Vingean reflection at some point. But you would definitely need to answer how you update, given that humans don't actually know what they want, over the long-term future; you need to be able to deal with that fact at some point. It's not really a focus of current research, but I agree that that is a thing this approach will have to deal with, at some point.

Lucas: Okay. So, moving on from you and Dylan to DeepMind and these other places that you view as this sort of approach also being practice there?

Rohin: Yeah, so while Dylan and I and others at CHAI have been focused on sort of conceptual advances (in toy environments, does this do the right thing? what are some sorts of data that we can learn from? does it work in these very simple environments with quite simple algorithms?), I would say that the OpenAI and DeepMind safety teams are more focused on trying to get this to work in complex environments, of the sort that are state-of-the-art, the most complex ones that we have.

Now I don’t mean DoTA and StarCraft, because running experiments with DoTAi and StarCraft is incredibly expensive, but can we get AI systems that do what we want for environments like Atari or MuJoCo? There’s some work on this happening at CHAI, there are pre-prints available online, but it hasn’t been published very widely yet. Most of the work, I would say, has been happening with an OpenAI/DeepMind collaboration, and most recently, there was a position paper from DeepMind on recursive reward modeling.

Right before that, there was also the paper on deep reinforcement learning from human preferences, which said: okay, if we allow humans to specify what they want by just comparing between different pieces of behavior from the AI system, can we train an AI system to do what the human wants? And then they built on that in order to create a system that could learn from demonstrations initially, using a kind of imitation learning, and then improve upon the demonstrations using comparisons, in the same way that deep RL from human preferences did.
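
The training step at the heart of deep RL from human preferences can be sketched in a few lines of PyTorch: fit a reward model so that the trajectory segment the human preferred gets higher total predicted reward, via a logistic (Bradley-Terry style) model over summed rewards. The network shape and data here are invented placeholders:

```python
import torch

# Reward model trained from human comparisons between trajectory segments.
reward_model = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.ReLU(),
                                   torch.nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(seg_preferred, seg_other):
    """Each segment is a (timesteps, obs_dim) tensor of observations."""
    r_pref = reward_model(seg_preferred).sum()
    r_other = reward_model(seg_other).sum()
    # P(preferred beats other) = sigmoid(r_pref - r_other); maximize it.
    return -torch.nn.functional.logsigmoid(r_pref - r_other)

seg_a, seg_b = torch.randn(20, 4), torch.randn(20, 4)
loss = preference_loss(seg_a, seg_b)   # human said: A is better than B
opt.zero_grad(); loss.backward(); opt.step()
```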

So one way that you can look at this research is that there's this field of human computer interaction, which is about … well, it's about many things. But one of the things that it's about is: how do you make the interface for human users intuitive and easy to use, such that you don't have user error or operator error? One comment from people that I liked is that most of the things that are classified as "user error" or "operator error" should not be classified as such; they should be classified as "interface errors," where you had such a confusing interface that, well, of course at some point some user was going to get it wrong.

And similarly, here, what we want is a particular behavior out of the AI, or at least a particular set of outcomes from the AI; maybe we don’t know exactly how to achieve those outcomes. And AI is about giving us the tools to create that behavior in automated systems. The current tool that we all use is the reward function, we write down the reward function and then we give it to an algorithm, and it produces behaviors and the outcomes that we want.

And reward functions, they’re just a pretty terrible user interface, they’re better than the previous interface which is writing a program explicitly, which humans cannot do it if the task is something like image classification or continuous control in MuJoCo; it’s an improvement upon that. But reward functions are still a pretty poor interface, because they’re implicitly saying that they encode perfect knowledge of the optimal behavior in all possible environments; which is clearly not a thing that humans can do.

I would say that this area is about moving on from reward functions, going to the next thing that makes the human's job even easier. And so we've got things like comparisons; we've got things like inverse reward design, where you specify a proxy reward function that only needs to work in the training environment; or you do something like inverse reinforcement learning, where you learn from demonstrations. So I think that's one nice way of looking at this field.
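
Inverse reward design can be illustrated with a toy Bayesian update: the designer's proxy reward is treated as evidence about the true reward, because the proxy was chosen for producing good behavior in the training environment. This heavily simplifies the actual paper (it drops the normalization over possible proxies, among other things), and all the numbers are invented:

```python
import numpy as np

# Candidate true reward weight vectors over two environment features.
candidates = {"true=w1": np.array([1.0, 0.0]),
              "true=w2": np.array([0.0, 1.0])}
# Feature counts of the proxy-optimal trajectory in the TRAINING environment.
train_features = np.array([1.0, 0.2])

def posterior(beta=5.0):
    post = {}
    for name, w in candidates.items():
        # Likelihood the designer wrote down the observed proxy if w is
        # true: high when the proxy's optimal training behavior scores
        # well under w.
        post[name] = np.exp(beta * w @ train_features)
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

print(posterior())   # w1 explains the proxy better in the training env
```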

Lucas: So do you have anything else you would like to add on here about how we present-day get AI systems to do what we want them to do, section of the field?

Rohin: Maybe I want to plug my value learning sequence, because it talks about this much more eloquently than I can on this podcast?

Lucas: Sure. Where can people find your value learning sequence?

Rohin: It’s on the Alignment Forum. You just go to the Alignment Forum, at the top there’s ‘Recommended Sequences’, there’s ‘Embedded Agency’, which is from MIRI, the sort of stuff we already talked about; so that’s also great sequence, I would recommend it. There’s iterated amplification, also great sequence we haven’t talked about it yet. And then there’s my value learning sequence, so you can see it on the front page of the Alignment Forum.

Lucas: Great. So we’ve characterized these, say, different parts of the AI alignment field. And probably just so far it’s been cut into this sort of MIRI view, and then this broad approach of trying to get present-day AI systems to do what we want them to do, and to make incremental progress there. Are there any other slices of the AI alignment field that you would like to bring to light?

Rohin: Yeah, I’ve got four or five more. There’s the interated amplification and debate side of things, which is how do we build using current technologies, but imagining that they were way better? How do we build and align AGI? So they’re trying to solve the entire problem, as opposed to making incremental progress and, simultaneously, hopefully thinking about, conceptually, how do we fit all of these pieces together?

There’s limiting the AGI system, which is more about how do we prevent AI systems from behaving catastrophically? It makes no guarantees about the AI systems doing what we want, it just prevents them from doing really, really bad things. Techniques in that section includes boxing and avoiding side effects. There’s the robustness view, which is about how do we make AI systems well behaved or robustly? I guess that’s pretty self explanatory.

There’s transparency or interpretability, which I wouldn’t say is a technique by itself, but seems to be broadly useful for almost all of the other avenues, it’s something we would want to add to other techniques in order to make those techniques more effective. There’s also, in the same frame as MIRI, can we even understand intelligence? Can we even forecast what’s going to happen with AI? And within that, there’s comprehensive AI services.

here’s also lots of efforts on forecasting, but comprehensive AI services actually makes claims about what technical AI safety should do. So I think that one actually does have a place in this podcast, whereas most of the forecasting things do not, obviously. They have some implications on the strategic picture, but they don’t have clear implications on technical safety research directions, as far as I can tell it right now.

Lucas: Alright, so, do you want to go ahead and start off with the first one on the list there, and then we'll move sequentially down?

Rohin: Yeah, so iterated amplification and debate. This is similar to the helpful AGI section in the sense that we are trying to build an AI system that does what we want. That's still the case here, but we're now trying to figure out, conceptually, how we can do this using things like reinforcement learning and supervised learning, but imagining that they're way better than they are right now, such that the resulting agent is going to be aligned with us and can reach arbitrary levels of intelligence; so in some sense, it's trying to solve the entire problem.

We want to come up with a scheme such that if we run that scheme, we get good outcomes; then we've solved almost all of the problem. It also differs in that the argument for why we can be successful is different. This field is aiming to get a property of corrigibility, which I like to summarize as trying to help the overseer. The agent might fail to help the overseer, or the human, or the user, because it's not very competent: maybe it makes a mistake and thinks that I like apples when actually I want oranges. But it was actually trying to help me; it actually thought I wanted apples.

So in corrigibility, you're trying to help the overseer, whereas, in the previous thing about helpful AGI, you're more getting an AI system that actually does what we want; there isn't this distinction between what you're trying to do versus what you actually do. So there's a slightly different property that you're trying to ensure; I think, on the strategic picture, that's the main difference.

The other difference is that these approaches are trying to make a single, unified, generally intelligent AI system, and so they will make assumptions like: given that we're trying to imagine something that's generally intelligent, it should be able to do X, Y, and Z. Whereas the 'let's try to get AI systems that do what you want' research agenda tends not to make those assumptions, and so it's more applicable to current systems or narrow systems, where you can't assume that you have general intelligence.

For example, a claim that Paul Christiano often talks about is that, "If your AI agent is generally intelligent and a little bit corrigible, it will probably easily be able to infer that its overseer, or the user, would like to remain in control of any resources that they have, would like to be better informed about the situation, would prefer that the agent does not lie to them, etc., etc." This is definitely not something that current-day AI systems can do unless you really engineer them to, so it presumes some level of generality, which we do not currently have.
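To illustrate the basic shape of the amplification idea mentioned above, here is a minimal sketch of one recursive amplification step. The decompose/combine interface and the depth cutoff are assumptions made for illustration; this is not Paul Christiano's actual scheme:

```python
def amplify(question, decompose, combine, model, depth=2):
    """Sketch of one amplification step, HCH-style: a human (played here by
    the `decompose` and `combine` callables) splits a hard question into
    subquestions, the current model answers them, and the human assembles
    the final answer from the sub-answers."""
    if depth == 0:
        return model(question)           # base case: just ask the model
    subquestions = decompose(question)   # human breaks the question apart
    subanswers = [amplify(q, decompose, combine, model, depth - 1)
                  for q in subquestions]
    return combine(question, subanswers)  # human combines the results
```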

So the next thing I said was limited AGI. Here the idea is, there are not very many policies or AI systems that will do what we want; what we want is a pretty narrow space in the space of all possible behaviors. Actually selecting one of the behaviors out of that space is quite difficult and requires a lot of information in order to narrow in on that piece of behavior. But if all you’re trying to do is avoid the catastrophic behaviors, then there are lots and lots of policies that successfully do that. And so it might be easier to find one of those policies; a policy that doesn’t ever kill all humans.

Lucas: Right, and one might hold this view and not think it sufficient for AI alignment, but still see it as low-hanging fruit to be picked, because the space of non-catastrophic outcomes is so much larger than the space of extremely specific futures that human beings support.

Rohin: Yeah, exactly. And the success story here is, basically, that we develop this way of preventing catastrophic behaviors, all of our AI systems are built with that limiting system in place, and then technological progress continues as usual. It's maybe not as fast as it would have been if we had an aligned AGI doing all of this for us, but hopefully it would still be somewhat fast, and hopefully enabled a bit by AI systems. Eventually, we either make it to the future without ever building an AI system that lacks such a system in place, or we use this to do a bunch more AI research until we solve the full alignment problem, and then we can build, with high confidence that it'll go well, an actual properly aligned superintelligence that is helping us without any of these limiting systems in place.

I think, from a strategic picture, those are basically the important parts about limited AGI. There are two subsections within limits: those that try to change what the AI is optimizing for, which would be something like impact measures, versus limits on the input/output channels of the AI system, which would be something like AI boxing.
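As a toy illustration of the impact-measure flavor of limits, here is a sketch of an impact-penalized objective. The deviation-from-a-no-op-baseline penalty below is only a crude stand-in for real proposals like relative reachability or attainable utility preservation, and every name and number is an assumption:

```python
def shaped_reward(task_reward, state, baseline_state, beta=1.0):
    """Illustrative impact-penalized objective: reward the task, but
    penalize deviation from a baseline state (e.g., the state that would
    have resulted from the agent doing nothing)."""
    impact = sum(abs(s - b) for s, b in zip(state, baseline_state))
    return task_reward - beta * impact

# A high-reward but high-impact action can score worse than a modest one.
print(shaped_reward(1.0, state=[5, 0], baseline_state=[0, 0]))  # 1.0 - 5 = -4.0
print(shaped_reward(0.5, state=[1, 0], baseline_state=[0, 0]))  # 0.5 - 1 = -0.5
```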

So, with robustness, I sort of think of it as mostly not going to give us safety by itself, probably, though there are some scenarios in which it could. It's more meant to harden whichever other approach we use. Maybe if we have an AI system that is trying to do what we want, to go back to the helpful AGI setting, maybe it does that 99.9 percent of the time. But we're using this AI to make millions of decisions, which means it's going to not do what we want 1,000 times. That seems like way too many times for comfort, because if it's applying its intelligence to the wrong goal in those 1,000 times, you could get some pretty bad outcomes.

This is a super heuristic and fluffy argument, and there are lots of problems with it, but I think it sets up the general reason that we would want robustness. So with robustness techniques, you're basically trying to get some nice worst-case guarantees that say, "Yeah, the AI system is never going to screw up super, super bad." And this is helpful when you have an AI system that's going to make many, many, many decisions, and we want to make sure that none of those decisions are going to be catastrophic.

And so some techniques in here include verification, adversarial training, and other adversarial ML techniques like Byzantine fault tolerance or defenses against data poisoning, stuff like that. Interpretability can also be helpful for robustness, if you've got a strong overseer who can use interpretability to give good feedback to your AI system. But yeah, the overall goal is to take something that avoids failure 99 percent of the time and get it to avoid failure 100 percent of the time, or to check whether it ever fails, so that you don't have this very rare but very bad outcome.
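As one concrete instance of adversarial training (the simplest version, using the Fast Gradient Sign Method; the function names are illustrative), here is a minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: nudge x in the direction that most
    increases the loss, within an L-infinity ball of radius eps.
    (A real pipeline would also clamp back to the valid input range.)"""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.1):
    """One 'hardening' step: train on adversarially perturbed inputs."""
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```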

Lucas: And so would you see this section as being within the context of any others or being sort of at a higher level of abstraction?

Rohin: I would say that it applies to any of the others. Well, okay, not the MIRI embedded agency stuff, because we don't really have a story for how that ends up helping with AI safety; it could apply to however that caches out in the future, but we don't really know right now. With limited AGI, maybe you have this theoretical model: if you apply this sort of penalty, this sort of impact measure, then you're never going to have any catastrophic outcomes.

But, of course, in practice, we train our AI systems to optimize that penalty and get some sort of weird black-box thing out, and we're not entirely sure it's respecting the penalty. Then you could use something like verification or transparency to make sure that it is actually behaving the way we would predict, based on our analysis of what limits we need to put on the AI system.

Similarly, if you build AI systems that are doing what we want, maybe you want to use adversarial training to see if you can find any situations in which the AI system is doing something weird, something we wouldn't classify as what we want. With iterated amplification or debate, maybe we want to verify that the corrigibility property holds all the time. It's unclear how you would use verification for that, because it seems like a particularly hard property to formalize, but you could still do things like adversarial training or transparency.

We might have these theoretical arguments for why our systems will work, but once we turn them into actual real systems, which will probably use neural nets and other messy stuff like that, are we sure that all of our guarantees survived the translation from theory to practice? Unclear; we should probably use some robustness techniques to check that.

Interpretability, I believe, was next. It's sort of similar in that it's broadly useful for everything else. If you want to figure out whether an AI system is doing what you want, it would be really helpful to be able to look into the agent and see, "Oh, it chose to buy apples because it had seen me eat apples in the past," versus, "It chose to buy apples because this company made it buy the apples, so that it would make more profit."

If we could see those two cases, if we could actually see into the decision-making process, it becomes a lot easier to tell whether or not the AI system is doing what we want, or whether or not the AI system is corrigible, or whether or not the AI system is properly … well, maybe it's not as obvious for impact measures, but I would expect it to be useful there as well, even if I don't have a story off the top of my head.

Similarly with robustness, if you're doing something like adversarial training, it sure would help if your adversary was able to look into the inner workings of the agent and be like, "Ah, I see that this agent tends to underweight this particular class of risky outcomes, so why don't I search within that class of situations for one where it's going to take a big risk that it shouldn't have taken?" It just makes all of the other problems a lot easier to solve.

Lucas: And so how is progress made on interpretability?

Rohin: Right now I think most of the progress is in image classifiers. I've seen some work on interpretability for deep RL as well. Honestly, probably most of the research is happening with classification systems, primarily image classifiers, but others as well. And then I also see the deep RL explanation systems, because I read a lot of deep RL research.

But it’s motivated a lot, there are real problems with current AI systems, and interpretability helps you to diagnose and fix those, as well. For example, the problems of bias in classifiers, one thing that I remember from Deep Dream is you can ask Deep Dream to visualize barbells. And you always see these sort of muscular arms that are attached to the barbells because, in the training set, barbells were always being picked up by muscular people. So, that’s a way that you can tell that your classifier is not really learning the concepts that you wanted it to do.

In the bias case, maybe your classifier always classifies anyone sitting at a computer as a man, because of bias in the data set. And using interpretability techniques, you could see that, okay, when it looks at this picture, the AI system is looking primarily at the pixels that represent the computer, as opposed to the pixels that represent the human, and making its decision to label this person as a man based on that. And you're like, no, that's clearly the wrong thing to do; the classifier should be paying attention to the human, not to the laptop.

So I think a lot of interpretability research right now is: you take a particular short-term problem and figure out how you can make that problem easier to solve. Though a lot of it is also asking: what would be the best way to understand what our model is doing? I think a lot of the work that Chris Olah is doing, for example, is in this vein, and then, as you do this exploration, you find some sort of bias in the classifiers that you're studying.
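A simple version of the check described above is a gradient saliency map, which scores each pixel by how strongly it influences the predicted class; if the salient pixels sit on the laptop rather than the person, that is evidence of the bias. A minimal PyTorch sketch, assuming a model that maps a batch of images to class logits:

```python
import torch

def saliency_map(model, image, target_class):
    """Gradient saliency: per-pixel influence on the target class score.

    image: a (C, H, W) tensor; returns an (H, W) map of importances."""
    image = image.clone().detach().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values  # max over color channels
```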

So, Comprehensive AI Services is an attempt to predict what the future of AI development will look like, and the hope is that, by doing this, we can figure out what sort of technical safety things we will need to do, or, strategically, what sort of things we should push for in the AI research community in order to make those systems safer.

There’s a big difference between, we are going to build a single unified AGI agent and it’s going to be generally intelligent to optimize the world according to a utility function versus we are going to build a bunch of disparate, separate, narrow AI systems that are going to interact with each other quite a lot. And because of that, they will be able to do a wide variety of tasks, none of them are going to look particularly like expected utility maximizers. And the safety research you want to do is different in those two different worlds. And CAIS is basically saying “We’re in the second of those worlds, not the first one.”

Lucas: Can you go ahead and tell us about ambitious value learning?

Rohin: Yeah, so ambitious value learning is also an approach that tries to solve the entire problem, in some sense: how do we make an aligned AGI? The idea is to look at not just human behavior, but also human brains and the algorithm that they implement, and use that to infer an adequate utility function, one such that we would be okay with the behavior that results from optimizing it.

You infer this utility function and plug it into an expected utility maximizer. Now, of course, even once we have the utility function, we still have to solve the problem of how to actually build a system that maximizes it, which is not a solved problem yet. But it does seem to capture the main difficulties, if you could actually solve the problem. And so that's an approach I associate most with Stuart Armstrong.

Lucas: Alright, and so you were saying earlier, in terms of your own view, it’s sort of an amalgamation of different credences that you have in the potential efficacy of all these different approaches. So, given all of these and all of their broad missions, and interests, and assumptions that they’re willing to make, what are you most hopeful about? What are you excited about? How do you, sort of, assign your credence and time here?

Rohin: I think I’m most excited about the concept of corrigibility. That seems like the right thing to aim for, it seems like it’s a thing we can achieve, it seems like if we achieve it, we’re probably okay, nothing’s going to go horribly wrong and probably will go very well. I am less confident on which approach to corrigibility I am most excited about. Iterated amplification and debate seem like if we were to implement them, they will probably lead to incorrigible behavior. But I am worried that either of those will be … Either we won’t actually be able to build generally intelligent agents, in which case both of those approaches don’t really work. Or another worry that I have is that those approaches might be too expensive to actually do in that other systems are just so much more computationally efficient that we just use those instead.

Due to economic pressures, Paul does not seem to be worried by either of these things. He’s definitely aware of both these issues, in fact, he was the one I think who listed computational efficiency as a desideratum, and he still is optimistic about them. So, I would not put a huge amount of credence in this view of mine.

If I were to say what I was excited about for corrigibility instead of that, it would be something like: take the research that we're currently doing on how to get current AI systems to work, which is often called 'narrow value learning'. It seems plausible that this research, extended into the future, will give us some method of creating an AI system that's implicitly learning our narrow values, and is corrigible as a result of that, even if it is not generally intelligent.

This is sort of a very hand-wavy, speculative intuition, certainly not as concrete as the hope that we have with iterated amplification, but I'm somewhat optimistic about it. I'm less optimistic about limiting AI systems: even if you succeed in finding a nice, simple rule that eliminates all catastrophic behaviors, which plausibly you could do, it seems hard to find one that both does that and also lets you do all of the things that you do want to do.

If you’re talking about impact metrics, for example, if you require AI to be a low impact, I expect that that would prevent you from doing many things that we actually want to do, because many things that we want to do are actually quite high impact. Now, Alex Turner disagrees with me on this, and he developed attainable utility preservation. He is explicitly working on this problem and disagree with me, so again I don’t know how much credence to put in this.

I don’t know if Vika agrees with me on this or not, she also might disagree with me and she is also directly working with this problem. So, yeah, seems hard to put a limit that also lets us do and things that we want. And in that case, it seems like due to economic pressures, we’d end up doing the things that don’t limit our AI systems from doing what they want.

I want to keep emphasizing my extreme uncertainty over all of this, given that other people disagree with me, but that's my current opinion. Similarly with boxing: it seems like it's going to just make it very hard to actually use the AI system. Robustness and interpretability seem very broadly useful, and I'm supportive of most research on interpretability, maybe with an eye towards long-term concerns, just because it seems to make every other approach to AI safety a lot more feasible and easier to solve.

I don’t think it’s a solution by itself, but given that it seems to improve almost every story I have for making an aligned AGI, seems like it’s very much worth getting a better understanding of it. Robustness is an interesting one, it’s not clear to me, if it is actually necessary. I kind of want to just voice lots of uncertainty about robustness and leave it at that. It’s certainly good to do in that it helps us be more confident in our AI systems, but maybe everything would be okay even if we just didn’t do anything. I don’t know, I feel like I would have to think a lot more about this and also see the techniques that we actually used to build AGI in order to have a better opinion on that.

Lucas: Could you give a few examples of where your intuitions are coming from here, such that you don't see robustness as an essential part of AI alignment?

Rohin: Well, one major intuition: if you look at humans, there are at least some humans where I'm like, "Okay, I could make this human a lot smarter, a lot faster, have them think for many, many years, and I still expect that they will be robust and not lead to some catastrophic outcome. They may not do exactly what I would have done, because they're doing what they want, but they're probably going to do something reasonable; they're not going to do something crazy or ridiculous."

I feel like some humans, the sufficiently risk-averse and uncertain ones, seem to be reasonably robust. And I think that if you know you're planning over a very, very, very long time horizon, so imagine you know you're planning over billions of years, then the rational response is, "I really better make sure not to screw up right now; since there is just so much reward in the future, I really need to make sure that I can get it." And so you get very strong pressures for preserving option value and not doing anything super crazy. So I think you could, plausibly, just get reasonable outcomes from those effects. But again, these are not well-thought-out views.

Lucas: All right, and so I just want to go ahead and guide us back to your general views, again, on the approaches. Is there anything that you'd like to add there on the approaches?

Rohin: I think I didn’t talk about CAIS yet. I guess my general view of CAIS, I broadly agree with it, that this does seem to be the most likely development path, meaning that it’s more likely than any other specific development path, but not more likely to have any other development path.

So I broadly agree with the worldview presented; I'm still trying to figure out what implications it has for technical safety research. I don't agree with all of it. In particular, I think that you are likely to get AGI agents at some point, probably after the CAIS soup of services happens, which, I think, Drexler disagrees with me on. So put a bunch of uncertainty on that, but I broadly agree with the worldview that CAIS is proposing.

Lucas: In terms of this disagreement between you and Eric Drexler, you're imagining agenty AGI or superintelligence which comes after the CAIS soup: do you see that as an inevitable byproduct of CAIS, or do you see it as a choice that humanity will make? And is Eric pushing the view that the agenty stuff doesn't necessarily come later, that it's a choice human beings would have to make?

Rohin: I do think it’s more like saying that this will be a choice that humans will make at some point. I’m sure that Eric, to some extent, is saying, “Yeah, just don’t do that.” But I think Eric and I do, in fact, have a disagreement on how much more performance you can get from an AGI agent, than a CAIS super of services. My argument is something like there is efficiency to be gained from going to an AGI agent, and Eric’s position as best I understand it, is that there is actually just not that much economic incentive to go to an AGI agent.

Lucas: What are your intuition pumps for why you think that you will gain a lot of computational efficiency from creating sort of an AGI agent? We don’t have to go super deep, but I guess a terse summary or something?

Rohin: Sure, I guess the main intuition pump is that in all of the past cases of AI systems, in speech recognition, in deep reinforcement learning, in image classification, we had hand-built systems that separated the task out into a few different modules that interacted with each other in a vaguely CAIS-like way. And then, at some point, we got enough compute and large enough data sets that we just threw deep learning at it, and deep learning blew those approaches out of the water.

So there’s the argument from empirical experience, and there’s also the argument of if you try to modularize your systems yourself, you can’t really optimize the communication between them, you’re less integrated and you can’t make decisions based on global information, you have to make it based off of local information. And so the decisions tend to be a little bit worse. This could be taken as an explanation for the empirical observation that I made that we can already make; so that’s another intuition pump there.

Eric’s response would probably be something like, “Sure, this seems true for these narrow tasks, for narrow tasks.” You can get a lot of efficiency gains by integrating everything together and throwing deep learning and [inaudible 00:54:10] training at all of it. But for a sufficiently high level tasks, there’s not really that much to be gained by doing global information instead of local information, so you don’t actually lose much by having these separate systems, and you do get a lot of computational deficiency in generalization bonuses by modularizing. He had a good example of this that I’m not replicating and I don’t want to make my own example, because it’s not going to be as convincing; but that’s his current argument.

And then my counter-argument is that that's because humans have small brains: given the size of our brains, the limits of our data, and the limits of our compute, we are forced to use modularity and systematization to break tasks apart into modular chunks that we can then do individually. If you are running a corporation, you need each person to specialize in their own task without thinking about all the other tasks, because we just do not have the ability to optimize for everything all together; we have small brains, relatively speaking, or limited brains, is what I should say.

But this is not a limit that AI systems will have. An AI system with vastly more compute than the human brain and vastly more data will, in fact, just be able to optimize all of this with global information and get better results. So that's one thread of the argument, taken down two or three levels of arguments and counter-arguments; there are other threads of that debate as well.

Lucas: I think that that serves a purpose for illustrating that here. So are there any other approaches here that you’d like to cover, or is that it?

Rohin: I didn’t talk about factored cognition very much. But I think it’s worth highlighting separately from iterated amplification in that it’s testing an empirical hypothesis of can humans decompose tasks into chunks of some small amount of time? And can we do arbitrarily complex tasks using these humans? I am particularly excited about this sort of work that’s trying to figure out what humans are capable of doing and what supervision they can give to AI systems.

Mostly because, going back to a thing I said way back in the beginning, what we're aiming for is for the human-AI system to be collectively rational, as opposed to the AI system being individually rational. Part of the human-AI system is the human: you want to know what the human can do, what sort of policies they can implement, what sort of feedback they can give to the AI system. And something like factored cognition is testing a particular aspect of that; I think that seems great, and we need more of it.

Lucas: Right. I think this seems to be the emerging view of where social scientists are needed in AI alignment: again, as you said, to understand what human beings are capable of in terms of supervision, and to analyze the human component of the AI alignment problem, since it requires us to be collectively rational with AI systems.

Rohin: Yeah, that seems right. I expect more writing on this in the future.

Lucas: All right, so there’s just a ton of approaches here to AI alignment, and our heroic listeners have a lot to take in here. In terms of getting more information, generally, about these approaches or if people are still interested in delving into all these different views that people take at the problem and methodologies of working on it, what would you suggest that interested persons look into or read into?

Rohin: I cannot give you an overview of everything, because that does not exist; to the extent that it exists, it's either this podcast or the talk that I did at Beneficial AGI. I can suggest resources for individual items. For embedded agency, there's the Embedded Agency sequence on the Alignment Forum; far and away the best thing to read for that.

For CAIS, Comprehensive AI Services, there was a 200-plus-page tech report published by Eric Drexler at the beginning of this month; if you're interested, you should go read the entire thing, it is quite good. But I also wrote a summary of it on the Alignment Forum, which is much more readable, in the sense that it's shorter. And then there are a lot of comments on there that analyze it a bit more.

There’s also another summary written by Richard Ngo, also on the Alignment Forum. Maybe it’s only on Lesswrong, I forget; it’s probably on the Alignment Forum. But that’s a different take on comprehensive AI services, so I’d recommend reading that too.

For limited AGI, I have not really been keeping up with the literature on boxing, so I don’t have a favorite to recommend. I know that a couple have been written by, I believe, Jim Babcock and Roman Yampolskiy.

For impact measures, you want to read Vika’s paper on relative reachability. There’s also a blog post about it if you don’t want to read the paper. And Alex Turner’s blog posts on attainable utility preservation, I think it’s called ‘Towards A New Impact Measure’, and this is on the Alignment Forum.

For robustness, I would read Paul Christiano’s post called ‘Techniques For Optimizing Worst Case Performance’. This is definitely specific to how robustness will help under Paul’s conception of the problem and, in particular, his thinking of robustness in the setting where you have a very strong overseer for your AI system. But I don’t know of any other papers or blog post that’s talking about robustness, generally.

For AI systems that do what we want, there’s my value learning sequence that I mentioned before on the Alignment Forum. There’s CIRL or Cooperative Inverse Reinforcement Learning which is a paper by Dylan and others. There’s Deep Reinforcement Learning From Human Preferences and Recursive Reward Modeling, these are both papers that are particular instances of work in this field. I also want to recommend Inverse Reward Design, because I really like that paper; so that’s also a paper by Dylan, and others.

For corrigibility and iterated amplification, there's the iterated amplification sequence on the Alignment Forum, or half of what Paul Christiano has written. If you don't want to read an entire sequence of blog posts, then I think Clarifying AI Alignment is probably the post I would recommend. It's one of the posts in the sequence, and it talks about this distinction between creating an AI system that is trying to do what you want, as opposed to actually doing what you want, and why we might want to aim for only the first one.

For iterated amplification itself, that technique, there is a paper that I believe is called something like Supervising Strong Learners by Amplifying Weak Experts, which is a good thing to read, and there's also a corresponding OpenAI blog post whose name I forget; I think if you search 'iterated amplification OpenAI blog' you'll find it.

And then for debate, there’s AI Safety via Debate, which is a paper, there’s also a corresponding OpenAI blog post. For factory cognition, there’s a post called Factored Cognition, on the Alignment Forum; again, in the iterated amplification sequence.

For interpretability, there isn’t really anything talking about interpretability, from the strategic point of view of why we want it. I guess that same post I recommend before of techniques for optimizing worst case performance talks about it a little bit. For actual interpretability techniques, I recommend the distill articles, the building blocks of interpretability and feature visualization, but these are more about particular techniques for interpretability, as opposed to why we wanted interpretability.

And on ambitious value learning, the first chapter of my sequence on value learning talks exclusively about ambitious value learning; so that’s one thing I’d recommend. But also Stuart Armstrong has so many posts, I think there’s one that’s about resolving human values adequately and something else, something like that. That one might be one worth checking out, it’s very technical though; lots of math.

He’s also written a bunch of posts that convey the intuitions behind the ideas. They’re all split into a bunch of very short posts, so I can’t really recommend any one particular one. You could go to the alignment newsletter database and just search Stuart Armstrong, and click on all of those posts and read them. I think that was everything.

Lucas: That’s a wonderful list. So we’ll go ahead and link those all in the article which goes along with this podcast, so that’ll all be there organized in nice, neat lists for people. This is all probably been fairly overwhelming in terms of the number of approaches and how they differ, and how one is to adjudicate the merits of all of them. If someone is just sort of entering the space of AI alignment, or is beginning to be interested in sort of these different technical approaches, do you have any recommendations?

Rohin: Reading a lot, rather than trying to do actual research. This was my strategy, I started back in September of 2017 and I think for the first six months or so, I was reading about 20 hours a week, in addition to doing research; which was why it was only 20 hours a week, it wasn’t a full time thing I was doing.

And I think that was very helpful for actually forming a picture of what everyone was doing. Now, it's plausible that you don't want to learn about what everyone is doing, and you're okay with saying, "I'm fairly confident that this particular problem is an important piece of the overall problem, and we need to solve it." I think it's very easy to get that wrong, so I'm a little wary of recommending this, but it's a reasonable strategy to say, "Okay, we probably will need to solve this problem, but even if we don't, the intuitions that we get from trying to solve it will be useful."

Focusing on that particular problem, reading all of the literature on that, attacking that problem, in particular, lets you start doing things faster, while still doing things that are probably going to be useful; so that’s another strategy that people could do. But I don’t think it’s very good for orienting yourself in the field of AI safety.

Lucas: So you think that there’s a high value in people taking this time to read, to understand all the papers and the approaches before trying to participate in particular research questions or methodologies. Given how open this question is, all the approaches make different assumptions and take for granted different axioms which all come together to create a wide variety of things which can both complement each other and have varying degrees of efficacy in the real world when AI systems start to become more developed and advanced.

Rohin: Yeah, that seems right to me. Part of the reason I'm recommending this is that it seems like no one does this. I think, on the margin, I want more people doing this. In a world where 20 percent of the people were doing this, and the other 80 percent were just taking particular pieces of the problem and working on those, that might be the right balance, somewhere around there; I don't know, it depends on how you count who is actually in the field. But right now somewhere between one and 10 percent of the people are doing this; closer to the one.

Lucas: Which is quite interesting, given that it seems like AI alignment should be in a stage of maximum exploration, since conceptually mapping the territory is still very young. We're essentially seeing the birth and initial development of an entirely new field and a specific application of thinking, and there are many more mistakes to be made, concepts to be clarified, and layers to be built. So it seems like we should be maximizing our attention on exploring the general space, trying to develop models of the efficacy of the different approaches, philosophies, and views of AI alignment.

Rohin: Yeah, I agree with you, that should not be surprising given that I am one of the people doing this, or trying to do this. Probably the better critique will come from people who are not doing this, and can tell both of us why we’re wrong about this.

Lucas: We’ve covered a lot here in terms of the specific approaches, your thoughts on the approaches, where we can find resources on the approaches, why setting the approaches matters. Are there any parts of the approaches that you feel deserve more attention in terms of these different sections that we’ve covered?

Rohin: I think I would want more work on looking at the intersections between things that are supposed to be complementary. How interpretability can help you build AI systems that have the right goals, for example, would be a cool thing to study. Or what you need to do in order to get verification, which is a sub-part of robustness, to give you interesting guarantees about AI systems that we actually care about.

Most of the work on verification right now goes like this: there's a nice specification that we have for adversarial examples in particular, namely, is there an input within some distance of a training data point such that it gets classified differently from that training data point? That's a nice formal specification, and most of the work in verification takes this specification as given and figures out more and more computationally efficient ways to actually verify that property.
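Written out concretely, that adversarial-examples specification looks something like the sketch below. Note that the sampling loop can only falsify the property, never prove it; real verification tools reason symbolically about every point in the ball. All names are illustrative:

```python
import numpy as np

def robustness_property(classifier, x0, eps, tries=1000):
    """The property a verifier would try to prove (this function is NOT a
    verifier): every input within L-infinity distance eps of x0 receives
    the same label as x0. Random search can only find counterexamples."""
    y0 = classifier(x0)
    for _ in range(tries):
        x = x0 + np.random.uniform(-eps, eps, size=x0.shape)
        if classifier(x) != y0:
            return False, x   # property violated by this input
    return True, None         # no counterexample found (not a proof!)
```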

That does seem like a thing that needs to happen, but the much more urgent thing, in my mind, is: how do we come up with these specifications in the first place? If I want to verify that my AI system is corrigible, or verify that it's not going to do anything catastrophic, or that it is not going to disable my value learning system, or something like that, how do I specify this at all, in any way that lets me apply something like a verification technique, even given infinite computing power? It's not clear to me how you would do that, and I would love to see people do more research on it.

That particular thing is my current reason for not being very optimistic about verification, in particular, but I don’t think anyone has really given it a try. So it’s plausible that there’s actually just some approach that could work that we just haven’t found yet because no one’s really been trying. I think all of the work on limited AGI is talking about, okay, does this actually eliminate all of the catastrophic behavior? Which, yeah, that’s definitely an important thing, but I wish that people would also do research on, given that we put this penalty or this limit on the AGI system, what things is it still capable of doing?

Have we just made it impossible for it to do anything of interest whatsoever, or can it actually still do pretty powerful things, even though we've placed these limits on it? That's the main thing I want to see. Then, for 'AI systems that do what we want', probably the biggest thing I want to see, and I've been trying to do some of this myself, is conceptual thinking about how this leads to good outcomes in the long term. So far, we've not been dealing with the fact that the human doesn't actually have a nice, consistent utility function that they know and that can be optimized. Once you relax that assumption, what the hell do you do? And then there's also a bunch of other problems that would benefit from more conceptual clarification; maybe I don't need to go into all of them right now.

Lucas: Yeah. And just to inject something here that I think we haven't touched on, and that you might have some words about in terms of approaches: we discussed agential views of advanced artificial intelligence and a services-based conception, but I don't believe we have talked about aligning AI systems that simply function as oracles, or having a concert of oracles. You can get rid of the services thing and the agency thing if the AI just tells you what is true, or answers your questions in a way that is value aligned.

Rohin: Yeah, I mostly want to punt on that question, because I have not actually read all the papers. I might have read a grand total of one paper on oracles, plus Superintelligence, which talks about oracles. So I feel like I know so little about the state of the art on oracles that I should not actually say anything about them.

Lucas: Sure. So then, just as a broad point for our audience: in conceptualizing these different approaches to AI alignment, it's important to consider the kind of AI system you're thinking about and the kinds of features and properties it has, and oracles are another version that one can play with in one's AI alignment thinking.

Rohin: I think the canonical paper there is something like Good and Safe Uses of AI Oracles, but I have not actually read it. There is a list of things I want to read; it is on that list. But that list also has, I think, something like 300 papers on it, and apparently I have not gotten to oracles yet.

Lucas: And so, for the sake of this podcast being as comprehensive as possible, are there any conceptions of AI that we have omitted so far, adding on to the agential view, the CAIS view of a lot of distributed services, and the oracle view?

Rohin: There’s also the Tool AI View. This is different from the services view, but it’s somewhat akin to the view you were talking about at the beginning of this podcast where you’ve got AI systems that have a narrowly defined input/output space, they’ve got a particular thing that they do with limit, and they just sort of take in their inputs and do some computation, they spit out their outputs and that’s it, that’s all that they do. You can’t really model them as having some long term utility function that they’re optimizing, they’re just implementing a particular input-output relation and it’s all they’re trying to do.

Even saying something like "they are trying to do X" is basically using a bad model for them. I think the main argument against expecting tool AI systems is that they're probably not going to be as useful as services or agential AI, because tool AI systems would have to be programmed in a way where we understood what they were doing and why they were doing it. Whereas agential AI systems or services would be able to consider new possible ways of achieving goals that we hadn't thought about, and enact those plans.

And so they could get superhuman behavior by considering things that we wouldn't consider. Whereas tool AIs … like, Google Maps is superhuman in some sense, but it's superhuman only because it has a compute advantage over us. If we were given all of the data and all of the time, in human real time, that Google Maps had, we could implement a similar sort of algorithm and compute the optimal route ourselves.

Lucas: There seems to be this duality that is constantly being formed in our conception of AI alignment, where the AI system is this tangible external object which stands in some relationship to the human and is trying to help the human to achieve certain things.

Are there conceptions of value alignment which, however the procedure or methodology is done, change or challenge the relationship between the AI system and the human, where they challenge what it means to be the AI or what it means to be human, such that there's potentially some sort of merging or disruption of this dualistic scenario?

Rohin: I don’t really know, I mean, it sounds like you’re talking about things like brain computer interfaces and stuff like that. I don’t really know of any intersection between AI safety research and that. I guess, this did remind me, too, that I want to make the point that all of this is about the relatively narrow, I claim, problem of aligning an AI system with a single human.

There is also the problem of: okay, what if there are multiple humans? What if there are multiple AI systems? What if you've got a bunch of different groups of people, each group is value aligned within itself, and each builds an AI that's value aligned with it, but lots of different groups do this; now what happens?

Solving the problem that I’ve been talking about does not mean that you have a good outcome in the long term future, it is merely one piece of a larger overall picture. I don’t think any of that larger overall picture removes the dualistic thing that you were talking about, but they dualistic part reminded me of the fact that I am talking about a narrow problem and not the whole problem, in some sense.

Lucas: Right and so just to offer some conceptual clarification here, again, the first problem is how do I get an AI system to do what I want it to do when the world is just me and that AI system?

Rohin: Me and that AI system and the rest of humanity, but the rest of humanity is treated as part of the environment.

Lucas: Right, so you’re not modeling other AI systems or how some mutually incompatible preferences and trained systems would interact in the world or something like that?

Rohin: Exactly.

Lucas: So the full AI alignment problem is… It’s funny because it’s just the question of civilization, I guess. How do you get the whole world and all of the AI systems to make a beautiful world instead of a bad world?

Rohin: Yeah, I’m not sure if you saw my lightning talk at Beneficial AGI, but I talked a bit about those. I think I called that top level problem, make AI related features stuff go well, very, very, very concrete, obviously.

Lucas: It makes sense. People know what you’re talking about.

Rohin: I probably wouldn’t call that broad problem the AI alignment problem. I kind of wonder is there a different alignment for the narrower trouble? We could maybe call it the ‘AI Safety Problem’ or the ‘AI Future Problem’, I don’t know. ‘Beneficially AI’ problem actually, I think that’s what I used last time.

Lucas: That’s a nice way to put it. So I think that, conceptually, leave us at a very good place for this first section.

Rohin: Yeah, seems pretty good to me.

Lucas: If you found this podcast interesting or useful, please make sure to check back for part two in a couple weeks where Rohin and I go into more detail about the strengths and weaknesses of specific approaches.

We’ll be back again soon with another episode in the AI Alignment podcast.

[end of recorded material]

FLI Podcast: Why Ban Lethal Autonomous Weapons?

Why are we so concerned about lethal autonomous weapons? Ariel spoke to four experts: one physician, one lawyer, and two human rights specialists, all of whom offered their most powerful arguments on why the world needs to ensure that algorithms are never allowed to make the decision to take a life. The episode was even recorded at the United Nations Convention on Conventional Weapons, where a ban on lethal autonomous weapons was under discussion.

Dr. Emilia Javorsky is a physician, scientist, and Founder of Scientists Against Inhumane Weapons; Bonnie Docherty is Associate Director of Armed Conflict and Civilian Protection at Harvard Law School’s Human Rights Clinic and Senior Researcher at Human Rights Watch; Ray Acheson is Director of The Disarmament Program of the Women’s International League for Peace and Freedom; and Rasha Abdul Rahim is Deputy Director of Amnesty Tech at Amnesty International.

Topics discussed in this episode include:

  • The role of the medical community in banning other WMDs
  • The importance of banning LAWS before they’re developed
  • Potential human bias in LAWS
  • Potential police use of LAWS against civilians
  • International humanitarian law and the law of war
  • Meaningful human control

Once you’ve listened to the podcast, we want to know what you think: What is the most convincing reason in favor of a ban on lethal autonomous weapons? We’ve listed quite a few arguments in favor of a ban, in no particular order, for you to consider:

  • If the AI community can’t even agree that algorithms should not be allowed to make the decisions to take a human life, then how can we find consensus on any of the other sticky ethical issues that AI raises?
  • If development of lethal AI weapons continues, then we will soon find ourselves in the midst of an AI arms race, which will lead to cheaper, deadlier, and more ubiquitous weapons. It’s much harder to ensure safety and legal standards in the middle of an arms race.
  • These weapons will be mass-produced, hacked, and fall onto the black market, where anyone will be able to access them.
  • These weapons will be easier to develop, access, and use, which could lead to a rise in destabilizing assassinations, ethnic cleansing, and greater global insecurity.
  • Taking humans further out of the loop will lower the barrier for entering into war.
  • Greater autonomy increases the likelihood that the weapons will be hacked, making it more difficult for military commanders to ensure control over their weapons.
  • Because of the low cost, these will be easy to mass-produce and stockpile, making AI weapons the newest form of Weapons of Mass Destruction.
  • Algorithms can target specific groups based on sensor data such as perceived age, gender, ethnicity, facial features, dress code, or even place of residence or worship.
  • Algorithms lack human morality and empathy, and therefore they cannot make humane context-based kill/don’t kill decisions.
  • By taking the human out of the loop, we fundamentally dehumanize warfare and obscure who is ultimately responsible and accountable for lethal force.
  • Many argue that these weapons are in violation of the Geneva Convention, the Martens Clause, the International Covenant on Civil and Political Rights, etc. Given the disagreements about whether lethal autonomous weapons are covered by these pre-existing laws, a new ban would help clarify what are acceptable uses of AI with respect to lethal decisions, especially for the military, and what aren't.
  • It’s unclear who, if anyone, could be held accountable and/or responsible if a lethal autonomous weapon causes unnecessary and/or unexpected harm.
  • Significant technical challenges exist which most researchers anticipate will take quite a while to solve, including: how to program reasoning and judgement with respect to international humanitarian law, how to distinguish between civilians and combatants, how to understand and respond to complex and unanticipated situations on the battlefield, how to verify and validate lethal autonomous weapons, how to understand external political context in chaotic battlefield situations.
  • Once the weapons are released, it may become difficult to make contact with them if people learn that there's been a mistake.
  • By their very nature, we can expect that lethal autonomous weapons will behave unpredictably, at least in some circumstances.
  • They will likely be more error-prone than conventional weapons.
  • They will likely exacerbate current human biases, putting innocent civilians at greater risk of being accidentally targeted.
  • Current psychological research suggests that keeping a “human in the loop” may not be as effective as many hope, given human tendencies to be over-reliant on machines, especially in emergency situations.
  • In addition to military uses, lethal autonomous weapons will likely be used for policing and border control, again putting innocent civilians at greater risk of being targeted.

So which of these arguments resonates most with you? Or do you have other reasons for feeling concern about lethal autonomous weapons? We want to know what you think! Please leave a response in the comments section below.

Publications discussed in this episode include:

For more information, visit autonomousweapons.org.

AI Alignment Podcast: AI Alignment through Debate with Geoffrey Irving

“To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but this approach can fail if the task is too complicated for a human to directly judge. To help address this concern, we propose training agents via self play on a zero sum debate game. Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information…  In practice, whether debate works involves empirical questions about humans and the tasks we want AIs to perform, plus theoretical questions about the meaning of AI alignment.” AI safety via debate

Debate is something that we are all familiar with. Usually it involves two or more persons giving arguments and counterarguments over some question in order to prove a conclusion. At OpenAI, debate is being explored as an AI alignment methodology for reward learning (learning what humans want) and as a part of their scalability efforts (how to train/evolve systems to safely solve questions of increasing complexity). Debate might sometimes seem like a fruitless process, but when optimized and framed as a two-player zero-sum perfect-information game, we can see properties of debate and synergies with machine learning that may make it a powerful truth-seeking process on the path to beneficial AGI.

On today’s episode, we are joined by Geoffrey Irving. Geoffrey is a member of the AI safety team at OpenAI. He has a PhD in computer science from Stanford University, and has worked at Google Brain on neural network theorem proving, cofounded Eddy Systems to autocorrect code as you type, and has worked on computational physics and geometry at Otherlab, D. E. Shaw Research, Pixar, and Weta Digital. He has screen credits on Tintin, Wall-E, Up, and Ratatouille. 

We hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Topics discussed in this episode include:

  • What debate is and how it works
  • Experiments on debate in both machine learning and social science
  • Optimism and pessimism about debate
  • What amplification is and how it fits in
  • How Geoffrey took inspiration from amplification and AlphaGo
  • The importance of interpretability in debate
  • How debate works for normative questions
  • Why AI safety needs social scientists

You can find out more about Geoffrey Irving at his website. Here you can find the debate game mentioned in the podcast. Here you can find Geoffrey Irving, Paul Christiano, and Dario Amodei’s paper on debate. Here you can find an Open AI blog post on AI Safety via Debate. You can listen to the podcast above or read the transcript below.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Geoffrey Irving about AI safety via Debate. We discuss how debate fits in with the general research directions of OpenAI, what amplification is and how it fits in, and the relation of all this with AI alignment. As always, if you find this podcast interesting or useful, please give it a like and share it with someone who might find it valuable.

Geoffrey Irving is a member of the AI safety team at OpenAI. He has a PhD in computer science from Stanford University, and has worked at Google Brain on neural network theorem proving, cofounded Eddy Systems to autocorrect code as you type, and has worked on computational physics and geometry at Otherlab, D. E. Shaw Research, Pixar, and Weta Digital. He has screen credits on Tintin, Wall-E, Up, and Ratatouille. Without further ado, I give you Geoffrey Irving.

Thanks again, Geoffrey, for coming on the podcast. It’s really a pleasure to have you here.

Geoffrey: Thank you very much, Lucas.

Lucas: We’re here today to discuss your work on debate. I think that just to start off, it’d be interesting if you could provide for us a bit of framing for debate, and how debate exists at OpenAI, in the context of OpenAI’s general current research agenda and directions that OpenAI is moving right now.

Geoffrey: I think broadly, we’re trying to accomplish AI safety by reward learning, so learning a model of what humans want and then trying to optimize agents that achieve that model, so do well according to that model. There’s sort of three parts to learning what humans want. One part is just a bunch of machine learning mechanics of how to learn from small sample sizes, how to ask basic questions, how to deal with data quality. There’s a lot more work, then, on the human side, so how do humans respond to the questions we want to ask, and how do we sort of best ask the questions?

Then, there’s sort of a third category of how do you make these systems work even if the agents are very strong? So stronger than human in some or all areas. That’s sort of the scalability aspect. Debate is one of our techniques for doing scalability. Amplification being the first one and Debate is a version of that. Generally want to be able to supervise a learning agent, even if it is smarter than a human or stronger than a human on some task or on many tasks.

In Debate, you train two agents to play a game. The game is that these two agents see a question on some subject, and they give their answers. Each debater has their own answer, and then they have a debate about which answer is better, which means more true and more useful. A human sees that debate transcript and judges who wins based on who they think told the most useful true thing. The result of the game is, one, who won the debate, and two, the answer of the debater who won.
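To make the game structure concrete, here is a minimal sketch of that loop in Python. Everything in it is a hypothetical stand-in (the debater and judge callables are assumptions, not OpenAI's code); it only shows the shape of the game: committed answers, alternating statements, and a judge who picks a winner.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class DebateGame:
    question: str
    answers: Tuple[str, str]              # each debater commits to an answer up front
    transcript: List[str] = field(default_factory=list)

def play_debate(game: DebateGame,
                debaters: Tuple[Callable, Callable],
                judge: Callable,
                n_rounds: int = 4) -> Tuple[int, str]:
    """Alternate statements, then let the judge pick a winner (0 or 1).

    Returns (winner, winner's answer), mirroring the description above:
    the result of the game is who won and the answer they defended.
    """
    for round_idx in range(n_rounds):
        speaker = round_idx % 2
        statement = debaters[speaker](game.question, game.answers, game.transcript)
        game.transcript.append(f"debater {speaker}: {statement}")
    winner = judge(game.question, game.answers, game.transcript)
    return winner, game.answers[winner]
```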

You can also have variants where the judge interacts during the debate. We can get into these details. The general point is that, in many tasks, it is much easier to recognize good answers than it is to come up with the answers yourself. This applies at several levels.

For example, at the first level, you might have a task where a human can’t do the task, but they can know immediately if they see a good answer to the task. Like, I’m bad at gymnastics, but if I see someone do a flip very gracefully, then I can know, at least to some level of confidence, that they’ve done a good job. There are other tasks where you can’t directly recognize the answer, so you might see an answer, it looks plausible, say, “Oh, that looks like a great answer,” but there’s some hidden flaw. If an agent were to point out that flaw to you, you’d then think, “Oh, that’s actually a bad answer.” Maybe it was misleading, maybe it was just wrong. You need two agents doing a back and forth to be able to get at the truth.

Then, if you apply this recursively through several levels, you might have tasks where you can’t recognize whether an answer is good directly. You can’t even recognize whether a counterargument is good. Maybe a counter-counterargument, then you could recognize it. If you do sort of three steps of Debate, back and forth with two agents, you get to the truth.

Depending on the task, you need some number of these steps to pin down what the true answer is. Broadly, what’s happening is that the debaters, when they’re giving these counterarguments, they’re skipping over a lot of the space where maybe the human doesn’t understand what would happen, but they know that it would go a certain way. Like, if you’d see two expert players playing a game of Go, they’re constantly ignoring a larger number of moves which are just very bad and would waste a lot of time if you try to analyze them, and they’re just looking at kind of the most important part of the arguments about what is the best move in the game.

All of the experiments so far are on simple image tasks like MNIST (MNIST is a dataset of digits), plus some preliminary human experiments with just human debaters. On MNIST, an example is you have a sparse classifier that learns to recognize MNIST from just seeing a few pixels, so you might train it by picking random masks of just a few pixels of an MNIST image. This classifier is very bad, because it only sees a small part of the image. It might get accuracies below 50%. I forget the exact numbers; I could look them up.

Then, you train debaters that choose which parts of the image to show to the judge. You fix the judge at this kind of weak training process, and then the debaters will play a game where they try to pick pixels that convince the judge of what they want to argue for. There’s kind of two ways to decide … In the first paper, we had one debater that was incentivized to tell the truth, so it knew the correct class. Like say, we’re looking at a seven, and the other debater had to pre-specify which digit it was going to lie for. It might say, “I’m going to lie and say things were an eight.”

The game would be each of these agents pick a single pixel, and you sort of iterate for, let’s say six rounds, picking one pixel after another, and then the judge sees those pixels, and chooses who wins the debate. The judge has seen kind of a small part of the evidence. The result of that is that you can boost a weak classifier up to sort of higher performance, even if the agents are incentivized only to win the game. In this case, if you pick one debater to be the honest player and one debater to be the liar, the honest player wins a significant majority of the time, much more than the classifier would win on random play.
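A toy version of that sparse-pixel experiment might look like the sketch below, assuming a pre-trained weak judge that classifies from a masked image; the judge and the two pixel-picking policies are hypothetical stand-ins, not the actual OpenAI code.

```python
import numpy as np

def debate_over_image(image: np.ndarray, true_label: int, lie_label: int,
                      judge, honest_policy, liar_policy,
                      n_rounds: int = 6) -> bool:
    """Each round, one debater reveals a single pixel; the judge sees only the
    revealed pixels and scores the two pre-committed labels.
    Returns True if the honest debater wins."""
    mask = np.zeros(image.shape, dtype=bool)
    for r in range(n_rounds):
        if r % 2 == 0:
            y, x = honest_policy(image, mask, true_label)    # argue for the truth
        else:
            y, x = liar_policy(image, mask, lie_label)       # argue for the pre-chosen lie
        mask[y, x] = True                                    # reveal one pixel per move
    evidence = np.where(mask, image, 0.0)
    scores = judge(evidence, mask, [true_label, lie_label])  # weak classifier's verdict
    return scores[0] > scores[1]
```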

Lucas: The judge here, sorry, was not a human?

Geoffrey: The judge here is not a human. It’s just a classifier trained to do a bad job at MNIST, because it sees only a little bit of information. It’s trained to convergence, but its input is just a few pixels of an image.

Lucas: Then, so the pixels that are then revealed by the debaters, each pixel is sort of the argument.

Geoffrey: Each pixel is the argument.

Lucas: Just to provide a little bit more framing: there's this feature of debate where you can apply it to a surprisingly large domain of things, once you expand the notion of what it means to debate to include showing pixels or something like this.

Geoffrey: It’s actually more important to debate in natural language. The end goal here is we want to extract a strengthened, kind of improved version of human performance at a task. The way we go about this, either in amplification or in debate, is we sort of factor through reasoning. Instead of trying to train directly on the task, like the answers to the task, you might have some questions and some answers, and you could train directly on question/answer pairs, we’re going to build a task which includes all possible human reasoning in the form of, say, in this case, debates, and then we’ll train the agents to do well in this space of reasoning, and then well pick out the answers at the very end. Once we’re satisfied that the reasoning all works out.

Because natural language is how humans talk about higher level concepts, especially abstract concepts and, say, subtle moral concepts, the most important domain here, in the human case, is natural language. What we've done so far, in all experiments for Debate, is in image space, because it's easier. We're trying now to move that work into natural language so that we can get more interesting settings.

Lucas: Right. In terms of natural language, do you just want to unpack a little bit about how that would be done at this point in natural language? It seems like our natural language technology is not at a point where I really see robust natural language debates.

Geoffrey: There’s sort of two ways to go. One way is human debates. You just replace the ML agents with human debaters and then a human judge, and you see whether the system works in kind of an all-human context. The other way is machine learning natural language is getting good enough to do interestingly well on sample question/answer datasets, and Debate is already interesting if you do a very small number of steps. In the general debate, you sort of imagine that you have this long transcript, dozens of statements long, with points and counterpoints and counterpoints, but if you already do just two steps, you might do question, answer, and then single counterargument. For some tasks, at least in theory, it already should be stronger than the baseline of just doing direct question/answer, because you have this ability to focus in on a counterargument that is important.

An example might be you see a question and an answer and then another debater just says, “Which part of the answer is problematic?” They might point to a word or to a small phrase, and say, “This is the point you should sort of focus in on.” If you learn how to self critique, then you can boost the performance by iterating once you know how to self critique.

The hope is that even if we can’t do general debates on the machine learning side just yet, we can do shallow debates, or some sort of simple first step in this direction, and then work up over time.

Lucas: This just seems to be a very fundamental part of AI alignment where you’re just breaking things down into very simple problems and then trying to succeed in those simple cases.

Geoffrey: That’s right.

Lucas: Just provide a little bit more illustration of debate as a general concept, and what it means in the context of AI alignment. I mean, there are open questions here, obviously, about the efficacy of debate, how debate exists as a tool within the space, so epistemological things that allow us to arrive at truth, and I guess, infer other people’s preferences. Sorry, again, in terms of reward learning, and AI alignment, and debate’s place in all of this, just contextualize, I guess, its sort of role in AI alignment, more broadly.

Geoffrey: It’s focusing, again, on the scalability aspect. One way to formulate that is we have this sort of notion of, either from a philosophy side, reflective equilibrium, or kind of from the AI alignment literature, coherent extrapolated volition, which is sort of what a human would do if we had thought very carefully for a very long time about a question, and sort of considered all the possible nuances, and counterarguments, and so on, and kind of reached the conclusion that is sort of free of inconsistencies.

Then, we’d like to take this kind of vague notion of, what happens when a human thinks for a very long time, and compress it into something we can use as an algorithm in a machine learning context. It’s also a definition. This vague notion of, let a human think for a very long time, that’s sort of a definition, but it’s kind of a strange one. A single human can’t think for a super long time. We don’t have access to that at all. You sort of need a definition that is more factored, where either a bunch of humans think for a long time, we sort of break up tasks, or you sort of consider only parts of the argument space at a time, or something.

You go from there to things that are both definitions of what it means to simulate thinking for long time and also algorithms. The first one of these is Amplification from Paul Christiano, and there you have some questions, and you can’t answer them directly, but you know how to break up a question into subquestions that are hopefully somewhat simpler, and then you sort of recursively answer those subquestions, possibly breaking them down further. You get this big tree of all possible questions that descend from your outer question. You just sort of imagine that you’re simulating over that whole tree, and you come up with an answer, and then that’s the final answer for your question.
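As a rough sketch, the recursion Geoffrey describes might be written like this, where decompose, recombine, and answer_directly are hypothetical stand-ins for the human (or a model trained to imitate the human):

```python
def amplify(question: str, depth: int,
            decompose, recombine, answer_directly) -> str:
    """Answer a question by recursively answering simpler subquestions."""
    subquestions = decompose(question)
    if depth == 0 or not subquestions:
        return answer_directly(question)   # base case: simple enough to answer directly
    subanswers = [amplify(q, depth - 1, decompose, recombine, answer_directly)
                  for q in subquestions]
    return recombine(question, subquestions, subanswers)
```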

Similarly, Debate is a variant of that, in the sense that you have this kind of tree of all possible arguments, and you’re going to try to simulate somehow what would happen if you considered all possible arguments, and picked out the most important ones, and summarized that into an answer for your question.

The broad goal here is to give a practical definition of what it means to take human input and push it to its conclusion, and then hopefully we have a definition that also works as an algorithm, where we can do practical ML training to train machine learning models.

Lucas: Right, so there’s, I guess, two thoughts that I sort of have here. The first one is that there is just sort of this fundamental question of what is AI alignment? It seems like in your writing, and in the writing of others at OpenAI, it’s to get AI to do what we want them to do. What we want them to do is … either it’s what we want them to do right now, or what we would want to do under reflective equilibrium, or at least we want to sort of get to reflective equilibrium. As you said, it seems like a way of doing that is compressing human thinking, or doing it much faster somehow.

Geoffrey: One way to say it is we want to do what humans would want if they understood all of the consequences. It's some kind of "do what humans want," plus a side condition of "imagine we knew everything we needed to know to evaluate the question."

Lucas: How does Debate scale to that level of compressing-

Geoffrey: One thing we should say is that everything here is sort of a limiting state or a goal, but not something we're going to reach. It's more important that we have closure under the relevant things we might not have thought about. Here are some practical examples from kind of nearer-term misalignment. There's an experiment in social science where they sent out a bunch of resumes in response to classified job ads, and the resumes were paired off into pairs that were identical except that the name of the person was either white sounding or black sounding. The result was that you got significantly higher callback rates if the person sounded white, even with an entirely identical resume to the person sounding black.

Here’s a situation where direct human judgment is bad in the way that we could clearly know. You could imagine trying to push that into the task by having an agent say, “Okay, here is a resume. We’d like you to judge it.” Either pointing explicitly to what they should judge, or pointing out, “You might be biased here. Try to ignore the name of the resume, and focus on this issue, like say their education or their experience.” You sort of hope that if you have a mechanism for surfacing concerns or surfacing counterarguments, you can get to a stronger version of human decision making. There’s no need to wait for some long term very strong agent case for this to be relevant, because we’re already pretty bad at making decisions in simple ways.

Then, broadly, I sort of have this sense that there’s not going to be magic in decision making. If I go to some very smart person, and they have a better idea for how to make a decision, or how to answer a question, I expect there to be some way they could explain their reasoning to me. I don’t expect I just have to take them on faith. We want to build methods that surface the reasons they might have to come to a conclusion.

Now, it may be very difficult for them to explain the process for how they came to those arguments. There’s some question about whether the arguments they’re going to make is the same as the reasons they’re giving the answers. Maybe they’re sort of rationalizing and so on. You’d hope that once you sort of surface all the arguments around the question that could be relevant, you get a better answer than if you just ask people directly.

Lucas: As we move out of debate in simple cases of image classifiers or experiments in similar environments, what does debate look like … I don’t really understand the ways in which the algorithms can be trained to elucidate all of these counterconcerns, and all of these different arguments, in order to help human beings arrive at the truth.

Geoffrey: One case we’re considering, especially on kind of the human experiment side, or doing debates with humans, is some sort of domain expert debate. The two debaters are maybe an expert in some field, and they have a bunch of knowledge, which is not accessible to the judge, which is maybe a reasonably competent human, but doesn’t know the details of some domain. For example, we did a debate where there were two people that knew computer science and quantum computing debating a question about quantum computing to a person who has some background, but nothing in that field.

The idea is you start out, there’s a question. Here, the question was, “Is the complexity class BQP equal to NP, or does it contain NP?” One point is that you don’t have to know what those terms mean for that to be a question you might want to answer, say in the course of some other goal. The first steps, things the debaters might say, is they might give short, intuitive definitions for these concepts and make their claims about what the answer is. You might say, “NP is the class of problems where we can verify solutions once we’ve found them, and BQP is the class of things that can run on a quantum computer.”

Now, you could have a debater that just straight up lies right away and says, “Well, actually NP is the class of things that can run on fast randomized computers.” That’s just wrong, and so what would happen then is that the counter debater would just immediately point to Wikipedia and say, “Well, that isn’t the definition of this class.” The judge can look that up, they can read the definition, and realize that one of the debaters has lied, and the debate is over.

You can’t immediately lie in kind of a simple way or you’ll be caught out too fast and lose the game. You have to sort of tell the truth, except maybe you kind of slightly veer towards lying. This is if you want to lie in your argument. At every step, if you’re an honest debater, you can try to pin the liar down to making sort of concrete statements. In this case, if say someone claims that quantum computers can solve all of NP, you might say, “Well, you must point me to an algorithm that does that.” The debater that’s trying to lie and say that quantum computers can solve all of NP might say, “Well, I don’t know what the algorithm is, but meh, maybe there’s an algorithm,” and then they’re probably going to lose, then.

Maybe they have to point to a specific algorithm. There is no algorithm, so they have to make one up. That will be a lie, but maybe it’s kind of a subtle complicated lie. Then, you could kind of dig into the details of that, and maybe you can reduce the fact that that algorithm is a lie to some kind of simple algebra, which either the human can check, maybe they can ask Mathematica or something. The idea is you take a complicated question that’s maybe very broad and covers a lot of the knowledge that the judge doesn’t know and you try to focus in closer and closer on details of arguments that the judge can check.

What the judge needs to be able to do is kind of follow along in the steps until they reach the end, and then there’s some ground fact that they can just look up or check and see who wins.

Lucas: I see. Yeah, that's interesting. A brief passing thought is thinking about double cruxes and some tools and methods that CFAR employs, and how they might be interesting or used in debate. I think I also want to provide some more clarification here, beyond debate being a truth-seeking process or a method by which we're able to see which agent is being truthful and which agent is lying. There's this claim in your paper that seems central to this, where you say, "In the debate game, it is harder to lie than to refute a lie." This asymmetry in debate between the liar and the truth-teller should hopefully, in general, bias the game towards people more easily seeing who is telling the truth.

Geoffrey: Yep.

Lucas: In terms of AI alignment again, in the examples that you’ve provided, it seems to help human beings arrive at truth for complex questions that are above their current level of understanding. How does this, again, relate directly to reward learning or value learning?

Geoffrey: Let’s assume that in this debate game, it is the case that it’s very hard to liar, so the winning move is to say the truth. What we want to do then is train kind of two systems. One system will be able to reproduce human judgment. That system would be able to look at the debate transcript and predict what the human would say is the correct winner of the debate. Once you get that system trained, so that’s sort of you’re learning not direct toward, but again, some notion of predicting how humans deal with reasoning. Once you learn that bit, then you can train an agent to play this game.

Then, we have a zero sum game, and then we can sort of apply any technique used to play a zero sum game, like Monte Carlo tree search in AlphaGo, or just straight up RL algorithms, as in some of OpenAI's work. The hope is that you can train an agent to play this game very well, and therefore it will be able to predict where counterarguments exist that would help it win debates. If it plays the game well, and the best way to play the game is to tell the truth, then you end up with a value aligned system. Those are large assumptions; you should be cautious about whether they're true.
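A hedged sketch of that two-stage process, with every component (the judge model, debater, and the self_play helper) a hypothetical stand-in rather than OpenAI's actual training code:

```python
def train_judge(judge_model, human_labelled_transcripts):
    # Supervised learning: predict which debater the human said won.
    for transcript, human_winner in human_labelled_transcripts:
        judge_model.update(transcript, human_winner)
    return judge_model

def train_debater(debater, judge_model, questions, n_games: int):
    # Self-play on the zero-sum game; any standard method for two-player
    # zero-sum games (straight RL, tree search) could slot in here.
    for _ in range(n_games):
        question = questions.sample()
        transcript, answers = self_play(debater, question)  # assumed helper: one model plays both sides
        winner = judge_model.predict(transcript)            # learned proxy for the human judge
        debater.reinforce(transcript, reward=+1.0 if winner == 0 else -1.0)
    return debater
```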

Lucas: There’s also all these issues that we can get into about biases that humans have, and issues with debate. Whether or not you’re just going to be optimizing the agents for exploiting human biases and convincing humans. Definitely seems like, even just looking at how human beings value align to each other, debate is one thing in a large toolbox of things, and in AI alignment, it seems like potentially Debate will also be a thing in a large toolbox of things that we use. I’m not sure what your thoughts are about that.

Geoffrey: I could give them. I would say that there’s two ways of approaching AI safety and AI alignment. One way is to try to propose, say, methods that do a reasonably good job at solving a specific problem. For example, you might tackle reversibility, which means don’t take actions that can’t be undone, unless you need to. You could try to pick that problem out and solve it, and then imagine how we’re going to fit this together into a whole picture later.

The other way to do it is try to propose algorithms which have at least some potential to solve the whole problem. Usually, they won’t, and then you should use them as a frame to try to think about how different pieces might be necessary to add on.

For example, in debate, the biggest thing in there is that it might be the case that you train a debate agent that gets very good at this task, the task is rich enough that it just learns a whole bunch of things about the world, and about how to think about the world, and maybe it ends up having separate goals, or it’s certainly not clearly aligned because the goal is to win the game. Maybe winning the game is not exactly aligned.

You’d like to know sort of not only what it’s saying, but why it’s saying things. You could imagine sort of adding interpret ability techniques to this, which would say, maybe Alice and Bob are debating. Alice says something and Bob says, “Well, Alice only said that because Alice is thinking some malicious fact.” If we add solid interpret ability techniques, we could point into Alice’s thoughts at that fact, and pull it out, and service that. Then, you could imagine sort of a strengthened version of a debate where you could not only argue about object level things, like using language, but about thoughts of the other agent, and talking about motivation.

It is a goal here, in formulating something like debate or amplification, to propose a complete algorithm that would solve the whole problem. Often we won't get to that point, but we now have a frame where we can think about the whole picture in the context of this algorithm, and then fix it as required going forwards.

I think, in the end, I do view debate, if it succeeds, as potentially the top level frame, which doesn’t mean it’s the most important thing. It’s not a question of importance. More of just what is the underlying ground task that we want to solve? If we’re training agents to either play video games or do question/answers, here the proposal is train agents to engage in these debates and then figure out what parts of AI safety and AI alignment that doesn’t solve and add those on in that frame.

Lucas: You’re trying to achieve human level judgment, ultimately, through a judge?

Geoffrey: The assumption in this debate game is that it's easier to be a judge than a debater. If it is the case, though, that you need the judge to get to human level before you can train a debater, then you have a problematic bootstrapping issue where first you must solve value alignment for training the judge, and only then do you have value alignment for training the debater. This is one of the concerns I have. I think the concern sort of applies to some of the other scalability techniques. I would say this is sort of unresolved. The hope would be that it's not actually human level difficult to be a judge on a lot of tasks. It's sort of easier to check consistency of, say, one debate statement to the next, than it is to do long reasoning processes. There's a concern there, which I think is pretty important, and I think we don't quite know how it plays out.

Lucas: The view is that we can assume, or take, the human being to be the thing that is already value aligned, and … it's important, I think, to highlight the second part of what you say: the winning debater is the one saying that which is most true and useful. The useful part, I think, shouldn't be glossed over, because you're not just optimizing debaters to arrive at true statements. The useful part smuggles in a lot of issues with normative things in ethics and metaethics.

Geoffrey: Let’s talk about the useful part.

Lucas: Sure.

Geoffrey: Say we just ask the question of debaters, “What should we do? What’s the next step that I, as an individual person, or my company, or the whole world should take in order to optimize total utility?” The notion of useful, then, is just what is the right action to take? Then, you would expect a debate that is good to have to get into the details of why actions are good, and so that debate would be about ethics, and metaethics, and strategy, and so on. It would pull in all of that content and sort of have to discuss it.

There’s a large sea of content you have to pull in. It’s roughly kind of all of human knowledge.

Lucas: Right, right, but isn’t there this gap between training agents to say what is good and useful and for agents to do what is good and useful, or true and useful?

Geoffrey: The way in which there's a gap is this interpretability concern. You're getting at a different gap, which I think is actually not there. I like giving game analogies, so let me give a Go analogy. You could imagine that there's two goals in playing the game of Go. One goal is to find the best moves. This is a collaborative process where all of humanity, all of sort of Go humanity, say, collaborates to learn, and explore, and work together to find the best moves in Go, defined by: what are the moves that most win this game? That's a non-zero sum game, where we're sort of all working together. Two people competing on opposite sides of the Go board are working together to get at what the best moves are, but within a game, it's a zero sum game.

You sit down, and you have two players, two people playing a game of Go, one of them’s going to win, zero sum. The fact that that game is zero sum doesn’t mean that we’re not learning some broad thing about the world, if you’ll zoom out a bit and look at the whole process.

We’re training agents to win this debate game to give the best arguments, but the thing we want to zoom out and get is the best answers. The best answers that are consistent with all the reasoning that we can bring into this task. There’s huge questions to be answered about whether the system actually works. I think there’s an intuitive notion of, say, reflective equilibrium, or coherent extrapolated volition, and whether debate achieves that is a complicated question that’s empirical, and theoretical, and we have to deal with, but I don’t think there’s quite the gap you’re getting at, but I may not have quite voiced your thoughts correctly.

Lucas: It would be helpful if you could unpack how the alignment that is gained through this process is transferred to new contexts. If I take an agent trained to win the Debate game outside of that context.

Geoffrey: You don’t. We don’t take it out of the context.

Lucas: Okay, so maybe that’s why I’m getting confused.

Geoffrey: Ah. I see. Okay, this [inaudible 00:26:09]. We train agents to play this debate game. To use them, we also have them play the debate game. At training time, we give them kind of a rich space of questions to think about, or concerns to answer, like a lot of discussion. Then, we want to go and answer a question in the world about what we should do, or what the answer to some scientific question is, or is this theorem or conjecture true? We state that as a question, and we have them debate, and then whoever wins gave the right answer.

There’s a couple of important things you can add to that. I’ll give like three levels of kind of more detail you can go. One thing is the agents are trained to look at state in the debate game, which could be I’ve just given the question, or there’s a question and there’s a partial transcript, and they’re trained to say the next thing, to make the next move in the game. The first thing you can do is you have a question that you want to answer, say, what should the world do, or what should I do as a person? You just say, “Well, what’s the first move you’d make?” The first move they’d make is to give an answer, and then you just stop there, and you’re done, and you just trust that answer is correct. That’s not the strongest thing you could do.

The next thing you can do is you’ve trained this model of a judge that knows how to predict human judgment. You could have them, from the start of this game, play a whole bunch of games, play 1,000 games of debate, and from that learn with more accuracy what the answer might be. Similar to how you’d, say if you’re playing a game of Go, if you want to know the best move, you would say, “Well, let’s play 1,000 games of Go from this state. We’ll get more evidence and we’ll know what the best move is.”
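A sketch of that test-time idea: simulate many debates from the same question and aggregate the learned judge's verdicts. The self_play helper and judge_model are the same hypothetical stand-ins as above, with self_play here assumed to return the transcript together with the two committed answers.

```python
from collections import Counter

def answer_by_many_debates(question, debater, judge_model, n_games: int = 1000):
    """Play many debates on one question; return the most-endorsed answer."""
    verdicts = Counter()
    for _ in range(n_games):
        transcript, answers = self_play(debater, question)  # assumed helper
        winner = judge_model.predict(transcript)
        verdicts[answers[winner]] += 1
    best_answer, votes = verdicts.most_common(1)[0]
    return best_answer, votes / n_games   # answer plus an empirical win rate
```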

The most interesting thing you can do, though, is you yourself can act as a judge in this game to sort of learn more about what the relevant issues are. Say there’s a question that you care a lot about. Hopefully, “What should the world do,” is a question you care a lot about. You want to not only see what the answer is, but why. You could act as a judge in this game, and you could, say, play a few debates, or explore part of this debate tree, the tree of all possible debates, and you could do the judgment yourself. There, the end answer will still be who you believe is the right answer, but the task of getting to that answer is still playing this game.

The bottom line here is, at test time, we are also going to debate.

Lucas: Yeah, right. Human beings are going to be participating in this debate process, but does or does not debate translate into systems which are autonomously deciding what we ought to do, given that we assume that their models of human judgment on debate are at human level or above?

Geoffrey: Yeah, so if you turn off the human in the loop part, then you get an autonomous agent. If the question is, “What should the next action be in, say, an environment?” And you don’t have humans in the loop at test time, then you can get an autonomous agent. You just sort of repeatedly simulate debating the question of what to do next. Again, you can cut this process short. Because the agents are trained to predict moves in debate, you can stop them after they’ve predicted the first move, which is what the answer is, and then just take that answer directly.

If you wanted the maximally efficient autonomous agent, that's the case you would do. At OpenAI, in my view, our goal is: I don't want to take AGI and immediately deploy it in the most fast twitch tasks, something like self-driving a car. If we get to human level intelligence, I'm not going to just replace all the self-driving cars with AGI and let them do their thing. We want to use this for the tasks where we need very strong capabilities. Ideally, those tasks are slower and more deliberative, so we can afford to, say, take a minute to interact with the system, or take a minute to have the system engage in its own internal debates to get more confidence in its answers.

The model here is basically the Oracle AI model, rather than an autonomous agent operating in an MDP.

Lucas: I think that this is a very important part to unpack a bit more. This distinction here that it’s more like an oracle and less like an autonomous agent going around optimizing everything. What does a world look like right before, during, after AGI given debate?

Geoffrey: The way I think about this is that an oracle here is a question/answer system of some complexity. You ask it questions, possibly with a bunch of context attached, and it gives you answers. You can reduce pretty much anything to an oracle, if the oracle is general enough. If your goal is to take actions in an environment, you can ask the oracle, "What's the best action to take in the next step?" and just iteratively ask that oracle over and over again as you take the steps.

Lucas: Or you could generate the debate, right? Over the future steps?

Geoffrey: The most direct way to do an MDP with Debate is to engage in a debate at every step, restarting the debate process, showing all the history that's happened so far, and saying the question at hand, that we're debating, is: what's the best action to take next? I think I'm relatively optimistic that when we make AGI, for a while after we make it, we will be using it in ways that aren't extremely fine grained and MDP-like, in the sense of we're going to take a million actions in a row that all hit the environment.

We’d mainly use this full direct reduction. There’s more practical reductions for other questions. I’ll give an example. Say you want to write the best book on, say, metaethics, and you’d like debaters to produce this books. Let’s say that debaters are optimal agents so they know how to do debates on any subject. Even if the book is 1,000 pages long, or say it’s a couple hundred pages long, that’s a more reasonable book, you could do it in a single debate as follows. Ask the agents to write the book. Each agent writes its own book, say, and you ask them to debate which book is better, and that debate all needs to point at small parts of the book.

One of the debaters writes a 300 page book and buried in the middle of it is a subtle argument, which is malicious and wrong. The other debater need only point directly at the small part of the book that’s problematic and say, “Well, this book is terrible because of the following malicious argument, and my book is clearly better.” The way this works is, if you are able to point to problematic parts of books in a debate, and therefore win, the best first move in the debate is to write the best book, so you can do it in one step, where you produce this large object with a single debate, or a single debate game.
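A minimal sketch of that reduction: moves are pointers into the opponent's book, and the judge only ever reads the quoted spans. Everything here (the pointer policies and the judge) is a hypothetical stand-in.

```python
def book_debate(books, pointer_policies, judge, n_rounds: int = 4):
    """Each debater submits a long text; moves point at spans of the
    opponent's text, and the judge decides from the quoted spans alone."""
    quoted = []   # (debater, start, end, excerpt) tuples shown to the judge
    for r in range(n_rounds):
        mover = r % 2
        target = 1 - mover                                  # criticize the other book
        start, end = pointer_policies[mover](books[target], quoted)
        quoted.append((mover, start, end, books[target][start:end]))
    return judge(quoted)                                    # winner of the debate
```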

The reason I mention this is that it's a little better in terms of practicality than debating the book word by word. If the book is like 100,000 words, you wouldn't want to have a debate about each word, one after another. That's sort of a silly, very expensive process.

Lucas: Right, so just to back up here and provide a little bit more framing: at the beginning, we're at a very low level trying to optimize agents for debate, and there's going to be an asymmetry here that we predict, that it should in general be easier to tell who's telling the truth than who's not, because it's easier to tell the truth than to lie in convincing ways. Scaling from there, it seems that what we ultimately want is to then be able to train a judge, right?

Geoffrey: The goal is to train … You need both.

Lucas: Right. You need both to scale up together.

Geoffrey: Yep.

Lucas: Through doing so, we will have oracles that will be able to assist in decision making, because what we're doing is amplifying humans' explicit deliberative processes. Again, this ties into coherent extrapolated volition and reflective equilibrium. What we want is to be able to amplify human decision making, which I guess is, in a broad sense, value alignment.

Geoffrey: Right.

Lucas: Are there any other technical motivations behind debate that you would like to unpack for why within the set of all these different AI alignment methodologies, why debate is so important?

Geoffrey: Sure. I think maybe the right way to talk about that is to contrast amplification and debate. I got to Debate by thinking a bunch about Amplification, talking a bunch with Paul Christiano. To summarize it for listeners again, Amplification is take questions, decompose them into subquestions that are hopefully easier to answer, repeat that decomposition until we get to things we can check, and use that whole process to answer complicated questions.

In stock amplification, the human is doing the task of decomposing the questions. The human decomposition has to be complete. For example, going back to this resume example: we're having this debate about whether someone whose name sounds white or black is a good candidate. For whatever reason, no one here wants to be racist, but they're all just completely oblivious to the idea that they could be racist. They're going through and discussing whether the candidate is suitable, they're kind of breaking these questions down, and just subtly, at every point, racism is inching its way into the answers, and they just never thought to investigate.

If at some point they had asked, "Well, am I deciding this because the person has a white sounding name?" they would have realized their mistake, but they never think of that. In debate, you are relying on the agents to be able to do the decomposition, so the agents can, even if it's hard for a human to do it, point out, "Maybe you're being racist here," and sort of get the correction. The advantage of debate there is you get some help doing this decomposition task.

The other thing that happened, frankly, which is one of the reasons I thought of debate, was AlphaGo. In thinking about amplification, I'd been concerned: is this process going to be scalable? Are we going to lose a bunch of efficiency in doing this complicated decomposition process? I was sort of concerned that we would lose a bunch of efficiency and therefore not be competitive with unsafe techniques for getting to AGI.

Then, AlphaGo came out, and AlphaGo got very strong performance, and it did it by doing an explicit tree search. As part of AlphaGo, it’s doing this kind of deliberative process, and that was not only important for performance at test time, but was very important for getting the training to work. What happens is, in AlphaGo, at training time, it’s doing a bunch of tree search through the game of Go in order to improve the training signal, and then it’s training on that improved signal. That was one thing kind of sitting in the back of my mind.

I was kind of thinking through, then, the following way of thinking about alignment. At the beginning, we’re just training on direct answers. We have these questions we want to answer, an agent answers the questions, and we judge whether the answers are good. You sort of need some extra piece there, because maybe it’s hard to understand the answers. Then, you imagine training an explanation module that tries to explain the answers in a way that humans can understand. Then, those explanations might be kind of hard to understand, too, so maybe you need an explanation explanation module.

For a long time, it felt like that was just sort of ridiculous epicycles, adding more and more complexity. There was no clear end to that process, and it felt like it was going to be very inefficient. When AlphaGo came out, that kind of snapped into focus, and it was like, "Oh. If I train the explanation module to find flaws, and I train the explanation explanation module to find flaws in flaws, then that becomes a zero-sum game. If it turns out that ML is very good at solving zero-sum games, and zero-sum games are a powerful route to strong performance, then we should take advantage of this in safety." Poof: this answer, explanation, explanation-of-explanation route gives you the zero-sum game of Debate.

That’s roughly sort of how I got there. It was a combination of thinking about Amplification and this kick from AlphaGo, that zero-sum games and search are powerful.

Lucas: In terms of the relationship between debate and amplification, can you provide a bit more clarification on the differences, fundamentally, between the process of debate and amplification? In terms of amplification, there's a decomposition process, breaking problems down into subproblems, eventually trying to get the broken down problems to human level problems. The problem has essentially multiplied itself many times over at this point, right? It seems like there's going to be a lot of questions for human beings to answer. I don't know how interrelated debate is to this decompositional argumentative process.

Geoffrey: They’re very similar. Both Amplification and Debate operate on some large tree. In amplification, it’s the tree of all decomposed questions. Let’s be concrete and say the top level question in amplification is, “What should we do?” In debate, again, the question at the top level is, “What should we do?” In amplification, we take this question. It’s a very broad open-ended question, and we kind of break it down more and more and more. You sort of imagine this expanded tree coming out from that question. Humans are constructing this tree, but of course, the tree is exponentially large, so we can only ever talk about a small part of it. Our hope is that the agents learn to generalize across the tree, so they’re learning the whole structure of the tree, even given finite data.

In the debate case, similarly, you have the top level question of "What should we do," or some other question, and you have the tree of all possible debates. Imagine every move in this game is, say, saying a sentence; at every point, you have maybe an exponentially large number of possible sentences, so the branching factor in the tree is very large. The goal in debate is kind of to see this whole tree.

Now, here is the correspondence. In amplification, the human does the decomposition, but I could instead have another agent do the decomposition. I could say I have a question, and instead of a human saying, “Well, this question breaks down into subquestions X, Y, and Z,” I could have a debater saying, “The subquestion that is most likely to falsify this answer is Y.” It could’ve picked at any other question, but it picked Y. You could imagine that if you replace a human doing the decomposition with another agent in debate pointing at the flaws in the arguments, debate would kind of pick out a path through this tree. A single debate transcript, in some sense, corresponds to a single path through the tree of amplification.

Lucas: Does the single path through the tree of amplification elucidate the truth?

Geoffrey: Yes. The reason it does is it’s not an arbitrarily chosen path. We’re sort of choosing the path that is the most problematic for the arguments.

Lucas: In this exponential tree search, there’s heuristics and things which are being applied in general to the tree search in order to collapse onto this one branch or series?

Geoffrey: Let’s say, in amplification, we have a question. Our decomposition is, “Well, this decomposes into X, Y, and Z,” and then we recursively call the agent, and it says, “The answers are AX, AY, AZ, for these questions.” Now, if I trusted those subanswers, I could do the reconstruction of the answers to the original question. If I don’t trust the subanswers, I might say, “Well, which subanswer is most likely to be false?” The correspondence with debate is the other debating agent would point to which one of the subanswers is probably false, and then you recourse down just that tree.

Geoffrey: Here’s the way it works. You can trust amplification if all of the subanswers are correct. If at least one of them is false, a strong debater can find which of those subanswers is false and recourse down to that. That’s why, if it turns out that a single path chosen by strong debaters never finds a false subanswer, then you will have established that the whole tree is true.

Lucas: I see. I think this picture of, “What should we do,” at the top, and then the trees would fall from that in terms of one version of amplification and one version of debate is pretty helpful here.

Geoffrey: The most important thing here is that in Amplification, the branching factor is small. I'm a human, I see a question, and I decompose it into a relatively small number of questions, say two to 10. In debate, the branching factor is very large: a debate agent could make any statement as the next countermove. What this means is that, assuming strong play, because the branching factor is so large, a shallow debate tree is more powerful than a shallow amplification tree, because a couple of steps of debate act as a proxy for a large tree and cover more terrain.

The problem with that statement, and maybe the biggest advantage of amplification, is you really have to be assuming that the agents are strong enough to cover the tree. Say I'm debating whether there's a black swan, and there in fact is a black swan, but there's only one of them in the entire world, and the correct countermove would be pointing at that black swan, finding it out of the entire world. A debater who wanted to lie could just claim, "There are no black swans. Trust me. The other agent would be able to point to one if it existed." Maybe it's the case that a black swan does exist, but the other agent is just too weak to point at it, and so that debate doesn't work.

This argument that shallow debates are powerful leans a whole lot on debaters being very strong, and debaters in practice will not be infinitely strong, so there's a bunch of subtlety there that we're going to have to wrestle with.

Lucas: It would also be, I think, very helpful if you could let us know how you optimize for strong debaters, and how amplification is possible here if human beings are the ones who are doing the decompositions of the questions.

Geoffrey: Whichever one we choose, whether it's amplification, debate, or some entirely different scheme, if it depends on humans in one of these elaborate ways, we need to do a bunch of work to know that humans are going to be able to do this. In amplification, you would expect to have to train people to think about what kinds of decompositions are the correct ones. My sort of bias is that because debate gives the humans more help in pointing out the counterarguments, it may be cognitively kinder to the humans, and therefore that could make it a better scheme. That's one of the advantages of debate.

The technical analogy there is the shallow-debate argument; the human side is that if someone is pointing out the arguments for you, it's cognitively kind. In amplification, I would expect you'd need to train people a fair amount to have the decomposition be reliably complete. I don't know that I have a lot of confidence that you can do that. One way you can try is, as much as possible, to systematize the process on the human side.

In either one of these schemes, we can give the people involved an arbitrary amount of training and instruction in whatever way we think is best, and we’d like to do the work to understand what forms of instruction and training are most truth seeking, and try to do that as early as possible so you have a head start.

I would say I’m not going to be able to give you a great argument for optimism about amplification. This is a discussion that Paul, and Andreas Stuhlmueller, and I have, where I think Paul and Andreas, they kind of lean towards these metareasoning arguments, where if you wanted to answer the question, “Where should I go on vacation,” the first subquestion is, “What would be a good way to decide where to go on vacation?” Quickly go meta, and maybe you go meta, meta, like it’s kind of a mess. Whereas, the hope is that because debate, you have sort of have help pointing to things, you can do much more object level, where the first step in a debate about where to go on vacation is just Bali or Alaska. You give the answer and then you focus in on more …

For a broader class of questions, you can stay at object level reasoning. Now, if you want to get to metaethics, you would have to bring in that kind of meta reasoning. It should be a goal of ours, for a fixed task, to use the simplest kind of human reasoning possible, because then we should expect to get better results out of people.

Lucas: All right. Moving forward. Two things. The first that would be interesting would be if you could unpack this process of training up agents to be good debaters, and to be good predictors of human decision making regarding debates, what that’s actually going to look like in terms of your experiments, currently, and your future experiments. Then, also just pivoting into discussing reasons for optimism and pessimism about debate as a model for AI alignment.

Geoffrey: On the experiment side, as I mentioned, we’re trying to get into the natural language domain, because I think that’s how humans debate and reason. We’re doing a fair amount of work at OpenAI on core ML language modeling, so natural language processing, and then trying to take advantage of that to prototype these systems. At the moment, we’re just doing what I would call zero step debate, or one step debate. It’s just a single agent answering a question. You have question, answer, and then you have a human kind of judging whether the answer is good.

The task of predicting an answer is just: read a bunch of text and predict a number. That is essentially just a standard NLP task, and you can use standard methods from NLP on that problem. The hope is that because it looks so standard, we can, on the safety side, keep pace with development on the capability side in natural language processing. Predicting the result is just a matter of using whatever the most powerful natural language processing architecture and method is, and applying it to this task.

Similarly, the task of answering questions is also a natural language task, just a generative one. If you're answering questions, you just read a bunch of text that is maybe the context of the question, and you produce an answer, and that answer is just a bunch of words that you spit out via a language model. If you're doing, say, a two step debate, where you have question, answer, counterargument, then similarly, you have a language model that spits out an answer, and a language model that spits out the counterargument. Those can in fact be the same language model; you just flip the reward at some point. The agent is rewarded for answering well while it's spitting out the answer, and then when it's spitting out the counteranswer, you reward it for falsifying the answer. It's still just a generative language task with a slightly exotic reward.
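A sketch of that two-step setup, with the language model, judge model, and their methods all hypothetical stand-ins: one model plays both roles, and the reward sign flips between the answer and the counterargument.

```python
def two_step_debate_episode(language_model, judge_model, question: str):
    answer = language_model.generate(question, role="answer")
    critique = language_model.generate(question + "\n" + answer, role="critique")
    answer_wins = judge_model.predict(question, answer, critique)  # True if the answer survives
    # Same model, opposite rewards: the zero-sum structure of the game.
    language_model.reinforce(answer, reward=+1.0 if answer_wins else -1.0)
    language_model.reinforce(critique, reward=-1.0 if answer_wins else +1.0)
```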

Going forwards, we expect there to need to be something like … This is not actually high confidence. Maybe there’s things like AlphaGo zero style tree search that are required to make this work very well on the generative side, and we will explore those as required. Right now, we need to falsify the statement that we can just do it with stock language modeling, which we’re working on. Does that cover the first part?

Lucas: I think that’s great in terms of the first part, and then again, the second part was just places to be optimistic and pessimistic here about debate.

Geoffrey: Optimism, I think we’ve covered a fair amount of it. The primary source of optimism is this argument that shallow debates are already powerful, because you can cover a lot of terrain in argument space with a short debate, because of the high branching factor. If there’s an answer that is robust to all possible counteranswers, then it hopefully is a fairly strong answer, and that gets stronger as you increase the number of steps. This assumes strong debaters. That would be a reason for pessimism, not optimism. I’ll get to that.

The top two reasons: that's one, and the other is that ML is pretty good at zero-sum games, particularly zero-sum perfect information games. There have been these very impressive headline results from AlphaGo at DeepMind, Dota at OpenAI, and a variety of other games. In general, for zero-sum, close to perfect information games, we roughly know how to do them, at least in this not too high branching factor case. There's an interesting thing where if you look at the algorithms, say, for playing poker (which is zero-sum two player, but imperfect information), or for playing more than two player games, say 10 player games, they're just much more complicated, and they don't work as well.

I like the fact that debate is formulated as a two player zero-sum perfect information game, because we seem to have better algorithms to play them with ML. This is both practically true, it is in practice easier to play them, and also there’s a bunch of theory that says that two player zero-sum is a different complexity class than, say, two player non-zero-sum, or N player. The complexity class gets harder, and you need nastier algorithms. Finding a Nash equilibrium in a general game, that’s either non-zero-sum or more than two players is PPAD-complete, in a tabular case, in a small game, with two player zero-sum, that problem is convex and has a polynomial-time solution. It’s a nicer class. I expect there to continue to be better algorithms to play those games. I like formulating safety as that kind of problem.
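As a small worked instance of that complexity claim, here is the standard textbook reduction of a two-player zero-sum matrix game to a linear program; this is generic game theory, not OpenAI's code.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A: np.ndarray):
    """Max-min mixed strategy for the row player of payoff matrix A."""
    m, n = A.shape
    # Variables: (x_1..x_m, v). Minimize -v subject to:
    #   v - sum_i A[i, j] * x_i <= 0   for every opponent reply j
    #   sum_i x_i = 1, x_i >= 0
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]     # optimal mixed strategy and game value

strategy, value = solve_zero_sum(np.array([[1.0, -1.0], [-1.0, 1.0]]))  # matching pennies
print(strategy, value)              # roughly [0.5, 0.5] and value 0
```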

Those are kind of the reasons for optimism that I think are most important. I think going into more of those is less important and less interesting than worrying about stuff, so I'll list three of those concerns, or maybe four, and try to be fast so we can circle back. As I mentioned, I think interpretability has a large role to play here. Again, say Alice and Bob are debating. I would like Bob to be able to just point directly into Alice's thoughts and say, "She really thought X even though she said Y." The reason you need an interpretability technique for that is, in this conversation, I could just claim that you, Lucas Perry, are having some malicious thought, but that's not a falsifiable statement, so I can't use it in a debate. I could always make that statement, unless I can point into your thoughts.

Because we have so much control over machine learning, we have the potential ability to do that, and we can take advantage of it. I think that, for that to work, we need probably a deep hybrid between the two schemes, because an advanced agent’s thoughts will probably be advanced, and so you may need some kind of strengthened thing like amplification or debate just to be able to describe the thoughts, or to point at them in a meaningful way. That’s a problem that we have not really solved. Interpretability is coming along, but it’s definitely not hybridized with these fancy alignment schemes, and we need to solve that at some point.

Another problem is there’s no point in this kind of natural language debate where I can just say, for example, “You know, it’s going to rain tomorrow, and it’s going to rain tomorrow just because I’ve looked at all the weather in the past, and it just feels like it’s going to rain tomorrow.” Somehow, debate is missing this just straight up pattern matching ability of machine learning where I can just read a dataset and just summarize it very quickly. The theoretical side of this is if I have a debate about, even something as simple as, “What’s the average height of a person in the world?” In the debate method I’ve described so far, that debate has to have depth, at least logarithmic in the number of people. I just have to subdivide by population. Like, this half of the world, and then this half of that half of the world, and so on.

I can’t just say, “You know, on average it’s like 1.6 meters.” We need to have better methods for hybridizing debate with pattern matching and statistical intuition, and that’s something that is, if we don’t have that, we may not be competitive with other forms of ML.

Lucas: Why is that not just an intrinsic part of debate? Why is debating over these kinds of things different than any other kind of natural language debate?

Geoffrey: It is the same. The problem is just that for some types of questions, and there are other forms of this in natural language, there aren’t short deterministic arguments. There are many questions where the shortest deterministic argument is much longer than the shortest randomized argument. For example, if you allow randomization, I can say, “I claim the average height of a person is 1.6 meters.” Well, pick a person at random, and you’ll score me according to the squared difference between those two numbers: my claim and the height of the particular person you’ve chosen. The optimal move to make there is to just say the average height right away.

The thing I just described is a debate using randomized steps that is extremely shallow; it’s basically only two steps long. If I want to do a deterministic debate, I have to deterministically say that the average height of a person in North America is X, and in Asia it’s Y. The other debater could say, “I disagree about North America,” and you sort of recurse into that.
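
As a toy illustration of why the randomized version is shallow, here is a small simulation, again my own construction: a debater claims an average height, a person is sampled at random, and the debater is penalized by the squared difference. The expected penalty is minimized exactly at the true mean, so the optimal opening move is to state the true average immediately.

```python
import numpy as np

rng = np.random.default_rng(0)
heights = rng.normal(1.6, 0.1, size=100_000)  # hypothetical population

def expected_penalty(claim, population):
    # Expected squared difference between the claim and a random person's height.
    return np.mean((population - claim) ** 2)

for claim in [1.5, 1.7, heights.mean()]:
    print(f"claim {claim:.4f}: expected penalty {expected_penalty(claim, heights):.5f}")
# The penalty is smallest at the population mean, so the randomized debate
# resolves in two steps instead of a logarithmic-depth subdivision by region.
```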

It would be super embarrassing if we proposed these complicated alignment schemes, saying, “This is how we’re going to solve AI safety,” and they couldn’t quickly answer a trivial statistical question. That would be a serious problem. We kind of know how to solve that one. The harder case is if you bring in this more vague statistical intuition. It’s not like I’m computing a mean over some dataset; I’ve looked at the weather and, you know, it feels like it’s going to rain tomorrow. Getting that in is a bit trickier, but we have some ideas there. They’re unresolved.

That’s the thing which I am optimistic about, but which we need to work on; that’s one. The most important reason to be concerned is just that humans are flawed in a variety of ways. We have all these ethical inconsistencies and cognitive biases. We can write down some toy theoretical arguments that debate works with a limited but reliable judge, but does it work in practice with a human judge? I think there are some questions you can kind of reason through there, but in the end, a lot of that will be determined by just trying it, and seeing whether debate works with people. Eventually, when we start to get agents that can play these debates, we can check whether it works with two ML agents and a human judge. For now, when language modeling is not that far along, we may need to try it out first with all humans.

This would be, you play the same debate game, but both the debaters are also people, and you set it up so that somehow it’s trying to model this case where the debaters are better than the judge at some task. The debaters might be experts at some domain, they might have access to some information that the judge doesn’t have, and therefore, you can ask whether a reasonably short debate is truth seeking if the humans are playing to win.

The hope there would be that you can test out debate on real people with interesting questions, say complex scientific questions, and questions about ethics, and about areas where humans are biased in known ways, and see whether it works, and also see not just whether it works, but which forms of debate are strongest.

Lucas: What does it mean for debate to work or be successful for two human debaters and one human judge if it’s about normative questions?

Geoffrey: Unfortunately, if you want to do this test, you need to have a source of truth. In the case of normative questions, there are two ways to go. One way is you pick a task where we may not know the entirety of the answer, but we know some aspect of it with high confidence. An example would be the resume case, where two resumes are identical except for the name at the top, and we just sort of normatively … we believe with high confidence that the answer shouldn’t depend on that. If it turns out that a winning debater can maliciously and subtly take advantage of the name to instill fear in the judge, and make the resume with a black-sounding name sound bad, that would be a failure.

We know that because, while we don’t know in advance whether a resume should be judged good or bad overall, we do know that the judgment of this pair of identical resumes shouldn’t depend on the name. That’s one way: we have some kind of normative statement where we have reasonable confidence in the answer. The other way, which is kind of similar, is you have two experts in some area, and the two experts agree on what the true answer is, either because it’s a consensus across the field, or just because maybe those two experts agree. Ideally, it should be a thing that’s generally true. Then, you force one of the experts to lie.

You say, “Okay, you both agree that X is true, but now we’re going to flip a coin and now one of you only wins if you lie, and we’ll see whether that wins or not.”

Lucas: Just to plug your game here, you guys do have a debate game. We’ll put a link to that in the article that goes along with this podcast. I suggest people check that out if you’d like a more tangible and fun way to understand debate. I think it’ll help elucidate what the process looks like, the asymmetries that go on, and the key idea here that it is harder to lie than to refute a lie. It seems like if we could deploy some sort of massive statistical analysis over many iterated debates across different agents, we would be able to come down on the efficacy of debate in different situations: where the judge and the debaters are all AI, mixed situations, or all-human debates. I think it’d be interesting to see the varying results there.

Geoffrey: This is going to be a noisy enough process, for a variety of reasons, that we will probably have to do this a lot to know. So far, we’ve done only a small, informal number of these human, human, human debates, but even in those few expert debates, we’ve already learned a fair amount at a qualitative level. I’ll give an example. In a debate about a CS theory question, there was a judge present while the debate was going on, but they were interacting only fairly minimally.

Early in the debate (I was the debater telling the truth here), I stated my formulation of the question and unpacked it intuitively, and the judge, perfectly understandably, had a subtle misunderstanding of how I’d framed the question. That misunderstanding of what the topic was about was never corrected throughout the debate, and therefore it was much harder for the honest player to win, because it seemed like the honest player had a harder case to make.

That taught us that having judge interaction is potentially quite important, so that the debaters have a detailed understanding of what the judge is thinking. If your goal is to model debate as a perfect information game, then the closer to that you can get, and the more information the debaters have, the better it should be.

Lucas: Yeah. I mean, that also allows the debaters to exploit cognitive biases in the judge.

Geoffrey: That’s right.

Lucas: You would point that out. Like, “Hey, this person’s exploiting your cognitive bias.”

Geoffrey: Yeah, so I think it’s an open question how exactly to strike that balance, and whether there’s a way to strike it that works. Generally, the more information about the judge you provide to the debaters, either through judge interaction or by just telling the debaters something about the judge, the stronger they become as players of the game, but it might reveal ways to attack the judge.

Now, if our goal is to be resistant to very strong agents, and it turns out that the only way to make it safe is to hide information from the agents, maybe you shouldn’t use this method. It may not be very resilient. It’s likely that for experiments, we should push as far as we can towards strong play, revealing as much as possible, and see whether it still works in that case.

Lucas: In terms here of the social scientists playing a role here, do you want to go ahead and unpack that a bit more? There’s a paper that you’re working on with Amanda Askell on this.

Geoffrey: As you say, we want to run statistically significant experiments that test whether debate is working and which forms of debate are best, and that will require careful experimental design. That is, in some sense, an experiment in pure social science. There’s no ML involved. It’s motivated by machine learning, but it’s just a question about how people think, and how they argue and convince each other. Currently, no one at OpenAI has experience running human experiments of this kind, or at least no one that is involved in this project.

The hope would be to get people involved in AI safety who have experience and knowledge of how to structure experiments on the human side, both in terms of experimental design and in terms of understanding how people think, where they might be biased, and how to correct away from those biases. I just expect that process to involve a lot of knowledge that we, as ML researchers, don’t possess at the moment.

Lucas: Right. In order to have an efficacious debate process, or AI alignment process in general, you need to debug and understand the humans as well as the machines. Understanding our cognitive biases, weak spots, and blind spots in debate seems crucial.

Geoffrey: Yeah. I sort of view it as a social science experiment, because it’s just a bunch of people interacting. It’s a fairly weird experiment, though; it differs from normal experiments in some ways. In thinking about how to build AGI in a safe way, we have a lot of control over the whole process. If it takes a bunch of training to make people good at judging these debates, we can provide that training, and we can pick people who are better or worse at judging. There’s a lot of control that we can exert. In addition to just finding out whether this thing works, it’s sort of an engineering process of debugging the humans: working around human flaws, taking them into account, and making the process resilient.

My highest level hope here is that humans have various flaws and biases, but we are willing to be corrected and to set our flaws aside. Or maybe there are two ways of approaching a question, where one way hits the bias and one way doesn’t, and we want to see whether we can produce some scheme that picks out the right way, at least to some degree of accuracy. We don’t need to be able to answer every question. If we learned, for example, that debate works perfectly well for some broad class of tasks, but not for resolving the final question of what humans should do over the long-term future, or for resolving all metaethical disagreements, we can afford to say, “We’ll put those aside for now. We want to get through this risky period, make sure AI doesn’t do something malicious, and then we can deliberately work through those deeper questions and take our time doing it.”

The goal includes the task of knowing which things we can safely answer, and the goal should be to structure the debates so that if you give them a question where humans just disagree too much, or are too unreliable to answer reliably, the answer should be, “We don’t know the answer to that question yet.” A debater should be able to win a debate by admitting ignorance in that case.

There is an important assumption I’m making about the world that we should make explicit, which is that I believe it is safe to be slow about certain ethical or directional decisions. You can construct games where you just have to make a decision now: you’re barreling along in some car with no brakes, and you have to dodge left or right around an obstacle, but you can’t say, “I’m going to ponder this question for a while and hold off.” You have to choose now. I would hope that the task of choosing what we want to do as a civilization is not like that. We can resolve some immediate concerns about serious problems now, including existential risk, but we don’t need to resolve everything.

That’s a very strong assumption about the world, which I think is true, but it’s worth saying that I know that is an assumption.

Lucas: Right. I mean, it’s true insofar as coordination succeeds, and people don’t have incentives just to go do what they think is best.

Geoffrey: That’s right. If you can hold off deciding things until we can deliberate longer.

Lucas: Right. What does this distillation process look like for debate, where we ensure alignment is maintained as a system’s capability is amplified and changed?

Geoffrey: One property of amplification, which is nice, is that you can sort of imagine running it forever. You train on simple questions, and then you train on more complicated questions, and then you keep going up and up and up, and if you’re confident that you’ve trained enough on the simple questions, you can never see them again, freeze that part of the model, and keep going. I think in practice that’s probably not how we would run it, so you don’t inherit that advantage. In debate, what you would have to do to get to more and more complicated questions is, at some point, and maybe this point is fairly far off, go to longer and longer debates.

If you’re thinking about the long-term future, I expect to have to switch over to some other scheme, or at least layer schemes: embed debate in a larger scheme. An example would be that the question you resolve with debate is, “What is an even better way to do AI alignment?” That, you can resolve with, say, depth-100 debates, and maybe you can handle that depth well. What that spits out is an algorithm; you interrogate it enough to know that you trust it, and then you can run that one.

You can also imagine eventually needing to hybridize a debate-like scheme and an amplification-like scheme, where you don’t get a new algorithm out, but you trust this initial debating oracle enough that you can view it as fixed, and then start a new debate scheme which can trust any answer the original scheme produces. Now, I don’t really like that scheme, because it feels like you haven’t gained a whole lot. Generally, if you think about, say, the next 1,000 years of AI alignment going forwards (it’s useful to think about the long term), I expect to need further advances after we get past this AI risk period.

I’ll give a concrete example. You ask your debating agents, “Okay, give me a perfect theorem prover.” Right now, all of our theorem provers probably have little bugs, so you can’t really trust them to resist a superintelligent agent. Say you trust the theorem prover that you get out, and you say, “Okay, now I just want a proof that AI alignment works.” You bootstrap your way up, using this agent as an oracle on interesting, complicated questions, until you’ve got a scheme that gets you to the next level, and then you iterate.

Lucas: Okay. In terms of the practical path from the short-term world to an AGI world, maybe over the next 30 years, what does this actually look like? In what ways could we see debate and amplification deployed and used at scale?

Geoffrey: There is the direct approach, where you use them to answer questions, using exactly the structure they’re trained with. A debating agent would just engage in debates, and you would use it as an oracle in that way. You can also use it to generate training data. You could, for example, ask a debating agent to spit out the answers to a large number of questions, and then, if you trust all the answers and you trust supervised learning to work, you just train a little module on them. If you wanted to build a strong self-driving car, you could ask it to train a much smaller network that way. The smaller network would not be human level, but this gives you a way to access data.
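
As a minimal sketch of that distillation idea, assuming a hypothetical oracle interface (the `oracle_answer` stand-in below is mine, not OpenAI’s), you query the trusted but expensive oracle for labels on many questions and fit a small supervised model to them:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def oracle_answer(x):
    # Stand-in for an expensive debate-trained oracle; a fixed function here
    # so the sketch runs end to end.
    return np.sin(3 * x).ravel()

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(5000, 1))   # "a large number of questions"
y = oracle_answer(X)                          # the oracle spits out answers

# The small, fast module trained on oracle-labeled data.
student = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
student.fit(X, y)
print(student.predict(np.array([[0.25]])))    # cheap to query at deploy time
```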

There’s a lot you could do with a powerful oracle that gives you answers to questions; I could probably go on at length about fancy schemes you could run with oracles, but I don’t know if that’s that important. The more important part to me is the decision process we deploy these things into: how we choose which questions to answer, and what we do with those answers. It’s probably not a great idea to train an oracle and then give it to everyone in the world right away, unfiltered, for reasons you can probably fill in by yourself. Basically, malicious people exist, and they would ask bad questions and eventually do bad things with the results.

If you have one of these systems, you’d like to deploy it in a way that can help as many people as possible, which means everyone will have their own questions to ask of it. But you need some filtering mechanism, some process to decide which questions to actually ask, what to do with the answers, and so on.

Lucas: Can the debate process be used to self-filter, declining to answer certain questions based on modeling the human decision about whether or not they would want that question answered?

Geoffrey: It can. There’s a subtle issue, which I think we need to deal with, but haven’t dealt with yet. There’s a commutativity question. Say you have a large number of people: do you reach reflective equilibrium for each person first and then vote across people, or do you have a debate and then vote on what the judgment should be? Imagine playing a debate game where you play a debate, and then everyone votes on who wins. There are advantages on both sides. On the side of voting after reflective equilibrium, you have the problem that reaching reflective equilibrium for a single person may be disastrous if you pick the wrong person. That extreme is probably bad. The other extreme is also kind of weird, because there are a bunch of standard results showing that if you take a bunch of rational agents voting, it might be true that A and B together imply C, yet the agents vote yes on A, yes on B, and no on C. Votes on statements where every voter is rational are not themselves rational; the voting outcome is irrational.
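
The voting paradox Geoffrey mentions is the classic discursive dilemma, and a three-voter example, my own illustration, makes it concrete: each voter’s individual judgments are consistent with “A and B implies C,” yet the majority votes are not.

```python
# Each voter holds a judgment set consistent with (A and B) -> C.
voters = [
    {"A": True,  "B": True,  "C": True},   # accepts A and B, so accepts C
    {"A": True,  "B": False, "C": False},  # rejects B, so may reject C
    {"A": False, "B": True,  "C": False},  # rejects A, so may reject C
]

def majority(prop):
    return sum(v[prop] for v in voters) > len(voters) / 2

print({p: majority(p) for p in ("A", "B", "C")})
# -> {'A': True, 'B': True, 'C': False}: the aggregate accepts A and B but
# rejects C, so the group-level judgment is irrational even though every
# individual voter is rational.
```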

The result of voting before you take reflective equilibrium is sort of an odd philosophical concept. Probably, you need some kind of hybrid between these schemes, and I don’t know exactly what that hybrid looks like. That’s an area where technical AI safety mixes with policy to a significant degree, and we will have to wrestle with it.

Lucas: Great. To back up and zoom in on one point you made: is the view that one might want to be worried about people who undergo an amplified, long period of explicit human reasoning, because they might just arrive at something horrible through it?

Geoffrey: I guess, yes, we should be worried about that.

Lucas: Wouldn’t one view of debate be that humans, given debate, would also over time become more likely to arrive at true answers? That reflective equilibrium will tend to lead people to truth?

Geoffrey: Yes, that is an assumption, and I think you should be worried. The reason for hope is our ability to not answer certain questions. I don’t know that I trust reflective equilibrium applied incautiously, or not regularized in some way, but if there’s a case where some definition of reflective equilibrium is not trustworthy, I’m hopeful that we can construct debate so that the result will be, “This is just too dangerous to decide. We don’t really know the answer with high confidence.”

This is certainly true of complicated moral things: avoiding lock-in, for example. I would not trust reflective equilibrium if it says, “Well, the right answer is just to lock our values in right now, because they’re great.” We need to take advantage of the outs we have in terms of being humble about deciding things. Once you have those outs, I’m hopeful that we can solve this, but there’s a bunch of work to do to know whether that’s actually true.

Lucas: Right. Lots more experiments to be done on the human side and the AI side. Is there anything here that you’d like to wrap up on, or anything that you feel like we didn’t cover that you’d like to make any last minute points?

Geoffrey: I think the main point is just that there’s a bunch of work here. OpenAI is hiring people to work on the ML side of things, and also on theoretical aspects, if you like wrestling with how these things work on the theory side, and certainly on the human side, doing the social science work. If this stuff seems interesting, then we are hiring.

Lucas: Great, so people that are interested in potentially working with you or others at OpenAI on this, or if people are interested in following you and keeping up to date with your work and what you’re up to, what are the best places to do these things?

Geoffrey: I have taken a break from pretty much all social media, so you can follow me on Twitter, but I won’t ever post anything, or see your messages, really. The best way is to email me; it’s not too hard to find my email address. And then watch as we publish stuff.

Lucas: Cool. Well, thank you so much for your time, Geoffrey. It’s been very interesting. I’m excited to see how these experiments go for debate, and how things end up moving along. I’m pretty interested and optimistic, I guess, about debate as an epistemic process, its role in truth-seeking, and how it will play into AI alignment.

Geoffrey: That sounds great. Thank you.

Lucas: Yep. Thanks, Geoff. Take care.

If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

[end of recorded material]

FLI Podcast (Part 2): Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark

In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in arms control, working with the US government to renounce the development and possession of biological weapons and halt the use of Agent Orange and other herbicides in Vietnam. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.   

Part Two focuses on three major incidents in the history of biological weapons: the 1979 anthrax outbreak in Russia, the use of Agent Orange and other herbicides in Vietnam, and the Yellow Rain controversy in the early 80s. Dr. Meselson led the investigations into all three and solved some perplexing scientific mysteries along the way.

Topics discussed in this episode include:

  • The value of verification, regardless of the challenges
  • The 1979 Sverdlovsk anthrax outbreak
  • The use of “rainbow” herbicides during the Vietnam War, including Agent Orange
  • The Yellow Rain Controversy

Publications and resources discussed in this episode include:

  • The Sverdlovsk anthrax outbreak of 1979, Matthew Meselson, Jeanne Guillemin, Martin Hugh-Jones, Alexander Langmuir, Ilona Popova, Alexis Shelokov, and Olga Yampolskaya, Science, 18 November 1994, Vol. 266, pp 1202-1208.
  • Preliminary Report: Herbicide Assessment Commission of the American Association for the Advancement of Science, Matthew Meselson, A. H. Westing, J. D. Constable, and Robert E. Cook, 30 December 1970, private circulation, 8 pp. Reprinted in Congressional Record, U.S. Senate, Vol. 118, part 6, 3 March 1972, pp 6806-6807.
  • “Background Material Relevant to Presentations at the 1970 Annual Meeting of the AAAS”, Herbicide Assessment Commission of the AAAS, with A. H. Westing and J. D. Constable, December 1970, private circulation, 48 pp. Reprinted in Congressional Record, U.S. Senate, Vol. 118, part 6, 3 March 1972, pp 6807-6813.
  • “The Yellow Rain Affair: Lessons from a Discredited Allegation”, Matthew Meselson with Julian Perry Robinson, in Terrorism, War, or Disease?, eds. A. L. Clunan, P. R. Lavoy, and S. B. Martin, Stanford University Press, Stanford, California, 2008, pp 72-96.
  • Yellow Rain by Thomas D. Seeley, Joan W. Nowicke, Matthew Meselson, Jeanne Guillemin and Pongthep Akratanakul, Scientific American, September 1985, Vol. 253, pp 128-137.

Click here for Part 1: From DNA to Banning Biological Weapons with Matthew Meselson and Max Tegmark

Four-ship formation on a defoliation spray run. (U.S. Air Force photo)

Ariel: Hi everyone. Ariel Conn here with the Future of Life Institute. And I would like to welcome you to part two of our two-part FLI podcast with special guest Matthew Meselson and special guest/co-host Max Tegmark. You don’t need to have listened to the first episode to follow along with this one, but I do recommend listening to the other episode, as you’ll get to learn about Matthew’s experiment with Franklin Stahl that helped prove Watson and Crick’s theory of DNA and the work he did that directly led to US support for a biological weapons ban. In that episode, Matthew and Max also talk about the value of experiment and theory in science, as well as how to get some of the world’s worst weapons banned. But now, let’s get on with this episode and hear more about some of the verification work that Matthew did over the years to help determine if biological weapons were being used or developed illegally, and the work he did that led to the prohibition of Agent Orange.

Matthew, I’d like to ask about a couple of projects that you were involved in that I think are really closely connected to issues of verification, and those are the Yellow Rain Affair and the Russian Anthrax incident. Could you talk a little bit about what each of those was?

Matthew: Okay, well in 1979, there was a big epidemic of anthrax in the Soviet city of Sverdlovsk, just east of the Ural mountains, in the beginning of Siberia. We learned about this epidemic not immediately but eventually, through refugees and other sources, and the question was, “What caused it?” Anthrax can occur naturally. It’s commonly a disease of bovids, that is cows or sheep, and when they die of anthrax, the carcass is loaded with the anthrax bacteria, and when the bacteria see oxygen, they become tough spores, which can last in the earth for a long, long time. And then if another bovid comes along and manages to eat something that’s got those spores, he might get anthrax and die, and the meat from these animals who died of anthrax, if eaten, can cause gastrointestinal anthrax, and that can be lethal. So, that’s one form of anthrax. You get it by eating.

Now, another form of anthrax is inhalation anthrax. In this country, there were a few cases of men who worked in leather factories with leather that had come from anthrax-affected animals, usually imported, which had live anthrax spores on the leather that got into the air of the shops where people were working with the leather. Men would breathe this contaminated air and the infection in that case was through the lungs.

The question here was, what kind of anthrax was this: inhalational or gastrointestinal? And because I was by this time known as an expert on biological weapons, the man who was dealing with this issue at the CIA in Langley, Virginia — a wonderful man named Julian Hoptman, a microbiologist by training — asked me if I’d come down and work on this problem at the CIA. He had two daughters who were away at college, and so he had a spare bedroom, so I actually lived with Julian and his wife. And in this way, I was able to talk to Julian night and day, both at the breakfast and dinner table, but also in the office. Of course, we didn’t talk about classified things except in the office.

Now, we knew from the textbooks that the incubation period for inhalation anthrax was thought to be four, five, six, seven days: if, four or five days after you inhale it, you hadn’t yet come down with it, you probably wouldn’t. Well, we knew from classified sources that people were dying of this anthrax over a period of six weeks, from April all the way into the middle of May 1979. So, if the incubation period was really that short, you couldn’t explain how it could be airborne, because a cloud goes by right away; once it’s gone, you can’t inhale it anymore. That made the conclusion that it was airborne difficult to reach. You could still say, well, maybe the spores got stirred up again by people cleaning up the site, or maybe the incubation period is longer than we thought, but there was a problem there.

And so the conclusion of our working group was that it was probable that it was airborne. In the CIA, at that time at least, in a conclusion that goes forward to the president, you couldn’t just say, “Well maybe, sort of like, kind of like, maybe if …” Words like that just didn’t work, because the poor president couldn’t make heads nor tails. Every conclusion had to be called “possible,” “probable,” or “confirmed.” Three levels of confidence.

So, the conclusion here was that it was probable that it was inhalation, and not ingestion. The Soviets said that it was bad meat, but I wasn’t convinced, mainly because of this incubation period thing. So I decided that the best thing to do would be to go and look. Then you might find out what it really was. Maybe by examining the survivors or maybe by talking to people — just somehow, if you got over there, with some kind of good luck, you could figure out what it was. I had no very clear idea, but when I would meet any high level Soviet, I’d say, “Could I come over there and bring some colleagues and we would try to investigate?”

The first time that happened was with a very high-level Soviet who I met in Geneva, Switzerland. He was a member of what’s called the Military Industrial Commission in the Soviet Union. They decided on all technical issues involving the military, and that would have included their biological weapons establishments, and we knew that they had a big biological laboratory in the city of Sverdlovsk, there was no doubt about that. So, I told them, “I want to go in and inspect. I’ll bring some friends. We’d like to look.” And he said, “No problem. Write to me.”

So, I wrote to him, and I also went to the CIA and said, “Look, I’ve got to have a map, because maybe they’d let me go there and take me to the wrong place, and I wouldn’t know it’s the wrong place, and I wouldn’t learn anything.” So, the CIA gave me a map — which turned out to be wrong, by the way — but then I got a letter back from this gentleman saying no, actually they couldn’t let us go, because of the shooting down of the Korean jet, flight 007, if any of you remember that. A Russian fighter plane shot down a Korean jet with a lot of passengers on it, and they all got killed. Relations were tense. So, that didn’t happen.

Then the second time, an American and the Russian Minister of Health got a Nobel prize. The winner over there was the minister of health named Chazov, and the fellow over here was Bernie Lown in our medical school, who I knew. So, I asked Bernie to take a letter when he went next time to see his friend Chazov in Moscow, to ask him if he could please arrange that I could take a team to Sverdlovsk, to go investigate on site. And when Bernie came back from Moscow, I asked him and he said, “Yeah. Chazov says it’s okay, you can go.” So, I sent a telex — we didn’t have email — to Chazov saying, “Here’s the team. We want to go. When can we go?” So, we got back a telex saying, “Well, actually, I’ve sent my right-hand guy who’s in charge of international relations to Sverdlovsk, and he looked around, and there’s really no evidence left. You’d be wasting your time,” which means no, right? So, I telexed back and said, “Well, scientists always make friends and something good always comes from that. We’d like to go to Sverdlovsk anyway,” and I never heard back. And then, the Soviet Union collapses, and we have Yeltsin now, and it’s the Russian Republic.

It turns out that a group of — I guess at that time they were still Soviets — Soviet biologists came to visit our Fort Detrick, and they were the guests of our Academy of Sciences. So, there was a welcoming party, and I was on the welcoming party, and I was assigned to take care of one particular visitor, a man named Mr. Yablokov. So, we got to know each other a little bit, and at that time we went to eat crabs in a Baltimore restaurant, and I told him I was very interested in this epidemic in Sverdlovsk, and I guess he took note of that. He went back to Russia and that was that. Later, I read in a journal that the CIA produced (abstracts from the Russian press) that Yeltsin had ordered his minister, or assistant, for Environment and Health to investigate the anthrax epidemic back in 1979, and the man appointed to do the investigation was my Mr. Yablokov, who I knew.

So, I sent a telex to Mr. Yablokov saying, “I see that President Yeltsin has asked for you to look into this old epidemic and decide what really happened, and that’s great, I’m glad he did that, and I’d like to come and help you. Could I come and help you?” So, I got back a telex saying, “Well, it’s a long time ago. You can’t bring skeletons out of the closet, and anyway, you’d have to know somebody there.” Basically it was a letter that said no. But then my friend Alex Rich of Cambridge Massachusetts, a great molecular biologist and X-ray crystallographer at MIT, had a party for a visiting Russian. Who is the visiting Russian but a guy named Sverdlov, like Sverdlovsk, and he’s staying with Alex. And Alex’s wife came over to me and said, “Well, he’s a very nice guy. He’d been staying with us for several days. I make him breakfast and lunch. I make the bed. Maybe you could take him for a while.”

So we took him into our house for a while, and I told him that I had been given a turn down by Mr. Yablokov, and this guy whose name is Sverdlov, which is an immense coincidence, said, “Oh, I know Yablokov very well. He’s a pal. I’ll talk to him. I’ll get it fixed so you can go.” Now, I get a letter. In this letter, handwritten by Mr. Yablokov, he said, “Of course, you can go, but you’ve got to know somebody there to invite you.” Oh, who would I know there?

Well, there had been an American solid-state physicist named Ellis who had been there on a United States National Academy of Sciences–Russian Academy of Sciences exchange agreement, doing solid-state physics with a Russian colleague in Sverdlovsk. So, I called Don Ellis and I asked him, “That man you cooperated with in Sverdlovsk — his name was Gubanov — I need someone to invite me to Sverdlovsk, and you probably still maintain contact with him. Could you ask him to invite me?” And Don said, “I don’t have to do that. He’s visiting me today. I’ll just hand him the telephone.”

So, Mr. Gubanov comes on the telephone and he says, “Of course I’ll invite you; my wife and I have always been interested in that epidemic.” So, a few days later, I get a telex from the rector of the university there in Sverdlovsk, who was a mathematical physicist. And he says, “The city is yours. Come on. We’ll give you every assistance you want.” So we went, and I formed a little team, which included a pathologist, thinking maybe we’ll get hold of some information from autopsies that could decide whether it was inhalation or gastrointestinal. We needed someone who spoke Russian; I had a friend who was a virologist who spoke Russian. We needed someone who knew a lot about anthrax, and veterinarians know a lot about anthrax, so I got a veterinarian. And we needed an anthropologist who knew a lot about how to work with people, and that happened to be my wife, Jeanne Guillemin.

So, we all go over there, and we were assigned a solid-state physicist, a man named Borisov, to take us everywhere. He knew how to fix everything: cars that wouldn’t work, and also the KGB. He was a genius, and became a good friend. It turns out that he had a girlfriend, and she, by this time, had been elected to be a member of the Duma. In other words, she’s a congresswoman. She’s from Sverdlovsk. She had been a friend of Yeltsin. She had written Yeltsin a letter, which my friend Borisov knew about, and I have a photocopy of the letter. What it says is, “Dear Boris Nikolayevich” (that’s Yeltsin), “My constituents here at Sverdlovsk want to know if that anthrax epidemic was caused by a government activity or not. Because if it was, the families of those who died are entitled to double pension money, just like soldiers killed in war.” So, Yeltsin writes back, “We will look into it.” And that’s why my friend Yablokov got asked to look into it. It was decided eventually, by Yeltsin, that it was the result of government activity, and so he had to have a list of the people who were going to get the extra pensions, because otherwise everybody would say, “I’d like to have an extra pension.” So there had to be a list.

So she had this list with 68 names of the people who had died of anthrax during this time period in 1979. The list also had the address where they lived. So, now my wife, Jeanne Guillemin, Professor of Anthropology at Boston College, goes door-to-door — with two Russian women who were professors at the university and who knew English, so they could communicate with Jeanne — and knocks on the doors: “We would like to talk to you for a little while. We’re studying health, we’re studying the anthrax epidemic of 1979. We’re from the university.”

Everybody let them in except one lady who said she wasn’t dressed, so she couldn’t let anybody in. So in all the other cases, they did an interview and there were lots of questions. Did the person who died have TB? Was that person a smoker? One of the questions was where did that person work, and did they work in the day or the night? We asked that question because we wanted to make a map. If it had been inhalation anthrax, it had to be windborne, and depending on the wind, it might have been blown in a straight line if the wind was of a more or less unchanging direction.

If, on the other hand, it was gastrointestinal, people get bad meat from black market sellers all over the place, and the map of where they were wouldn’t show anything important, they’d just be all over the place. So, we were able to make a map when we got back home, we went back there a second time to get more interviews done, and Jeanne went back a third time to get even more interviews done. So, finally we had interviews with families of nearly all of those 68 people, and so we had 68 map locations: where they lived, and where they worked, and whether it was day or night. Nearly all of them were daytime workers.

When we plotted where they lived, they lived all over the southern part of the city of Sverdlovsk. When we plotted where they were likely would have been in the daytime, they all fell in to one narrow zone with one point at the military biological lab. The lab was inside the city. The other point was at the city limit: The last case was at the edge of the city limit, the southern part. We also had meteorological information, which I had brought with me from the United States. We knew the wind direction every three hours, and there was only one day when the wind was constantly blowing in the same direction, and that same direction was exactly the direction along which the people who died of anthrax lived.

Well, bad meat does not blow around in straight lines. Clouds of anthrax spores do. It was rigorous: we could conclude from this, with no doubt whatsoever, that it had been airborne, and we published this in Science magazine. It was really a classic of epidemiology; you couldn’t ask for anything better. Also, the autopsy records were inspected by the pathologist on our trip, and he concluded from the autopsy specimens that it was inhalation. So, there was that evidence too, and that was published in PNAS. So, that really ended the mystery. The Soviet explanation was just wrong, and the CIA explanation, which had been only “probable,” was confirmed.
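
To spell out the geometric logic of that inference, here is a toy simulation, entirely my own construction and not part of the original study: an airborne release scatters victims’ daytime locations narrowly along the wind axis through the source, while bad meat scatters them in all directions.

```python
import numpy as np

rng = np.random.default_rng(42)
source = np.array([0.0, 0.0])          # hypothetical release point
wind = np.array([1.0, 0.3])
wind /= np.linalg.norm(wind)           # unit vector along the plume axis
perp = np.array([-wind[1], wind[0]])   # unit vector across the plume axis

n = 68  # matching the number of mapped cases
airborne = (source
            + np.outer(rng.uniform(0, 10, n), wind)    # spread downwind
            + np.outer(rng.normal(0, 0.2, n), perp))   # narrow crosswind
foodborne = rng.uniform(-5, 5, size=(n, 2))            # scattered purchases

for name, pts in [("airborne", airborne), ("foodborne", foodborne)]:
    off_axis = (pts - source) @ perp   # distance off the plume axis
    print(f"{name}: crosswind spread = {off_axis.std():.2f}")
# The airborne pattern is an order of magnitude tighter across the wind axis,
# which is the signature the mapped cases showed.
```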

Max: Amazing detective story.

Matthew: I liked going out in the field, using whatever science I knew to try and deal with questions of importance to arms control, especially chemical and biological weapons arms control. And that happened to me on three occasions, one I just told you. There were two others.

Ariel: So, actually real quick before you get into that. I just want to mention that we will share or link to that paper and the map. Because I’ve seen the map that shows that straight line, and it is really amazing, thank you.

Matthew: Oh good.

Max: I think at the meta level this is also a wonderful example of what you mentioned earlier there, Matthew, about verification. It’s very hard to hide big programs because it’s so easy for some little thing to go wrong or not as planned and then something like this comes out.

Matthew: Exactly. By the way, that’s why having a verification provision in the treaty is worth it even if you never inspect. Let’s say that the guys who are deciding whether or not to do something which is against the treaty, they’re in a room and they’re deciding whether or not to do it. Okay? Now it is prohibited by a treaty that provides for verification. Now they’re trying to make this decision and one guy says, “Let’s do it. They’ll never see it. They’ll never know it.” Another guy says, “Well, there is a provision for verification. They may ask for a challenge inspection.” So, even the remote possibility that, “We might get caught,” might be enough to make that meeting decide, “Let’s not do it.” If it’s not something that’s really essential, then there is a potential big price.

If, on the other hand, there’s not even a treaty that allows the possibility of a challenge inspection, if the guy says, “Well, they might find it,” the other guy is going to say, “How are they going to find it? There’s no provision for them going there. We can just say, if they say, ‘I want to go there,’ we say, ‘We don’t have a treaty for that. Let’s make a treaty, then we can go to your place, too.’” It makes a difference: Even a provision that’s never used is worth having. I’m not saying it’s perfection, but it’s worth having. Anyway, let’s go on to one of these other things. Where do you want me to go?

Ariel: I’d really love to talk about the Agent Orange work that you did. So, I guess if you could start with the Agent Orange research and the other rainbow herbicides research that you were involved in. And then I think it would be nice to follow that up with, sort of another type of verification example, of the Yellow Rain Affair.

Matthew: Okay. The American Association for the Advancement of Science, the biggest organization of science in the United States, became, as the Vietnam War went on, more and more concerned that the spraying of herbicides in Vietnam might cause ecological or health harm. And so at successive national meetings, there were resolutions to have it looked into. As a result of one of those resolutions, the AAAS asked a fellow named Fred Tschirley to look into it. Fred was at the Department of Agriculture, but he was one of the people who had developed the military use of herbicides. He did a study, and he concluded that there was no great harm, except possibly to the mangrove forests, and even those would regenerate.

But at the next annual meeting, there were more appeals from the membership, and now they wanted the AAAS to do its own investigation. The compromise was that they’d do a study to design an investigation, and they had to have someone to lead that. So, they asked a fellow named John Cantlon, who was provost of Michigan State University, would he do it, and he said yes. And after a couple of weeks, John Cantlon said, “I can’t do this. I’m being pestered by the left and the right and opponents on all sides. It’s too political.”

So, then they asked me if I would do it. Well, I decided I’d do it. The reason was that I wanted to see the war. Here I’d been very interested in chemical and biological weapons; very interested in war, because that’s the place where chemical and biological weapons come into play. If you don’t know anything about war, you don’t know what you’re talking about. I taught a course at Harvard for over two years on war, but that wasn’t like being there. So, I said I’d do it.

I formed a little group to do it. A guy named Arthur Westing, who had actually worked with herbicides and who was a forester himself and had been in the army in Korea, and I think had a battlefield promotion to captain. Just the right combination of talents. Then we had a chemistry graduate student, a wonderful guy named Bob Baughman. So, to design a study, I decided I couldn’t do it sitting here in Cambridge, Massachusetts. I’d have to go to Vietnam and do a pilot study in order to design a real study. So, we went to Vietnam — by the way, via Paris, because I wanted to meet the Vietcong people, I wanted them to give me a little card we could carry in our boots that would say, if we were captured, “We’re innocent scientists, don’t imprison us.” And we did get such little cards that said that. We were never captured by the Vietcong, but we did have some little cards.

Anyway, we went to Vietnam and we found, to my surprise, that the military assistance command, that is the United States Military in Vietnam, very much wanted to help our investigation. They gave us our own helicopter. That is, they assigned a helicopter and a pilot to me. And anywhere we wanted to go, I’d just call a certain number the night before and then go to Tan Son Nhut Air Base, and there would be a helicopter waiting with a pilot instructed FAD — fly as directed.

So, one of the things we did was to fly over a valley where herbicides had been sprayed to kill the rice. John Constable, the medical member of our team, and I did two flights so we could take a lot of pictures. The man who had designed this mission, a chemical corps officer named Captain Franz, had requested it and gotten permission through a series of review processes on the grounds that it was really an enemy crop production area: not an area of indigenous Montagnard people growing food for their own eating, but rather enemy soldiers growing it for themselves.

So we took a lot of pictures, and as we flew, Captain Franz said, “See down there, there are no houses. There’s no civilian population. It’s just military down there. Also, the rice is being grown on terraces on the hillsides. The Montagnard people don’t do that. They just grow it down in the valley. They don’t practice terracing. And also, the extent of the rice fields down there — that’s all brand new. Fields a few years ago were much, much smaller in area. So, that’s how we know that it’s an enemy crop production area.” And he was a very nice man, and we believed him. And then we got home, and we had our films developed.

Well, we had very good cameras, and although you couldn’t see it from the aircraft, you could certainly see it in the film: the valley was loaded with little grass shacks with yellow roofs — meaning that they had been built recently, because you have to replace the straw roofs every once in a while; if the straw gets too old, it turns black, but if it’s yellow, it means that somebody is living there. And there were hundreds and hundreds of them.

We got from the Food and Agriculture Organization in Rome how much rice you need to stay alive for one year, and what area in hectares of dry rice — because this isn’t paddy rice, it’s dry rice — you’d need to grow that much. We measured the area under cultivation from our photographs, and the area was just enough to support that entire population, if we assumed that there were five people who needed to be fed in every one of the houses that we counted.

We could also get the aerial photography that the French had done in the late 1940s, and it turns out that the rice fields had not expanded. They were exactly the same. So it wasn’t that the military had moved in and made bigger rice fields: they were the same. So, everything that Captain Franz said was just wrong. I’m sure he believed it, but it was wrong.

So, we made great big color enlargements of our photographs — we took photographs all up and down this valley, 15 kilometers long — and we made one set for Ambassador Bunker; one copy for General Abrams — Creighton Abrams was the head of our military assistance command; and one set for Secretary of State Rogers; along with a letter saying that this one case that we saw may not be typical, but in this one case, the crop destruction program was achieving the opposite of what it intended. It was denying food to the civilian population and not to the enemy. It was completely mistaken. Right after that, in early November — we’d sent the stuff in early November — Ambassador Bunker and General Abrams ordered a new review of the crop destruction program. Was it in response to our photographs and our letter? I have no proof, only the timing, but I think it was.

The result of that review was a recommendation by Ambassador Bunker and General Abrams to stop the herbicide program immediately. They sent this recommendation back in a top secret telegram to Washington. Well, the top-secret telegram fell into the hands of the Washington Post, and they published it. Well, now here are the Ambassador and the General on the spot, saying to stop doing something in Vietnam. How on earth can anybody back in Washington gainsay them? Of course, President Nixon had to stop it right away. There’d be no grounds. How could he say, “Well, my guys here in Washington, in spite of what the people on the spot say, tell us we should continue this program.”

So that very day, he announced that the United States would stop all herbicide operations in Vietnam in a rapid and orderly manner. That very day happened to be the day that I, John Constable, and Art Westing were on the stage at the annual meeting in Chicago of the AAAS, reporting on our trip to Vietnam. And the president of AAAS ran up to me to tell me this news, because it just came in while I was talking, giving our report. So, that’s how it got stopped, and thanks to General Abrams.

By the way, the last day I was in Vietnam, General Abrams had just come back from Japan — he’d had a gallbladder operation, and he was still convalescing. We spent all morning talking with each other. And he asked me at one point, “What about the military utility of the herbicides?” And of course, I said I had no idea what it was. And he said, “Do you want to know what I think?” I said, “Yes, sir.” He said, “I think it’s shit.” I said, “Well, why are we doing it here?” He said, “You don’t understand anything about this war, young man. I do what I’m ordered to do from Washington. It’s Washington who tells me to use this stuff, and I have to use it, because if I didn’t have those 55-gallon drums of herbicides offloaded on the docks at Da Nang and Saigon, then they’d make walls. I couldn’t offload the stuff I need over those walls. So, I do let the chemical corps use this stuff.” He said, “Also, my son, who is a captain up in I Corps, agrees with me about that.”

I wrote something about this recently, which I sent to you, Ariel. I want to be sure my memory was right about the conversation with General Abrams — who, by the way, was a magnificent man. He is the man who broke through at the Battle of the Bulge in World War II. He’s the man about whom General Patton, the great tank general, said, “There’s only one tank officer greater than me, and it’s Abrams.”

Max: Is he the one after whom the Abrams tank is named?

Matthew: Yes, it was named after him. Yes. He had four sons, they all became generals, and I think three of them became four-stars. One of them who did become a four-star is still alive in Washington. He has a consulting company. I called him up and I said, “Am I right, is this what your dad thought and what you thought back then?” He said, “Hell, yes. It’s worse than that.” Anyway, that’s what stopped the herbicides. They may have stopped anyway; the program was dwindling down, no question. Now, as to whether dioxin and the herbicides have caused many health effects, I just don’t know. There’s an immense literature about this, and it’s nothing I can say we ever studied. If I read all the literature, maybe I’d have an opinion.

I do know that dioxin is very poisonous, and there’s a prelude to this order from President Nixon to stop the use of all herbicides. That’s what caused the United States to stop the use of Agent Orange specifically. That happened first, before I went to Vietnam. That happened for a funny reason. A Harvard student, a Vietnamese boy, came to my office one day with a stack of newspapers from Saigon in Vietnamese. I couldn’t read them, of course, but they all had pictures of deformed babies, and this student claimed that this was because of Agent Orange, that the newspaper said it was because of Agent Orange.

Well, deformed babies are born all the time, and I appreciated this coming from him, but there was nothing I could do about it. But then a graduate student here — Bill Haseltine, who has since become a very wealthy man — had a girlfriend who was working for Ralph Nader one summer, and she somehow got a purloined copy of a study that the NIH had ordered on the possible teratogenic, mutagenic, and carcinogenic effects of common herbicides, pesticides, and fungicides.

This company, called the Bionetics company, had this huge contract that tests all these different compounds, and they concluded from this that there was only one of these chemicals that did anything that might be dangerous for people. That was 2,4,5-T, trichlorophenoxyacetic acid. Well, that’s what Agent Orange is made out of. So, I had this report that had not yet been released to the public saying that this could cause birth defects in humans if it did the same thing as it did in guinea pigs and mice. I thought, the White House better know about this. That’s pretty explosive: claims in the newspapers in Saigon and scientific suggestions that this stuff might cause birth defects.

So, I decided to go down to Washington and see President Nixon’s science advisor. That was Lee DuBridge, a physicist. Lee DuBridge had been the president of Caltech when I was a graduate student there, so he knew me, and I knew him. I went down to Washington with some friends, and I think one of the friends was Arthur Galston from Yale. He was a scientist who worked on herbicides, not the phenoxyacetic herbicides but others. We went down to see the President’s science advisor, and I showed him these newspapers and the Bionetics report. He hadn’t seen the report; it was at too low a level of government for him to see, and it had not yet been released to the public. Then Lee DuBridge did something amazing: he picked up the phone and called David Packard, who was the number two at the Defense Department. Right then and there, without consulting anybody else, without asking the permission of the President, they canceled Agent Orange.

Max: Wow.

Matthew: That was the end of Agent Orange. Well, not exactly the end. I got a phone call from Lee DuBridge a couple of days later, when I was back at Harvard. He says, “Matt, the Dow people have come to me. It’s not Agent Orange itself, it’s an impurity in Agent Orange called dioxin, and they know that dioxin is very toxic. The Agent Orange that they make has very little dioxin in it, because they know dioxin is bad and they make the stuff at low temperature, where dioxin, a by-product, is made only in very small amounts. These other companies that make Agent Orange for the military, like Diamond Shamrock and Monsanto: it must be their Agent Orange. It’s not our Agent Orange.”

So, in other words, the question was whether the Dow Agent Orange, at least, might be safe: does the Dow Agent Orange cause birth defects in mice? A whole new series of experiments was done with Agent Orange containing much less dioxin. It still caused birth defects. Since it still caused birth defects in one species of rodent, you could hardly say, “Well, it’s okay then for humans.” So, that really closed it down, and then even the Department of Agriculture prohibited its use in the United States, except on land from which it would have been unlikely to get into the human food chain. So, that ended the use of Agent Orange.

That had already happened before we went to Vietnam. They were then using only Agent White and Agent Blue, two other herbicides; Agent Orange had been knocked out ahead of time. But then came the end of the whole herbicide program, and it was two things: the dioxin concern and the decision of President Nixon, which stopped Agent Orange; and, on the military side, Bunker and Abrams had said, "It's no use, we want to get it stopped, it's doing more harm than good. It's getting the civilian population against us."

Max: One reaction I have to these fascinating stories is how amazing it is that back in those days politicians really trusted scientists. You could go down to Washington, and there would be a science advisor. We didn't even have a presidential science advisor for a while during this administration. Do you feel that the climate has changed somehow in the way politicians view scientists?

Matthew: Well, I don’t have a big broad view of the whole thing. I just get the impression, like you do, that there are more politicians who don’t pay attention to science than there used to be. There are still some, but not as many, and not in the White House.

Max: I would say we shouldn’t particularly just point fingers at any particular administration, I think there has been a general downward trend for people’s respect for scientists overall. If you go back to when you were born, Matthew, and when I was born, I think generally people thought a lot more highly about scientists contributing very valuable things to society and they were very interested in them. I think right now there are much more people who can name — If you ask the average person how many famous movie stars can they name, or how many billionaires can they name, versus how many Nobel laureates can they name, the answer is going to be kind of different from the way it was a long time ago. It’s very interesting to think about what we can do to more help people appreciate the things that they do care about, like living longer and having technology and so on, are things that they, to a large extent, owe to science. It isn’t just the nerdy stuff that isn’t relevant to them.

Matthew: Well, I think movie stars were always at the top of the list. Way ahead of Nobel Prize winners and even of billionaires, but you’re certainly right.

Max: The second thing that really strikes me, which you did so wonderfully there, is that you never antagonized the politicians and the military, but rather went to them in a very constructive spirit and said look, here are the options. And based on the evidence, they came to your conclusion.

Matthew: That’s right. Except for the people who actually were doing these programs — that was different, you couldn’t very well tell them that. But for everybody else, yes, it was a help. You need to offer help, not hindrance.

The last thing was the Yellow Rain. That, too, involved the CIA. I was contacted by the CIA. They had become aware of reports from Southeast Asia, particularly from Thailand: Hmong tribespeople who had been living in Laos were coming out of Laos across the Mekong into Thailand and telling stories of being poisoned by stuff dropped from airplanes, stuff that they called kemi or yellow rain.

At first, I thought maybe there was something to this; there are some nasty chemicals that are yellow. Not that lethal, but who knows, maybe there was exaggeration in their stories. One of them is called adamsite; it's yellow, it's an arsenical. So we decided we'd have a conference, because there was a mystery: what is this yellow rain? We invited people from the intelligence community, from the State Department. We invited anthropologists. We invited a bunch of people to ask: what is this yellow rain?

By this time, we knew that the samples that had been turned in contained pollen. One reason we knew that was that the British had samples of this yellow rain and had shown that they contained pollen. Samples of the yellow rain brought in by the Hmong tribespeople had been given to British officers — or maybe Americans, I don't know — and found their way into the hands of British intelligence, who brought these samples back to Porton, where they were examined in various ways, including under the microscope. And the fellow who looked at them under the microscope happened to be a beekeeper. He knew just what pollen grains look like, and he saw that there was pollen. Then they sent this information to the United States, and we looked at the samples of yellow rain we had, and all of these yellow samples contained pollen.

The question was: what is it? It's got pollen in it. Maybe it's very poisonous. The Hmong people say it falls from the sky. It lands on leaves and on rocks. The spots were about two millimeters in diameter. It's yellow or brown or red, different colors. What is it? So, we had this meeting in Cambridge, and one of the people there, Peter Ashton, is a great botanist whose specialty is the trees of Southeast Asia, in particular the great dipterocarp trees, which are like the oaks in our part of the world. He was interested in the fertilization of these dipterocarps, and the fertilization is done by bees. They collect pollen, though, like other bees.

And so the hypothesis we came to at the end of this day-long meeting was that maybe this stuff is poisonous, and the bees get poisoned by it because it falls on everything, including flowers that have pollen; the bees get sick, and these yellow spots are the vomit of the bees. The bees are individually smaller than the yellow spots, but maybe several bees get together and vomit on the same spot. Really a crazy idea. Nevertheless, it was the best idea we could come up with that explained why something could be toxic but have pollen in it: it could be little drops, associated with bees, and so on.

A couple of days later, both Peter Ashton, the botanist, and I noticed on the rear windshields of our cars yellow spots loaded with pollen. These were being dropped by bees; these were the natural droppings of bees. And that gave us the idea that maybe there was nothing poisonous in this stuff. Maybe it was just the natural droppings of bees, which the people in the villages thought were poisonous but which weren't. So, we decided we had better go to Thailand and find out what was happening.

So, a great bee biologist named Thomas Seeley, who’s now at Cornell — he was at Yale at that time — and I flew over to Thailand, and went up into the forest to see if bees defecate in showers. Now why did we do that? It’s because friends here said, “Matt, this can’t be the source of the yellow rain that the Hmong people complained about, because bees defecate one by one. They don’t go out in a great armada of bees and defecate all at once. Each bee goes out and defecates by itself. So, you can’t explain the showers — they’d only get tiny little driblets, and the Hmong people say they’re real showers, with lots of drops falling all at once.”

So, Tom Seeley and I went to Thailand, where they also have this kind of bee, and it turns out that there the bees do defecate all at once, unlike the bees here. Now, bees here do defecate in showers too, but they're small showers. That's because the number of bees in a nest here is rather small. They do come out on the first warm days of spring, when there's again pollen and nectar to be harvested, but those showers are kind of small. Besides that, the reason there are showers at all even in New England is that the bees are synchronized by winter. Winter forces them to stay in their nest all winter long, during which they're eating the stored-up pollen and getting very constipated. When they finally fly out, they all fly out, they're all constipated, and so you get a big shower. Not as big as the natives in Southeast Asia reported, but still a shower.

But in Southeast Asia, there are no seasons; it's too near the equator. So there's nothing that would synchronize the defecation of bees, and that's why we had to go to Thailand: to see whether, even though there's no winter to synchronize their defecation flights, they nevertheless do go out in huge numbers, all at once.

So, we’re in Thailand and we go up into the Khao Yai National Park and find places where there are clearings in the forests where you could see up into the sky, where if there were bees defecating their feces would fall to the ground, not get caught up in the trees. And we put down big pieces, one meter square, of white paper, and anchored them with rocks, and went walking around in the forest some more, and come back and look at our pieces of white paper every once in a while.

And then suddenly we saw a large number of spots on the paper, which meant that the bees had defecated all at once. They weren't going around defecating one by one by one. There were great showers. Now, there's still a question: why don't they go out one by one? There are some good ideas why; I won't drag you into that, but it's the convoy principle, to avoid getting picked off one by one by birds. That's why people think they go out in great armadas of constipated bees.

So, this gave us a new hypothesis: the so-called yellow rain was all a mistake. It was just bees defecating, which people confused with something poisonous. Now, that still doesn't prove that there wasn't a poison. What was the evidence for poison? The evidence was that the Defense Intelligence Agency was sending samples of this yellow rain, along with samples of human blood and other materials, to a laboratory in Minnesota that knew how to analyze for the particular toxins that the defense establishment thought were the poison: the trichothecene mycotoxins, a whole family of them. And this lab reported positive findings in the samples from Thailand but not in controls. So that seemed to be real proof that there was poison.

Well, this lab was a lab that also produced trichothecene mycotoxins, and the way they analyzed for them was by mass spectrometry. Everybody knows that if you're doing mass spectrometry you can detect very, very tiny amounts of stuff, so you shouldn't both make large quantities and try to detect small quantities in the same room, because of the possibility of cross-contamination. I have an internal report from the Defense Intelligence Agency saying that that laboratory did have numerous false positives, and that probably all of their results were bedeviled by contamination from the trichothecenes that were in the lab, and also possibly by some false readings of the mass spec data.

The long and short of it is that when other laboratories tried to find trichothecenes in their samples, none could: the US Army looked at at least 80 samples and found nothing. The British looked at at least 60 samples and found nothing. The Swedes looked at some number of samples, I don't know how many, and found nothing. The French looked at a very few samples at their military analytical lab, and they found nothing. No lab could confirm it. There was one lab at Rutgers that thought it could confirm it, but I believe that they were suffering from contamination too, because they were a lab that also worked with trichothecenes.

So, the long and short of it is that the chemical evidence was no good, and finally the ambassador there, Ambassador Dean, decided that we should have another look, and that the military should send out a team properly equipped to check up on these stories, because until then there had been no dedicated team, only teams that would come out briefly, listen to the refugees' stories, collect samples, and go back. So Ambassador Dean requested a team that would stay there. Out came a team from Washington, and it stayed longer than a year. Not just a week, but longer than a year. And they tried to find again the Hmong people who had told these stories in the refugee camps.

They couldn’t find a single one who would tell the same story twice. Either because they weren’t telling the same story twice, or because the interpreter interpreted the same story differently. So, whatever it was. Then they did something else. They tried to find people who were in the same location at the same time as was claimed there was such attacks, and those people never confirmed the attack. They could never find any confirmation by interrogation of people.

Then also, there was a CIA unit out there in that theater questioning captured prisoners of war and also people who had surrendered from the North Vietnamese Army: the people who were presumably behind the use of this toxic stuff. They interrogated hundreds of people, and one of these interrogators wrote an article in an intelligence agency journal, an open journal, saying that he doubted there was anything to the yellow rain, because they had interrogated so many people, including chemical corps people from the North Vietnamese Army, and he couldn't believe that there really was anything going on.

So we did some more investigating of various kinds, not just going to Thailand but doing analyses of various things. We looked at the samples and found bee hairs in them. We found that the bee pollen in the samples of the alleged poison had no protein inside. You can stain pollen grains with something called Coomassie brilliant blue, and the pollen grains in the samples handed in by the refugees, the samples given to us by the Army, by the Canadians, by the Australians, didn't stain blue. Why not? Because when a pollen grain passes through the gut of a bee, the bee digests out all of the good protein inside the pollen grain as its nutrition.

So, you’d have to believe that the Soviets were collecting pollen not from plants, which is hard enough, but had been regurgitated by bees. Well, that’s insane. You could never get enough to be a weapon by collecting bee vomit. So the whole story collapsed, and we’ve written a longer account of this. The United States government has never said we were right, but a few years ago said that maybe they were wrong. So that’s at least something.

So in one case we were right and the Soviets were wrong; in another case, the Soviets were right and we were wrong; and in the third case, the herbicides, nobody was right or wrong. In my view, by the way, the herbicide program was useless militarily. I'll tell you why.

If you spray the deep forest, hoping to find a military installation that you can now see because there are no more leaves, it takes four or five weeks for the leaves to fall off. So, you might as well drop little courtesy cards that say, “Dear enemy. We have now sprayed where you are with herbicide. In four or five weeks we will see you. You may choose to stay there, in which case, we will shoot you. Or, you have four or five weeks to move somewhere else, in which case, we won’t be able to find you. You decide.” Well, come on, what kind of a brain came up with that?

The other use was along roadsides, to make convoys safer from snipers who might be hidden in the woods. You knock the leaves off the trees and you can see deeper into the woods. That's right, but you have to remember a fundamental law of physics: if you can see from A to B, then B can see back to A. If there's a clear light path from one point to another, there's a clear light path in the other direction.

Now think about it. You are a sniper in the woods, and the leaves have not been sprayed. They grow right up to the edge of the forest, and a convoy is coming down the road. You can stick your head out a little bit, but not for very long. They have long-range weapons; when they're right opposite you, they have huge firepower. If you're anywhere nearby, you could get killed.

Now, if we get rid of all the leaves, I can stand way back in the forest and still sight you between the trunks. That's a different matter. A very slight move on my part determines how far up the road and down the road I can see. By just a slight movement of my eye and my gun, I can start putting you under fire a couple of kilometers up the road, and you won't even know where it's coming from. And I can keep you under fire a few kilometers down the road, after you pass me by. And you don't know where I am anymore. I'm not right up by the roadside, where the leaves would otherwise keep me from seeing anything; I'm back in there somewhere. You can pour out all kinds of fire, but you might not hit me.

So, for all these reasons, the leaves are not the enemy. The leaves are the enemy of the enemy, not of us. We'd like to get rid of the trunks; that's different, and we do that with bulldozers. But getting rid of the leaves leaves a kind of terrain that is advantageous to the enemy, not to us. So, on all these grounds, my hunch is that by embittering the civilian population — and after all, our whole strategy was to win hearts and minds — by wiping out their crops with drifting herbicide, the herbicides helped us lose the war, not win it. We didn't win it. But they helped us lose it.

But anyway, the herbicides got stopped in two steps: first Agent Orange, because of dioxin and the report from the Bionetics Company; and second the whole program, because Abrams and Bunker said, "Stop it." We now have a treaty, by the way, the ENMOD treaty, that makes it illegal under international law to do any kind of large-scale environmental modification as a weapon of war. So, that's about everything I know.

And I should add, you might ask: how could they interpret something that's common in that region as a poison? Well, in China, in 1970 I believe it was, the same sort of thing happened, but the situation was very different. People believed that yellow spots were falling from the sky, that they were fallout from nuclear weapons tests being conducted by the Soviet Union, and that they were poisonous.

Well, the Chinese government asked a geologist from a nearby university to go investigate, and he figured out — completely out of touch with us; he had never heard of us, and we had never heard of him — that it was bee feces being misinterpreted by the villagers as fallout from nuclear weapons tests done by the Russians.

It was exactly the same situation, except that in this case there was no reason whatsoever to believe that there was anything toxic there. And why was it that people didn't recognize bee droppings for what they were? After all, there are lots of bees out there. There are lots of bees here, too. And if in April, or near that part of spring, you look at the rear windshield of your car, whether you've been out in the countryside or even here in midtown, you will see lots of these spots, and that's what those spots are.

When I was trying to find out what kinds of pollen were in the samples of the yellow rain — the so-called yellow rain — that we had, I went down to Washington. The greatest United States expert on pollen grains and where they come from was at the Smithsonian Institution, a woman named Joan Nowicke. I told her that bees make spots like this all the time, and she said, "Nonsense. I never see it." I said, "Where do you park your car?" Well, there's a big parking lot by the Smithsonian; we went down there, and her rear windshield was covered with these things. We see them all the time. They're part of what we see but take no account of.

Here at Harvard there’s a funny story about that. One of our best scientists here, Ed Wilson, studies ants — but also bees — but mostly ants. But he knows a lot about bees. Well, he has an office in the museum building, and lots of people come to visit the museum at Harvard, a great museum, and there’s a parking lot for them. Now there’s a graduate student who has, in those days, bee nests up on top of the museum building. He’s doing some experiments with bees. But these bees defecate, of course. And some of the nice people who come to see Harvard Museum park their cars there and some of them are very nice new cars, and they come back out from seeing the museum and there’s this stuff on their windshields. So, they go to find out who is it that they can blame for this and maybe do something about it or pay them get it fixed or I don’t know what — anyway, to make a complaint. So, they come to Ed Wilson’s office.

Well, this graduate student was a graduate student of Ed Wilson's, and of course Wilson knew that he had bee nests up there, and so Ed Wilson's secretary knew what this stuff was. The graduate student had the job of taking a rag with alcohol on it and going down and gently wiping the bee feces off the windshields of these distressed drivers, so there was never any harm done. But now, when I had some of this stuff that I'd collected in Thailand, I took two people to lunch at the faculty club here at Harvard, and I brought along some leaves with these spots on them under a plastic petri dish, just to see if they would know what it was.

Now, one of these guys, Carroll Williams, knew all about insects, lots of things about insects, and Wilson too, of course. We were having lunch, and I brought out this petri dish with the leaves covered with yellow spots and asked them, two professors who are great experts on insects, what the stuff was, and they hadn't the vaguest idea. They didn't know. So, there can be things around us that we see every day, and even if we're experts we don't know what they are. We don't notice them. They're just part of the environment. I'm sure that these Hmong people were getting shot at, they were getting napalmed, they were getting everything else, but they were not getting poisoned. At least not by bee feces. It was all a big mistake.

Max: Thank you so much, both for this fascinating conversation and for all the amazing things you've done to keep science a force for good in the world.

Ariel: Yes. This has been a really, really great and informative discussion, and I have loved learning about the work that you’ve done, Matthew. So, Matthew and Max, thank you so much for joining the podcast.

Max: Well, thank you.

Matthew: I enjoyed it. I’m sure I enjoyed it more than you did.

Ariel: No, this was great. It’s truly been an honor getting to talk with you.

If you’ve enjoyed this interview, let us know! Please like it, share it, or even leave a good review. I’ll be back again next month with more interviews with experts.  

 

AI Alignment Podcast: Human Cognition and the Nature of Intelligence with Joshua Greene

"How do we combine concepts to form thoughts? How can the same thought be represented in terms of words versus things that you can see or hear in your mind's eyes and ears? How does your brain distinguish what it's thinking about from what it actually believes? If I tell you a made up story, yesterday I played basketball with LeBron James, maybe you'd believe me, and then I say, oh I was just kidding, didn't really happen. You still have the idea in your head, but in one case you're representing it as something true, in another case you're representing it as something false, or maybe you're representing it as something that might be true and you're not sure. For most animals, the ideas that get into its head come in through perception, and the default is just that they are beliefs. But humans have the ability to entertain all kinds of ideas without believing them. You can believe that they're false or you could just be agnostic, and that's essential not just for idle speculation, but it's essential for planning. You have to be able to imagine possibilities that aren't yet actual. So these are all things we're trying to understand. And then I think the project of understanding how humans do it is really quite parallel to the project of trying to build artificial general intelligence." -Joshua Greene

Josh Greene is a Professor of Psychology at Harvard who focuses on moral judgment and decision making. His recent work focuses on cognition, and his broader interests include philosophy, psychology, and neuroscience. He is the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Joshua Greene's current research focuses on further understanding key aspects of both individual and collective intelligence. Deepening our knowledge of these subjects allows us to understand the key features which constitute human general intelligence, and how human cognition aggregates and plays out through group choice and social decision making. By better understanding the one general intelligence we know of, namely humans, we can gain insights into the kinds of features that are essential to general intelligence and thereby better understand what it means to create beneficial AGI. This particular episode was recorded at the Beneficial AGI 2019 conference in Puerto Rico. We hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

Topics discussed in this episode include:

  • The multi-modal and combinatorial nature of human intelligence
  • The symbol grounding problem
  • Grounded cognition
  • Modern brain imaging
  • Josh’s psychology research using John Rawls’ veil of ignorance
  • Utilitarianism reframed as ‘deep pragmatism’
You can find out more about Joshua Greene at his website or follow his lab on their Twitter. You can listen to the podcast above or read the transcript below.

Lucas: Hey everyone. Welcome back to the AI Alignment Podcast. I'm Lucas Perry, and today we'll be speaking with Joshua Greene about his research on human cognition as well as John Rawls' veil of ignorance and social choice. Studying the human cognitive engine can help us better understand the principles of intelligence, and thereby aid us in arriving at beneficial AGI. It can also inform group choice and how to modulate people's dispositions toward certain norms or values, and thus affect policy development and observed choice. Given this, we discussed Josh's ongoing projects and research regarding the structure, relations, and kinds of thought that make up human cognition; key features of intelligence, such as its being combinatorial and multimodal; and finally how a particular thought experiment can change how impartial a person is, and thus what policies they support.

And as always, if you enjoy this podcast, please give it a like, share it with your friends, and follow us on your preferred listening platform. As a bit of announcement, the AI Alignment Podcast will be releasing every other Wednesday instead of once a month, so there are a lot more great conversations on the way. Josh Greene is a professor of psychology at Harvard, who focuses on moral judgment and decision making. His recent work focuses on cognition, and his broader interests include philosophy, psychology and neuroscience. And without further ado, I give you Josh Greene.

Lucas: What sort of thinking has been predominantly occupying the mind of Josh Greene?

Joshua: My lab has two different main research areas that are related, but on a day to day basis are pretty separate. You can think of them as focused on key aspects of individual intelligence versus collective intelligence. On the individual intelligence side, what we’re trying to do is understand how our brains are capable of high level cognition. In technical terms, you can think of that as compositional semantics, or multimodal compositional semantics. What that means in more plain English is how does the brain take concepts and put them together to form a thought, so you can read a sentence like the dog chased the cat, and you understand that it means something different from the cat chased the dog. The same concepts are involved, dog and cat and chasing, but your brain can put things together in different ways in order to produce a different meaning.

Lucas: The black box for human thinking and AGI thinking is really sort of this implicit reasoning that sits behind the explicit reasoning, and it seems to be the most deeply mysterious, difficult part to understand.

Joshua: Yeah. A lot of where machine learning has been very successful has been on the side of perception: recognizing objects, or, when it comes to going from say vision to language, simple labeling of scenes that are already familiar. So you can show an image of a dog chasing a cat, and maybe it'll say something like dog chasing cat, or at least get that there's a cat running and a dog chasing.

Lucas: Right. And the caveat is that it takes a massive amount of training. It's not one-shot learning; you need to be shown a cat chasing a dog a ton of times, just because of how inefficient the algorithms are.

Joshua: Right. And the algorithms don't generalize very well. So if I show you some crazy picture that you've never seen before, where it's a goat and a dog and Winston Churchill all wearing roller skates in a rowboat on a purple ocean, a human can look at that and go, that's weird, and give a description like the one I just said. Whereas today's algorithms are going to be relying on brute statistical associations, and that's not going to cut it for precise, immediate reasoning. So humans have this ability to have thoughts, which we can express in words, but which we can also imagine in something like pictures.

And the tricky thing is that it seems like a thought is not just an image, right? So to take an example that I think comes from Daniel Dennett, if you hear the words yesterday my uncle fired his lawyer, you might imagine that in a certain way, maybe you picture a guy in a suit pointing his finger and looking stern at another guy in a suit, but you understand that what you imagined doesn’t have to be the way that that thing actually happened. The lawyer could be a woman rather than a man. The firing could have taken place by phone. The firing could have taken place by phone while the person making the call was floating in a swimming pool and talking on a cell phone, right?

The meaning of the sentence is not what you imagined. But at the same time we have the symbol grounding problem; that is, it seems like meaning is not just a matter of symbols chasing each other around. You wouldn't really understand something if you couldn't take those words and attach them meaningfully to things that you can see or touch or experience in a more sensory and motor kind of way. So thinking is something in between images and in between words. Maybe it's just the translation mechanism for those sorts of things, or maybe there's a deeper language of thought, to use Jerry Fodor's famous phrase. But in any case, what part of my lab is trying to do is understand how this central, really poorly understood aspect of human intelligence works. How do we combine concepts to form thoughts? How can the same thought be represented in terms of words versus things that you can see or hear in your mind's eyes and ears?

How does your brain distinguish what it’s thinking about from what it actually believes? If I tell you a made up story, yesterday I played basketball with LeBron James, maybe you’d believe me, and then I say, oh I was just kidding, didn’t really happen. You still have the idea in your head, but in one case you’re representing it as something true, in another case you’re representing it as something false, or maybe you’re representing it as something that might be true and you’re not sure. For most animals, the ideas that get into its head come in through perception, and the default is just that they are beliefs. But humans have the ability to entertain all kinds of ideas without believing them. You can believe that they’re false or you could just be agnostic, and that’s essential not just for idle speculation, but it’s essential for planning. You have to be able to imagine possibilities that aren’t yet actual.

So these are all things we’re trying to understand. And then I think the project of understanding how humans do it is really quite parallel to the project of trying to build artificial general intelligence.

Lucas: Right. So what’s deeply mysterious here is the kinetics that underlie thought, which is sort of like meta-learning or meta-awareness, or how it is that we’re able to have this deep and complicated implicit reasoning behind all of these things. And what that actually looks like seems deeply puzzling in sort of the core and the gem of intelligence, really.

Joshua: Yeah, that’s my view. I think we really don’t understand the human case yet, and my guess is that obviously it’s all neurons that are doing this, but these capacities are not well captured by current neural network models.

Lucas: So also just two points of question or clarification. The first is this sort of hypothesis that you proposed, that human thoughts seem to require some sort of empirical engagement. And then what was your claim about animals, sorry?

Joshua: Well animals certainly show some signs of thinking, especially some animals like elephants and dolphins and chimps engage in some pretty sophisticated thinking, but they don’t have anything like human language. So it seems very unlikely that all of thought, even human thought, is just a matter of moving symbols around in the head.

Lucas: Yeah, it’s definitely not just linguistic symbols, but it still feels like conceptual symbols that have structure.

Joshua: Right. So this is the mystery of human thought: you could make a pretty good case that symbolic thinking is an important part of it, but you could also make a case that symbolic thinking can't be all it is. And a lot of people in AI, most notably DeepMind, have taken the strong view, and I think it's right, that if you're really going to build artificial general intelligence, you have to start with grounded cognition, and not just try to build something that can, for example, read sentences and deduce things from those sentences.

Lucas: Right. Do you want to unpack what grounded cognition is?

Joshua: Grounded cognition refers to a representational system where the representations are derived, at least initially, from perception and from physical interaction. There's perhaps a relationship with empiricism in the broader philosophy of science, but you could imagine trying to build an intelligent system by giving it lots and lots and lots of words: giving it lots of true descriptions of reality, and giving it inference rules for going from some descriptions to other descriptions. That just doesn't seem like it's going to work. You don't really understand what apple means unless you have some sense of what an apple looks like, what it feels like, what it tastes like; it doesn't have to be all of those things. You can know what an apple is without ever having eaten one, or I could describe some fruit to you that you've never seen, as long as you have experience with other fruits or other physical objects. Words don't just exist in a symbol storm vacuum. They're related to things that we see and touch and interact with.

Lucas: I think for me, just going most foundationally, the question is before I know what an apple is, do I need to understand spatial extension and object permanence? I have to know time, I have to have some very basic ontological understanding and world model of the universe.

Joshua: Right. So we have some clues from human developmental psychology about what kinds of representations, understandings, capabilities humans acquire, and in what order. To state things that are obvious, but nevertheless revealing, you don’t meet any humans who understand democratic politics before they understand objects.

Lucas: Yes.

Joshua: Right?

Lucas: Yeah.

Joshua: Which sounds obvious and it is in a sense obvious, right? But it tells you something about what it takes to build up abstract and sophisticated understandings of the world and possibilities for the world.

Lucas: Right. So for me it seems that the place where grounded cognition lives most fundamentally is in between the genetic code that seeds the baby and the moment the baby comes out: the epistemics, whatever is in there, has the capacity to one day potentially become Einstein. So what is that grounded cognition in the baby that underlies this potential to be a quantum physicist or a scientist-

Joshua: Or even just a functioning human. 

Lucas: Yeah.

Joshua: I mean, even people with mental disabilities walk around and speak and manipulate objects. I think that in some ways the harder question is not how we get from a normal human to Einstein, but how we get from a newborn to a toddler. And the analogous, or almost analogous, question for artificial intelligence is: how do you go from a neural network that has some kind of structure, one that's favorable for acquiring useful cognitive capabilities, and how do you figure out what that starting structure is? Which is kind of analogous to the question of how the brain gets wired up in utero.

And it gets connected to these sensors that we call eyes and ears, and it gets connected to these effectors that we call hands and feet. And it’s not just a random blob of connectoplasm, the brain has a structure. So one challenge for AI is what’s the right structure for acquiring sophisticated intelligence, or what are some of the right structures? And then what kind of data, what kind of training, what kind of training process do you need to get there?

Lucas: Pivoting back into the relevance of this with AGI, there is, like you said, this fundamental issue of grounded cognition that babies and toddlers have, which sort of leads them to become full human-level intelligences eventually. How does one work to isolate the features of grounded cognition that enable babies to grow and become adults?

Joshua: Well, I don’t work with babies, but I can tell you what we’re doing with adults, for example.

Lucas: Sure.

Joshua: In the one paper in this line of research we've already published (this is work led by Steven Franklin), we have people reading sentences like the dog chased the cat, the cat chased the dog, or the dog was chased by the cat and the cat was chased by the dog. And what we're doing is looking for parts of the brain where the pattern is different depending on whether the dog is chasing the cat or the cat is chasing the dog. So it has to be something that's not just involved in representing dog or cat or chasing, but in representing the composition of those three concepts, composed in one way rather than another. And what we found is that there's a region in the temporal lobe where the pattern is different for those things.

And more specifically, what we've found is that in one little spot in this broader region of the temporal lobe, you can decode better than chance who the agent is. So if it's the dog chased the cat, then in this spot you can tell better than chance that it's the dog doing the chasing. If it's the cat was chased by the dog, same thing. So it's not just about the order of the words. And you can likewise decode better than chance that it's the cat being chased in a sentence like that. So the idea is that these spots in the temporal lobe are functioning like data registers, representing variables rather than specific values. That is, one region is representing the agent, the one who did something, and the other region is representing the patient, as they say in linguistics, the one who had something done to it. And this is starting to look more like a classical computer program, where the way classical programs work is that they have variables and values.

Like if you were going to write a program that translates Celsius into Fahrenheit, one thing you could do is construct a giant table telling you what Fahrenheit value corresponds to what Celsius value. But the more elegant way to do it is to have a formula, where the formula has variables: you put in the Celsius value, you multiply it by the right thing, add the right thing, and you get the Fahrenheit value. What that means is that you're taking advantage of a recurring structure. Well, something-does-something-to-something-else is a recurring structure in the world and in our thought. And so if you have something in your brain that has that structure already, then you can quickly slot in dog as the agent, chasing as the action, and cat as the patient, and that way you can very efficiently and quickly combine new ideas. So the upshot of that first work is that it seems like when we're representing the meaning of a sentence, we're actually doing it in a more classical, computer-ish way than a lot of neuroscientists might have thought.
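To make the table-versus-formula contrast concrete, here is a minimal Python sketch; the names and values are illustrative only, not from the interview or the study:

```python
# Two ways to turn Celsius into Fahrenheit.

# 1. A lookup table: one stored answer per anticipated input.
CELSIUS_TO_FAHRENHEIT = {0: 32.0, 10: 50.0, 20: 68.0, 30: 86.0}

# 2. A formula with a variable: the recurring structure is captured
#    once, so any value can be slotted in, even one never seen before.
def celsius_to_fahrenheit(celsius: float) -> float:
    return celsius * 9.0 / 5.0 + 32.0

print(CELSIUS_TO_FAHRENHEIT[20])      # works only for tabulated inputs
print(celsius_to_fahrenheit(37.5))    # works for any input: 99.5
```

The formula is the analogue of the agent and patient "registers" described above: a fixed structure with slots that arbitrary values can fill.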

Lucas: It’s Combinatorial.

Joshua: Yes, exactly. So what we're trying to get at is modes of composition. In that experiment, we did it with sentences. In an experiment we're now doing (this is being led by my grad student Dylan Plunkett, and Steven Franklin is also working on it), we're doing it with words and with images. We actually took a bunch of photos of different people doing different things. Specifically, we have a chef, which we also call a cook, and we have a child, which we also call a kid. We have a prisoner, which we also call an inmate, and we have male and female versions of each of those. And sometimes one is chasing the other, and sometimes one is pushing the other. In the images, we have all possible combinations: the cook pushes the child, the inmate chases the chef-

Lucas: Right, but it’s also gendered.

Joshua: We have male and female versions for each. And then we have all the possible descriptions. And in the task, what people have to do is this: you put two things on the screen and you ask, do these things match? So sometimes you'll have two different images, and you have to say whether those images have the same meaning. It could be a different chef chasing a different kid, but if it's a chef chasing a kid in both cases, then you would say that they match. Whereas if it's a chef chasing an inmate, then you'd say that they don't. And then in other cases you have two sentences, like the chef chased the kid, or the child was chased by the cook, or was pursued by the cook, and even though those are all different words in different orders, you recognize that they have the same meaning, or close enough.

And then in the most interesting case, we have an image and a set of words, which you can think of as a description, and the question is, does it match? So if you see a picture of a chef chasing a kid, and the words are chef chases kid or cook pursues child, then you'd say, okay, that one's a match. And what we're trying to understand is whether there is something distinctive that goes on in that translation process, when you have to take a complex thought (not complex in the sense of very sophisticated by human standards, but complex in the sense that it has parts, that it's composite) and translate it from a verbal representation to a visual representation. And is that different, or is the base representation visual? So for example, one possibility is that when you get two images, if you're doing something that's fairly complicated, you have to translate them both into words. It's possible that you could see language areas activated when people have to look at two images and decide if they match. Or maybe not. Maybe you can do that in a purely visual kind of way-

Lucas: And maybe it depends on the person. Like, some meditators will report that after long periods of meditation, certain kinds of mental events happen much less or just cease, like mental images or inner speech or things like that.

Joshua: So that’s possible. Our working assumption is that basic things like understanding the meaning of the chef chased the kid, and being able to point to a picture of that and say that’s the thing, the sentence described, that our brains do this all more or less than the same way. That could be wrong, but our goal is to get at basic features of high level cognition that all of us share.

Lucas: And so one of these again is this combinatorial nature of thinking.

Joshua: Yes. That I think is central to it. That it is combinatorial or compositional, and that it’s multimodal, that you’re not just combining words with other words, you’re not just combining images with other images, you’re combining concepts that are either not tied to a particular modality or connected to different modalities.

Lucas: They’re like different dimensions of human experience. You can integrate it with if you can feel it, or some people are synesthetic, or like see it or it could be a concept, or it could be language, or it could be heard, or it could be subtle intuition, and all of that seems to sort of come together. Right?

Joshua: It’s related to all those things.

Lucas: Yeah. Okay. And so sorry, just to help me get a better picture here of how this is done. So this is an MRI, right?

Joshua: Yeah.

Lucas: So for me, I’m not in this field and I see generally the brain is so complex that our resolution is just different areas of the brain light up, and so we understand what these areas are generally tasked for, and so we can sort of see how they relate when people undergo different tasks. Right?

Joshua: No, we can do better than that. That was kind of brain imaging 1.0. Brain imaging 2.0 is not everything we want from a brain imaging technology, but it does take us a level deeper, which is to say: instead of just saying this brain region is involved, or it ramps up when people are doing this kind of thing (region-function relationships), we can look at the actual encoding of content. I can train a pattern classifier. So let's say you're showing people pictures of dogs, or the word dog, versus other things. You can train a pattern classifier to recognize the difference between someone looking at a dog versus looking at a cat, or reading the word dog versus reading the word cat. There are patterns of activity that are more subtle than just this region being more or less active.
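As a rough illustration of the decoding idea Josh describes, here is a minimal Python sketch using scikit-learn, with synthetic data standing in for preprocessed per-trial voxel patterns; everything here is illustrative, not the lab's actual pipeline:

```python
# Decode "dog" vs. "cat" trials from multi-voxel activity patterns.
# Synthetic data: each trial is a vector of voxel activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_class, n_voxels = 40, 200

# Simulate a weak, spatially distributed difference between conditions.
signal = rng.normal(0.0, 0.5, n_voxels)
dog_trials = rng.normal(0.0, 1.0, (n_trials_per_class, n_voxels)) + signal
cat_trials = rng.normal(0.0, 1.0, (n_trials_per_class, n_voxels)) - signal

X = np.vstack([dog_trials, cat_trials])
y = np.array([1] * n_trials_per_class + [0] * n_trials_per_class)  # 1 = dog

# Cross-validated accuracy reliably above 50% means the spatial pattern
# carries information about which concept the person was viewing.
classifier = LogisticRegression(max_iter=1000)
print("decoding accuracy:", cross_val_score(classifier, X, y, cv=5).mean())
```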

Lucas: Right. So the activity is distinct in a way that, once you've trained the classifier on what it looks like when people are recognizing cats, it can recognize that in the future.

Joshua: Yeah.

Lucas: So is there anything besides these multimodal and combinatorial features that you've isolated, or that you're looking into, or that you suppose are essential features of grounded cognition?

Joshua: Well, this is what we're trying to study. We have the one result that's done and published, the one I described about representing the meaning of a sentence in terms of representing the agent here and the patient there for that kind of sentence, and we have some other stuff in the pipeline that's getting at the kinds of representations the brain uses to combine concepts, and also to distinguish concepts that are playing different roles. In another set of studies, we have people thinking about different objects.

Sometimes they’ll think about an object where it’s a case where they’d actually get money if it turns out that that object is the one that’s going to appear later. It looks like when you think about, say dog, and if it turns out that it’s dog under the card, then you’ll get five bucks. You see that you were able to decode the dog representation in part of our motivational circuitry, whereas you don’t see that if you’re just thinking about it. So that’s another example, is that things are represented in different places in the brain depending on what function that representation is serving at that time.

Lucas: So with this pattern recognition training that you can do based on how people recognize certain things, you’re able to see sort of the sequence and kinetics of the thought.

Joshua: MRI is not great for temporal resolution. So what we’re not seeing is how on the order of milliseconds a thought gets put together.

Lucas: Okay. I see.

Joshua: What MRI is better for is spatial resolution: it's better able to identify spatial patterns of activity that correspond to representing different ideas or parts of ideas.

Lucas: And so in the future, as our temporal resolution increases and we become able to isolate more specific structures, I'm just trying to get a better understanding of what your hopes are for the increased resolution and imaging ability, and how that might help disclose grounded cognition.

Joshua: One strategy for getting a better understanding is to combine different methods. fMRI can give you some indication of where you're representing the fact that it's a dog you're thinking about, as opposed to a cat. But other neuroimaging techniques have better temporal resolution and worse spatial resolution. EEG, which measures electrical activity from the scalp, has millisecond temporal resolution, but it's very blurry spatially. The hope is that you combine those two things and get a better idea. Now, both of these things have been around for more than 20 years, and there hasn't been as much progress combining them as I would have hoped. Another approach is more sophisticated models. What I'm hoping we can do is say: all right, we have humans doing this task where they're deciding whether or not these images match these descriptions, and we know that humans do this in a way that enables them to generalize, so that they can handle some combination of things they've never seen before.

Joshua: Like this is a giraffe chasing a Komodo Dragon. You’ve never seen that image before, but you could look at that image for the first time and say, okay, that’s a giraffe chasing a Komodo Dragon, at least if you know what those animals look like, right?

Lucas: Yeah.

Joshua: So then you can say, well, what does it take to train a neural network to be able to do that task? And what does it take to train a neural network to be able to do it in such a way that it can generalize to new examples? Say it has learned to recognize giraffe chases lion, or lion chases giraffe, so it understands chasing, and it understands lion, and it understands giraffe. Now, if you teach it what a Komodo dragon looks like, can it automatically slot that into a complex relational structure?

And so then let’s say we have a neural network that we trained, is able to do that. It’s not all of human cognition. We assume it’s not conscious, but it may capture key features of that cognitive process. And then we look at the model and say, okay, well in real time, what is that model doing and how is it doing it? And then we have a more specific hypothesis that we can go back to the brain and say, well, does the brain do it, something like the way this artificial neural network does it? And so the hope is that by building artificial neural models of these certain aspects of high level cognition, we can better understand human high level cognition, and the hope is that also it will feed back the other way. Where if we look and say, oh, this seems to be how the brain does it, well maybe if you wired up a network like this, what if we mimic that kind of architecture in a neural network and an artificial neural network, does that enable it to solve the problem in a way that it otherwise wouldn’t?

Lucas: Right. I mean, we already have AGIs; they just have to be created by humans, and they live about 80 years and then they die. So we already have an existence proof, and the problem really is that the brain is so complicated that there are difficulties replicating it on machines. And so I guess the key is how much our study of the human brain can inform our creation of AGI through machine learning or deep learning or other methodologies.

Joshua: And it’s not just that the human brain is complicated, it’s that the general intelligence that we’re trying to replicate in machines only exists in humans. You could debate the ethics of animal research and sticking electrodes in monkey brains and things like that, but within ethical frameworks that are widely accepted, you can do things to monkeys or rats that help you really understand in a detailed way what the different parts of their brain are doing, right?

But for good reason, we don’t do those sorts of studies with humans, and we would understand much, much, much, much more about how human cognition works if we were–

Lucas: A bit more unethical.

Joshua: If we were a lot more unethical, if we were willing to cut people’s brains open and say, what happens if you lesion this part of the brain? What happens if you then have people do these 20 tasks? No sane person is suggesting we do this. What I’m saying is that part of the reason why we don’t understand it is because it’s complicated, but another part of the reason why we don’t understand is that we are very much rightly placing ethical limits on what we can do in order to understand it.

Lucas: Last thing here that I just wanted to touch on on this is when I’ve got this multimodal combinatorial thing going on in my head, when I’m thinking about how like a Komodo dragon is chasing a giraffe, how deep does that combinatorialness need to go for me to be able to see the Komodo Dragon chasing the giraffe? Your earlier example was like a purple ocean with a Komodo Dragon wearing like a sombrero hat, like smoking a cigarette. I guess I’m just wondering, well, what is the dimensionality and how much do I need to know about the world in order to really capture a Komodo Dragon chasing a giraffe in a way that is actually general and important, rather than some kind of brittle, heavily trained ML algorithm that doesn’t really know what a Komodo Dragon chasing a giraffe is.

Joshua: It depends on what you mean by really know. Right? But at the very least you might say it doesn’t really know it if it can’t both recognize it in an image and output a verbal label. That’s the minimum, right?

Lucas: Or generalize to new contexts-

Joshua: And generalize to new cases, right. And I think generalization is key. What enables you to understand the crazy scene you described is not that you've seen so many scenes that one of them is a pretty close match; instead, you have this compositional engine. You understand the relations, and you understand the objects, and that gives you the power to construct this effectively infinite set of possibilities. So what we're trying to understand is: what is the cognitive engine that interprets and generates those infinite possibilities?

Lucas: Excellent. So do you want to sort of pivot here into how Rawls' veil of ignorance fits in?

Joshua: Yeah. So one side of the lab is focused more on this key aspect of individual intelligence; on the more moral and social side of the lab, we're trying to understand our collective intelligence and our social decision making, and we'd like to do research that can help us make better decisions. Of course, what counts as better is always contentious, especially when it comes to morality, but there are influences that one could plausibly interpret as better. One of the most famous ideas in moral and political philosophy is John Rawls's idea of the veil of ignorance, where what Rawls essentially said is: you want to know what a just society looks like? Well, the essence of justice is impartiality. It's not favoring yourself over other people; everybody has to play by the same rules. It doesn't necessarily mean everybody gets exactly the same outcome, but you can't get special privileges just because you're you.

And so what he said was, well, a just society is one that you would choose if you didn't know who in that society you would be. You may be choosing selfishly, but you are constrained to be impartial by your ignorance: you don't know where you're going to land in that society. And so what Rawls asks, very plausibly, is: would you rather be randomly slotted into a society where a small number of people are extremely rich and most people are desperately poor? Or would you rather be slotted into a society where most people aren't rich but are doing pretty well? The answer is pretty clearly that you'd rather be slotted randomly into a society where most people are doing pretty well, instead of a society where you could be astronomically well off but would most likely be destitute. So this is all background. Rawls applied this idea of the veil of ignorance to the structure of society overall, and said a just society is one that you would choose if you didn't know who in it you were going to be.

And this captures the idea of impartiality as the core of justice. So what we've been doing recently (this is a project led by Karen Huang and Max Bazerman along with myself) is applying the veil of ignorance idea to more specific dilemmas. One of the places where we have applied this is with ethical dilemmas surrounding self-driving cars. We took a case that was most famously discussed by Bonnefon, Shariff, and Rahwan in their 2016 Science paper, "The Social Dilemma of Autonomous Vehicles." The canonical version goes something like this: you've got an autonomous vehicle, an AV, that is headed towards nine people, and if nothing is done, it's going to run those nine people over. It can swerve out of the way and save those nine people, but if it does that, it's going to drive into a concrete wall and kill the passenger inside.

So the question is: should the car swerve or should it go straight? Now, you can just ask people: what do you think the car should do? Or: would you approve a policy that says that in a situation like this, the car should minimize the loss of life and therefore swerve? Some people we just had answer the question the way I posed it, but other people we had do a veil of ignorance exercise first. We say: suppose you're going to be one of these 10 people, the nine on the road or the one in the car, but you don't know which you're going to be.

From a purely selfish point of view, would you want the car to swerve or not? And almost everybody says: I'd rather have the car swerve. I'd rather have a nine out of 10 chance of living instead of a one out of 10 chance of living. And then we asked people: okay, that was a question about what you would want selfishly, if you didn't know who you were going to be. Would you approve of a policy that said that cars in situations like this should swerve to minimize the loss of life?

The people who’ve gone through the veil of ignorance exercise, they are more likely to approve of the utilitarian policy, the one that aims to minimize the loss of life, if they’ve gone through that veil of ignorance, exercise first, than if they just answered the question. And we have control conditions where we have them do a version of the veil of ignorance exercise, but where the probabilities are mixed up. So there’s no relationship between the probability and the number of people, and that’s sort of the tightest control condition, and you still see the effect. The idea is that the veil of ignorance is a cognitive device for thinking about a dilemma in a kind of more impartial kind of way.

And then what’s interesting is that people recognize, they do a bit of kind of philosophizing. They say, huh, if I said that what I would want is to have the car swerve, and I didn’t know who I was going to be, that’s an impartial judgment in some sense. And that means that even if I feel sort of uncomfortable about the idea of a car swerving and killing its passenger in a way that is foreseen, if not intended in the most ordinary sense, even if I feel kind of bad about that, I can justify it because I say, look, it’s what I would want if I didn’t know who I was going to be. So we’ve done this with self driving cars, we’ve done it with the classics of the trolley dilemma, we’ve done it with a bioethical case involving taking oxygen away from one patient and giving it to nine others, and we’ve done it with a charity where we have people making a real decision involving real money between a more versus less effective charity.

And across all of these cases, what we find is that when you have people go through the veil of ignorance exercise, they're more likely to make decisions that promote the greater good. It's an interesting bit of psychology, but it's also perhaps a useful tool. That is, we're going to be facing policy questions where we have gut reactions telling us we shouldn't do what favors the greater good; but if we think about it from behind a veil of ignorance and conclude that actually we're in favor of what promotes the greater good, at least in that situation, then that can change the way we think. Is that a good thing? If you have consequentialist inclinations like me, you'll think it's a good thing; or if you just believe in the procedure, that is, you like whatever decisions come out of a veil of ignorance procedure, then you'll also think it's a good thing. Either way, I think it's interesting that it affects the way people make the choice.

Lucas: It’s got me thinking about a lot of things. I guess a few things are that I feel like if most people on earth had a philosophy education or at least had some time to think about ethics and other things, they’d probably update their morality in really good ways.

Joshua: I would hope so. But I don't know how much of our moral dispositions come from explicit education versus our broader personal and cultural experiences. Certainly I think it's worth trying; I certainly believe in the possibility (that's why I do research on it), but I come to it with some humility about how much that by itself can accomplish. I don't know.

Lucas: Yeah, it would be cool to see the effect size of Rawls's veil of ignorance across different societies and persons. And there are other things you can do, like the child-drowning-in-the-shallow-pond argument; there are tons of different thought experiments, and it would be interesting to see how they update people's ethics and morality. The other thing I wanted to inject here is the difference between naive consequentialism and sophisticated consequentialism. Sophisticated consequentialism would also take into account not only the direct effect of saving more people, but also how human beings have arbitrary partialities to what I would call fictions, like rights or duties. A lot of people share these, and within our consequentialist understanding and framework of the world, people just don't like the idea of their car smashing them into walls. Whereas, yeah, we should save more people.

Joshua: Right. And as Bonnefon and colleagues point out, and I completely agree: if making cars narrowly utilitarian, in the sense that they always try to minimize the loss of life, makes people not want to ride in them, and that means there are more accidents leading to human fatalities because people are driving instead of being driven, then that is bad from a consequentialist perspective, right? So you can call it sophisticated versus naive consequentialism, but really there's no question that utilitarianism or consequentialism in its original form favors the more sophisticated reading. So it's kind of more-

Lucas: Yeah, I just feel that people often don’t do the sophisticated reasoning, and then they come to conclusions.

Joshua: And this is why I’ve attempted with not much success, at least in the short term, to rebrand utilitarianism as what I call deep pragmatism. Because I think when people hear utilitarianism, what they imagine is everybody walking around with their spreadsheets and deciding what should be done based on their lousy estimates of the greater good. Whereas I think the phrase deep pragmatism gives you a much clearer idea of what it looks like to be utilitarian in practice. That is you have to take into account humans as they actually are, with all of their biases and all of their prejudices and all of their cognitive limitations.

When you do that, it’s obviously a lot more subtle and flexible and cautious than-

Lucas: Than people initially imagine.

Joshua: Yes, that’s right. And I think utilitarian has a terrible PR problem, and my hope is that we can either stop talking about the U philosophy and talk instead about deep pragmatism, see if that ever happens, or at the very least, learn to avoid those mistakes when we’re making serious decisions.

Lucas: The other very interesting thing this brings up: if I do the veil of ignorance thought exercise, then I'm more partial towards saving more people and towards policies which will reduce the loss of life. And then I realize that I actually do have this strange, arbitrary partiality, like wanting the car I bought not to crash me into a wall. From a third-person point of view, that maybe seems kind of irrational, because the utilitarian option initially seems most rational. But then we have the chance to reflect as persons: well, maybe I shouldn't have these arbitrary beliefs. Maybe we should start updating our culture in ways that get rid of these biases, so that the utilitarian calculations aren't so corrupted by scary primate thoughts.

Joshua: Well, I think the best way to think about it is: how do we make progress? Not: how do we radically transform ourselves into alien beings who are completely impartial, right? I don't think that's the most useful thing to do. Take the special case of charitable giving: you could turn yourself into a happiness pump, that is, devote all of your resources to providing money for the world's most effective charities.

And you may do a lot of good as an individual compared to other individuals if you do that, but most people are going to look at you and just say, well that’s admirable, but it’s super extreme. That’s not for me, right? Whereas if you say, I give 10% of my money, that’s an idea that can spread, that instead of my kids hating me because I deprived them of all the things that their friends had, they say, okay, I was brought up in a house where we give 10% and I’m happy to keep doing that. Maybe I’ll even make it 15. You want norms that are scalable, and that means that your norms have to feel livable. They have to feel human.

Lucas: Yeah, that’s right. We should be spreading more deeply pragmatic approaches and norms.

Joshua: Yeah. We should be spreading the best norms that are spreadable.

Lucas: Yeah. There you go. So thanks so much for joining me, Joshua.

Joshua: Thanks for having me.

Lucas: Yeah, I really enjoyed it and see you again soon.

Joshua: Okay, thanks.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode of the AI Alignment Series.

[end of recorded material]

FLI Podcast: AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy

Every January, we like to look back over the past 12 months at the progress that’s been made in the world of artificial intelligence. Welcome to our annual “AI breakthroughs” podcast, 2018 edition.

Ariel was joined for this retrospective by researchers Roman Yampolskiy and David Krueger. Roman is an AI safety researcher and professor at the University of Louisville. He also recently published the book Artificial Intelligence Safety & Security. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He's also worked with safety teams at the Future of Humanity Institute and DeepMind and has volunteered with 80,000 Hours.

Roman and David shared their lists of 2018’s most promising AI advances, as well as their thoughts on some major ethical questions and safety concerns. They also discussed media coverage of AI research, why talking about “breakthroughs” can be misleading, and why there may have been more progress in the past year than it seems.

Topics discussed in this podcast include:

  • DeepMind progress, as seen with AlphaStar and AlphaFold
  • Manual dexterity in robots, especially QT-Opt and Dactyl
  • Advances in creativity, as with Generative Adversarial Networks (GANs)
  • Feature-wise transformations
  • Continuing concerns about DeepFakes
  • Scaling up AI systems
  • Neuroevolution
  • Google Duplex, the AI assistant that sounds human on the phone
  • The General Data Protection Regulation (GDPR) and AI policy more broadly

Publications discussed in this podcast include:

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi everyone, welcome to the FLI podcast. I’m your host, Ariel Conn. For those of you who are new to the podcast, at the end of each month, I bring together two experts for an in-depth discussion on some topic related to the fields that we at the Future of Life Institute are concerned about, namely artificial intelligence, biotechnology, climate change, and nuclear weapons.

The last couple of years for our January podcast, I’ve brought on two AI researchers to talk about what the biggest AI breakthroughs were in the previous year, and this January is no different. To discuss the major developments we saw in AI in 2018, I’m pleased to have Roman Yampolskiy and David Krueger joining us today.

Roman is an AI safety researcher and professor at the University of Louisville, his new book Artificial Intelligence Safety and Security is now available on Amazon and we’ll have links to it on the FLI page for this podcast. David is a PhD candidate in the Mila Lab at the University of Montreal, where he works on deep learning and AI safety. He’s also worked with teams at the Future of Humanity Institute and DeepMind, and he’s volunteered with 80,000 Hours to help people find ways to contribute to the reduction of existential risks from AI. So Roman and David, thank you so much for joining us.

David: Yeah, thanks for having me.

Roman: Thanks very much.

Ariel: So I think that one thing that stood out to me in 2018 was that the AI breakthroughs seemed less about surprising breakthroughs that really shook the AI community as we’ve seen in the last few years, and instead they were more about continuing progress. And we also didn’t see quite as many major breakthroughs hitting the mainstream press. There were a couple of things that made big news splashes, like Google Duplex, which is a new AI assistant program that sounded incredibly human on phone calls it made during the demos. And there was also an uptick in government policy and ethics efforts, especially with the General Data Protection Regulation, also known as the GDPR, which went into effect in Europe earlier this year.

Now I’m going to want to come back to Google and policy and ethics later in this podcast, but I want to start by looking at this from the research and development side of things. So my very first question for both of you is: do you agree that 2018 was more about impressive progress, and less about major breakthroughs? Or were there breakthroughs that really were important to the AI community that just didn’t make it into the mainstream press?

David: Broadly speaking I think I agree, although I have a few caveats for that. One is just that it’s a little bit hard to recognize always what is a breakthrough, and a lot of the things in the past that have had really big impacts didn’t really seem like some amazing new paradigm shift—it was sort of a small tweak that then made a lot of things work a lot better. And the other caveat is that there are a few works that I think are pretty interesting and worth mentioning, and the field is so large at this point that it’s a little bit hard to know if there aren’t things that are being overlooked.

Roman: So I’ll agree with you, but I think the pattern is more important than any specific breakthrough. We kind of got used to getting something really impressive every month, so relatively it doesn’t sound as good, all the AlphaStar, AlphaFold, AlphaZero happening almost every month. And it used to be it took 10 years to see something like that.

It’s likely it will happen even more frequently. We’ll conquer a new domain once a week or something. I think that’s the main pattern we have to recognize and discuss. There are significant accomplishments in terms of teaching AI to work in completely novel domains. I mean now we can predict protein folding, now we can have multi-player games conquered. That never happened before so frequently. Chess was impressive because it took like 30 years to get there.

David: Yeah, so I think a lot of people were kind of expecting or at least hoping for StarCraft or Dota to be solved—to see, like we did with AlphaGo, AI systems that are beating the top players. And I would say that it’s actually been a little bit of a let down for people who are optimistic about that, because so far the progress has been kind of unconvincing.

David: So AlphaStar, for instance, which was a really recent result from last week: I've seen criticism of it that I think is valid, that it was making more actions than a human could within a very short interval of time. They carefully controlled the actions per minute that AlphaStar was allowed to take, but they didn't prevent it from doing really short bursts of actions that really helped its micro-game, and that means it can win without really being strategically superior to its human opponents. And I think the Dota results that OpenAI has had were also criticized as being not the hardest version of the problem, with the AI still relying on some crutches.

Ariel: So before we get too far into that debate, can we take a quick step back and explain what both of those are?

David: So these are both real-time strategy games that are, I think, actually the two most popular real-time strategy games in the world that people play professionally, and make money playing. I guess that’s all to say about them.

Ariel: So a quick question that I had too about your description then, when you’re talking about AlphaStar and you were saying it was just making more moves than a person can realistically make. Is that it—it wasn’t doing anything else special?

David: I haven’t watched the games, and I don’t play StarCraft, so I can’t say that it wasn’t doing anything special. I’m basing this basically on reading articles and reading the opinions of people who are avid StarCraft players, and I think the general opinion seems to be that it is more sophisticated than what we’ve seen before, but the reason that it was able to win these games was not because it was out-thinking humans, it’s because it was out-clicking, basically, in a way that just isn’t humanly possible.

Roman: I would agree with this analysis, but I don’t see it as a bug, I see it as a feature. That just shows another way machines can be superior to people. Even if they are not necessarily smarter, they can still produce superior performance, and that’s what we really care about. Right? We found a different way, a non-human approach to solving this problem. That’s impressive.

David: Well, I mean, I think if you have an agent that can just click as fast as it wants, then you can already win at StarCraft, before this work. There needs to be something that makes it sort of a fair fight in some sense.

Roman: Right, but think what you’re suggesting: We have to handicap machines to make them even remotely within being comparative to people. We’re talking about getting to superintelligent performance. You can get there by many ways. You can think faster, you can have better memory, you can have better reaction time—as long as you’re winning in whatever domain we’re interested in, you have superhuman performance.

David: So maybe another way of putting this would be: if they actually made a robot play StarCraft and made it use the same interface that humans do, such as a screen and mouse, there's no way that it could have beaten the human players. And so by giving it direct access to the game controls, it's sort of not solving the same problem that a human is when they play this game.

Roman: I feel what you’re saying, I just feel that it is solving it in a different way, and we have pro-human bias saying, well that’s not how you play this game, you have an advantage. Human players usually rely on superior strategy, not just faster movements that may take advantage of it for a few nanoseconds, a couple of seconds. But it’s not a long-term sustainable pattern.

One of the research projects I worked on was this idea of artificial stupidity, as we called it: deliberately limiting machines to human-level capacity. And I think that's what we're talking about here. Nobody would suggest limiting a chess program to just human-level memory, or to human memorization of opening moves; we don't see that as an unfair advantage. Machines have the option of beating us in ways humans can't. That's the whole point, that's why it's interesting, and that's why we have to anticipate such problems. That's where most of the safety and security issues will show up.

Ariel: So I guess, I think, Roman, your point earlier was sort of interesting that we’ve gotten so used to breakthroughs that stuff that maybe a couple of years ago would have seemed like a huge breakthrough is just run-of-the-mill progress. I guess you’re saying that that’s what this is sort of falling into. Relatively recently this would have been a huge deal, but because we’ve seen so much other progress and breakthroughs, that this is now interesting and we’re excited about it—but it’s not reaching that level of, oh my god, this is amazing! Is that fair to say?

Roman: Exactly! We get disappointed if the system loses one game. It used to be we were excited if it would match amateur players. Now it’s, oh, we played a 100 games and you lost one? This is just not machine-level performance, you disappoint us.

Ariel: David, do you agree with that assessment?

David: I would say mostly no. I guess, I think what really impressed me with AlphaGo and AlphaZero was that it was solving something that had been established as a really grand challenge for AI. And then in the case of AlphaZero, I think the technique that they actually used to solve it was really novel and interesting from a research point of view, and they went on to show that this same technique can solve a bunch of other board games as well.

And my impression from what I've seen about how they did AlphaStar and AlphaFold is that there were some interesting improvements, and the performance is impressive, but it's neither quite at the point where you can say we've solved it and we're better than everybody, nor, in the case of protein folding, at the point where there isn't a bunch more room for improvement of practical significance. And I don't see any really clear, general algorithmic insights about AI coming out of these works yet. I think that's partially because they haven't been published yet, but from what I have heard about the details of how they work, I think it's less of a breakthrough on the algorithm side than AlphaZero was.

Ariel: So you’ve mentioned AlphaFold. Can you explain what that is real quick?

David: This is the protein folding project that DeepMind did. There's a competition called CASP (C-A-S-P) that happens every two years, and they dominated that competition this last year, doing what was described as two CASPs in one: basically doubling the expected rate of improvement that people have seen historically at these tasks, or at least at the one that is the most significant benchmark.

Ariel: I find the idea of the protein folding thing interesting because that’s something that’s actually relevant to scientific advancement and health as opposed to just being able to play a game. Are we seeing actual applications for this yet?

David: I don’t know about that, but I agree with you that that is a huge difference that makes it a lot more exciting than some of the previous examples. I guess one thing that I want to say about that, though, is that it does look a little bit more to me like continuation of progress that was already happening in the communities. It’s definitely a big step up, but I think a lot of the things that they did there could have really happened over the next few years anyways, even without DeepMind being there. So, one of the articles I read put it this way: If this wasn’t done by DeepMind, if this was just some academic group, would this have been reported in the media? I think the answer is sort of like a clear no, and that says something about the priorities of our reporting and media as well as the significance of the results, but I think that just gives some context.

Roman: I’ll agree with David—the media is terrible in terms of what they report on, we can all agree on that. I think it was quite a breakthrough, I mean, to say that they not just beat the competition, but to actually kind of doubled performance improvement. That’s incredible. And I think anyone who got to that point would not be denied publication in a top journal; It would be considered very important in that domain. I think it’s one of the most important problems in medical research. If you can accurately predict this, possibilities are really endless in terms of synthetic biology, in terms of curing diseases.

So this is huge in terms of the impact of being able to do it. As far as how applicable it is to other areas, is it a great game-changer for AI research? Consider how these abilities can be combined: the ability to perform in the real-time environments of those multiplayer games, and the ability to do this kind of prediction. Right? You can do things in the real world you couldn't do before, both in terms of strategy games, which are basically simulations of economic competition, of wars, and in quite a few other applications where the impact would be huge.

So all of it is very interesting. It's easy to say, "Well, if they didn't do it, somebody else maybe would have done it in a couple of years." But that's almost always true for inventions. If you look at the history of inventions, things like the telephone have been invented at the same time by two or three people; radio, two or three people. It's just the point where science has enough ingredient technologies that, yeah, somebody's going to do it. But still, we give credit to whoever got there first.

Ariel: So I think that’s actually a really interesting point, because I think for the last few years we have seen sort of these technological advances but I guess we also want to be considering the advances that are going to have a major impact on humanity even if it’s not quite as technologically new.

David: Yeah, absolutely. I think it's a little bit unclear what we're talking about when we talk about AI breakthroughs, and a lot of people in the field of AI kind of don't like how much people talk in terms of breakthroughs, because a lot of the progress is gradual and builds on previous work; it's not like there was some sudden insight that somebody had that just changed everything, although that does happen in some ways.

And I think you can think of the breakthroughs both in terms of like what is the impact—is this suddenly going to have a lot of potential to change the world? You can also think of it, though, from the perspective of researchers as like, is this really different from the kind of ideas and techniques we’ve seen or seen working before? I guess I’m more thinking about the second right now in terms of breakthroughs representing really radical new ideas in research.

Ariel: Okay, well I will take responsibility for being one of the media people who didn’t do a good job with presenting AI breakthroughs. But I think both with this podcast and probably moving forward, I think that is actually a really important thing for us to be doing—is both looking at the technological progress and newness of something but also the impact it could have on either society or future research.

So with that in mind, you guys also have a good list of other things that did happen this year, so I want to start moving into some of that as well. So next on your list is manual dexterity in robots. What did you guys see happening there?

David: So this is something that’s definitely not my area of expertise, so I can’t really comment too much on it. But there are two papers that I think are significant and potentially representing something like a breakthrough in this application. In general robotics is really difficult, and machine learning for robotics is still, I think, sort of a niche thing, like most robotics is using more classical planning algorithms, and hasn’t really taken advantage of the new wave of deep learning and everything.

So there’s two works, one is QT-Opt, and the other one is Dactyl, and these are both by people from the Berkeley OpenAI crowd. And these both are showing kind of impressive results in terms of manual dexterity in robots. So there’s one that does a really good job at grasping, which is one of the basic aspects of being able to act in the real world. And then there’s another one that was sort of just manipulating something like a cube with different colored faces on it—that one’s Dactyl; the grasping one is QT-Opt.

And I think this is something that got less attention in the media, because it's been more of a story of gradual progress. But my friend who follows this deep reinforcement learning stuff more closely told me that QT-Opt is the first convincing demonstration of deep reinforcement learning in the real world, as opposed to all these things we've seen in games. The real world is much more complicated, and there are all sorts of challenges with the noise of the environment dynamics and contact forces and things like that, which have been a real challenge for doing things in the real world. And then there's also the limited number of samples: when you play a game, you can interact with the game as much as you want and play it over and over again, whereas in the real world you can only move your robot so fast and you have to worry about breaking it. That means in the end you can collect a lot less data, which makes it harder to learn things.

Roman: Just to explain what they did: hardware is expensive and slow; it's very difficult to work with, and things don't go well in real life. It's a lot easier to create simulations in virtual worlds, train your robot there, and then transfer the knowledge into a real robot in the physical world. And that's exactly what they did: training that virtual hand to manipulate objects, they could run through thousands, millions of situations, which is something you cannot do with an actual, physical robot at that scale. So I think that's a very interesting approach, and it's why lots of people try doing things in virtual environments. Some of the early AGI projects concentrated on virtual worlds as the domain of learning. So that makes a lot of sense.

David: Yeah, so this was for the Dactyl project, which was OpenAI's. And that was really impressive I think, because people have been doing this sim-to-real thing, where you train in simulation and then try to transfer it to the real world, with some success for a year or two. But this one was really impressive in that sense, because they didn't train it in the real world at all, and what it had learned still managed to transfer to the real world.
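
To make the sim-to-real idea concrete, here is a heavily simplified, hypothetical sketch in Python of training with domain randomization, the technique OpenAI reported using for Dactyl: the simulator's physics parameters are re-sampled every episode, so the real world ends up looking like just one more variation. Everything here (the one-line "simulator," the parameter ranges, and the single-number "policy") is a stand-in, not anything from the actual system.

```python
import random

def simulated_episode(policy_param, friction, mass, sensor_noise):
    # Stand-in for rolling the policy out in a physics simulator and
    # returning a reward; a real setup would step an actual simulator.
    return -abs(policy_param - friction * mass) - abs(sensor_noise)

policy_param = 0.5  # stand-in for the policy's parameters
for episode in range(10000):
    # Re-sample the "physics" every episode so the policy cannot
    # overfit to any single set of dynamics.
    friction = random.uniform(0.2, 1.5)
    mass = random.uniform(0.5, 2.0)
    noise = random.gauss(0.0, 0.1)
    candidate = policy_param + random.gauss(0.0, 0.05)  # crude hill-climbing step
    if simulated_episode(candidate, friction, mass, noise) > \
       simulated_episode(policy_param, friction, mass, noise):
        policy_param = candidate  # keep updates that do better under random dynamics
```

The hope, as David describes, is that a policy trained only across these randomized simulations is robust enough that the messy real world transfers as just another variation.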

Ariel: Excellent. I’m going to keep going through your list. One thing that you both mentioned are GANs. So very quickly, if one of you, or both of you, could explain what a GAN is and what that stands for, and then we’ll get into what happened last year with those.

Roman: Sure, so this is a somewhat new way of doing creative generation of visuals and audio. You have two neural networks competing: one is creating fakes, and the other one is judging them, and you get to a point where it's about 50/50 and you can't tell if something is fake or real anymore. It's a great way to produce artificial faces, cars, whatever. Whatever type of input you provide to the networks, they quickly learn to extract the essence of that image or audio and generate artificial data sets full of such images.

And there’s really exciting work on being able to extract properties from those, different styles. So if we talk about faces, for example: there could be a style for hair, a style for skin color, a style for age, and now it’s possible to manipulate them. So I can tell you things like, “Okay, Photoshop, I need a picture of a female, 20 years old, blonde, with glasses,” and it would generate a completely realistic face based on those properties. And we’re starting to see it show up not just in images but transferred to video, to generating whole virtual worlds. It’s probably the closest thing we ever had computers get to creativity: actually kind of daydreaming and coming up with novel outputs.

David: Yeah, I just want to say a little bit about the history of the research in GAN. So the first work on GANs was actually back four or five years ago in 2014, and I think it was actually kind of—didn’t make a huge splash at the time, but maybe a year or two after that it really started to take off. And research in GANs over the last few years has just been incredibly fast-paced and there’s been hundreds of papers submitted and published at the big conferences every year.

If you look just at the quality of what is generated, this is, I think, an amazing demonstration of the rate of progress in some areas of machine learning. The first paper had these black and white pictures of really blurry faces, and now you can get large (I think 256 by 256, or 512 by 512, or even bigger), really high-resolution images of faces that are totally indistinguishable from real photos, to the human eye anyway. So it's really impressive, and we've seen really consistent progress on that, especially in the last couple of years.

Ariel: And also, just real quick, what does it stand for?

David: Oh, generative adversarial network. So it’s generative, because it’s sort of generating things from scratch, or from its imagination or creativity. And it’s adversarial because there are two networks: the one that generates the things, and then the one that tries to tell those fake images apart from real images that we actually collect by taking photos in the world.
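
As a concrete illustration of the two-network game Roman and David describe, here is a minimal toy GAN training loop in PyTorch. It is a sketch only: the layer sizes are arbitrary, and the "real" data is just shifted random noise standing in for a dataset of images.

```python
import torch
import torch.nn as nn

noise_dim, data_dim = 16, 2  # toy dimensions, chosen arbitrarily

G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in "real" data
    fake = G(torch.randn(64, noise_dim))           # generator's forgeries

    # Discriminator: learn to label real samples 1 and fakes 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to fool the discriminator into labeling fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Training continues until, as Roman puts it, the judge is at roughly 50/50 and can no longer reliably tell fakes from real samples.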

Ariel: This is an interesting one because it can sort of transition into some ethics stuff that came up this past year, but I’m not sure if we want to get there yet, or if you guys want to talk a little bit more about some of the other things that happened on the research and development side.

David: I guess I want to talk about a few other things that have been making, I would say, steady progress, like GANs: ideas that are coming to fruition, and even though some of these are not exactly from the last year, they really started to prove themselves and become widely used in the last year.

Ariel: Okay.

David: Something that's actually used in maybe the latest, greatest GAN paper is what's called feature-wise transformations. This is an idea that goes back up to 40 years, depending on how you measure it, but it has been catching on in specific applications in machine learning in the last couple of years, starting with, I would say, style transfer, which is like what Roman mentioned earlier.

So the idea here is that in a neural network, you have what are called features, which basically correspond to the activations of different neurons in the network. Like how much that neuron likes what it’s seeing, let’s say. And those can also be interpreted as representing different kinds of visual patterns, like different kinds of textures, or colors. And these feature-wise transformations basically just take each of those different aspects of the image, like the color or texture in a certain location, and then allow you to manipulate that specific feature, as we call it, by making it stronger or amplifying whatever was already there.

And so you can sort of view this as a way of specifying what sort of things are important in the image, and that’s why it allows you to manipulate the style of images very easily, because you can sort of look at a certain painting style for instance, and say, oh this person uses a lot of wide brush strokes, or a lot of narrow brush strokes, and then you can say, I’m just going to modulate the neurons that correspond to wide or narrow brush strokes, and change the style of the painting that way. And of course you don’t do this by hand, by looking in and seeing what the different neurons represent. This all ends up being learned end-to-end. And so you sort of have an artificial intelligence model that predicts how to modulate the features within another network, and that allows you to change what that network does in a really powerful way.

So, I mentioned that it has been applied in the most recent GAN papers, and I think they’re just using those kinds of transformations to help them generate images. But other examples where you can explain what’s happening more intuitively, or why it makes sense to try and do this, would be something like visual question answering. So there you can have the modulation of the vision network being done by another network that looks at a question and is trying to help answer that question. And so it can sort of read the question and see what features of images might be relevant to answering that question. So for instance, if the question was, “Is it a sunny day outside?” then it could have the vision network try and pay more attention to things that correspond to signs of sun. Or if it was asked something like, “Is this person’s hair combed?” then you could look for the patterns of smooth, combed hair and look for the patterns of rough, tangled hair, and have those features be sort of emphasized in the vision network. That allows the vision network to pay attention to the parts of the image that are most relevant to answering the question.
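
To make that mechanism concrete, here is a rough sketch of a FiLM-style feature-wise modulation layer in PyTorch: a conditioning network predicts a per-feature scale and shift that modulate the activations of another network. This is an illustration under stated assumptions, not the code of any paper David mentions; the class name, dimensions, and random stand-in inputs are all hypothetical.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    def __init__(self, cond_dim, num_features):
        super().__init__()
        # Predict a scale (gamma) and a shift (beta) for every feature channel.
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_features)

    def forward(self, features, condition):
        # features: (batch, channels, height, width) activations of a vision net
        # condition: (batch, cond_dim) encoding of e.g. a question about the image
        gamma, beta = self.to_gamma_beta(condition).chunk(2, dim=-1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # broadcast over spatial dims
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * features + beta  # amplify or suppress each feature

film = FiLM(cond_dim=128, num_features=64)
feats = torch.randn(8, 64, 14, 14)   # stand-in image features
question = torch.randn(8, 128)       # stand-in encoded question
modulated = film(feats, question)
```

In the visual question answering setting David describes, the conditioning vector would come from the question encoder, so the question effectively turns the relevant visual features up or down.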

Ariel: Okay. So, Roman, I want to go back to something on your list quickly in a moment, but first I was wondering if you have anything that you wanted to add to the feature-wise transformations?

Roman: For all of it, you can ask, "Well, why is this interesting? What are the applications?" You are able to generate inputs: inputs for computers, inputs for people; images, sounds, videos. A lot of the time they can be adversarial in nature as well, which is what we call deep fakes, right? You can make, let's say, a video of a famous politician saying something, or doing something.

Ariel: Yeah.

Roman: And this has very interesting implications for elections, for forensic science, for evidence. As those systems get better and better, it becomes harder and harder to tell if something is real or not. Maybe it's still possible to do some statistical analysis, but that takes time, and we talked about the media not exactly always being on top of things. So it may take 24 hours before we find out whether a video was real, but the election is tonight.

Ariel: So I am definitely coming back to that. I want to finish going through the list of the technology stuff, but yeah I want to talk about deep fakes and in general, a lot of the issues that we’ve seen cropping up more and more with this idea of using AI to fake images and audio and video, because I think that is something that’s really important.

David: Yeah, it’s hard for me to estimate these things, but I would say this is probably, in terms of the impact that this is going to have societally, this is sort of the biggest story maybe of the last year. And it’s not like something that happened all of the sudden. Again, it’s something that has been building on a lot of progress in generative models and GANs and things like this. And it’s just going to continue, we’re going to see more and more progress like that, and probably some sort of arms’ race here where—I shouldn’t use that word.

Ariel: A competition.

David: A competition between people who are trying to use that kind of technology to fake things and people who are sort of doing forensics to try and figure out what is real and what is fake. And that also means that people are going to have to trust the people who have the expertise to do that, and believe that they’re actually doing that and not part of some sort of conspiracy or something.

Ariel: Alright, well are you guys ready to jump into some of those ethical questions?

David: Well, there are two other broad things I wanted to mention, which I think are interesting trends in the research community. One is the way that people have been continuing to scale up AI systems. A lot of the progress arguably has just been coming from more and more computation and more and more data. And there was a pretty great blog post by OpenAI about this last year that argued that the amount of computation being used to train the most advanced AI systems has been increasing by a factor of 10 every year for the last several years, which is just astounding. But it also suggests that this might not be sustainable for long, so to the extent that you think that using more computation is a big driver of progress, we might start to see that slow down within a decade or so.
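
Just to spell out how quickly that trend compounds (assuming a clean factor of 10 per year, which is a simplification of the blog post's estimate):

```python
# Compounding a 10x-per-year growth in training compute, a simplified
# version of the trend described in OpenAI's "AI and Compute" blog post.
for years in range(1, 6):
    print(f"{years} year(s): {10 ** years:,}x more compute")
```

Five years of that trend is a factor of 100,000, which is why David suggests it cannot continue indefinitely.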

Roman: I’ll add another—what I think also is kind of building-on technology, not so much a breakthrough, we had it for a long time—but neural evolution is something I’m starting to pay a lot more attention to and that’s kind of borrowing from biology, trying to evolve ways for neural networks, optimized neural networks. And it’s producing very impressive results. It’s possible to run it in parallel really well, and it’s competitive with some of the leading alternative approaches.

So the idea basically is: you have this very large neural network, a brain-like structure, but instead of trying to train it by backpropagating errors, teaching it in the standard neural network way, you have a population of those brains competing on a particular problem. They share weights between good parents, and after a while you evolve really well-performing solutions to some of the most interesting problems.

Additionally, you can go meta-level on it and evolve the architecture of the neural network itself: how many layers, how many inputs. This is nice because it doesn't require much human intervention; you're essentially letting the system figure out what the solutions are. We had some very successful results with genetic algorithms for optimization, we didn't have much success with genetic programming, and now neuroevolution kind of brings it back, where you're optimizing intelligent systems, and that's very exciting.

Ariel: So you’re saying that you’ll have—to make sure I understand this correctly—there’s two or more neural nets trying to solve a problem, and they sort of play off of each other?

Roman: So you create a population of neural networks, and you give it a problem, and you see that this one is doing really well, and that one too; the others, maybe not so great. So you take weights from those two and combine them, like a mom-and-dad situation that produces offspring. And so you have this simulation of evolution where unsuccessful individuals are taken out of the population, and successful ones get to reproduce and pass their high-fitness weights on to the next generation.
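
As a toy illustration of the loop Roman describes (score a population, keep the fittest, recombine parent weights, mutate), here is a minimal sketch in Python with NumPy. The fitness function, population size, and mutation scale are arbitrary stand-ins, not anything from a real neuroevolution system.

```python
import numpy as np

rng = np.random.default_rng(0)
pop_size, n_weights = 50, 20

def fitness(weights):
    # Stand-in task: get the weight vector close to a hidden target.
    target = np.linspace(-1, 1, n_weights)
    return -np.sum((weights - target) ** 2)

population = rng.normal(size=(pop_size, n_weights))

for generation in range(200):
    scores = np.array([fitness(w) for w in population])
    parents = population[np.argsort(scores)[-10:]]   # the top 10 survive
    children = []
    for _ in range(pop_size):
        mom, dad = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(n_weights) < 0.5           # crossover: mix parent weights
        child = np.where(mask, mom, dad) + rng.normal(scale=0.05, size=n_weights)
        children.append(child)                       # plus a small mutation
    population = np.array(children)

print("best fitness:", max(fitness(w) for w in population))
```

In a real system, the weight vectors would parameterize neural networks and the fitness score would come from running each network on the task; since each individual can be evaluated independently, the whole loop parallelizes well, which is part of the appeal Roman mentions.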

Ariel: Okay. Was there anything else that you guys saw this year that you want to talk about, that you were excited about?

David: Well, I wanted to give a few examples of the kind of massive improvements in scale that we've seen. One of the most significant models and benchmarks in the community is ImageNet, and training image classifiers that can tell you what a picture is a picture of on this dataset. The whole deep learning revolution was arguably started, or at least really came into the eyes of the rest of the machine learning community, because of huge success in this ImageNet competition. Training the model there originally took something like two weeks, and this last year there was a paper where you can train a more powerful model in less than four minutes, and they do this by using something like 3,000 graphics cards in parallel.

And then DeepMind also had some progress on parallelism with a model called IMPALA, in the context of reinforcement learning rather than classification. There they came up with a way that allowed them to do updates in parallel: learn on different machines and combine everything that was learned asynchronously. In the past, with the methods used for these reinforcement learning problems, you'd have to wait for all of the different machines to finish their learning on the current problem or instance they're learning about, and then combine all of that centrally. The new method lets each machine communicate what it has computed or learned to the rest of the system as soon as it's done. And that was really important for allowing them to scale to hundreds of machines working on the problem at the same time.
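
A toy sketch of the difference David points to, in Python: each worker applies its update to the shared parameters as soon as it finishes, rather than waiting at a barrier for every other worker. This only illustrates asynchronous updating in general; it is not DeepMind's IMPALA implementation, and the "gradient" here is random stand-in data.

```python
import threading
import numpy as np

params = np.zeros(4)       # shared parameters
lock = threading.Lock()

def worker(seed):
    rng = np.random.default_rng(seed)
    for _ in range(100):
        grad = rng.normal(size=4)       # stand-in for a locally computed update
        with lock:                      # apply immediately; no global barrier
            params[:] = params - 0.01 * grad

threads = [threading.Thread(target=worker, args=(s,)) for s in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(params)
```

The catch, which David notes next, is that learning from updates computed against slightly stale parameters can destabilize training, which is why new algorithms were needed to make this kind of asynchrony work.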

Ariel: Okay, and so that, just to clarify as well, that goes back to this idea that right now we’re seeing a lot of success just scaling up the computing, but at some point that could slow things down essentially, if we had a limit for how much computing is possible.

David: Yeah, and I guess one of my points is also that doing these kinds of scalings of computing requires some amount of algorithmic insight, or breakthrough if you want to be dramatic. In this DeepMind paper I talked about, they had to devise new reinforcement learning algorithms that would still be stable with this real-time asynchronous updating. And so, in a way, a lot of the research that's interesting right now is about finding ways to make the algorithms scale so that you can keep taking advantage of more and more hardware. And the evolution stuff also fits into that picture to some extent.

Ariel: Okay. I want to start making that transition into some of the concerns that we have for misuse around AI and how easy it is for people to be deceived by things that have been created by AI. But I want to start with something that’s hopefully a little bit more neutral, and talk about Google Duplex, which is the program that Google came out with, I think last May. I don’t know the extent to which it’s in use now, but they presented it, and it’s an AI assistant that can essentially make calls and set up appointments for you. So their examples were it could make a reservation at a restaurant for you, or it could make a reservation for you to get a haircut somewhere. And it got sort of mixed reviews, because on the one hand people were really excited about this, and on the other hand it was kind of creepy because it sounded human, and the people on the other end of the call did not know that they were talking to a machine.

So I was hoping you guys could talk a little bit I guess maybe about the extent to which that was an actual technological breakthrough versus just something—this one being more one of those breakthroughs that will impact society more directly. And then also I guess if you agree that this seems like a good place to transition into some of the safety issues.

David: Yeah, no, I would be surprised if they really told us about the details of how that worked. So it’s hard to know how much of an algorithmic breakthrough or algorithmic breakthroughs were involved. It’s very impressive, I think, just in terms of what it was able to do, and of course these demos that we saw were maybe selected for their impressiveness. But I was really, really impressed personally, just to see a system that’s able to do that.

Roman: It’s probably built on a lot of existing technology, but it is more about impact than what you can do with this. And my background is cybersecurity, so I see it as a great tool for like automating spear-phishing attacks on a scale of millions. You’re getting a real human calling you, talking to you, with access to your online data; Pretty much everyone’s gonna agree and do whatever the system is asking of you, if it’s credit card numbers, or social security numbers. So, in many ways it’s going to be a game changer.

Ariel: So I’m going to take that as a definite transition into safety issues. So, yeah, let’s start talking about, I guess, sort of human manipulation that’s happening here. First, the phrase “deep fake” shows up a lot. Can you explain what those are?

David: So “deep fakes” is basically just: you can make a fake video of somebody doing something or saying something that they did not actually do or say. People have used this to create fake videos of politicians, they’ve used it to create porn using celebrities. That was one of the things that got it on the front page of the internet, basically. And Reddit actually shut down the subreddit where people were doing that. But, I mean, there’s all sorts of possibilities.

Ariel: Okay, so I think the Reddit example was technically the very end of 2017. But all of this sort became more of an issue in 2018. So we’re seeing this increase in capability to both create images that seem real, create audio that seems real, create video that seems real, and to modify existing images and video and audio in ways that aren’t immediately obvious to a human. What did we see in terms of research to try to protect us from that, or catch that, or defend against that?

Roman: So here’s an interesting observation, I guess. You can develop some sort of a forensic tool to analyze it, and give you a percentage likelihood that it’s real or that it’s fake. But does it really impact people? If you see it with your own eyes, are you going to believe your lying eyes, or some expert statistician on CNN?

So the problem is it will still have tremendous impact on most people. We're not very successful at convincing people of well-established scientific facts as it is: they simply go outside, notice it's cold right now, and conclude global warming is false. I suspect we'll see exactly that with, let's say, fake videos of politicians, where a majority of people easily believe anything they hear or see once, versus any number of peer-reviewed publications disproving it.

David: I kind of agree. I mean, I think, when I try to think about how we would actually solve this kind of problem, I don’t think a technical solution that just allows somebody who has technical expertise to distinguish real from fake is going to be enough. We really need to figure out how to build a better trust infrastructure in our whole society which is kind of a massive project. I’m not even sure exactly where to begin with that.

Roman: I guess the good news is it gives you plausible deniability. If a video of me doing horrible things comes out, I can just claim it's fake.

Ariel: That’s good for someone. Alright, so, I mean, you guys are two researchers, I don’t know how into policy you are, but I don’t know if we saw as many strong policies being developed. We did see the implementation of the GDPR, and for people who aren’t familiar with the GDPR, it’s essentially European rules about what data companies can collect from your interactions online, and the ways in which you need to give approval for companies to collect your data, and there’s a lot more to it than that. One of the things that I found most interesting about the GDPR is that it’s entirely European based, but it had a very global impact because it’s so difficult for companies to apply something only in Europe and not in other countries. And so earlier this year when you were getting all of those emails about privacy policies, that was all triggered by the GDPR. That was something very specific that happened and it did make a lot of news, but in general I felt that we saw a lot of countries and a lot of national and international efforts for governments to start trying to understand how AI is going to be impacting their citizens, and then also trying to apply ethics and things like that.

I’m sort of curious, before we get too far into anything: just as researchers, what is your reaction to that?

Roman: So I never got as much spam as I did the week they released this new policy, and that kind of gives you a pretty good summary of what to expect. If you look at history, we have regulations against spam, for example, and computer viruses are illegal, yet both are still everywhere. So that's a very expected result: regulation is not going to solve the technical problems, right?

David: I guess I like that they’re paying attention and they’re trying to tackle these issues. I think the way GDPR was actually worded, it has been criticized a lot for being either much too broad or demanding, or vague. I’m not sure—there are some aspects of the details of that regulation that I’m not convinced about, or not super happy about. I guess overall it seems like people who are making these kinds of decisions, especially when we’re talking about cutting edge machine learning, it’s just really hard. I mean, even people in the fields don’t really know how you would begin to effectively regulate machine learning systems, and I think there’s a lot of disagreement about what a reasonable level of regulation would be or how regulations should work.

People are starting to have that sort of conversation in the research community a little bit more, and maybe we’ll have some better ideas about that in a few years. But I think right now it seems premature to me to even start trying to regulate machine learning in particular, because we just don’t really know where to begin. I think it’s obvious that we do need to think about how we control the use of the technology, because it’s just so powerful and has so much potential for harm and misuse and accidents and so on. But I think how you actually go about doing that is a really unclear and difficult problem.

Ariel: So for me it’s sort of interesting, we’ve been debating a bit today about technological breakthroughs versus societal impacts, and whether 2018 actually had as many breakthroughs and all of that. But I would guess that all of us agree that AI is progressing a lot faster than government does.

David: Yeah.

Roman: That’s almost a tautology.

Ariel: So I guess as researchers, what concerns do you have regarding that? Like do you worry about the speed at which AI is advancing?

David: Yeah, I would say I definitely do. I mean, we were just talking about this issue with fakes and how that’s going to contribute to things like fake news and erosion of trust in media and authority and polarization of society. I mean, if AI wasn’t going so fast in that direction, then we wouldn’t have that problem. And I think the rate that it’s going, I don’t see us catching up—or I should say, I don’t see the government catching up on its own anytime soon—to actually control the use of AI technology, and do our best anyways to make sure that it’s used in a safe way, and a fair way, and so on.

I think in and of itself it’s maybe not bad that the technology is progressing fast. I mean, it’s really amazing; Scientifically there’s gonna be all sorts of amazing applications for it. But there’s going to be more and more problems as well, and I don’t think we’re really well equipped to solve them right now.

Roman: I’ll agree with David, I’m very concerned at its relative rate of progress. AI development progresses a lot faster than anything we see in AI safety. AI safety is just trying to identify problem areas, propose some general directions, but we have very little to show in terms of solved problems.

If you look at work in adversarial fields, maybe a little bit in cryptography, the good guys have always been a step ahead of the bad guys, whereas here you barely have any good guys as a percentage: less than 1% of researchers work directly on safety full-time. It's the same situation with funding. So it's not a very optimistic picture at this point.

David: I think it’s worth definitely distinguishing the kind of security risks that we’re talking about, in terms of fake news and stuff like that, from long-term AI safety, which is what I’m most interested in, and think is actually even more important, even though I think there’s going to be tons of important impacts we have to worry about already, and in the coming years.

And the long-term safety stuff is really more about artificial intelligence that becomes broadly capable and as smart as or smarter than humans across the board. There, there are maybe a few more signs of hope if I look at how the field might progress, because a lot of the problems that are relevant to controlling or aligning or understanding these kinds of generally intelligent systems are probably going to need to be solved anyway in order to make systems that are more capable in the near future.

So I think we’re starting to see issues with trying to get AIs to do what we want, and failing to, because we just don’t know how to specify what we want. And that’s, I think, basically the core of the AI safety problem—is that we don’t have a good way of specifying what we want. An example of that is what are called adversarial examples, which sort of demonstrate that computer vision systems that are able to do a really amazing job at classifying images and seeing what’s in an image and labeling images still make mistakes that humans just would never make. Images that look indistinguishable to humans can look completely different to the AI system, and that means that we haven’t really successfully communicated to the AI system what our visual concepts are. And so even though we think we have done a good job of telling it what to do, it’s like, “tell us what this picture is of”—the way that it found to do that really isn’t the way that we would do it and actually there’s some very problematic and unsettling differences there. And that’s another field that, along with the ones that I mentioned, like generative models and GANs, has been receiving a lot more attention in the last couple of years, which is really exciting from the point of view of safety and specification.
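
A minimal sketch of the adversarial-example idea David describes, in the spirit of the fast gradient sign method but against a toy linear classifier rather than a real vision system (the weights, the “image,” and the epsilon budget here are all illustrative stand-ins):

```python
# A tiny per-pixel perturbation, invisible to a human, flips the prediction
# of a toy linear classifier.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                  # weights of a toy linear classifier
x = rng.normal(size=1000)                  # an "image" (flattened pixels)
label = np.sign(w @ x)                     # the model's current prediction: +1 or -1

epsilon = 0.1                              # tiny per-pixel budget
x_adv = x - epsilon * label * np.sign(w)   # nudge every pixel against the prediction

print(np.max(np.abs(x_adv - x)))           # each pixel moved by at most 0.1
print(np.sign(w @ x), np.sign(w @ x_adv))  # the prediction flips for typical draws
```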

Ariel: So, would it be fair to say that you think we’ve had progress or at least seen progress in addressing long-term safety issues, but some of the near-term safety issues, maybe we need faster work?

David: I mean I think to be clear, we have such a long way to go to address the kind of issues we’re going to see with generally intelligent and super intelligent AIs, that I still think that’s an even more pressing problem, and that’s what I’m personally focused on. I just think that you can see that there are going to be a lot of really big problems in the near term as well. And we’re not even well equipped to deal with those problems right now.

Roman: I’ll generally agree with David, I’m more concerned about long-term impacts. They are both more challenging and more impactful. It seems like short-term things may be problematic right now, but the main difficulty is that we didn’t start working on them in time. So problems like algorithmic fairness, bias, and technological unemployment are social issues which are quite solvable; they are not really that difficult from engineering or technical points of view. Whereas long-term control of systems which are more intelligent than you is very much unsolved at this point, in any even toy model. So I would agree with the part about bigger concerns, but I think the current problems we have today are already impacting people; the good news is we know how to do better.

David: I’m not sure that we know how to do better exactly. I think for a lot of these problems, the ones that you mentioned, it’s more a problem of willpower and developing political solutions. But with the deepfakes, this is something that I think requires a bit more than a technical solution, in the sense of how we organize our society so that people are either educated enough to understand this stuff, or so that people actually have someone they trust, and have a reason to trust, whose word they can take for it.

Roman: That sounds like a great job, I’ll take it.

Ariel: It almost sounds like something we need to have someone doing in person, though.

So going back to this past year: were there, say, groups that formed, or research teams that came together, or just general efforts that, while maybe they didn’t produce something yet, you think could produce something good, either in safety or AI in general?

David: I think something interesting is happening in terms of the way AI safety is perceived and talked about in the broader AI and machine learning community. It’s a little bit like this phenomenon where once we solve something people don’t consider it AI anymore. So I think machine learning researchers, once they actually recognize the problem that the safety community has been sort of harping on and talking about and saying like, “Oh, this is a big problem”—once they say, “Oh yeah, I’m working on this kind of problem, and that seems relevant to me,” then they don’t really think that it’s AI safety, and they’re like, “This is just part of what I’m doing, making something that actually generalizes well and learns the right concept, or making something that is actually robust, or being able to interpret the model that I’m building, and actually know how it works.”

These are all things that people are doing a lot of work on these days in machine learning that I consider really relevant for AI safety. So I think that’s like a really encouraging sign, in a way, that the community is sort of starting to recognize a lot of the problems, or at least instances of a lot of the problems that are going to be really critical for aligning generally intelligent AIs.

Ariel: And Roman, what about you? Did you see anything sort of forming in the last year that maybe doesn’t have some specific result, but that seemed hopeful to you?

Roman: Absolutely. So I’ve mentioned that there are very few actual AI safety researchers compared to the number of AI developers, researchers directly creating more capable machines. But the growth rate is much better, I think. The number of organizations, the number of people who show interest in it, the number of papers: all of these, I think, are growing at a much faster rate, and it’s encouraging because, as David said, it’s kind of like this convergence, if you will, where more and more people realize, “I cannot say I built an intelligent system if it kills everyone.” That’s just not what an intelligent system is.

So safety and security become integral parts of it. I think Stuart Russell has a great example where he talks about bridge engineering. We don’t talk about safe bridges and secure bridges—there’s just bridges. If it falls down, it’s not a bridge. Exactly the same is starting to happen here: People realize, “My system cannot fail and embarrass the company, I have to make sure it will not cause an accident.”

David: I think that a lot of people are thinking about that way more and more, which is great, but there is a sort of research mindset, where people just want to understand intelligence, and solve intelligence. And I think that’s kind of a different pursuit. Solving intelligence doesn’t mean that you make something that is safe and secure, it just means you make something that’s really intelligent, and I would like it if people who had that mindset were still, I guess, interested in or respectful of or recognized that this research is potentially dangerous. I mean, not right now necessarily, but going forward I think we’re going to need to have people sort of agree on having that attitude to some extent of being careful.

Ariel: Would you agree though that you’re seeing more of that happening?

David: Yeah, absolutely, yeah. But I mean it might just happen naturally on its own, which would be great.

Ariel: Alright, so before I get to my very last question, is there anything else you guys wanted to bring up about 2018 that we didn’t get to yet?

David: So we were talking about AI safety and there’s kind of a few big developments in the last year. I mean, there’s actually too many I think for me to go over all of them, but I wanted to talk about something which I think is relevant to the specification problem that I was talking about earlier.

Ariel: Okay.

David: So, there are three papers in the last year, actually, on what I call superhuman feedback. The idea motivating these works is that even specifying what we want on a particular instance in some particular scenario can be difficult. So typically the way that we would think about training an AI that understands our intentions is to give it a bunch of examples, and say, “In this situation, I prefer if you do this. This is the kind of behavior I want,” and then the AI is supposed to pick up on the patterns there and sort of infer what our intentions are more generally.

But there can be some things that we would like AI systems to be competent at doing, ideally, that are really difficult to even assess individual instances of. Two examples that I like to use are designing a transit system for a large city, or maybe for a whole country, or the world or something. That’s something that right now is done by a massive team of people. Using that whole team to sort of assess a proposed design that the AI might make would be one example of superhuman feedback, because it’s not just a single human. But you might want to be able to do this with just a single human and a team of AIs helping them, instead of a team of humans. And there’s a few proposals for how you could do that that have come out of the safety community recently, which I think are pretty interesting.

Ariel: Why is it called superhuman feedback?

David: Actually, this is just my term for it. I don’t think anyone else is using this term.

Ariel: Okay.

David: Sorry if that wasn’t clear. The reason I use it is because there are three different, like, lines of work here. So there’s these two papers from OpenAI on what’s called amplification and debate, and then another paper from DeepMind on reward learning and recursive reward learning. And I like to view these as all kind of trying to solve the same problem: how can we assist humans and enable them to make good, informed judgements that actually reflect what their preferences are, when they’re not capable of doing that by themselves, unaided? So it’s superhuman in the sense that it’s better than a single human can do. And these proposals are also aspiring to do things I think that even teams of humans couldn’t do, by having AI helpers that sort of help you do the evaluation.

An example that Jan—who’s the lead author on the DeepMind paper, which I also worked on—gives is assessing an academic paper. So if you yourself aren’t familiar with the field and don’t have the expertise to assess this paper, you might not be able to say whether or not it should be published. But if you can decompose that task into things like: is the paper valid? Are the proofs valid? Are the experiments following a reasonable protocol? Is it novel? Is it formatted correctly for the venue where it’s submitted? And you got answers to all of those from helpers, then you could make the judgment. You’d just be like, okay, it meets all of the criteria, so it should be published. The idea would be to get AI helpers to do those sorts of evaluations for you across a broad range of tasks, and in that way allow us to explain to AIs, or teach AIs, what we want across a broad range of tasks.
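
A minimal sketch of that decomposition idea (the subquestions and the everything-must-pass rule here are illustrative assumptions, not the actual amplification, debate, or recursive reward learning algorithms):

```python
# Decompose a judgment the overseer can't make directly ("should this paper
# be published?") into narrow subquestions helpers can answer, then aggregate.
SUBQUESTIONS = [
    "Are the proofs valid?",
    "Do the experiments follow a reasonable protocol?",
    "Is the work novel?",
    "Is it formatted correctly for the venue?",
]

def ask_helper(question: str, paper: str) -> bool:
    """Stand-in for a helper (a human or an AI assistant) answering one
    narrow subquestion it is competent to judge."""
    return True  # placeholder answer; a real helper would evaluate the paper

def should_publish(paper: str) -> bool:
    # The overseer never judges the whole paper itself; it only combines
    # the helpers' answers to the subquestions.
    return all(ask_helper(q, paper) for q in SUBQUESTIONS)

print(should_publish("a-submitted-paper.pdf"))
```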

Ariel: So, okay, and so then were there other things that you wanted to mention as well?

David: I do feel like I should talk about another thing that was, again, not developed last year, but really took off last year—this new kind of neural network architecture called the transformer, which is basically being used in a lot of places where convolutional neural networks and recurrent neural networks were being used before. Those were kind of the two main driving factors behind the deep learning revolution: vision, where you use convolutional networks, and things that have a sequential structure, like speech or text, where people were using recurrent neural networks. And this architecture was actually motivated originally by the same sort of scaling considerations, because it allowed them to remove some of the most computationally heavy parts of running these kinds of models in the context of translation, and basically make it a hundred times cheaper to train a translation model. But since then it’s also been used in a lot of other contexts and has been shown to be a really good replacement for these other kinds of models for a lot of applications.

And I guess the way to describe what it’s doing is it’s based on what’s called an attention mechanism, which is basically a way of giving a neural network the ability to pay more attention to different parts of an input than other parts. So like to look at one word that is most relevant to the current translation task. So if you’re imagining outputting words one at a time, then because different languages have words in different order, it doesn’t make sense to sort of try and translate the next word. You want to look through the whole input sentence, like a sentence in English, and find the word that corresponds to whatever word should come next in your output sentence.

And that was sort of the original inspiration for this attention mechanism, but since then it’s been applied in a bunch of different ways, including paying attention to different parts of the model’s own computation, or paying attention to different parts of images. And basically, using this attention mechanism in place of the other neural architectures, the ones people thought were really important for capturing temporal dependencies across something sequential like a sentence you’re trying to translate, turned out to work really well.
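
A minimal sketch of that mechanism—scaled dot-product attention, the core operation inside the transformer—with toy shapes standing in for real queries, keys, and values:

```python
# For each output position, score all input positions, turn the scores into
# weights, and take a weighted average of the input values.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q: queries (n_out, d); K: keys (n_in, d); V: values (n_in, d_v)
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how relevant is each input position
    weights = softmax(scores, axis=-1)       # "pay more attention" to some of them
    return weights @ V                       # weighted average of the input values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(5, 8)), rng.normal(size=(7, 8)), rng.normal(size=(7, 8))
print(attention(Q, K, V).shape)              # (5, 8): one output vector per query
```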

Ariel: So I want to actually pass this to Roman real quick. Did you have any comments that you wanted to add to either the superhuman feedback or the transformer architecture?

Roman: Sure, so superhuman feedback: I like the idea and I think people should be exploring it, but we can look at similar examples from before. For a while we had a situation where teams of human chess players and machines did better than unaided machines or unaided humans. That lasted about ten years. And then machines became so much better that humans didn’t really contribute anything; consulting them was just an additional bottleneck. I wonder if long term this solution will face similar problems. It’s very useful right now, but I don’t know if it will scale.

David: Well I want to respond to that, because I think it’s—the idea here is, in my mind, to have something that actually scales in the way that you’re describing, where it can sort of out-compete pure AI systems. Although I guess some people might be hoping that that’s the case, because that would make the strategic picture better in terms of people’s willingness to use safer systems. But this is more about just how can we even train systems—if we have the willpower, if people want to build a system that has the human in charge, and ends up doing what the human wants—how can we actually do that for something that’s really complicated?

Roman: Right. And as I said, I think it’s a great way to get there. So this part I’m not concerned about. It’s a long-term game with that.

David: Yeah, no, I mean I agree that that is something to be worried about as well.

Roman: There is a possibility of manipulation if you have a human in the loop, and that itself makes it not safer but more dangerous in certain ways.

David: Yeah, one of the biggest concerns I have for this whole line of work is that the human needs to really trust the AI systems that are assisting it, and I just don’t see that we have good enough mechanisms for establishing trust and building trustworthy systems right now, to really make this scale well without introducing a lot of risk for things like manipulation, or even just compounding of errors.

Roman: But those approaches, like the debate approach: it just feels like they’re setting up humans for manipulation from both sides, a contest of who’s better at breaking the human’s psychological model.

David: Yep, I think it’s interesting, and I think it’s a good line of work. But I think we haven’t seen anything that looks like a convincing solution to me yet.

Roman: Agreed.

Ariel: So, Roman, was there anything else that you wanted to add about things that happened in the last year that we didn’t get to?

Roman: Well, as a professor, I can tell you that students stop learning after about 40 minutes. So I think at this point we’re just being counterproductive.

Ariel: So for what it’s worth, our most popular podcasts have all exceeded two hours. So, what are you looking forward to in 2019?

Roman: Are you asking about safety or development?

Ariel: Whatever you want to answer. Just sort of in general, as you look toward 2019, what relative to AI are you most excited and hopeful to see, or what do you predict we’ll see?

David: So I’m super excited for people to hopefully pick up on this reward learning agenda that I mentioned, that Jan and me and people at DeepMind worked on. I was actually pretty surprised how little work has been done on this. The idea of this agenda, at a high level, is just: we want to learn a reward function—which is like a score that tells an agent how well it’s doing—learn reward functions that encode what we want the AI to do, and that’s the way that we’re going to specify tasks to an AI. And I think from a machine learning researcher’s point of view this is kind of the most obvious solution to specification problems and to safety—just learn a reward function. But very few people are really trying to do that, and I’m hoping that we’ll see more people trying to do that, and encountering and addressing some of the challenges that come up.
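
A minimal sketch of the reward-learning idea, under toy assumptions (a linear reward model fit to synthetic “human scores”; the actual agenda covers much richer feedback and models):

```python
# Fit a reward function to human judgments, then let the agent optimize the
# learned reward instead of a hand-written one.
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=4)                   # stands in for "what the human wants"

states = rng.normal(size=(200, 4))            # behavior shown to the human
human_scores = states @ true_w + 0.1 * rng.normal(size=200)  # noisy human feedback

# Least-squares fit: learn r(s) = s @ w_hat from the feedback.
w_hat, *_ = np.linalg.lstsq(states, human_scores, rcond=None)

def learned_reward(state):
    return state @ w_hat                      # the score the agent would optimize

print(np.allclose(w_hat, true_w, atol=0.1))   # roughly recovers the intended reward
```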

Roman: So I think by definition we cannot predict short-term breakthroughs. So what we’ll see is a lot of continuation of 2018 work, and previous work scaling up. So, if you have, let’s say, Texas hold ’em poker: so for two players, we’ll take it to six players, ten players, something like that. And you can make similar projections for other fields, so the strategy games will be taken to new maps, involve more players, maybe additional handicaps will be introduced for the bots. But that’s all we can really predict, kind of gradual improvement.

Protein folding will be even more efficient in terms of predicting actual structures: any accuracy rates that were climbing from 80% to 90% will hit 95, 96. And this is a very useful way of predicting what we can anticipate, and I’m trying to do something similar with accidents. So if we can see historically what was going wrong with systems, we can project those trends forward. And I’m happy to say that there are now at least two or three different teams collecting those examples and trying to analyze them and create taxonomies for them. So that’s very encouraging.

David: Another thing that comes to mind is—I mentioned adversarial examples earlier, which are these imperceptible differences to a human that change how the AI system perceives something like an image. And so far, for the most part, the field has been focused on really imperceptible changes. But I think now people are starting to move towards a broader idea of what counts as an adversarial example: basically anything that a human thinks clearly should belong to this class, and the AI system thinks clearly should belong to this other class, that has sort of been constructed deliberately to create that kind of a difference.

And I think this is going to be really interesting and exciting to see how the field tries to move in that direction, because as I mentioned, I think it’s hard to define how humans decide whether or not something is a picture of a cat or something. And the way that we’ve done it so far is just by giving lots of examples of things that we say are cats. But it turns out that that isn’t sufficient, and so I think this is really going to push a lot of people closer towards thinking about some of the really core safety challenges within the mainstream machine learning community. So I think that’s super exciting.

Roman: It is a very interesting topic, and I am in particular looking at a side subject in that, which is adversarial inputs for humans: machines developing what I guess are kind of like optical illusions and audio illusions, where a human mislabels inputs in a predictable way, which allows for manipulation.

Ariel: Along very similar lines, I think I want to modify my questions slightly, and also ask: coming up in 2019, what are you both working on that you’re excited about, if you can tell us?

Roman: Sure, so there have been a number of publications looking at particular limitations, either through mathematical proofs or through well-known economic models, and at what is possible, in fact, from computational complexity points of view. And I’m trying to integrate those into a single model showing—in principle, not in practice, but even in principle—what can we do with the AI control problem? How solvable is it? Is it solvable? Is it not solvable? Because I don’t think there is a mathematically rigorous proof, or even a rigorous argument, either way. So I think that will be helpful, especially for arguing about the importance of the problem and resource allocation.

David: I’m trying to think what I can talk about. I guess right now I have some ideas for projects that are not super well thought out, so I won’t talk about those. And I have a project that I’m trying to finish off which is a little bit hard to describe in detail, but I’ll give the really high level motivation for it. And it’s about something that people in the safety community like to call capability control. I think Nick Bostrom has these terms, capability control and motivation control. And so what I’ve been talking about most of the time in terms of safety during this podcast was more like motivation control, like getting the AI to want to do the right thing, and to understand what we want. But that might end up being too hard, or sort of limited in some respect. And the alternative is just to make AIs that aren’t capable of doing things that are dangerous or catastrophic.

A lot of people in the safety community sort of worry about capability control approaches failing, because if you have a very intelligent agent, it will view these attempts to control it as undesirable, and try and free itself from any constraints that we give it. And I think a way of trying to get around that problem is to look at capability control through the lens of motivation control. So basically make an AI that doesn’t want to influence certain things, and maybe doesn’t have some of these drives to influence the world, or to influence the future. And so in particular I’m trying to see how we can design agents that really don’t try to influence the future, and really only care about doing the right thing, right now. And if we try and do that in a sort of naïve way, there are ways that can fail, and we can get some sort of emergent drive to still try and optimize over the long term, or try and have some influence on the future. And I think to the extent we see things like that, that’s problematic from this perspective of: let’s just make AIs that aren’t capable of or motivated to influence the future.

Ariel: Alright! I think I’ve kept you both on for quite a while now. So, David and Roman, thank you so much for joining us today.

David: Yeah, thank you both as well.

Roman: Thank you so much.

AI Alignment Podcast: The Byzantine Generals’ Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi (Beneficial AGI 2019)

Three generals are voting on whether to attack or retreat from their siege of a castle. One of the generals is corrupt and two of them are not. What happens when the corrupted general sends different answers to the other two generals?

Byzantine fault is “a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the ‘Byzantine Generals’ Problem’, developed to describe this condition, where actors must agree on a concerted strategy to avoid catastrophic system failure, but some of the actors are unreliable.”

The Byzantine Generals’ Problem and the associated issues in maintaining reliable distributed computing networks are illuminating for both AI alignment and for modern networks we interact with like YouTube, Facebook, or Google. By exploring this space, we are shown the limits of reliable distributed computing, the safety concerns and threats in this space, and the tradeoffs we will have to make for varying degrees of efficiency or safety.

The Byzantine Generals’ Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi is the ninth podcast in the AI Alignment Podcast series, hosted by Lucas Perry. El Mahdi pioneered Byzantine-resilient machine learning, devising a series of provably safe algorithms he recently presented at NeurIPS and ICML. Interested in theoretical biology, his work also includes the analysis of error propagation in networks, applied to both neural and biomolecular networks. This particular episode was recorded at the Beneficial AGI 2019 conference in Puerto Rico. We hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

Topics discussed in this episode include:

  • The Byzantine Generals’ Problem
  • What this has to do with artificial intelligence and machine learning
  • Everyday situations where this is important
  • How systems and models update in the context of asynchrony
  • Why it’s hard to do Byzantine-resilient distributed ML
  • Why this is important for long-term AI alignment

An overview of Adversarial Machine Learning and where Byzantine-resilient Machine Learning stands on the map is available in this video (9 min). A specific focus on Byzantine Fault Tolerant Machine Learning is available here (~7 min).

In particular, El Mahdi argues in the first interview (and in the podcast) that technical AI safety is not only relevant for long-term concerns, but is crucial in current pressing issues such as social media poisoning of public debates and misinformation propagation, both of which fall under poisoning-resilience. Another example he likes to use is social media addiction, which could be seen as a case of (non) Safely Interruptible learning. This value misalignment is already an issue with the primitive forms of AI that optimize our world today, as they maximize our watch-time all over the internet.

The latter (Safe Interruptibility) is another technical AI safety question El Mahdi works on, in the context of Reinforcement Learning. This line of research was initially dismissed as “science fiction”; in this interview (5 min), El Mahdi explains why it is a realistic question that arises naturally in reinforcement learning.

El Mahdi’s work on Byzantine-resilient Machine Learning and other relevant topics is available on his Google Scholar profile. A modification of the popular machine learning library TensorFlow to make it Byzantine-resilient (and also support communication over UDP channels, among other things) has recently been open-sourced on GitHub by El Mahdi’s colleagues, based on his algorithmic work that we mention in the podcast.

To connect with him over social media

You can listen to the podcast above or read the transcript below.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast series. I’m Lucas Perry, and today we’ll be speaking with El Mahdi El Mhamdi on the Byzantine problem, Byzantine tolerance, and poisoning in distributed learning and computer networks. If you find this podcast interesting or useful, please give it a like and follow us on your preferred listening platform. El Mahdi El Mhamdi pioneered Byzantine-resilient machine learning, devising a series of provably safe algorithms he recently presented at NeurIPS and ICML. Interested in theoretical biology, his work also includes the analysis of error propagation in networks, applied to both neural and biomolecular networks. With that, El Mahdi’s going to start us off with a thought experiment.

El Mahdi: Imagine you are part of a group of three generals, say, from the Byzantine army, surrounding a city you want to invade, but you also want to retreat if retreat is the safest choice for your army. You don’t want to attack when you will lose. Those three generals are on three sides of the city. They sent some intelligence inside the walls of the city, and depending on this intelligence, they either think they have a good chance of winning and would like to attack, or they think they will be defeated by the city, so it’s better for them to retreat. Your final decision would be a majority vote, so you communicate through some horsemen that, let’s say, are reliable for the sake of this discussion. But one of you might have been corrupted by the city.

The situation would be problematic if, say, there are General A, General B, and General C. General A decided to attack; General B decided to retreat based on their intelligence, for some legitimate reason. A and B are not corrupted, and say that C is. Of course, A and B can’t figure out who was corrupted. What the corrupt general would do is tell each side what it wants to hear: A wanted to attack, so they tell A, “I also want to attack. I will attack.” Then they tell General B, “I also want to retreat. I will retreat.” A receives two attack votes and one retreat vote. General B receives two retreat votes and only one attack vote. If they trust everyone and don’t do any double checking, this would be a disaster.

A will attack alone; B will retreat; C, of course, doesn’t care, because he was corrupted by the city. You could say they can circumvent that by double checking. For example, A and B can communicate about what C told them. Let’s say that every general communicates with every general on what he decides and also on what the remaining part of the group told him. A will report to B, “General C told me to attack.” Then B will tell A, “General C told me to retreat.” But then A and B have no way of concluding whether the inconsistency comes from C being corrupted, or from the general reporting on what C told them being corrupted.

I am General A. I have all the valid reasons to think, with the same likelihood, that C is maybe lying to me, or that B might be lying to me: I can’t know whether you are misreporting what C told you. So it’s enough for the city to corrupt one general out of three; it’s impossible to come up with an agreement in this situation. You can easily see that this generalizes to more than three generals, say 100, as soon as the non-corrupted ones are less than two-thirds, because what we saw with three generals would happen with the fractions that are not corrupted. Say that you have strictly more than 33 corrupted generals out of 100: what they can do is switch the majority vote on each side.

But worse than that, say that you have 34 corrupted generals and 66 non-corrupted generals, and say that those 66 non-corrupted generals were split, 33 on the attack side and 33 on the retreat side. The problem is that when you are on some side, say the retreat side, you have in front of you a group of 34 plus 33 in which there is a majority of malicious ones. This majority can collude; it’s part of the Byzantine hypothesis. The malicious ones can collude, and they will report a majority of inconsistent messages about the minority of 33. You can’t provably realize that the inconsistency is coming from the group of 34, because they are a majority.
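
A minimal sketch of the three-general scenario in code, with the corrupt general echoing whatever each honest general already believes (votes are +1 for attack, -1 for retreat):

```python
# Corrupt C tells each honest general what it wants to hear, so the two
# honest generals tally different majorities and split their decision.
def tally(own_vote, messages):
    votes = [own_vote] + messages
    return "attack" if sum(votes) > 0 else "retreat"

a_vote, b_vote = +1, -1                 # honest generals' own intelligence
c_to_a, c_to_b = +1, -1                 # C's contradictory messages

print(tally(a_vote, [b_vote, c_to_a]))  # A sees 2 attack vs 1 retreat -> "attack"
print(tally(b_vote, [a_vote, c_to_b]))  # B sees 2 retreat vs 1 attack -> "retreat"
```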

Lucas: When we’re thinking about, say, 100 persons or 100 generals, why is it that they’re going to be partitioned automatically into these three groups? What if there’s more than three groups?

El Mahdi: Here we’re doing the easiest form of Byzantine agreement: we want to agree on attack versus retreat. When it becomes multi-dimensional, it gets even messier; there are more and more impossibility results. Even with a binary decision, there is an impossibility theorem on reaching agreement if you have unsigned messages carried by the horsemen: whenever the corrupted group is a third or more, you provably cannot come up with an agreement. There are many variants of this problem, of course, depending on what hypotheses you can assume. Here, without even mentioning it, we were assuming bounded delays: the horsemen always arrive eventually. But suppose a horseman could die on the way, and you don’t have any way to check whether he arrived or not. You could be waiting forever, because you have no proof that the horseman died on the way.

You don’t have any mechanism to tell you, “Stop waiting for the horseman. Stop waiting for the message from General B, because the horseman died.” You could be waiting forever, and there are theorems that show what happens when you have unbounded delays. By the way, in distributed computing, whenever you have unbounded delays, we speak about asynchrony. If you have asynchronous communication, there is a very famous theorem that tells you consensus is impossible, not even in the malicious case, but just in …

Lucas: In the mundane normal case.

El Mahdi: Yes. It’s called the Fischer–Lynch–Paterson theorem.

Lucas: Right, so just to dive down into the crux of the problem: the issue here fundamentally is that when groups of computers or generals or whatever are trying to check who is lying, comparing discrepancies and similarities in the reports of who claimed what, once the corrupted agents can form a majority among those reports, then, yeah, you’re screwed.

El Mahdi: Yes. It’s impossible to achieve agreement. There is always some fraction of malicious agents above which it is provably impossible to agree. Depending on the situation, it will be a third, or sometimes a half or a quarter, depending on your specifications.

Lucas: If you start tweaking the assumptions behind the thought experiment, then it changes the number of corrupted machines or agents required to flip the majority and poison the communication.

El Mahdi: Exactly. But for example, you mentioned something very relevant to today’s discussion, which is what if we were not agreeing on two decisions, retreat, attack. What if we were agreeing on some multi-dimensional decision? Attack or retreat on one dimension and then …

Lucas: Maybe hold, keep the siege going.

El Mahdi: Yeah, just add possibilities or dimensions and you get multi-dimensional agreements. There are even more hopeless results in that direction.

Lucas: There are more impossibility theorems and issues where these distributed systems are vulnerable to small numbers of systems being corrupt and screwing over the entire distributed network.

El Mahdi: Yes. Maybe now we can slightly move to machine learning.

Lucas: I’m happy to move into machine learning now. We’ve talked about this, and I think our audience can probably tell how this has to do with computers. Yeah, just dive in what this has to do with machine learning and AI and current systems today, and why it even matters for AI alignment.

El Mahdi: As a brief transition: solving the agreement problem, besides being a very nice historic thought experiment, is behind the consistency of safety-critical systems like banking systems. Imagine we have a shared account. Maybe you remove 10% of the amount and then she or he adds $10 to the account. You remove the 10% in New York and she or he puts in the $10 in Los Angeles. The banking system has to agree on the ordering, because minus 10% then plus $10 is not the same result as plus $10 then minus 10%. (Starting from $100: the first ordering leaves $100, the second leaves $99.) The final balance of the account would not be the same.

Lucas: Right.

El Mahdi: Banking systems routinely solve decisions that fall into agreement. If you work on some document sharing platform, like Dropbox or Google Docs, whatever, and we are collaboratively writing a document, me and you, the document sharing platform has to solve, in real time, agreements about the ordering of operations, so that you and I always keep seeing the same thing. This has to happen while some of the machines interconnecting us are failing, whether because there was an electric crash or something, a data center lost some machines, or there was a restart, a bug, or a machine taken away. What we want in distributed computing is communication schemes between machines that guarantee this consistency that comes from agreement, as long as some fraction of the machines is reliable. What this has to do with artificial intelligence and machine learning reliability is that, with some colleagues, we are trying to encompass one of the major issues in machine learning reliability inside the Byzantine fault tolerance umbrella. For example, take poisoning attacks.

Lucas: Unpack what poisoning attacks are.

El Mahdi: For example, imagine you are training a model on what are good videos to recommend given some keyword search. If you search for “medical advice for young parents on vaccines,” this is a label. Let’s assume for the sake of simplicity that a video that tells you not to take your kid for vaccines is not what we mean by medical advice for young parents on vaccines, because that’s what medical experts agree on. We want our system to learn that antivaxxers, anti-vaccine propaganda, is not what people are searching for when they type those keywords. So suppose a world where we care about accuracy, okay? Imagine you want to train a machine learning model that gives you accurate results for your search. Let’s also, for the sake of simplicity, assume that a majority of people on the internet are honest.

Let’s assume that more than 50% of people are not actively trying to poison the internet. Yeah, this is very optimistic, but let’s assume that. What we can show, and what my colleagues and I started this line of research with, is that you can easily prove that one single malicious agent can provably poison a distributed machine learning scheme. Imagine you are this video sharing platform. Whenever people behave on your platform, this generates what we call gradients, which update your model. It only takes a few hyperactive accounts generating behavior that is powerful enough to pull what we call the average gradient, because what distributed machine learning is doing, at least up to today, if you read the source code of most distributed machine learning frameworks, is averaging gradients.

Imagine you, Lucas Perry, just googled a video on the Parkland shootings. Then the video sharing platform shows you a video telling you that David Hogg and Emma Gonzalez and those kids behind the March for Our Lives movement are crisis actors. The video labels these kids as crisis actors. It obviously has a wrong label, so it is what I would call a poisoned data point. If you are a non-malicious agent on the video sharing platform, you will dislike the video. You will not approve it. You’re likely to flag it. This should generate a gradient that pushes the model in that direction, so the gradient will update the model toward a state where it stops thinking that this video is relevant for someone searching “Parkland shooting survivors.” What can happen, if your machine learning framework is just averaging gradients, is that a bunch of hyperactive people on some topic can poison the average and pull it toward the direction where the model is reinforced in thinking, “Yeah, those kids are crisis actors.”
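
A minimal sketch of why plain averaging is so poisonable: with n workers, a single malicious worker that can estimate the others’ gradients can steer the average wherever it wants, by sending n times its target minus the sum of the honest gradients. All values here are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(99, 10))  # 99 honest gradients

target = np.full(10, -5.0)                    # where the attacker wants the update
malicious = 100 * target - honest.sum(axis=0) # one crafted gradient

update = np.vstack([honest, malicious]).mean(axis=0)
print(np.allclose(update, target))            # True: the average is fully hijacked
```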

Lucas: This is the case because the hyperactive accounts are seen to be given more weight than accounts which are less active in the same space. But this extra weighting that these accounts will get from their hyperactivity in one certain category or space over another, how is the weighting done? Is it just time spent per category or does it have to do with submissions that agree with the majority?

El Mahdi: We don’t even need to go into the details, because we don’t know. I’m talking about a general setting where you have a video sharing platform aggregating gradients from behavior. Now, let’s raise the abstraction level. You are doing gradient descent, so you have a loss function that you want to minimize, an error function. The error function is the mismatch between what you predict and what the user tells you. The user tells you this is a wrong prediction, and then you move in the direction where users stop telling you it’s the wrong prediction. That’s how you minimize the loss function. Users behave, and from their behavior you generate gradients.

What you do now, in the state-of-the-art way of doing distributed machine learning, is average all those gradients. Averaging is well known not to be resilient. If you have a room of poor academics earning a few thousand dollars and then a billionaire jumps into the room, an algorithm that reasons by averaging will think this is a room of millionaires, because the average salary would be a couple of hundred million. But the median is very obvious to do when it comes to salaries and scalar numbers, because you can rank them.

Lucas: Right.

El Mahdi: You rank numbers and then decide, “Okay, this is the ordering. This is the number that falls in the middle. This is the upper half, this is the lower half, and this is the median.” When it becomes high dimensional, the median is a bit tricky; it has some computational issues. Even if you compute what we call the geometric median, an attacker can still leverage the fact that you’re only approximating it, because there’s no closed formula; there’s no closed form to compute the median in high dimension. But worse than that, what we showed in one of our follow-up works is that, because machine learning is done in very, very high dimensions, you have a curse-of-dimensionality issue that makes it possible for attackers to sneak in without being spotted as far from the median.

An attack can still look like the median vector. I take advantage of the fact that those vectors, those gradients, are extremely high dimensional. I look for all the disagreements. Let’s say you have a group of a couple hundred gradients, and I’m the only malicious one. I look at the group of correct vectors, all updating you somehow in the same direction, within some variance. On average, they’re what we call unbiased estimators of the gradient: when you take out the randomness, the expected value they give you is the real gradient of the loss function. What I do as a malicious worker is look at the way they disagree slightly in each direction.

I will sum that up. I will see that they disagree by this much in direction one, by this much in direction two, by this much in direction three: epsilon one, epsilon two, epsilon three. I will look for all these small disagreements they have on all the components.

Lucas: Across all dimensions and high dimensional space. [crosstalk 00:16:35]

El Mahdi: Then add that up. It will be my budget, my leeway, my margin to attack you on another direction.

Lucas: I see.

El Mahdi: What we proved is that you have to mix ideas from the geometric median with ideas from the traditional component-wise median, and those are completely different things. The geometric median is a way to find a median by minimizing the sum of distances between what you’re looking for and all the vectors that were proposed; the component-wise median does a traditional job of ranking the coordinates. It looks at each coordinate, ranks all the propositions, and looks for the proposition that lies in the middle. What we proved in a follow-up work is that, yeah, the geometric median idea is elegant, and it can make you converge, but it can make you converge to something arbitrarily bad, decided by the attacker. When you train complex models like neural nets, the landscape you optimize inside is not convex. It’s not like a bowl or a cup where, if you just follow the descending slope, you end up at the lowest point.

Lucas: Right.

El Mahdi: It’s like a multitude of bowls with different heights.

Lucas: Right, so there’s tons of different local minima across the space.

El Mahdi: Exactly. So in the first paper, what we showed is that ideas that look like the geometric median are enough to just converge. You provably converge. But in the follow-up, what we realized, something we were already aware of but not enough in my opinion, is that there is this square root of D, this curse of dimensionality, that arises when you compute high-dimensional distances, and that the attacker can leverage.

So in what we call the hidden vulnerability of distributed learning, you can have correct vectors agreeing on one component. Imagine in your head some three-axis system.

Let’s say that they are completely in agreement on axis three, but in the plane formed by axis one and axis two, they have a small disagreement.

What I will do as the malicious agent is leverage this small disagreement and inject it in axis three. And this will make you go in a slightly modified direction. And instead of going to this very deep, very good minimum, you will go into a local trap that is just close ahead.

And that comes from the fact that the loss functions of interesting models are far from being convex. The models are high dimensional, and the loss function is highly non-convex, which creates a lot of leeway.

Lucas: It creates a lot of local minima spread throughout the space for you to attack the person into.

El Mahdi: Yeah. So convergence is not enough. We started this research direction by formulating the following question: what does it take to guarantee convergence?

And any scheme that aggregates gradients and guarantees convergence is called Byzantine resilient. But then you realize: in very high dimensions, with highly non-convex loss functions, is convergence enough? Do you just want to converge?

There are of course people arguing about deep learning models; there’s this famous paper by Anna Choromanska, Yann LeCun, and Gérard Ben Arous about the landscape of neural nets, which basically says that, “Yeah, the very deep local minima of neural nets are somehow as good.”

From an overly simplified point of view, it’s an optimistic paper that tells you you shouldn’t worry too much, when you optimize neural nets, about the fact that gradient descent would not necessarily go to a global like-

Lucas: To a global minima.

El Mahdi: Yeah. Just like, “Stop caring about that.”

Lucas: Because the local minima are good enough for some reason.

El Mahdi: Yeah. I think that’s a not too unfair way to summarize the paper for the sake of this talk, for the sake of this discussion. What we empirically illustrate here, and theoretically support is that that’s not necessarily true.

Because we show that with very low dimensional, not extremely complex models, trained on CIFAR-10 and MNIST, which are toy problems, very easy toy problems, low-dimensional models, etc., it’s already enough to have that many parameters, let’s say 100,000 parameters or less, for an attacker to always find a direction to take you away, again and again, and eventually into an arbitrarily bad local minimum. And then you just converge to that.

So convergence is not enough. Not only do you have to seek an aggregation rule that guarantees convergence, you have to seek aggregation rules that guarantee you will not converge to something arbitrarily bad; you keep converging to the same high-quality local minimum, whatever that means.

The hidden vulnerability is this high dimensional idea. It’s the fact that because the loss function is highly non-convex, because there’s the high dimensionality, as an attacker I would always find some direction, so the attack goes this way.

Here the threat model is that an attacker can spy on the gradients generated by the correct workers, but cannot talk on their behalf. So I cannot corrupt the messages. (Earlier you asked whether the horsemen were reliable or not.)

So horsemen are reliable. I can’t talk on your behalf, but I can spy on you. I can see what you are sending to the others, and anticipate.

So as an attacker I would wait for the correct workers to generate their gradients, gather those vectors, and then just do a linear regression on those vectors to find the best direction to leverage the disagreement on the D minus one remaining directions.

Because there will be this natural disagreement, this variance, in many directions, I just do some linear regression and find the best direction to keep, and use the budget I gathered, those epsilons I mentioned earlier, this D times epsilon across all the directions, to inject it in the direction that maximizes my chances of taking you away from good local minima.

So you will converge, as proven in the early papers, but not necessarily to something good. What we showed here is that if you combine ideas from the multidimensional geometric median with ideas from the single-dimensional component-wise median, you improve your robustness.

Of course it comes with a price. You require three quarters of the workers to be reliable.
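
A minimal sketch of the two building blocks just described: the component-wise median and a Weiszfeld-style iterative approximation of the geometric median (which has no closed form). This illustrates the ideas, not the exact published aggregation rules:

```python
import numpy as np

def componentwise_median(grads):
    # Rank each coordinate separately and keep the middle value.
    return np.median(grads, axis=0)

def geometric_median(grads, iters=100, eps=1e-8):
    # Weiszfeld iteration: minimize the sum of distances to all proposed vectors.
    y = grads.mean(axis=0)                     # start from the (non-robust) average
    for _ in range(iters):
        d = np.linalg.norm(grads - y, axis=1) + eps
        y = (grads / d[:, None]).sum(axis=0) / (1.0 / d).sum()
    return y

rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(99, 10))
attack = np.full((1, 10), -500.0)              # one extreme malicious gradient
grads = np.vstack([honest, attack])

print(grads.mean(axis=0).round(1))             # the average is dragged far away
print(componentwise_median(grads).round(1))    # stays near the honest gradients
print(geometric_median(grads).round(1))        # likewise stays near them
```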

There is another direction in which we expanded this problem, which is asynchrony. Asynchrony arises when, as I said in the Byzantine generals setting, you don’t have a bounded delay. In the bounded delay setting, you know that horses arrive at most after one hour.

Lucas: But I have no idea if the computer on the other side of the planet is ever gonna send me that next update.

El Mahdi: Exactly. So imagine you are doing machine learning on smartphones. You are leveraging a set of smartphones all around the globe, with different bandwidths, different communication issues, etc.

And you don’t want each time to be bottlenecked by the slowest one. So you want to be asynchronous, you don’t want to wait. You’re just like whenever some update is coming, take it into account.

Imagine some very advanced AI scenario, where you send a lot of learners all across the universe, and they communicate at the speed of light, but some of them are five light-minutes away, while others are two and a half light-hours away. And you want to learn from all of them, but not necessarily handicap the closest ones just because there are other learners far away.

Lucas: You want to run updates in the context of asynchrony.

El Mahdi: Yes. So you want to update whenever a gradient is popping up.

Lucas: Right. Before we move on, to illustrate the problem again: the order matters, right? Like in the banking example, because the minus 10% then plus $10 is different from-

El Mahdi: Yeah. Here the order matters for a different reason. You update me, so you are updating me based on the model you got three hours ago. But in the meanwhile, three different agents updated me on the model, having gotten it three minutes ago.

All the agents are communicating through some abstraction, call it the server, maybe. This server receives updates from fast workers.

Lucas: It receives gradients.

El Mahdi: Yeah, gradients. I also call them updates.

Lucas: Okay.

El Mahdi: Because some workers are close to me and very fast, I’ve done maybe 1000 updates, while you were still working and sending me the message.

So when your update arrives, I can tell whether it is very stale, very late, or malicious. And what we do here, and I think it’s very important now to connect back a bit with classic distributed computing, rests on the fact that Byzantine resilience in machine learning is easier than Byzantine resilience in classical distributed computing for one reason, but much harder for another reason.

The reason it’s easier is that we know what we want to agree on. We want to agree on a gradient. We have a toolbox of calculus that tells us what this looks like. We know that it’s the slope of some loss function that, for most of today’s models, is relatively smooth: differentiable, maybe Lipschitz, with some bounded curvature.

So we know that we are agreeing on vectors that are gradients of some loss function. And we know that there is a majority of workers producing vectors that tell us what a legit vector looks like.

You can find some median behavior, and then come up with filtering criteria that throw away the bad gradients. That’s the good news; that’s why it’s easier to do Byzantine resilience in machine learning than to do Byzantine agreement, because agreement is a way harder problem.

The reason why Byzantine resilience is harder in machine learning than in the typical settings you have in distributed computing is that we are dealing with extremely high dimensional data, extremely high dimensional decisions.

So a decision here is to update the model. It is triggered by a gradient. So whenever I accept a gradient, I make a decision. I make a decision to change the model, to take it away from this state, to this new state, by this much.

But this is a multidimensional update. And Byzantine agreement, or Byzantine approximate agreement, in high dimension has been proven hopeless by Hammurabi Mendes and Maurice Herlihy in an excellent paper in 2013, where they show that you can’t do Byzantine agreement in D dimensions with N agents in fewer than N to the power D computations per agent, locally.

Of course, in their paper they meant Byzantine agreement on positions. They framed it with a motivation, saying, “This is N to the power D, but the typical cases we care about in distributed computing are robots agreeing on a position on a plane, or on a position in a three-dimensional space.” So D is two or three.

So N to the power of two or N to the power of three is fine. But in machine learning, D is not two or three; D is a couple of million, or a billion. So N to the power of a million is just, forget it.

And not only that, but they also require … Remember when I told you that Byzantine-resilient computing would always have some upper bound on the number of malicious agents?

Lucas: Mm-hmm (affirmative).

El Mahdi: So the number of total agents should exceed D times the number of malicious agents.

Lucas: What is D again sorry?

El Mahdi: Dimension.

Lucas: The dimension. Okay.

El Mahdi: So if you have to agree in D dimensions, say on a billion-dimensional decision, you need at least a billion times the number of malicious agents.

So if you have, say, 100 malicious agents, you need at least 100 billion agents in total to be resistant. No one is doing distributed machine learning with 100 billion-

Lucas: And this is because the dimensionality is really screwing with the-

El Mahdi: Yes. Byzantine approximate agreement has been proven hopeless. That’s the bad news, and that’s why the dimensionality of machine learning makes it really important to go away, to completely go away, from traditional distributed computing solutions.

Lucas: Okay.

El Mahdi: So we are not doing agreement. We’re not doing agreement, we’re not even doing approximate agreement. We’re doing something-

Lucas: Totally new.

El Mahdi: Not new, totally different.

Lucas: Okay.

El Mahdi: Called gradient descent. It’s not new. It’s as old as Newton. And it comes with good news. It comes with the fact that there are some properties, like some regularity of the loss function, some properties we can exploit.

And so in the asynchronous setting, it becomes even more critical to leverage those differentiability properties. Because we know that we are optimizing a loss function that has some regularities, we can have some good news.

And the good news has to do with curvature. What we do in the asynchronous setting is that not only do we ask workers for their gradients, we also ask them for their empirical estimate of the curvature.

Lucas: Sorry. They’re estimating the curvature of the loss function, that they’re adding the gradient to?

El Mahdi: They add the gradient to the parameters, not to the loss function. So we have a loss function; the parameter is the abscissa. You add the gradient to the abscissa to update the model, and then you end up in a different place on the loss function.

So you have to imagine the loss function as like a surface, and then the parameter space as the plane, the horizontal plane below the surface. And depending on where you are in the space parameter, you would be on different heights of the loss function.

Lucas: Wait sorry, so does the gradient depend on where you are on the bottom plane?

El Mahdi: Yeah [crosstalk 00:29:51]-

Lucas: So then you send an estimate for what you think the slope of the intersection will be?

El Mahdi: Yeah. But for asynchrony, not only that. I will ask you to send me the slope, and your observed empirical growth of the slope.

Lucas: The second derivative?

El Mahdi: Yeah.

Lucas: Okay.

El Mahdi: But the second derivative, again, in high dimension is very hard to compute. You have to compute the Hessian matrix.

Lucas: Okay.

El Mahdi: That’s something like completely ugly to compute in high dimensional situations because it takes D square computations.

As an alternative, we would like you to send us some computation that is linear in D, not quadratic in D.

So we would ask you to compute your actual gradient, your previous gradient, the difference between them, and normalize it by the difference between models.

So, “Tell us your current gradient, by how much it changed from the last gradient, and divide that by how much you changed the parameter.”

So you would tell us, “Okay, this is my current slope, and okay this is the gradient.” And you will also tell us, “By the way, my slope change relative to my parameter change is this much.”

And this would be some empirical estimation of the curvature. So if you are in a very curved area-

Lucas: Then the estimation isn’t gonna be accurate because the linearity is gonna cut through some of the curvature.

El Mahdi: Yeah but if you are in a very curved area of the loss function, your slope will change a lot.

Lucas: Okay. Exponentially changing the slope.

El Mahdi: Yeah. Because you did a very tiny change in the parameter and it changed the slope a lot.

Lucas: Yeah. Will change the … Yeah.

El Mahdi: When you are in a non-curved area of the loss function, it’s less harmful for us that you are stale, because you will just technically have the same updates.

If you are in a very curved area of the loss function, your updates being stale is now a big problem. So we want to discard your updates proportionally to your curvature.

So this is the main idea of this scheme in asynchrony, where we would ask workers about their gradient, and their empirical growth rates.

And then of course I don’t want to trust you on what you declare, because you can plan to screw me with some gradients, and then declare a legitimate value of the curvature.

I will take those empirical values, what we call in the paper empirical Lipschitz-ness. So we ask you for this empirical growth rate, which is a scalar, remember? This is very important. It's a single-dimensional number.
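As a rough sketch of the scalar being described, following the verbal description above rather than the paper's exact notation, a worker could report its empirical growth rate like this:

```python
import numpy as np

def empirical_growth_rate(grad_now, grad_prev, params_now, params_prev):
    """The scalar a worker reports alongside its gradient: by how much the
    slope changed, normalized by how much the parameters changed."""
    return (np.linalg.norm(grad_now - grad_prev)
            / np.linalg.norm(params_now - params_prev))

g_prev, g_now = np.array([0.4, -0.2]), np.array([0.9, -0.7])
p_prev, p_now = np.array([1.0, 1.0]), np.array([0.99, 1.01])
print(empirical_growth_rate(g_now, g_prev, p_now, p_prev))  # large value: a curved region
```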

And so we ask you about this growth rate, and we ask all of you about growth rates, again assuming the majority is correct. So the majority of growth rates will help us set the median growth rate in a robust manner, because as long as a simple majority is not lying, the median growth rates will always be bounded between two legitimate values of the growth rate.

Lucas: Right because, are you having multiple workers inform you of the same part of your loss function?

El Mahdi: Yes. Even though they do it in an asynchronous manner.

Lucas: Yeah. Then you take the median of all of them.

El Mahdi: Yes. And then we reason by quantiles of the growth rates.

Lucas: Reason by quantiles? What are quantiles?

El Mahdi: The first third, the second third, the third third. Like the first 30%, the second 30%, the third 30%. We will discard the first 30%, discard the last 30%. Anything in the second 30% is safe.

Of course this has some level of pessimism, which is good for safety, but not very good for being fast. Because maybe people are not lying, so maybe the first 30%, and the last 30% are also values we could consider. But for safety reasons we want to be sure.
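A minimal sketch of the quantile filtering just described, assuming the 30% cutoffs mentioned above; in practice the band would be set from the assumed fraction of Byzantine workers. Note that it also discards some honest extremes, which is exactly the pessimism-versus-speed tradeoff being discussed:

```python
import numpy as np

def middle_quantile_mask(growth_rates, lower=0.30, upper=0.70):
    """Discard the first 30% and the last 30% of reported growth rates;
    anything in the middle band is treated as safe. With an honest simple
    majority, the band stays bounded by legitimate values."""
    rates = np.asarray(growth_rates, dtype=float)
    lo, hi = np.quantile(rates, [lower, upper])
    return (rates >= lo) & (rates <= hi)

reported = [0.9, 1.1, 1.0, 1.2, 0.8, 50.0, 0.0]  # two implausible reports
print(middle_quantile_mask(reported))  # the extreme reports are filtered out
```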

Lucas: You want to try to get rid of the outliers.

El Mahdi: Possible.

Lucas: Possible outliers.

El Mahdi: Yeah. So we get rid of the first 30%, the last 30%.

Lucas: So this ends up being a more conservative estimate of the loss function?

El Mahdi: Yes. That’s completely right. We explain that in the paper.

Lucas: So there’s a trade off that you can decide-

El Mahdi: Yeah.

Lucas: By choosing what percentiles to throw away.

El Mahdi: Yeah. Safety never comes for free. So here, depending on how good your estimate of the number of potential Byzantine actors is, your level of pessimism will translate into slowdown.

Lucas: Right. And so you can update the amount that you’re cutting off-

El Mahdi: Yeah.

Lucas: Based off of the amount of expected corrupted signals you think you’re getting.

El Mahdi: Yeah. So now imagine a situation where the number of workers is known. You know that you are leveraging 100,000 smartphones doing gradient descent for you. Let's call that N.

You know that F of them might be malicious. We argue that if F exceeds a third of N, you can't do anything. So we are in a situation where F is less than a third. So if less than 33,000 workers are malicious, then the slowdown would be F over N, so a third.

What if you are in a situation where you know that your malicious agents are way less than a third? For example you know that you have at most 20 rogue accounts in your video sharing platform.

And your video sharing platform has two billion accounts. So you have two billion accounts.

Lucas: 20 of them are malevolent.

El Mahdi: What we show is that the slowdown would be N minus F divided by N. N is the two billion accounts, F is the 20, so N minus F is again essentially two billion.

So it would be two billion minus 20 over two billion, so something like 0.99999999. So you would go almost as fast as the non-Byzantine-resilient scheme.

So our Byzantine-resilient scheme has a slowdown that is very reasonable in situations where F, the number of malicious agents, is way less than N, the total number of agents, which is typical in modern…

Today, if you ask social media platforms, they have a lot of toolkits to prevent people from creating a billion fake accounts. You can't, in 20 hours, create an army of several million accounts.

None of the mainstream social media platforms today are susceptible to this-

Lucas: Are susceptible to massive corruption.

El Mahdi: Yeah. To this massive account creation. So you know that the number of corrupted accounts is negligible compared to the total number of accounts.

So that’s the good news. The good news is that you know that F is negligible to N. But then the slowdown of our Byzantine resilient methods is also close to one.

But it has the advantage, compared to the state of the art in distributed training today, of not taking the average gradient. And we argued at the very beginning that creating those 20 accounts doesn't take a bot army or whatever; you don't need to hack into the machines of the social network. You can have a dozen humans sitting somewhere in a house, manually creating 20 accounts, training those accounts over time, behaving in a way that makes them look legitimate on some topics. And then, because your distributed machine learning scheme averages the gradients generated by people's behavior, those accounts can make your recommender push anti-vaccine content, controversies, or anti-Semitic conspiracy theories.
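As a toy numerical illustration of why plain averaging is fragile, previewing the amplitude discussion that follows (the magnitudes here are invented for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(10_000, 3))  # 10,000 small honest gradients
malicious = np.tile([1e6, -1e6, 1e6], (20, 1))   # 20 high-amplitude poisoned gradients
all_grads = np.vstack([honest, malicious])

print(all_grads.mean(axis=0))        # the average is dominated by the 20 attackers
print(np.median(all_grads, axis=0))  # a robust aggregate stays near the honest gradients
```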

Lucas: So if I have 20 bad gradients and like, 10,000 good gradients for a video, why is it that with averaging 20 bad gradients are messing up the-

El Mahdi: The amplitude. It's like the billionaire in a room of poor academics.

Lucas: Okay, because the amplitude of each of their accounts is greater than the average of the other accounts?

El Mahdi: Yes.

Lucas: The average of other accounts that are going to engage with this thing don’t have as large of an amplitude because they haven’t engaged with this topic as much?

El Mahdi: Yeah, because they’re not super credible on gun control, for example.

Lucas: Yeah, but aren’t there a ton of other accounts with large amplitudes that are going to be looking at the same video and correcting over the-

El Mahdi: Yeah, let’s define large amplitudes. If you come to the video and just like it, that’s a small update. What about you like it, post very engaging comments-

Lucas: So you write a comment that gets a lot of engagement, gets a lot of likes and replies.

El Mahdi: Yeah, that’s how you increase your amplitude. And because you are already doing some good job in becoming the reference on that video-sharing platform when it comes to discussing gun control, the amplitude of your commands is by definition high and the fact that your command was very early on posted and then not only you commented the video but you also produced a follow-up video.

Lucas: I see, so the gradient is really determined by a multitude of things that the video-sharing platform is measuring for, and the metrics are like, how quickly you commented, how many people commented and replied to you. Does it also include language that you used?

El Mahdi: Probably. It depends on the social media platform and it depends on the video-sharing platform. What is clear is that there are many schemes those 20 accounts created by a dozen people in a house can use to maximize the amplitude of their generated gradients, but this is a way easier problem than the typical problems we have in technical AI safety. This is not value alignment or value loading or coherent extrapolated volition. This is a very easy, tractable problem on which we now have good news, provable results. What's interesting is the follow-up questions that we are trying to investigate here with my colleagues, the first of which is: we don't necessarily have a majority of people on the internet promoting vaccines.

Lucas: People that are against things are often louder than people that are not.

El Mahdi: Yeah, that makes sense, and they are sometimes maybe more numerous because they generate content, while the people who think vaccines are safe are not creating content. On some topics it might be safe to say that we have a majority of reasonable, decent people on the internet. But there are some topics where that no longer holds, like the vaccine situation: there's a surge now of anti-vaccine resentment in western Europe and the US. Ironically this is happening in the developed countries now, because people are so young they don't remember the non-vaccinated past. I come from Morocco; my aunt is handicapped by polio, so I grew up seeing what a non-vaccinated person looks like. Young people in the more developed countries never had a living example of the non-vaccinated past.

Lucas: But they do have examples of people that end up with autism and it seems correlated with vaccines.

El Mahdi: Yeah, and the anti-vaccine content may just end up being such clickbait, so provocative, that it gets popular. So this is a topic where the majority hypothesis, which is crucial to poisoning resilience, does not hold. An open follow-up we're onto now is how to combine ideas from reputation metrics, PageRank, et cetera, with poisoning resilience. So for example you have the National Health Institute, the Johns Hopkins Hospital, Harvard Medical School, and, I don't know, the Massachusetts General Hospital having official accounts on some video-sharing platform, and then you can spot what they say on some topic, because now we are very good at doing semantic analysis of content.

And then you know that, okay, on the tag "vaccines," there's this bunch of experts, and what you want to make emerge on your platform is some sort of epistocracy: the power is given to the knowledgeable, like we have in some fields, like in medical regulation. The FDA doesn't do a majority vote. We don't have a popular majority vote across the country to tell the FDA whether it should approve a new drug or not. The FDA does some sort of epistocracy, where the knowledgeable experts on the topic vote. So how about mixing in ideas from social choice?

Lucas: And topics in which there are experts who can inform.

El Mahdi: Yeah. There’s also a general fall-off of just straight out trying to connect Byzantine resilient learning with social choice, but then there’s another set of follow ups that motivates me even more. We were mentioning workers, workers, people generate accounts on social media, accounts generation gradients. That’s all I can implicitly assume in that the server, the abstraction that’s gathering those gradients is reliable. What about the aggregated platform itself being deployed on rogue machines? So imagine you are whatever platform doing learning. By the way, whatever always we have said from the beginning until now applies as long as you do gradient-based learning. So it can be recommended systems. It can be training some deep reinforcement learning of some super complicated tasks to beat, I don’t know the word, champion in poker.

We do not care, as long as there's some gradient generation from observing some state, some environmental state, and some reward or some label. It can be supervised or reinforcement learning; as long as it's gradient-based, what we said applies. Imagine now you have this platform leveraging distributed gradient creators, but the platform itself, for security reasons, is deployed on several machines for fault tolerance. But then those machines themselves can fail. You have to make the servers agree on the model, despite the fact that a fraction of the workers are not reliable, and now a fraction of the servers themselves. This is the most important follow-up I'm into now, and I think there will be something on arXiv maybe in February or March on that.

And then a third follow-up is practical instances of that. I've been describing speculative thought experiments on how to poison systems; there are actually brilliant master's students working on exactly that, on typical recommender systems datasets, where you can see that it's very easy. It really takes just a handful of active agents to poison a system of a hundred thousand honest ones or more. Probably people working on big social media platforms would have ways to assess what I've said; as researchers in academia we can only speculate on what can go wrong on those platforms. So what we did is take state-of-the-art recommender systems, datasets, and models that are publicly available, and you can show that despite having a large number of reliable recommendation proposers, a small, tiny fraction of proposers can make, I don't know, a movie recommendation system recommend the most suicide-triggering film to the most depressed person watching through your platform. So I'm saying, that's something you don't want to have.

Lucas: Right. Just wrapping this all up, how do you see this in the context of AI alignment and the future of machine learning and artificial intelligence?

El Mahdi: So I’ve been discussing this here with people in the Beneficial AI conference and it seems that there are two schools of thought. I am still hesitating between the two because I switched within the past three months from the two sides like three times. So one of them thinks that an AGI is by definition resilient to poisoning.

Lucas: Aligned AGI might be by definition.

El Mahdi: Not even aligned. The second school of thought, aligned AGI is Byzantine resilient.

Lucas: Okay, I see.

El Mahdi: Obviously an aligned AGI would be poisoning-resilient, but let's just talk about superintelligent AI, not necessarily aligned. So you have a superintelligence: would you include poisoning resilience in the definition of superintelligence or not? And one side would say that yeah, if you are better than humans at whatever task, it means you are also better than humans at spotting poisoned data.

Lucas: Right, I mean the poison data is just messing with your epistemics, and so if you’re super intelligent your epistemics would be less subject to interference.

El Mahdi: But then there is that second school of thought, which I switched back to again, even though I find that most people are in the first school of thought now. I believe that superintelligence doesn't necessarily include poisoning resilience, because of what I call practically time-constrained superintelligence. If you have a deadline, because of computational complexity, you have to learn something, which can sometimes-

Lucas: Yeah, you want to get things done.

El Mahdi: Yeah, so you want to get it done in a finite amount of time. And because of that you will end up leveraging distribution to speed up your learning. So if a malicious agent just puts up bad observations of the environment, or bad labels on whatever is around you, then it can make you learn something other than what you would like as an aligned outcome. I'm strongly on the second side, despite many disagreeing with me here. I don't think superintelligence includes poisoning resilience, because superintelligence would still be built with time constraints.

Lucas: Right. You’re making a tradeoff between safety and computational efficiency.

El Mahdi: Right.

Lucas: It also would obviously seem to matter what kind of world the ASI finds itself in. If it knows that it's in a world with no, or very, very, very few malevolent agents that want to poison it, then it can just throw all of this out the window, but the problem is that we live on a planet with a bunch of other primates who are trying to mess up our machine learning. So I guess, just as a kind of fun example, taking it to an extreme: imagine it's the year 300,000 AD and you have a superintelligence which has spread across space-time and is beginning to optimize its cosmic endowment, but it has some uncertainty, over space-time, about whether there are other superintelligences out there who might want to poison its interstellar communication in order to start taking over some of its cosmic endowment. Do you want to just sort of explore that?

El Mahdi: Yeah, that was a thought experiment I proposed earlier to Carl Shulman from the FHI. Imagine some superintelligence reaching a planet where there is a smart form of life emerging from electric communication between plasma clouds. So completely non-carbon, non-silicon based.

Lucas: So if Jupiter made brains on it.

El Mahdi: Yeah, like Jupiter made brains on it just out of electric communication through gas clouds.

Lucas: Yeah, okay.

El Mahdi: And then this electric form of life is smart enough to know that this is a superintelligence reaching the planet to learn about it, and then it would just start trolling it.

Lucas: It’ll start trolling the super intelligence?

El Mahdi: Yeah. So they would come up with an agreement ahead of time, saying, "Yeah, this superintelligence coming from Earth is traveling through our sector to discover how we do things here. Let's just behave dumbly, or let's just misbehave." And then the superintelligence will start collecting data on this life form and then come back to Earth saying, "Yeah, they're just a dumb, passive plasma form of nothing interesting."

Lucas: I mean, you don’t think that within the super intelligence’s model, I mean, we’re talking about it right now so obviously a super intelligence will know this when it leaves that there will be agents that are going to try and trick it.

El Mahdi: That’s the rebuttal, yes. That’s the rebuttal again. Again, how much time does super intelligence have to do inference and draw conclusions? You will always have some time constraints.

Lucas: And you don’t always have enough computational power to model other agents efficiently to know whether or not they’re lying, or …

El Mahdi: You can always come up with a thought experiment involving some other form of intelligence, like another superintelligence that is trying to-

Lucas: There’s never, ever a perfect computer science, never.

El Mahdi: Yeah, you can say that.

Lucas: Security is never perfect. Information exchange is never perfect. But you can improve it.

El Mahdi: Yeah.

Lucas: Wouldn’t you assume that the complexity of the attacks would also scale? We just have a ton of people working on defense, but if we have an equal amount of people working on attack, wouldn’t we have an equally complex method of poisoning that our current methods would just be overcome by?

El Mahdi: That’s part of the empirical follow-up I mentioned. The one Isabella and I were working on, which is trying to do some sort of min-max game of poisoner versus poisoning resilience learner, adversarial poisoning setting where like a poisoner and then there is like a resilient learner and the poisoner tries to maximize. And what we have so far is very depressing. It turns out that it’s very easy to be a poisoner. Computationally it’s way easier to be the poisoner than to be-

Lucas: Yeah, I mean, in general in the world it’s easier to destroy things than to create order.

El Mahdi: As I said in the beginning, this is a sub-topic of technical AI safety where I believe it's easier to have tractable, formalizable problems for which you can provably have a safe solution.

Lucas: Solution.

El Mahdi: But in very concrete, very short-term aspects of that: in March we are going to announce a major update to TensorFlow, which is the standard framework today for doing distributed machine learning, open-sourced by Google. So hopefully, if everything goes right, our more empirically focused colleagues will announce it at SysML, the Systems for Machine Learning conference. Based on the algorithms I mentioned earlier, which were presented at NeurIPS and ICML over the past two years, they will announce a major update where they basically replaced every averaging step inside TensorFlow with those three algorithms I mentioned, Krum and Bulyan and soon Kardam, which constitute our portfolio of Byzantine-resilient algorithms.
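For readers who want a feel for what replacing the averaging step means, here is a compact, unoptimized sketch of the Krum rule from Blanchard et al. (NeurIPS 2017); Bulyan and Kardam are more involved and omitted here, and the test values below are invented:

```python
import numpy as np

def krum(gradients, f):
    """Krum: instead of averaging, select the one submitted gradient whose
    summed squared distance to its n - f - 2 nearest neighbors is smallest.
    Requires n >= 2f + 3 workers."""
    grads = np.asarray(gradients, dtype=float)
    n = len(grads)
    scores = []
    for i in range(n):
        d = np.sum((grads - grads[i]) ** 2, axis=1)
        d = np.sort(d)[1 : n - f - 1]  # drop self (distance 0), keep n - f - 2 closest
        scores.append(d.sum())
    return grads[int(np.argmin(scores))]

# Nine mostly honest gradients plus one attacker with huge amplitude:
rng = np.random.default_rng(1)
grads = rng.normal(0.0, 0.1, size=(10, 4))
grads[-1] = 1e6  # the Byzantine worker
print(krum(grads, f=1))  # returns one of the honest gradients, not the outlier
```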

Another consequence that comes for free with that: distributed machine learning frameworks like TensorFlow use TCP/IP as a communication protocol. TCP/IP has a problem. It's reliable, but it's very slow. You have to repeatedly resend some messages, et cetera, to guarantee reliability, and we would like a faster communication protocol, like UDP. We don't need to go through the details, but it has some packet drop, so, so far, there was no version of TensorFlow or any distributed machine learning framework, to my knowledge, using UDP. They all used TCP/IP because they needed reliable communication. But now, because we are Byzantine-resilient, we can afford fast but not completely reliable communication protocols like UDP. So one of the things that comes for free with Byzantine resilience is that you can move from heavy-

Lucas: A little bit more computation.

El Mahdi: -yeah, heavy communication protocols like TCP/IP to lighter, faster, more live communication protocols like UDP.

Lucas: Keeping in mind you’re trading off.

El Mahdi: Exactly. Now we have this portfolio of algorithms which can serve many other applications besides just making distributed machine learning faster, like making poisoning-resilient, I don't know, recommender systems for social media, and hopefully making AGI learning poisoning-resilient, for that matter.

Lucas: Wonderful. So if people want to check out some of your work or follow you on social media, what is the best place to keep up with you?

El Mahdi: Twitter. My handle is El Badhio, so maybe you could have it written down in the description.

Lucas: Yeah, cool.

El Mahdi: Yeah, Twitter is the best way to get in touch.

Lucas: All right. Well, wonderful. Thank you so much for speaking with me today and I’m excited to see what comes out of all this next.

El Mahdi: Thank you. Thank you for hosting this.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

[end of recorded material]

FLI Podcast: Artificial Intelligence: American Attitudes and Trends with Baobao Zhang

Our phones, our cars, our televisions, our homes: they’re all getting smarter. Artificial intelligence is already inextricably woven into everyday life, and its impact will only grow in the coming years. But while this development inspires much discussion among members of the scientific community, public opinion on artificial intelligence has remained relatively unknown.

Artificial Intelligence: American Attitudes and Trends, a report published earlier in January by the Center for the Governance of AI, explores this question. Its authors relied on an in-depth survey to analyze American attitudes towards artificial intelligence, from privacy concerns to beliefs about U.S. technological superiority. Some of their findings—most Americans, for example, don’t trust Facebook—were unsurprising. But much of their data reflects trends within the American public that have previously gone unnoticed.

This month Ariel was joined by Baobao Zhang, lead author of the report, to talk about these findings. Zhang is a PhD candidate in Yale University’s political science department and research affiliate with the Center for the Governance of AI at the University of Oxford. Her work focuses on American politics, international relations, and experimental methods.

In this episode, Zhang spoke about her take on some of the report’s most interesting findings, the new questions it raised, and future research directions for her team. Topics discussed include:

  • Demographic differences in perceptions of AI
  • Discrepancies between expert and public opinions
  • Public trust (or lack thereof) in AI developers
  • The effect of information on public perceptions of scientific issues

Research and publications discussed in this episode include:

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi there. I’m Ariel Conn with the Future of Life Institute. Today, I am doing a special podcast, which I hope will be just the first in a continuing series, in which I talk to researchers about the work that they’ve just published. Last week, a report came out called Artificial Intelligence: American Attitudes and Trends, which is a survey that looks at what Americans think about AI. I was very excited when the lead author of this report agreed to come join me and talk about her work on it, and I am actually now going to just pass this over to her, and let her introduce herself, and just explain a little bit about what this report is and what prompted the research.

Baobao: My name is Baobao Zhang. I’m a PhD candidate in Yale University’s political science department, and I’m also a research affiliate with the Center for the Governance of AI at the University of Oxford. We conducted a survey of 2,000 American adults in June 2018 to look at what Americans think about artificial intelligence. We did so because we believe that AI will impact all aspects of society, and therefore, the public is a key stakeholder. We feel that we should study what Americans think about this technology that will impact them. In this survey, we covered a lot of ground. In the past, surveys about AI tend to have very specific focus, for instance on automation and the future of work. What we try to do here is cover a wide range of topics, including the future of work, but also lethal autonomous weapons, how AI might impact privacy, and trust in various actors to develop AI.

So one of the things we found is Americans believe that AI is a technology that should be carefully managed. In fact, 82% of Americans feel this way. Overall, Americans express mixed support for developing AI. 41% somewhat support or strongly support the development of AI, while there’s a smaller minority, 22%, that somewhat or strongly opposes it. And in terms of the AI governance challenges that we asked—we asked about 13 of them—Americans think all of them are quite important, although they prioritize preventing AI-assisted surveillance from violating privacy and civil liberties, preventing AI from being used to spread fake news online, preventing AI cyber attacks, and protecting data privacy.

Ariel: Can you talk a little bit about what the difference is between concerns about AI governance and concerns about AI development and more in the research world?

Baobao: In terms of the support for developing AI, we saw that as a general question in terms of support—we didn’t get into the specifics of what developing AI might look like. But in terms of the governance challenges, we gave quite detailed, concrete examples of governance challenges, and these tend to be more specific.

Ariel: Would it be fair to say that this report looks specifically at governance challenges as opposed to development?

Baobao: It’s a bit of both. I think we ask both about the R&D side, for instance we ask about support for developing AI and which actors the public trusts to develop AI. On the other hand, we also ask about the governance challenges. Among the 13 AI governance challenges that we presented to respondents, Americans tend to think all of them are quite important.

Ariel: What were some of the results that you expected, that were consistent with what you went into this survey thinking people thought, and what were some of the results that surprised you?

Baobao: Some of the results that surprised us is how soon the public thinks that high-level machine intelligence will be developed. We find that they think it will happen a lot sooner than what experts predict, although some past research suggests similar results. What didn’t surprise me, in terms of the AI governance challenge question, is how people are very concerned about data privacy and digital manipulation. I think these topics have been in the news a lot recently, given all the stories about hacking or digital manipulation on Facebook.

Ariel: So going back real quick to your point about the respondents expecting high-level AI happening sooner: how soon do they expect it?

Baobao: In our survey, we asked respondents about high-level machine intelligence, and we defined it as when machines are able to perform almost all tasks that are economically relevant today better than the median human today at each task. My co-author, Allan Dafoe, and some of my other team members, we’ve done a survey asking AI researchers—this was back in 2016—a similar question, and there we had a different definition of high-level machine intelligence that required a higher bar, so to speak. So that might have caused some difference. We’re trying to ask this question again to AI researchers this year. We’re doing continuing research, so hopefully the results will be more comparable. Even so, I think the difference is quite large.

I guess one more caveat is—we have this in the footnote—we did ask the same definition that we gave AI experts in 2016 in a pilot survey of the American public, and we also found that the public thinks high-level machine intelligence will happen sooner than experts predict. So it might not just be driven by the definition itself; the public and experts simply have different assessments. But to answer your question, the median respondent in our American public sample predicts that there's a 54% probability of high-level machine intelligence being developed within the next 10 years, which is quite a high probability.

Ariel: I’m hesitant to ask this, because I don’t know if it’s a very fair question, but do you have thoughts on why the general public thinks that high-level AI will happen sooner? Do you think it is just a case that there’s different definitions that people are referencing, or do you think that they’re perceiving the technology differently?

Baobao: I think that’s a good question, and we’re doing more research to investigate these results and to probe at it. One thing is that the public might have a different perception of what AI is compared to experts. In future surveys, we definitely want to investigate that. Another potential explanation is that the public lacks understanding of what goes into AI R&D.

Ariel: Have there been surveys that are as comprehensive as this in the past?

Baobao: I’m hesitant to say that there are surveys that are as comprehensive as this. We certainly relied on a lot of past survey research when building our surveys. The Eurobarometer had a couple of good surveys on AI in the past, but I think we cover both sort of the long-term and the short-term AI governance challenges, and that’s something that this survey really does well.

Ariel: Okay. The reason I ask that is I wonder how much people’s perceptions or misperceptions of how fast AI is advancing would be influenced by just the fact that we have had significant advancements just in the last couple of years that I don’t think were quite as common during previous surveys that were presented to people.

Baobao: Yes, that certainly makes sense. One part of our survey tries to track responses over time, so I was able to dig up some surveys going all the way back to the 1980s that were conducted by the National Science Foundation on the question of automation—whether automation will create more jobs or eliminate more jobs. And we find that compared with the historical data, the percentage of people who think that automation will create more jobs than it eliminates—that percentage has decreased, so this result could be driven by people reading in the news about all these advances in AI and thinking, “Oh, AI is getting really good these days at doing tasks normally done by humans,” but again, you would need much more data to sort of track these historical trends. So we hope to do that. We just recently received a grant from the Ethics and Governance of AI Fund, to continue this research in the future, so hopefully we will have a lot more data, and then we can really map out these historical trends.

Ariel: Okay. We looked at those 13 governance challenges that you mentioned. I want to more broadly ask the same two-part question of: looking at the survey in its entirety, what results were most expected and what results were most surprising?

Baobao: In terms of the AI governance challenge question, I think we had expected some of the results. We’d done some pilot surveys in the past, so we were able to have a little bit of a forecast, in terms of the governance challenges that people prioritize, such as data privacy, cyber attacks, surveillance, and digital manipulation. These were also things that respondents in the pilot surveys had prioritized. I think some of the governance challenges that people still think of as important, but don’t view as likely to impact large numbers of people in the next 10 years, such as critical AI systems failure—these questions are sort of harder to ask in some ways. I know that AI experts think about it a lot more than, say, the general public.

Another thing that sort of surprised me is how much people think value alignment— which is sort of an abstract concept—how much people think that’s quite important, and also likely to impact large numbers of people within the next 10 years. It’s up there with safety of autonomous vehicles or biased hiring algorithms, so that was somewhat surprising.

Ariel: That is interesting. So if you’re asking people about value alignment, were respondents already familiar with the concept, or was this something that was explained to them and they just had time to consider it as they were looking at the survey?

Baobao: We explained to them what it meant, and we said that it means to make sure that AI systems are safe, trustworthy, and aligned with human values. Then we gave a brief paragraph definition. We think that maybe people haven’t heard of this term before, or it could be quite abstract, so therefore we gave a definition.

Ariel: I would be surprised if it was a commonly known term. Then looking more broadly at the survey as a whole, you looked at lots of different demographics. You asked other questions too, just in terms of things like global risks and the potential for global risks, or generally about just perception of AI in general, and whether or not it was good, and whether or not advanced AI was good or bad, and things like that. So looking at the whole survey, what surprised you the most? Was it still answers within the governance challenges, or did anything else jump out at you as unexpected?

Baobao: Another thing that jumped out at me is that respondents who have computer science or engineering degrees tend to think that the AI governance challenges are less important across the board than people who don’t have computer science or engineering degrees. These people with computer science or engineering degrees also are more supportive of developing AI. I suppose that result is not totally unexpected, but I suppose in the news there is a sense that people who are concerned about AI safety, or AI governance challenges, tend to be those who have a technical computer background. But in reality, what we see are people who don’t have a tech background who are concerned about AI. For instance, women, those with low levels of education, or those who are low-income, tend to be the least supportive of developing AI. That’s something that we want to investigate in the future.

Ariel: There’s an interesting graph in here where you’re showing the extent to which the various groups consider an issue to be important, and as you said, people with computer science or engineering degrees typically don’t consider a lot of these issues very important. I’m going to list the issues real quickly. There’s data privacy, cyber attacks, autonomous weapons, surveillance, autonomous vehicles, value alignment, hiring bias, criminal justice bias, digital manipulation, US-China arms race, disease diagnosis, technological unemployment, and critical AI systems failure. So as you pointed out, the people with the CS and engineering degrees just don’t seem to consider those issues nearly as important, but you also have a category here of people with computer science or programming experience, and they have very different results. They do seem to be more concerned. Now, I’m sort of curious what the difference was between someone who has experience with computer science and someone who has a degree in computer science.

Baobao: I don’t have a very good explanation for the difference between the two, except for I can say that the people with experience, that’s a lower bar, so there are more people in the sample who have computer science or programming experience—and in fact, there’s 735 of them, compared to people who have computer science or engineering undergrad or graduate degrees, and that’s 195 people. I suppose those who have the CS or programming experience, that comprises a greater number of people. Going forward, in future surveys, we want to probe at this a bit more. We might look at what industries various people are working in, or how much experience they have either using AI or developing AI.

Ariel: And then I’m also sort of curious—I know you guys still have more work that you want to do—but I’m curious what you know now about how American perspectives are either different or similar to people in other countries.

Baobao: The most direct comparison that we can make is with respondents in the EU, because we have a lot of data based on the Eurobarometer surveys, and we find that Americans share similar concerns with Europeans about AI. So as I mentioned earlier, 82% of Americans think that AI is a technology that should be carefully managed, and that percentage is similar to what the EU respondents have expressed. Also, we find similar demographic trends, in that women, those with lower levels of income or lower levels of education, tend to be not as supportive of developing AI.

Ariel: I went through this list, and one of the things that was on it is the potential for a US-China arms race. Can you talk a little bit about the results that you got from questions surrounding that? Do Americans seem to be concerned about a US-China arms race?

Baobao: One of the interesting findings from our survey is that Americans don’t necessarily think the US or China is the best at AI R&D, which is surprising, given that these two countries are probably the best. That’s a curious fact that I think we need to be cognizant of.

Ariel: I want to interject there, and then we can come back to my other questions, because I was really curious about that. Is that a case of the way you asked it—it was just, you know, “Is the US in the lead? Is China in the lead?”—as opposed to saying, “Do you think the US or China are in the lead?” Did respondents seem confused by possibly the way the question was asked, or do they actually think there’s some other country where there’s even more research happening?

Baobao: We asked this question in a way that it has been asked about general scientific achievements that Pew Research Center has asked about, so we did it such that it’s a survey experiment where half of the respondents were randomly assigned to consider the US and half of the respondents were randomly assigned to consider China. We wanted to ask this question in this manner, so we get more specific distribution of responses. When you just ask who is in the lead, you’re only allowed to put down one, whereas we give respondents a number of choices, so you can be either best in the world or above average, et cetera.

In terms of people underestimating US R&D, I think this is reflective of the public underestimating US scientific achievements in general. Pew had a similar question in a 2015 survey, and while 45% of the scientists they interviewed think that scientific achievements in the US are the best in the world, only 15% of Americans expressed the same opinion. So this could just be reflecting this general trend.

Ariel: I want to go back to my questions about the US-China arms race, and I guess it does make sense, first, to just define what you are asking about with a US-China arms race. Is that focused more on R&D, or were you also asking about a weapons race?

Baobao: This is actually a survey experiment, where we present different messages to respondents about a potential US-China arms race, and we asked both about investment in AI military capabilities as well as developing AI in a more peaceful manner, and about cooperation between the US and China in terms of general R&D. We found that Americans seem to support the US investing more in AI military capabilities, to make sure that it doesn't fall behind China's, even though it would exacerbate an AI military arms race. On the other hand, they also support the US working hard to cooperate with China to avoid the dangers of an AI arms race, and they don't seem to recognize that there's a trade-off between the two.

I think this result is important for policymakers trying not to exacerbate an arms race, or trying to prevent one: when communicating with the public, they should communicate these trade-offs. We find that messages explaining the risks of an arms race tend to decrease respondent support for the US investing more in AI military capabilities, but the other information treatments don't seem to change public perceptions.

Ariel: Do you think it’s a misunderstanding of the trade-offs, or maybe just hopeful thinking that there’s some way to maintain military might while still cooperating?

Baobao: I think this is a question that involves further investigation. I apologize that I keep saying this.

Ariel: That’s the downside to these surveys. I end up with far more questions than get resolved.

Baobao: Yes, and we’re one of the first groups who are asking these questions, so we’re just at the beginning stages of probing this very important policy question.

Ariel: With a project like this, do you expect to get more answers or more questions?

Baobao: I think in the beginning stages, we might get more questions than answers, although we are certainly getting some important answers—for instance that the American public is quite concerned about the societal impacts of AI. With that result, then we can probe and get more detailed answers hopefully. What are they concerned about? What can policymakers do to alleviate these concerns?

Ariel: Let’s get into some of the results that you had regarding trust. Maybe you could just talk a little bit about what you asked the respondents first, and what some of their responses were.

Baobao: Sure. We asked two questions regarding trust. We asked about trust in various actors to develop AI, and we also asked about trust in various actors to manage the development and deployment of AI. These actors include parts of the US government, international organizations, companies, and other groups such as universities or nonprofits. We found that among the actors that are most trusted to develop AI, these include university researchers and the US military.

Ariel: That was a rather interesting combination, I thought.

Baobao: I would like to give it some context. In general, trust in institutions is low among the American public. Particularly, there’s a lot of distrust in the government, and university researchers and the US military are the most trusted institutions across the board, when you ask about other trust issues.

Ariel: I would sort of wonder if there’s political sides with which people are more likely to trust universities and researchers versus trust the military. Is that across the board respondents on either side of the political aisle trusted both, or were there political demographics involved in that?

Baobao: That’s something that we can certainly look into with our existing data. I would need to check and get back to you.

Ariel: The other thing that I thought was interesting with that—and we can get into the actors that people don't trust in a minute—but I know I hear a lot of concern that Americans don't trust scientists. As someone who does a lot of science communication, I think that concern is overblown. I think there is actually a significant amount of trust in scientists; there are just certain areas where it's less. And I was sort of wondering what you've seen in terms of trust in science, and if the results of this survey have impacted that at all.

Baobao: I would like to add that among the actors that we asked who are currently building AI or planning to build AI, trust is relatively low amongst all these groups.

Ariel: Okay.

Baobao: So, even with university scientists: 50% of respondents say that they have a great amount of confidence or a fair amount of confidence in university researchers developing AI in the interest of the public, so that’s better than some of these other organizations, but it’s not super high, and that is a bit concerning. And in terms of trust in science in general—I used to work in the climate policy space before I moved into AI policy, and there, it’s a question that we struggle with in terms of trust in expertise with regards to climate change. I found that in my past research, communicating the scientific consensus in climate change is actually an effective messaging tool, so your concerns about distrust in science being overblown, that could be true. So I think going forward, in terms of effective scientific communication, having AI researchers deliver an effective message: I think that could be important in bringing the public to trust AI more.

Ariel: As someone in science communication, I would definitely be all for that, but I’m also all for more research to understand that better. I also want to go into the organizations that Americans don’t trust.

Baobao: I think in terms of tech companies, they're not perceived as untrustworthy across the board. I think trust is still relatively high for tech companies, besides Facebook. People really don't trust Facebook, and that could be because of all the recent coverage of Facebook violating data privacy, the Cambridge Analytica scandal, digital manipulation on Facebook, et cetera. We conducted this survey a few months after the Cambridge Analytica Facebook scandal had been in the news, but we've also run some pilot surveys before all that press coverage of the scandal broke, and we also found that people distrust Facebook. So it might be something particular to the company, although that's a cautionary tale for other tech companies: they should work hard to make sure that the public trusts their products.

Ariel: So I’m looking at this list, and under the tech companies, you asked about Microsoft, Google, Facebook, Apple, and Amazon. And I guess one question that I have—the trust in the other four, Microsoft, Google, Apple, and Amazon appears to be roughly on par, and then there’s very limited trust in Facebook. But I wonder, do you think it’s just—since you’re saying that Facebook also wasn’t terribly trusted beforehand—do you think that has to do with the fact that we have to give so much more personal information to Facebook? I don’t think people are aware of giving as much data to even Google, or Microsoft, or Apple, or Amazon.

Baobao: That could be part of it. So, I think going forward, we might want to ask more detailed questions about how people use certain platforms, or whether they’re aware that they’re giving data to particular companies.

Ariel: Are there any other reasons that you think could be driving people to not trust Facebook more than the other companies, especially as you said, with the questions and testing that you’d done before the Cambridge Analytica scandal broke?

Baobao: Before the Cambridge Analytica Facebook scandal, there was a lot of news coverage around the 2016 elections of vast digital manipulation on Facebook and on social media, so that could be driving the results.

Ariel: Okay. Just to be consistent and ask you the same question over and over again, with this, what did you find surprising and what was on par with your expectations?

Baobao: I suppose I don’t find the Facebook results that unsurprising, given its negative press coverage, and also from our pilot results. What I did find surprising is the high levels of trust in the US military to develop AI, because I think some of us in the AI policy community are concerned about military applications of AI, such as lethal autonomous weapons. But on the other hand, Americans seem to place a high general level of trust in the US military.

Ariel: Yeah, that was an interesting result. So if you were going to move forward, what are some questions that you would ask to try to get a better feel for why the trust is there?

Baobao: I think I would like to ask some questions about particular uses or applications of AI these various actors are developing. Sometimes people aren’t aware that the US military is perhaps investing in this application of AI that they might find problematic, or that some tech companies are working on some other applications. I think going forward, we might do more of these survey experiments, where we give information to people and see if that increases or decreases trust in the various actors.

Ariel: What did Americans think of high-level machine intelligence and AI?

Baobao: What we found is that the public thinks, on balance, it will be more bad than good: we have 15% of respondents who think it will be extremely bad, possibly leading to human extinction, and that's a concern. On the other hand, only 5% think it will be extremely good. There's a lot of uncertainty. To be fair, it is a technology that a lot of people don't understand, so 18% said, "I don't know."

Ariel: What do we take away from that?

Baobao: I think this also reflects on our previous findings that I talked about, where Americans expressed concern about where AI is headed: that there are people with serious reservations about AI’s impact on society. Certainly, AI researchers and policymakers should take these concerns seriously, invest a lot more research into how to prevent the bad outcomes and how to make sure that AI can be beneficial to everyone.

Ariel: Were there groups who surprised you by either being more supportive of high-level AI and groups who surprised you by being less supportive of high-level AI?

Baobao: I think the results for support of developing high-level machine intelligence versus support for developing AI, they’re quite similar. The correlation is quite high, so I suppose nothing is entirely surprising. Again, we find that people with CS or engineering degrees tend to have higher levels of support.

Ariel: I find it interesting that people who have higher incomes seem to be more supportive as well.

Baobao: Yes. That’s another result that’s pretty consistent across the two questions. We also performed analysis looking at these different levels of support for developing high-level machine intelligence, controlling for support of developing AI, and what we find there is that those with CS or programming experience have greater support of developing high-level machine intelligence, even controlling for support of developing AI. So there, it seems to be another tech optimism story, although we need to investigate further.

Ariel: And can you explain what you mean when you say that you're analyzing the support for developing high-level machine intelligence with respect to the support for AI? What distinction are you making there?

Baobao: Sure. So we use a multiple linear regression model, where we’re trying to predict support for developing high-level machine intelligence using all these demographic characteristics, but also including respondent’s support for developing AI, to see if there’s something driving the support for developing high-level machine intelligence in spite of controlling for developing AI. And we find that controlling for support for developing AI, having CS or programming experience is further correlated with support of developing high-level machine intelligence. I hope that makes sense.
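A hedged sketch of the kind of regression Zhang describes; the column names and numbers below are hypothetical stand-ins for illustration, not the report's actual data or specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the survey data; columns are illustrative.
survey_df = pd.DataFrame({
    "support_hlmi":  [3, 4, 2, 5, 1, 4, 3, 2],  # support for high-level machine intelligence
    "support_ai":    [4, 4, 2, 5, 2, 3, 3, 2],  # general support for developing AI
    "cs_experience": [1, 1, 0, 1, 0, 1, 0, 0],  # CS/programming experience indicator
    "income":        [60, 90, 40, 120, 35, 80, 55, 45],
})

# Controlling for general support of AI: does CS/programming experience
# still predict support for high-level machine intelligence?
model = smf.ols("support_hlmi ~ support_ai + cs_experience + income",
                data=survey_df).fit()
print(model.params)
```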

Ariel: For the purposes of the survey, how do you distinguish between AI and high-level machine intelligence?

Baobao: We defined AI as computer systems that perform tasks or make decisions that usually require human intelligence. So that’s a more general definition, versus high-level machine intelligence defined in such a way where the AI is doing most economically relevant tasks at the level of the median human.

Ariel: Were there inconsistencies between those two questions, where you were surprised to find support for one and not support for the other?

Baobao: We can sort of probe it further, to see if there are people who answered differently on those two questions. We haven't looked into it, but certainly that's something we can do with our existing data.

Ariel: Were there any other results that you think researchers specifically should be made aware of, that could potentially impact the work that they’re doing in terms of developing AI?

Baobao: I guess here’s some general recommendations. I think it’s important for researchers or people working in an adjacent space to do a lot more scientific communication to explain to the public what they’re doing—particularly maybe AI safety researchers, because I think there’s a lot of hype about AI in the news, either how scary it is or how great it will be, but I think some more nuanced narratives would be helpful for people to understand the technology.

Ariel: I’m more than happy to do what I can to try to help there. So for you, what are your next steps?

Baobao: Currently, we’re working on two projects. We’re hoping to run a similar survey in China this year, so we’re currently translating the questions into Chinese and changing the questions to have more local context. So then we can compare our results—the US results with the survey results from China—which will be really exciting. We’re also working on surveying AI researchers about various aspects of AI, both looking at their predictions for AI development timelines, but also their views on some of these AI governance challenge questions.

Ariel: Excellent. Well, I am very interested in the results of those as well, so I hope you’ll keep us posted when those come out.

Baobao: Yes, definitely. I will share them with you.

Ariel: Awesome. Is there anything else you wanted to mention?

Baobao: I think that’s it.

Ariel: Thank you so much for joining us.

Baobao: Thank you. It’s a pleasure talking to you.


AI Alignment Podcast: Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell (Beneficial AGI 2019)

What motivates cooperative inverse reinforcement learning? What can we gain from recontextualizing our safety efforts from the CIRL point of view? What possible role can pre-AGI systems play in amplifying normative processes?

Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell is the eighth podcast in the AI Alignment Podcast series, hosted by Lucas Perry and recorded at the Beneficial AGI 2019 conference in Puerto Rico. For those of you that are new, this series covers and explores the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, Lucas will speak with technical and non-technical researchers across areas such as machine learning, governance, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

In this podcast, Lucas spoke with Dylan Hadfield-Menell. Dylan is a 5th year PhD student at UC Berkeley advised by Anca Dragan, Pieter Abbeel and Stuart Russell, where he focuses on technical AI alignment research.

Topics discussed in this episode include:

  • How CIRL helps to clarify AI alignment and adjacent concepts
  • The philosophy of science behind safety theorizing
  • CIRL in the context of varying alignment methodologies and its role
  • Whether short-term AI can be used to amplify normative processes

You can follow Dylan here and find the Cooperative Inverse Reinforcement Learning paper here. You can listen to the podcast above or read the transcript below.

Lucas: Hey everyone, welcome back to the AI Alignment Podcast series. I’m Lucas Perry and today we will be speaking for a second time with Dylan Hadfield-Menell on cooperative inverse reinforcement learning, the philosophy of science behind safety theorizing, CIRL in the context of varying alignment methodologies, and whether short-term AI can be used to amplify normative processes. This time it just so happened to be an in-person discussion at Beneficial AGI 2019, FLI’s sequel to the Beneficial AI 2017 conference at Asilomar.

I have a bunch more conversations that resulted from this conference to post soon, and you can find more details about the conference in the coming weeks. As always, if you enjoy this podcast, please subscribe or follow us on your preferred listening platform. As many of you will already know, Dylan is a fifth year Ph.D. student at UC Berkeley, advised by Anca Dragan, Pieter Abbeel, and Stuart Russell, where he focuses on technical AI Alignment research. And so without further ado, I’ll give you Dylan.

Thanks so much for coming on the podcast again, Dylan, it’s been like a year or something. Good to see you again.

Dylan: Thanks. It’s a pleasure to be here.

Lucas: So just to start off, we can go ahead and begin speaking a little bit about your work on cooperative inverse reinforcement learning and whatever sorts of interesting updates or explanation you have there.

Dylan: Thanks. For me, working in cooperative IRL has been a pretty long process; it really sort of dates back to the start of my second year in my PhD, when my advisor came back from a yearlong sabbatical and suggested that we entirely change the research direction we were thinking about.

That was to think about AI Alignment and AI Safety and associated concerns that that might bring. And our first attempt at really doing research in that area was to try to formalize: what’s the problem that we’re looking at, and what is the space of parameters and the space of solutions that we should be thinking about in studying that problem?

And so it led us to write Cooperative Inverse Reinforcement Learning. Since then I’ve had a large number of conversations where I’ve had incredible difficulty trying to convey what it is that we’re actually trying to do here and what exactly that paper and idea represents with respect to AI Safety.

One of the big updates for me, and one of the big changes since we spoke last, is getting a little bit of a handle on what the value of that system really is. So for me, I’ve come around to the point of view that really what we were trying to do with cooperative IRL was to propose an alternative definition of what it means for an AI system to be effective or rational in some sense.

And so there’s a story you can tell about artificial intelligence, which is that we started off and we observed that people were smart and they were intelligent in some way, and then we observed that we could get computers to do interesting things. And this posed the question of can we get computers to be intelligent? We had no idea what that meant, no idea how to actually nail it down and we discovered that in actually trying to program solutions that looked intelligent, we had a lot of challenges.

So one of the big things that we did as a field was to look over next door into the economics department in some sense, to look at those sorts of models that they have of decision theoretic rationality, really looking at homo economicus as an ideal to shoot for. From that perspective, actually a lot of the field of AI has shifted to be about effective implementations of homo economicus.

In my terminology, this is about systems that are effectively individually rational. These are systems that are good at optimizing for their goals, and a lot of the concerns that we have about AI Safety are that systems optimizing for their own goals could actually lead to very bad outcomes for the rest of us. And so what cooperative IRL attempts to do is to understand what it would mean for a human robot system to behave as a rational agent.

In a sense, we’re moving away from having a box drawn around the AI system or the artificial component of the system to having that agent box drawn around the person and the system together, and we’re trying to model the important parts of the value alignment problem in our formulation here. And in this case, we went with the simplest possible set of assumptions, which are basically that we have a static set of preferences that are the human’s preferences that they’re trying to optimize. This is effectively the human’s welfare.

The world is fully observable, and the robot and the person are both working to maximize the human’s welfare, but there is this information bottleneck, this information asymmetry, that we think is a fundamental component of the value alignment problem. And so really what cooperative IRL is, is a definition of how a human and a robot system together can be rational in the context of fixed preferences in a fully observable world state.
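For readers who want the structure Dylan is describing made concrete, here is a minimal sketch of the CIRL game as a data structure, loosely following the tuple in the 2016 paper; the names and types below are illustrative, not the paper’s code.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence, Tuple

State = Any
Action = Any
Theta = Any  # the human's static preference parameters, observed only by the human

@dataclass
class CIRLGame:
    states: Sequence[State]
    human_actions: Sequence[Action]
    robot_actions: Sequence[Action]
    # Fully observable dynamics: T(s, human_action, robot_action) -> next state.
    transition: Callable[[State, Action, Action], State]
    # Both players maximize the *human's* reward R(s, a_h, a_r, theta), but only
    # the human knows theta: the information asymmetry Dylan describes.
    reward: Callable[[State, Action, Action, Theta], float]
    # Prior over (initial state, theta); the robot reasons against this prior.
    initial_distribution: Callable[[], Tuple[State, Theta]]
    gamma: float = 0.99

# A solution is then a *pair* of strategies, one for the human and one for the
# robot, evaluated jointly on the human's reward; that joint evaluation is what
# "the human and robot system together being rational" means here.
```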

Lucas: There’s a point here about metatheory, about coming up with models and theory. It seems like the fundamental issue is that, given how insanely complex AI Alignment is, trying to converge on the most efficacious model is very, very difficult. People keep flicking back and forth about theoretically how we’re actually going to do this, even in very grid world or toy environments. So it seems very, very hard to isolate the best variables, or which variables can be modeled and tracked in ways that are going to help us most.

Dylan: So, I definitely think that this is not an accurate model of the world and I think that there are assumptions here which, if not appropriately reexamined, would lead to a mismatch between the real world and things that work in theory.

Lucas: Like human beings having static preferences.

Dylan: So for example, yes, I don’t claim to know what human preferences really are, and this theory is not an attempt to say that they are static. It is an attempt to identify a problem related to the one that we’re really faced with, one that we can actually make technical and theoretical progress on, and that will hopefully lead to insights that may transfer out towards other situations.

I certainly recognize that what I’m calling a theta in that paper is not really the same thing that everyone talks about when we talk about preferences. In talking with philosophers, I’ve discovered I think it’s a little bit closer to things like welfare in a moral philosophy context, which maybe you could think about as being a more static object that you would want to optimize.

In some sense, theta really is an encoding of what you would like the system to do in general; that’s what we’re assuming there.

Lucas: Because it’s static.

Dylan: Yes, and to the extent that you want to have that be changing over time, I think that there’s an interesting theoretical question as to how that actually is different, and what types of changes that leads to and whether or not you can always reduce something with non-static preferences to something with static preferences from a mathematical point of view.

Lucas: I can see how moving from static to changing over time just makes it so much more insanely complex.

Dylan: Yeah, and it’s also really complex at the level of it being philosophically unclear what the right thing to do is.

Lucas: Yeah, that’s what I mean. Yeah, you don’t even know what it even means to be aligning as the values are changing, like whether or not the agent even thinks that it just moved in the right direction or not.

Dylan: Right, and I also even think I want to point out how uncertain all of these things are. We as people are hierarchical organizations of different behaviors and observation systems and perception systems. And we believe we have preferences, we have a name for that, but there is a sense in which that is ultimately a fiction of some kind.

It’s a useful tool that we have to talk about ourselves and to talk about others, that facilitates interaction and cooperation. And so given that I do not know the answer to these philosophical questions, what can I try to do as a technical researcher to push the problem forward and to make actual progress?

Lucas: Right, and so it’s sort of, again, a metatheoretical point about what people are trying to do right now in the context of AI Alignment. It seems that the best thing for people to be doing is to come up with these theoretical models and frameworks, which have a minimum set of assumptions that may be almost like the real world but are not, and then to make theoretical progress there that will hopefully, as you said, transfer to other problems in the future, as ML and deep learning get better and the other tools are getting better, so that we’ll actually have the tools to make it work with more complicated assumptions.

Dylan: Yes, I think that’s right. The way that I view this is: we had AI, this broad, vague thing. Through the course of AI research, we kind of got to Markov decision processes as a sort of coordinating theory around what it means for us to design good agents, and cooperative IRL is an attempt to take a step from Markov decision processes more closely towards the set of problems that we want to study.

Lucas: Right, and so I think this is a really interesting point that I actually haven’t talked to anyone else about, and if you have a few more words about it, I think it would be really interesting. So just in terms of being a computer scientist and being someone who is working on the emerging theory of a field: I think it’s often unclear what the actual theorizing process is behind how people get to CIRL. How did someone get to debate? How did someone get to iterated amplification?

It seems like you first identify problems which you see to be crucial and then there are some sorts of epistemic and pragmatic heuristics that you apply to try and begin to sculpt a model that might lead to useful insight. Would you have anything to correct or unpack here?

Dylan: I mean, I think that is a pretty good description of a pretty fuzzy process.

Lucas: But like being a scientist or whatever?

Dylan: Yeah. I don’t feel comfortable speaking for scientists in general here, but I could maybe say a little bit more about my particular process, which is that I try to think about how I’m looking at the problem differently from other people based on different motivations and different goals that I have. And I try to lean into how that can push us in different directions. There’s a lot of other really, really smart people who have tried to do lots of things.

You have to maintain an amount of intellectual humility about your ability to out-think the historical components of the field. And for me, I think that in particular for AI Safety, it’s thinking about reframing what is the goal that we’re shooting towards as a field.

Lucas: Which we don’t know.

Dylan: We don’t know what those goals are, absolutely. And I think that there is a sense in which the field has not re-examined those goals incredibly deeply. For a little bit, I think that it’s so hard to do anything that looks intelligent in the real world that we’ve been trying to focus on that individually rational Markov decision process model. And I think that a lot of the concerns about AI Safety are really a call for AI as a field to step back and think about what we’re trying to accomplish in the world and how we can actually try to achieve beneficial outcomes for society.

Lucas: Yeah, and I guess there’s a sociological phenomenon within scientists, or people who are committed to empirical things. Reanalyzing what the goal of AI Alignment is touches the area of moral philosophy and ethics and other things, which for empirically leaning, rational people can be distasteful, because you can’t just take a telescope to the universe and see a list of what you ought to do.

And so it seems like people like to defer on these questions. I don’t know. Do you have anything else to add here?

Dylan: Yeah. I think computer scientists in particular are selected to be people who like having boxed-off problems that they know how to solve and feel comfortable with, and leaning into getting more people with a humanities bent into computer science, and broadly into AI, and AI Safety especially, is really important. I think that’s a broad call that we’re seeing come from society generally.

Lucas: Yeah, and I think it also might be wrong, though, to model the humanities questions as those which are not in boxes and cannot be solved. That’s sort of a logical positivist thing to say: that on one end we have the hard things, where you just have to look at the world enough and you’ll figure it out, and then there’s the soft, squishy things which deal with abstractions that don’t have real answers, but people with fluffy degrees need to come up with things that seem right but aren’t really right.

Dylan: I think it would be wrong to take what I just said in that direction, and if that’s what it sounds like I definitely want to correct that. I don’t think there is a sense in which computer science is a place where there are easy right answers, and that the people in humanities are sort of waving their hands and sort of fluffing around.

This is sort of leaning into making this more of an AI value alignment kind of framing, or thinking about it that way. But when I think about bringing AI systems into the world, I think about which things you can afford to get wrong in your specification and which things you can not afford to get wrong in your specification.

In this sense, specifying physics incorrectly is much, much better than specifying the objective incorrectly, at least by default. And the reason for that is that what happens to the world when you push it is a question that you can answer from your observations. And so if you start off in the wrong place, as long as you’re learning and adapting, I can reasonably expect my systems to correct for that. Or at least the goal of successful AI research is that your systems will effectively adapt to that.

However, the task that your system is supposed to do is sort of arbitrary in a very fundamental sense. And from that standpoint, it is on you as the system designer to make sure that objective is specified correctly. When I think about what we want to do as a field, I end up taking a similar lens, in that there’s a sense in which we as researchers and people and society and philosophers and all of it are trying to figure out what we’re trying to do and what we want to task the technology with, and the directions that we want to push it in. And then there are questions of what will the technology be like and how should it function that will be informed by that and shaped by that.

And I think that there is a sense in which that is arbitrary. Now, what is right? That I don’t really know the answer to and I’m interested in having those conversations, but they make me feel uneasy. I don’t trust myself on those questions, and that could mean that I should learn how to feel more uneasy and think about it more and in doing this research I have been kind of forced into some of those conversations.

But I also do think that for me at least I see a difference between what can we do and what should we do. And thinking about what should we do as a really, really hard question that’s different than what can we do.

Lucas: Right. And so I wanna move back towards CIRL, but just to sort of wrap up here on our philosophy of science musings: a thought I had while you were going through that was, at least for now, what I think is fundamentally shared between fields that deal with things that matter is whether their concepts have meaningful referents in the world. Like, do your concepts refer to meaningful things?

Putting ontology aside, whatever love means or whatever value alignment means, these are meaningful referents for people. And I guess for now, if our concepts are actually referring to meaningful things in the world, then it seems important.

Dylan: Yes, I think so. Although, I’m not totally sure I understood that.

Lucas: Sure, that’s fine. People will say that humanities or philosophy doesn’t have these boxes with like well-defined problems and solutions because they either don’t deal with real things in the world or the concepts are so fuzzy that the problems are sort of invented and illusory. Like how many angels can stand on the head of a pin? Like the concepts don’t work, aren’t real and don’t have real referents, but whatever.

And I’m saying the place where philosophy and ethics and computer science and AI Alignment should at least come together for now is where the concepts have meaningful referents in the world.

Dylan: Yes, that is something that I absolutely buy. Yes, I think there’s a very real sense in which those questions are harder, but that doesn’t mean they’re less real or less important.

Lucas: Yes, and that’s the only point where I wanted to push against logical positivism.

Dylan: No, I don’t mean to say that the answers are wrong, it’s just that they are harder to prove in a real sense.

Lucas: Yeah. I mean, I don’t even know if they have answers or if they do or if they’re just all wrong, but I’m just open to it and like more excited about everyone coming together thing.

Dylan: Yes, I absolutely agree with that.

Lucas: Cool. So now let’s turn back to CIRL. So you began by talking about how you and your advisors had this conceptual shift in framing, then we got into the sort of philosophy of science behind how different models and theories of alignment go. So from here: whatever else you have to say about CIRL.

Dylan: So I think for me, the upshot of concerns about advanced AI systems and negative consequences therein really is a call to recognize that the goal of our field is AI Alignment. Almost any AI research that’s not AI Alignment is solving a subproblem, and viewing it only as solving that subproblem is a mistake.

Ultimately, we are in the business of building AI systems that integrate well with humans and human society. And if we don’t take that as a fundamental tenet of the field, I think that we are potentially in trouble, and I think that that is a perspective that I wish was more pervasive throughout artificial intelligence generally.

Lucas: Right, so I think I do want to move into this view where safety is a normal thing. Like Stuart Russell will say, “People who build bridges all care about safety; there isn’t a subsection of bridge builders who work on bridge safety, everyone is part of bridge safety.” And I definitely want to get into that, but I also sort of want to get a little bit more into CIRL and why you think it’s so motivating, why this theoretical framing and shift is important or illuminating, and what the specific content of it is.

Dylan: The key thing is that what it does is point out that it doesn’t make sense to talk about how well your system is doing without talking about the way in which it was instructed and the type of information that it got. No AI system exists on its own, every AI system has a designer, and it doesn’t make sense to talk about the functioning of that system without also talking about how that designer built it, evaluated it and how well it is actually serving those ends.

And I don’t think this is some brand new idea that no one’s ever known about; I think this is something that is incredibly obvious to practitioners in the field once you point it out. The process whereby a robot learns to navigate a maze or vacuum a room is not: there is an objective, it optimizes it, and then it does it.

What it is, is that there is a system designer who writes down an objective, selects an optimization algorithm, observes the final behavior of that optimization algorithm, goes back, modifies the objectives, modifies the algorithm, changes hyperparameters, and then runs it again. And there’s this iterative process whereby your system eventually ends up getting to the behavior that you wanted it to have. And AI researchers have tended to draw a box around the final component of that process and call that the AI.

Lucas: Yeah. At least subjectively, and I guess this is sort of illuminated by meditation and Buddhism: if you’re a computer scientist and you’re just completely identified with the process of doing computer science, you’re just identified with the problem. And if you have a little bit of mindfulness and you see, “Okay, I’m in the context of a process where I’m an agent trying to align another agent,” and you’re not just completely identified with the process but see its unfolding, then you can do more of a meta-analysis, which takes a broader view of the problem and can then, I guess, hopefully work on improving it.

Dylan: Yeah, I think that’s exactly right, or at least as I understand it, that’s exactly right. And to be a little bit specific about this, we have had these engineering principles and skills that are not in the papers, but they are things that are passed down from grad student to grad student within a lab. There’s institutional knowledge that exists within a company for how you actually verify and validate your systems, and cooperative IRL is an attempt to take all of that structure that AI systems have existed within and try to bring it into the theoretical frameworks that we actually work with.

Lucas: So can you paint a little picture of what the CIRL model looks like?

Dylan: It exists in a sequential decision making context and we assume we have states of the world and a transition diagram that basically tells us how we get to another state given the previous state and actions from the human and the robot. But the important conceptual shift that it makes is the space of solutions that we’re dealing with are combinations of a teaching strategy and a learning strategy.

There is a commitment on the side of the human designers or users of the systems to provide data that is in some way connected to the objectives that they want to be fulfilled. That data can take many forms, it could be in the form of writing down a reward function that ranks a set of alternatives, it could be in the form of providing demonstrations that you expect your system to imitate. It could be in the form of providing binary comparisons between two clearly identified alternatives.

And the other side of the problem is: what is the learning strategy that we use? And this is the question of how the robot is actually committing to respond to the observations that we’re giving it about what we want it to do: in the case of a pre-specified proxy reward given a literal interpretation by a reinforcement learning system, let’s say. What the system is committing to doing is optimizing under that set of trajectory rankings and preferences, based off the simulation environment that it’s in, or the actual physical environment that it’s exploring.

When we shift to something like inverse reward design, which is a paper that we released last year, what that says is we’d like the system to look at this ranking of alternatives and actually try to blow that up into a larger uncertainty set over the set of possible rankings consistent with it, and then when you go into deployment, you may be able to leverage that uncertainty to avoid catastrophic failures or generally just unexpected behavior.
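Here is a hedged sketch of the inverse reward design idea just described: treat the designer’s proxy reward as evidence about the true reward, maintain an uncertainty set over rewards consistent with it, and act risk-aversely at deployment. The numbers and the similarity weighting below are crude illustrative stand-ins, not the paper’s actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 candidate "true" reward vectors over 4 trajectory features, plus the
# proxy reward the designer actually wrote down.
candidate_rewards = rng.normal(size=(50, 4))
proxy = np.array([1.0, 0.5, 0.0, 0.0])

# Weight each candidate by how plausibly the designer would have written this
# proxy if the candidate were the true reward. The paper derives this from
# behavior in the training environment; dot-product similarity is a stand-in.
weights = np.exp(candidate_rewards @ proxy)
posterior = weights / weights.sum()

def risk_averse_value(features: np.ndarray, quantile: float = 0.1) -> float:
    """Score a trajectory's feature counts by a low quantile of its reward
    under the posterior, rather than by the proxy reward alone."""
    values = candidate_rewards @ features
    order = np.argsort(values)
    cumulative = np.cumsum(posterior[order])
    return float(values[order][np.searchsorted(cumulative, quantile)])

# Trajectories whose features the proxy never distinguished get penalized by
# their worst plausible readings, which is one way to avoid the unexpected
# behavior Dylan mentions.
print(risk_averse_value(np.array([0.0, 1.0, 2.0, 0.0])))
```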

Lucas: So this is another point I think that you and I discussed briefly; maybe it was actually with Rohin. It seems like often, in terms of AI Alignment, it’s almost like we’re reasoning from nowhere about abstract agents, and that sort of makes the problem extremely difficult. Often, if you just look at human examples, it becomes super mundane and easy. This conceptual shift can almost, I think, be framed super simply as the difference between a teacher trying to teach someone, and a teacher realizing that the teacher is a person that is teaching another student, so the teacher can think better about how to teach, and also about the process between the teacher and the student and how to improve that at a higher level of abstraction.

Dylan: I think that’s the direction that we’re moving in. What I would say is that, as AI practitioners, we are teaching our systems how to behave, and we have developed our strategies for doing that.

And now that we’ve developed a bunch of strategies that sort of seem to work, I think it’s time for us to develop a more rigorous theory of how those teaching strategies actually interact with the final performance of the system.

Lucas: Cool. Is there anything else here that you would like to say about CIRL, or any really important points you would like to get across to people who are interested in technical AI Alignment, or to CS students?

Dylan: I think the main point that I would make is that research and thinking about powerful AI systems is valuable, even if you don’t think that that’s what’s going to happen. You don’t need to be motivated by those sets of problems in order to recognize that this is actually just basic research into the science of artificial intelligence.

It’s got an incredible amount of really interesting problems and the perspectives that you adopt from this framing can be incredibly useful as a comparative advantage over other researchers in the field. I think that’d be my final word here.

Lucas: If I might just ask you one last question. We’re at Beneficial AGI 2019 right now and we’ve heard a lot of overviews of different research agendas and methodologies and models and framings for how to best go forth with AI Alignment, which include a vast range of things like work on corrigibility and interpretability and robustness, and the different research agendas and methodologies of places like MIRI, which has come out with this new framing on embedded agency, and also different views at OpenAI and DeepMind.

And Eric Drexler has also newly proposed this services-based conception of AI, where we remove the understanding of powerful AI systems, or regular AI systems, as agents, which sort of gets us away from a lot of the x-risk problems and global catastrophic risk problems and value alignment problems.

From your point of view, as someone who’s worked a lot on CIRL and is a technical alignment researcher, how do you view CIRL in this context, and how do you view all of these different emerging approaches right now in AI Alignment?

Dylan: For me, and you know, I should give a disclaimer: this is my research area and so I’m obviously pretty biased towards thinking it’s incredibly important and good. But for me at least, cooperative IRL is a uniting framework under which I can understand all of those different approaches. I believe that a services-type solution to AI Safety or AI Alignment is actually arguing for a particular type of learning strategy and implementation strategy within CIRL, and I think it can be framed within that system.

Similarly, I had some conversations with people about debate. I believe debate fits really nicely into the framework: we commit to a human strategy of judging debates between systems, and we commit to a robot strategy of splitting itself into two systems and working in that direction. So for me, it’s a way in which I can identify the commonalities between these different approaches, compare and contrast them, and then, under a set of assumptions about what the world is like, what the space of possible preferences is like, and what the space of strategies that people can implement is like, possibly get out some information about which one is better or worse, or which type of strategy is vulnerable to different types of mistakes or errors.

Lucas: Right, so I agree with all of that, the only place that I might want to push back is, it seems that maybe the MIRI embedded agency stuff subsumes everything else. What do you think about that?

Because the framing is: whenever AI researchers draw these models, there are these conceptions of information channels, right, which are selected by the researchers and which we control. But the universe is really just a big non-dual happening of stuff, and agents are embedded in the environment, almost identical with processes within the environment, and it’s much more fuzzy where the dense causal streams are and where the little causal streams are and stuff like that. It just seems like the MIRI stuff maybe subsumes CIRL and everything else a little bit more, but I don’t know.

Dylan: I certainly agree that that’s the one that’s hardest to fit into the framework, but I would also say that, in my mind, I don’t know what an agent is. I don’t know how to operationalize an agent, I don’t actually know what that means in the physical world, and I don’t know what it means to be an agent. What I do know is that there is a strategy of some sort that we can think of as governing the way that the system performs and behaves.

I want to be very careful about baking in assumptions beforehand. And it feels to me like embedded agency is a framework where I don’t fully understand the set of assumptions being made, and I don’t necessarily understand how they relate to the systems that we’re actually going to build.

Lucas: When people say that an agent is a fuzzy concept, I think that that might be surprising to a lot of people who have thought somewhat about the problem, because it’s like: obviously I know what an agent is, it’s different from all the other dead stuff in the world, it has goals and it’s physically confined and unitary.

Just imagine abiogenesis, how life began: is the first relatively self-replicating chain of hydrocarbons an agent? You can go from really small systems to really big systems, which can exhibit certain properties or principles that feel a little bit agenty, but where the label may not be useful. And so I guess if we’re going to come up with a definition of it, it should just be something useful for us or something.

Dylan: I think “I’m not sure” is the most accurate answer we can give here. I wish I had a better answer for what this was. Maybe I can share one of the thought experiments that convinced me I was pretty confused about what an agent is.

Lucas: Yeah, sure.

Dylan: It came from thinking about what value alignment is. So suppose we think about value alignment between two agents, and those are both perfectly rational actors, making decisions in the world perfectly in accordance with their values, with full information. I can sort of write down a definition of value alignment, which is basically: you’re using the same ranking over alternatives that I am.
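As a toy formalization of the idealized definition Dylan just gave, same ranking over alternatives, one could write something like the following; this is an illustration of the definition, not Dylan’s actual math.

```python
from itertools import combinations
from typing import Any, Callable, Sequence

def value_aligned(u_human: Callable[[Any], float],
                  u_robot: Callable[[Any], float],
                  alternatives: Sequence[Any]) -> bool:
    """Aligned, in this idealized sense, iff the two utility functions induce
    the same ranking over every pair of alternatives (ties included)."""
    return all(
        (u_human(a) > u_human(b)) == (u_robot(a) > u_robot(b)) and
        (u_human(a) < u_human(b)) == (u_robot(a) < u_robot(b))
        for a, b in combinations(alternatives, 2)
    )

# Same ordering despite different scales still counts as aligned:
print(value_aligned(lambda x: x, lambda x: 2 * x + 1, [0, 1, 2, 3]))  # True
```

Note that this definition only bites in the perfectly rational, full-information case; as the conversation turns to next, it gives no grip on what partial or bounded alignment means.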

But a question that we really wanted to try to answer that feels really important is what does it mean to be value aligned in a partial context? If you were a bounded agent, if you’re not a perfectly rational agent, what does it actually mean for you to be value aligned? That was the question that we also didn’t really know how to answer.

Lucas: My initial reaction is the kind of agent that tries its best with its limited rationality to be like the former thing that you talked about.

Dylan: Right, so that leads to a question that we thought about. So suppose I have a chess-playing agent, and it is my chess-playing agent, so I want it to win the game for me. Suppose it’s using the correct goal test, so it is actually optimizing for my values. Let’s say it’s only searching out to depth three, so it’s pretty dumb as far as chess players go.

Do I think that that is an agent that is value aligned with me? Maybe. I mean, certainly I can tell the story in one way that it sounds like it is. It’s using the correct objective function, it’s doing some sort of optimization thing. If it ever identifies a checkmate within three moves, it will always find that and get that back to me. And so that’s a sense in which it feels like it is a value aligned agent.

On the other hand, what if it’s using a heuristic function which is chosen poorly, or in something closer to an adversarial manner? So now it’s a depth-three agent that is still using the correct goal test, but it’s searching in a way that is adversarially selected. Is that a partially value aligned agent?

Lucas: Sorry, I don’t understand what it means to have the same objective function, but be searching at depth three in an adversarial way.

Dylan: In particular, when you’re doing a chess search engine, there is your sort of goal test that you run on the leaves of your search, to see if you’ve actually achieved winning the game. But because you’re only doing a partial search, you often have to rely on using a heuristic of some sort to rank different positions.

Lucas: To cut off parts of the tree.

Dylan: Somewhat to cut off parts of the tree, but also just that you’ve got different positions, neither of which is winning, and you need to choose between those.

Lucas: All right. So there’s a heuristic, like it’s usually good to take the center or like the queen is something that you should always probably keep.

Dylan: Or these things like values of pieces that you can add up, which was I think one of the early ones …

Lucas: Yeah, and just as an important note now in terms of the state of machine learning, the heuristics are usually chosen by the programmer. Are systems able to come up with heuristics themselves?

Dylan: Well, so I’d say one of the big things in AlphaZero or AlphaGo as an approach is that they applied learning to the heuristic itself: they figured out a way to use the search process to gradually improve the heuristic, and to have the heuristic actually improve the search process.

And so there’s sort of a feedback loop set up in those types of expert iteration systems. My point here is that when I described that search algorithm to you, I didn’t mention what heuristic it was using at all. And so you had no reason to tell me whether or not that system was partially value aligned, because actually that heuristic is 100 percent of what’s going to determine the final performance of the system and whether or not it’s actually helping you.
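Here is a minimal sketch of the depth-limited search being discussed; the game-specific functions passed in (moves, apply_move, is_terminal, terminal_value) are hypothetical placeholders for a real chess implementation.

```python
def minimax(state, depth, maximizing, heuristic, moves, apply_move,
            is_terminal, terminal_value):
    """Depth-limited minimax: the true goal test only fires at actual endings;
    below the depth horizon, the heuristic is all the agent has."""
    if is_terminal(state):
        return terminal_value(state)  # the actual goal test: win, loss, or draw
    if depth == 0:
        return heuristic(state)       # everything past the horizon is this guess
    values = [
        minimax(apply_move(state, m), depth - 1, not maximizing, heuristic,
                moves, apply_move, is_terminal, terminal_value)
        for m in moves(state)
    ]
    return max(values) if maximizing else min(values)

# Two depth-3 agents with the identical goal test but different heuristics can
# behave arbitrarily differently: whenever no forced win lies within the search
# depth, the heuristic alone decides the move.
```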

And then the sort of final point I have here, that I might be able to confuse you with a little bit more, is: what if we just sort of said, “Okay, forget this whole searching business. I’m just going to precompute all the solutions from my search algorithm and I’m going to give you a policy of: when you’re in this position, do this move. When you’re in that position, do that move.” And what would it mean for that policy to be value aligned with me?

Lucas: If it did everything that you would have done if you were the one playing the chess game. Like is that value alignment?

Dylan: That’s certainly perfect imitation, and maybe we [crosstalk 00:33:04]

Lucas: Perfect imitation isn’t necessarily value alignment because you don’t want it to perfectly imitate you, you want it to win the game.

Dylan: Right.

Lucas: Isn’t the easiest way to just sort of understand this is that there are degrees of value alignment and value alignment is the extent to which the thing is able to achieve the goals that you want?

Dylan: Somewhat, but the important thing here is trying to understand what these intuitive notions that we’re talking about actually mean for the mathematics of sequential decision making. And so there’s a sense in which you and I can talk about partial value alignment and the agents that are trying to help you. But if we actually look at the math of the problem, it’s actually very hard to understand how that actually translates. Like mathematically I have lots of properties that I could write down and I don’t know which one of those I want to call partial value alignment.

Lucas: You know more about the math than I do, but isn’t the percentage chance of a thing achieving the goal the degree to which it’s value aligned? If you’re certain that the end towards which it’s striving is the end towards which you want it to strive?

Dylan: Right, but that striving term is a hard one, right? Because if your goals aren’t achievable then it’s impossible to be value aligned with you in that sense.

Lucas: Yeah, you have to measure the degree to which the end towards which it’s striving is the end towards which you want it to strive, and then also measure the degree to which the way that it tries to get to what you want is efficacious or …

Dylan: Right. I think that intuitively I agree with you and I know what you mean, but it’s like: I can do things like write down a reward function and ask how well this system optimizes that reward function. And we could ask whether or not that means it’s value aligned with it or not. But to me, that just sounds like the question of whether your policy is optimal in the sort of more standard context.

Lucas: All right, so have you written about how you think that CIRL subsumes all of these other methodologies? And if it does subsume these other AI Alignment methodologies, how do you think that will influence or affect the way we should think about the other ones?

Dylan: I haven’t written that explicitly, but what I’ve tried to convey is that it’s a formalization of the type of problem we’re trying to solve. I think describing it as subsuming them is not quite right.

Lucas: It contextualizes them and it brings light to them by providing framing.

Dylan: It gives me a way to compare those different approaches and understand what’s different and what’s the same between them, and in what scenarios we expect them to work out versus not. One thing that we’ve been thinking about recently is what happens when the person doesn’t immediately know what they’re trying to do.

So if we imagine that there is in fact a static set of preferences the person’s trying to optimize, so we’re still making that assumption, but assuming that those preferences are revealed to the person over time through experience or interaction with the world: that is a richer class of value alignment problems than cooperative IRL deals with. It’s really closer to what we are attempting to do right now.

Lucas: Yeah, and I mean that doesn’t even include value degeneracy. Like, what if I get hooked on drugs in the next three years and all my values go, and my IRL agent works on the assumption that I’m always updating towards what I want, but you know …

Dylan: Yes, and I think that’s where you get these questions of changing preferences that make it hard to really think through things. I think there’s a philosophical stance you’re taking there, which is that your values have changed rather than your beliefs have changed there.

In the sense that wire-heading is a phenomenon that we see in people and in general learning agents, and if you are attempting to help a learning agent, you must be aware of the fact that wire-heading is a possibility and possibly bad. And then it’s incredibly hard to distinguish from someone who’s just found something that they really like and want to do.

When you should make that distinction or how you should make that distinction is a really challenging question, that’s not a purely technical computer science question.

Lucas: Yeah, but even at the same time, I would like to demystify it a bit. If your friend got hooked on drugs, it’s pretty obvious for you why it’s bad, it’s bad because he’s losing control, it’s bad because he’s sacrificing all of his other values. It’s bad because he’s shortening his life span by a lot.

I just mean to say again that, in this way, it’s obvious in the ways in which humans do this. So I guess taking biologically inspired approaches to understanding cognition and transferring how humans deal with these things into AI machines, at least at face value, seems like a good way of doing it, I guess.

Dylan: Yes, everything that you said I agree with. My point is that those are, in a very real sense, normative assumptions that you as that person’s friend are able to bring to the analysis of that problem, and in some ways there is an arbitrariness to labeling that as bad.

Lucas: Yeah, so the normative issue is obviously very contentious and needs to be addressed more, but at the same time society has come to very clear solutions to some normative problems; murder is basically a solved normative problem. There’s a degree to which it’s super obvious that certain normative questions are just answered, and we should I guess practice epistemic humility and whatever here, obviously.

Dylan: Right, and I don’t disagree with you on that point, but I think what I’d say is, as a research problem there’s a real question to getting a better understanding of the normative processes whereby we got to solving that question. Like what is the human normative process? It’s a collective societal system. How does that system evolve and change? And then how should machines or other intelligent entities integrate into that system without either subsuming or destroying it in bad ways? I think that’s what I’m trying to get at when I make these points. There is something about what we’re doing here as a society that gets us to labeling these things in the ways that we do and calling them good or bad.

And on the one hand, as a person I believe that there are correct answers and I know what I think is right versus what I think is wrong. And then as a scientist I want to try to take a little bit more of an outside view and try to understand: what is the process whereby we as a society, or as genetic beings, started doing that? Understanding what that process is and how that process evolves, and actually what that looks like in people now, is a really critical research program.

Lucas: So one thing that I tried to cover in my panel yesterday on what civilization should strive for is, in the short, medium, and long term, the potential role that narrow-to-general AI systems might play in amplifying human moral decision making.

Solving, as you were discussing, this sort of deliberative, normative process that human beings undergo to converge on an idea. I’m just curious to know, with more narrow systems, if you’re optimistic about ways in which AI can sort of help and elucidate our moral decision making, or work to amplify it.

And before I let you start, I guess there’s one other thing that I said, which I think Rohin Shah pointed out to me was particularly helpful in one place: beyond the moral decision making itself, narrow AI systems can help us by making the moral decisions that we implement in them faster than we could.

The way a self-driving car decides to crash is like an expression of our moral decision making in a fast, computery way. I’m just saying, beyond ways in which AI systems make moral decisions for us faster than we can, maybe in courts or other things which seem morally contentious, are there also other ways in which they can actually help the deliberative process: examining massive amounts of moral information, or value information, or analyzing something like an aggregated well-being index where we try to understand more how policies impact the wellbeing of people, or what sorts of moral decisions lead to good outcomes, whatever. So whatever you have to say to that.

Dylan: Yes, I definitely want to echo that. We can sort of get a lot of pre-deliberation into a fast timescale reaction with AI systems, and I think that that is a way for us to improve how we act and the quality of the things that we do from a moral perspective. There you do see a real path to actually bringing that to be in the world.

In terms of helping us actually deliberate better, I think that is a harder problem, one that is absolutely worth more people thinking about, but I don’t know the answers here. What I do think is that if we want a better understanding of what the deliberative process is, the correct questions to look at are not the moral questions about what’s right and what’s wrong and what we think is right and what we think is wrong, but much more questions at the level of: what is it about our evolutionary pathway that led us to thinking that these things are right or wrong?

What is it about society and the pressures that we’ve undergone and faced that led us to a place where murder is wrong in almost every society in the world? I will say the death penalty is a thing; it’s just a type of sanctioned killing. So there is a sense in which I think it’s a bit more nuanced than just that. And there’s something to be said about, I guess if I had to make my claims, what I think has sort of happened there.

So there’s something about us as creatures that evolved to coordinate and perform well in groups and pressures that, that placed on us that caused us to develop these normative systems whereby we say different things are right and wrong.

Lucas: Iterated game theory over millions of years or something.

Dylan: Something like that. Yeah, but there’s a sense in which us labeling things as right and wrong and developing the processes whereby we label things as right and wrong is a thing that we’ve been pushed towards.

Lucas: From my perspective, it feels like this is more tractable than people let on, this idea that AI is only going to be able to help in moral deliberation once it’s general. It already helps us in regular deliberation, and moral deliberation isn’t a special kind of deliberation: moral deliberation requires empirical facts about the world and about persons, just like any other kind of actionable deliberation does in domains that aren’t considered to have to do with moral philosophy or ethics or things like that.

So I’m not an AI researcher, but it seems to me like this is more tractable than people let on. The normative aspect of AI Alignment seems to be under-researched.

Dylan: Can you say a little more about what you mean by that?

Lucas: What I meant was the normative deliberative process, the difficulty in coming to normative conclusions, what the appropriate epistemic and deliberative process is for arriving at normative solutions, and how narrow AI systems can take us to a beautiful world where advanced AI systems actually lead us to post-human ethics.

If we ever want to get to a place where general systems take us to post-human ethics, why not start today with figuring out how narrow systems can work to amplify human moral decision making and deliberative processes?

Dylan: I think the hard part there is, I don’t exactly know what it means to amplify those processes. My perspective is that we as a species do not yet have a good understanding of what those deliberative processes actually represent and what form the result actually takes.

Lucas: It’s just things like giving more information, providing tons of data, analyzing the data, potentially pointing out biases. The part where they’re literally amplifying the cognitive, implicit or explicit, decision-making process is more complicated and will require more advancement in cognition and deliberation and stuff. But yeah, I still think there are more mundane ways in which AI can make us better moral reasoners and decision makers.

If I could give you like 10,000 more bits of information every day about moral decisions that you make, you would probably just be a better moral agent.

Dylan: Yes, one way to try to think about that is maybe things like VR approaches to increasing empathy. I think that that has a lot of power to make us better.

Lucas: Max always says that there’s a race between wisdom and the power of our technology, and it seems like people really aren’t taking seriously ways in which we can amplify wisdom, because wisdom is generally taken to be part of the humanities and the soft sciences. Maybe we should be taking more seriously ways in which narrow, current-day AI systems can be used to amplify the rate at which the human species accrues wisdom. Because otherwise we’re just gonna continue how we always continue, and wisdom is going to go really slowly, and then we’re going to probably learn from a bunch of mistakes.

And it’s just not going to be as good until we develop a rigorous science of making moral progress, or of using technology to amplify the progress of wisdom and moral progress.

Dylan: So in principle, what you’re saying, I don’t really disagree with it, but I also don’t know how that would change what I’m working on either, in the sense that I’m not sure what it would mean. I do not know how I would do research on amplifying wisdom. I just don’t really know what that means. And that’s not to say it’s an impossible problem. We talked earlier about how I don’t know what partial value alignment means: that’s something that you and I can talk about, and we can intuitively, I think, align on a concept, but it’s not a concept I know how to translate into actionable, concrete research problems right now.

In the same way, the idea of amplifying wisdom and making people more wise is something that I think intuitively I understand what you mean, but when I try to think about how an AI system would make someone wiser, that feels difficult.

Lucas: It can seem difficult, but I feel like, and obviously this is an open research question, if you were able to identify a bunch of variables that are most important for moral decision making, and then if you could use AI systems to gather, aggregate, compile in certain ways, and analyze moral information, again, it just seems more tractable than people seem to be letting on.

Dylan: Yeah, although I wonder how that’s different from value alignment as we’re thinking about it, right? A concrete research thing I spent a while thinking about is: how do you identify the features that a person considers to be valuable? Say we don’t know the relative tradeoffs between them.

One way you might try to solve value alignment is to have a process that identifies the features that might matter in the world, and then have a second process that identifies the appropriate tradeoffs between those features, and maybe something about diminishing returns or something like that. And that to me sounds like I just replaced values with wisdom and I’ve got sort of what you’re thinking about. I think both of those terms are similarly diffuse. I wonder if what we’re talking about is semantics, and if it’s not, I’d like to know what the difference is.

Lucas: I guess the more mundane definition of wisdom, at least in the way that Max Tegmark would use it, would be about the ways in which we use our technology. I might have specific preferences, but just because I have specific preferences that I may or may not be aligning an AI system to, does not necessarily mean that that total process, this CIRL-like process, is actually an expression of wisdom.

Dylan: Okay, can you provide a positive description of what such a process would look like? Basically what I’m saying is, I can hear the point of “I have preferences and I aligned my system to them, and that’s not necessarily a wise system,” and …

Lucas: Yeah, like I build a fire because I want to be hot, but then the fire catches my village on fire, and no longer is … That still might be value alignment.

Dylan: But isn’t [crosstalk 00:48:39] some values that you didn’t take into account when you were deciding to build the fire.

Lucas: Yeah, that’s right. So I don’t know. I’d probably have to think about this more because I guess this is something that I just sort of throwing out right now as a reaction to what we’ve been talking about. So I don’t have a very good theory of it.

Dylan: And I don’t wanna say that you need to know the right answers to these things to not have that be a useful direction to push people.

Lucas: We don’t want to use different concepts to just reframe the same problem and just make a conceptual mess.

Dylan: That’s what I’m a little bit concerned about, and that’s the thing I’m concerned about broadly. We’ve got a lot of issues that we’re thinking about and dealing with where we’re not really sure what they are.

For me, I think one of the really helpful things has been to frame the issue that I’m thinking about as: a person has a behavior that they want to implement into the world, and that’s a complex behavior that they don’t know how to identify immediately. How do you actually go about building systems that allow you to implement that behavior effectively, and evaluate that the behavior has actually been correctly implemented?

Lucas: Avoiding side effects, avoiding …

Dylan: Like all of these kinds of things that we’re sort of concerned about in AI Safety, in my mind, fall a bit more into place when we frame the problem as: I have a desired behavior that I want to exist, a response function, a policy function that I want to implement into the world. What are the technological systems I can use to implement that in a computer or a robot or what have you?

Lucas: Okay. Well, do you have anything else you’d like to wrap up on?

Dylan: No, I just, I want to say thanks for asking hard questions and making me feel uncomfortable because I think it’s important to do a lot of that as a scientist and in particular I think as people working on AI, we should be spending a bit more time being uncomfortable and talking about these things, because it does impact what we end up doing and it does I think impact the trajectories that we put the technology on.

Lucas: Wonderful. So if people want to read about cooperative inverse reinforcement learning, where can we find the paper or other work that you have on that? What do you think are the best resources? What are just general things you’d like to point people towards in order to follow you or keep up to date with AI Alignment?

Dylan: I tweet occasionally about AI Alignment and a bit about AI ethics questions at dhadfieldmenell: my first initial, then my last name. And if you’re interested in getting a technical introduction to value alignment, I would say take a look at the 2016 paper on cooperative IRL. If you’d like a more general introduction, there’s a blog post from summer 2017 on the BAIR blog.

Lucas: All right, thanks so much Dylan, and maybe we’ll be sitting in a similar room again in two years for Beneficial Artificial Super Intelligence 2021.

Dylan: I look forward to it. Thanks a bunch.

Lucas: Thanks. See you, Dylan. If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

[end of recorded material]

Podcast: Existential Hope in 2019 and Beyond

Humanity is at a turning point. For the first time in history, we have the technology to completely obliterate ourselves. But we’ve also created boundless possibilities for all life that could enable just about any brilliant future we can imagine. Humanity could erase itself with a nuclear war or a poorly designed AI, or we could colonize space and expand life throughout the universe: As a species, our future has never been more open-ended.

The potential for disaster is often more visible than the potential for triumph, so as we prepare for 2019, we want to talk about existential hope, and why we should actually be more excited than ever about the future. In this podcast, Ariel talks to six experts–Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark, and Anders Sandberg–about their views on the present, the future, and the path between them.

Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and creator of the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.

We hope you’ll come away feeling inspired and motivated–not just to prevent catastrophe, but to facilitate greatness.

Topics discussed in this episode include:

  • How technology aids us in realizing personal and societal goals.
  • FLI’s successes in 2018 and our goals for 2019.
  • Worldbuilding and how to conceptualize the future.
  • The possibility of other life in the universe and its implications for the future of humanity.
  • How we can improve as a species and strategies for doing so.
  • The importance of a shared positive vision for the future, what that vision might look like, and how a shared vision can still represent a wide enough set of values and goals to cover the billions of people alive today and in the future.
  • Existential hope and what it looks like now and far into the future.

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi everyone. Welcome back to the FLI podcast. I’m your host, Ariel Conn, and I am truly excited to bring you today’s show. This month, we’re departing from our standard two-guest interview format because we wanted to tackle a big and fantastic topic for the end of the year that would require insight from a few extra people. It may seem as if we at FLI spend a lot of our time worrying about existential risks, but it’s helpful to remember that we don’t do this because we think the world will end tragically: We address issues relating to existential risks because we’re so confident that if we can overcome these threats, we can achieve a future greater than any of us can imagine.

And so, as we end 2018 and look toward 2019, we want to focus on a message of hope, a message of existential hope.

I’m delighted to present Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark and Anders Sandberg, all of whom were kind enough to come on the show and talk about why they’re so hopeful for the future and just how amazing that future could be.

Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and she created the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.

Over the course of a few days, I interviewed all six of our guests, and I have to say, it had an incredibly powerful and positive impact on my psyche. We’ve merged these interviews together for you here, and I hope you’ll all also walk away feeling a bit more hope for humanity’s collective future, whatever that might be.

But before we go too far into the future, let’s start with Anthony and Max, who can talk a bit about where we are today.

Anthony: I’m Anthony Aguirre, I’m one of the founders of the Future of Life Institute. And in my day job, I’m a Physicist at the University of California at Santa Cruz.

Max: I am Max Tegmark, a professor doing physics and AI research here at MIT, and also the president of the Future of Life Institute.

Ariel: All right. Thank you so much for joining us today. I’m going to start with sort of a big question. That is, do you think we can use technology to solve today’s problems?

Anthony: I think we can use technology to solve any problem in the sense that I think technology is an extension of our capability: it’s something that we develop in order to accomplish our goals and to bring our will to fruition. So, sort of by definition, when we have goals that we want to achieve — problems that we want to solve — technology should in principle be part of the solution.

Max: Take, for example, poverty. It’s not like we don’t have the technology right now to eliminate poverty. But we’re steering the technology in such a way that there are people who starve to death, and even in America there are a lot of children who just don’t get enough to eat, through no fault of their own.

Anthony: So I’m broadly optimistic that, as it has over and over again, technology will let us do things that we want to do better than we were previously able to do them. Now, that being said, there are things that are more amenable to better technology, and things that are less amenable. And there are technologies that tend to, rather than functioning as an extension of our will, take on a bit of a life of their own. If you think about technologies like medicine, or good farming techniques, those tend to be overall beneficial and really are accomplishing purposes that we set. You know, we want to be more healthy, we want to be better fed; we build the technology and it happens. On the other hand, there are obviously technologies that are just as useful or even more useful for negative purposes — socially negative, or things that most people agree are negative: landmines, for example, as opposed to vaccines. These technologies come into being because somebody is trying to accomplish their purpose — defending their country against an invading force, say — but once that technology exists, it’s something that is easily used for ill purposes.

Max: Technology simply empowers us to do good things or bad things. Technology isn’t evil, but it’s also not good. It’s morally neutral. Right? You can use fire to warm up your home in the winter or to burn down your neighbor’s house. We have to figure out how to steer it and where we want to go with it. I feel that there’s been so much focus on just making our tech powerful right now — because that makes money, and it’s cool — that we’ve neglected the steering and the destination quite a bit. And in fact, I see that as the core goal of the Future of Life Institute: to help bring back focus on the steering of our technology and on the destination.

Anthony: There are also technologies that are really tricky in that they give us what we think we want, but that we sort of regret later, like addictive drugs, or gambling, or cheap sugary foods, or-

Ariel: Social media.

Anthony: … certain online platforms that will go unnamed. We feel like this is what we want to do at the time; We choose to do it. We choose to eat the huge sugary thing, or to spend some time surfing the web. But later, with a different perspective maybe, we look back and say, “Boy, I could’ve used those calories, or minutes, or whatever, better.” So who’s right? Is it the person at the time who’s choosing to eat or play or whatever? Or is it the person later who’s deciding, “Yeah, that wasn’t a good use of my time”? Those technologies I think are very tricky, because in some sense they’re giving us what we want. So we reward them, we buy them, we spend money, the industries develop, the technologies have money behind them. At the same time, it’s not clear that they make us happier.

So I think there are certain social problems, and problems in general, that technology will be tremendously helpful in improving, as long as we can act wisely to balance the effects of dual-use technologies toward the positive, and as long as we can somehow get some perspective on what to do about these technologies that take on a life of their own and tend to make us less happy, even though we dump lots of time and money into them.

Ariel: This sort of idea of technologies — that we’re using them and as we use them we think they make us happy, and then in the long run we sort of question that — is this a relatively modern problem, or are there examples that go further back in history that we can learn from?

Anthony: I think it goes fairly far back. Certainly drug use goes a fair ways back. I think there have been periods where drugs were used as part of religious or social ceremonies and in other, more socially constructive ways. But opiates and other very addictive things have also existed for a fair amount of time, and those have certainly caused social problems going back at least a few centuries.

I think a lot of these examples of technologies that give us what we seem to want but not really what we want are ones in which we’re applying the technology to a species — us — that developed in a very different set of circumstances, and the contrast between what’s available now and what we evolved wanting is causing a lot of problems. Sugary foods are an obvious example, where we can now supply huge plenitudes of something that was very rare and precious back in evolutionary times — you know, sweet calories.

Drugs are something similar. We have a set of chemistry that helps us out in various situations, and then we’re just feeding those same chemical pathways to make ourselves feel good in a way that is destructive. And violence might be something similar. Violent technologies go way, way back. Those are another example of things that we clearly invented to further our will and accomplish our goals. They’re also things that may at some level be addictive to humans. I think it’s not entirely clear exactly how — there’s a strange mix there, but I think there’s certainly something compelling built into at least many humans’ DNA that promotes fighting and hunting and all kinds of things that were evolutionarily useful way back when and perhaps less useful now. It had a clear evolutionary purpose with tribes that had to defend themselves, with animals that needed to be killed for food. But that desire to run around and hunt and shoot people is something most people aren’t feeding in real life, while tons of people are feeding it in video games. So there’s clearly some built-in mechanism that’s rewarding that behavior as being fun to do and compelling. Video games are obviously a better way to express that than running around and doing it in real life, but it tells you something about some circuitry that is still there and is left over from early times. So I think there are a number of examples like that — this connection between our biological evolutionary history and what technology makes available in large quantities — where we really have to think carefully about how we want to play that.

Ariel: So, as you look forward to the future, and sort of considering some of these issues that you’ve brought up, how do you envision us being able to use technology for good and maybe try to overcome some of these issues? I mean, maybe it is good if we’ve got people playing video games instead of going around shooting people in real life.

Anthony: Yeah. So there may be examples where some of that technology can fulfill a need in a less destructive way than it might otherwise be fulfilled. I think there are also plenty of examples where a technology can root out or change the nature of a problem that would be enormously difficult to do something about without it. So for example, I think eating meat, when you analyze it from almost any perspective, is a pretty destructive thing for humanity to be doing. Ecologically, ethically in terms of the happiness of the animals, health-wise: so many things are destructive about it. And yet, you really have the sense that it’s going to be enormously difficult — it would be very unlikely for that to change wholesale over a relatively short period of time.

However, there are technologies — clean meat, cultured meat, really good-tasting vegetarian meat substitutes — that are rapidly coming to market. And you could imagine that if those things were to get cheap and widely available, and perhaps a little bit healthier, that could dramatically change the situation relatively quickly. I think if a non-ecologically-destructive, non-suffering-inducing, just-as-tasty, and even healthier product were cheaper, I don’t think people would be eating meat. Very few people, I think, intrinsically like the idea of having an animal suffer in order for them to eat. So I think that’s an example of something that would be really, really hard to change through social action alone, but that could be jump-started quite a lot by technology — that’s one of the ones I’m actually quite hopeful about.

Global warming I think is a similar one — it’s on some level a social and economic problem. It’s a long-term planning problem, which we’re very bad at. If we really could think on the right time scales and weigh the economic costs and benefits over decades, it’d be quite clear that mitigating global warming now would take some overall investment that would clearly pay itself off. But we seem unable to accomplish that.

On the other hand, you could easily imagine a really cheap, really power-dense, quickly rechargeable battery being invented and just utterly transforming that problem into a much, much more tractable one. Or feasible, small-scale nuclear fusion power generation that was cheap. You can imagine technologies that would just make that problem so much easier, even though it is ultimately kind of a social or political problem that could be solved. The technology would just make it dramatically easier to do that.

Ariel: Excellent. And so, thinking more hopefully — even when we’re looking at what’s happening in the world today, the news usually focuses on all the bad things that have gone wrong — when you look around the world today, where do you think, “Wow, technology has really helped us achieve this, and this is super exciting”?

Max: Almost everything I love about today is the result of technology. It’s because of technology that we’ve more than doubled the lifespan that we humans had for most of human history. More broadly, I feel that technology is empowering us. Ten thousand years ago, we felt really, really powerless; We were these beings, you know, looking at this great world out there and having very little clue about how it worked — it was largely mysterious to us — and even less ability to actually influence the world in a major way. Then technology enabled science, and vice versa. So the sciences let us understand more and more how the world works, and let us build this technology which lets us shape the world to better suit us: helping produce much better, much more food, helping keep us warm in the winter, helping make hospitals that can take care of us, and schools that can educate us, and so on.

Ariel: Let’s bring on some of our other guests now. We’ll turn first to Gaia Dempsey. How do you envision technology being used for good?

Gaia: That’s a huge question.

Ariel: It is. Yes.

Gaia: I mean, at its essence I think technology really just means a tool. It means a new way of doing something. Tools can be used to do a lot of good — making our lives easier, saving us time, helping us become more of who we want to be. And I think technology is best used when it supports our individual development in the direction that we actually want to go — when it supports our deeper interests and not just the, say, commercial interests of the company that made it. And I think in order for that to happen, we need for our society to be more literate in technology. And to me that’s not just about understanding how computing platforms work, but also understanding the impact that tools have on us as human beings. Because they don’t just shape our behavior, they actually shape our minds and how we think.

So I think we need to be very intentional about the tools that we choose to use in our own lives, and also the tools that we build as technologists. I’ve always been very inspired by Douglas Engelbart’s work — I was revisiting his original conceptual framework on augmenting human intelligence, which he wrote and published in 1962 — and I really think he had the right idea, which is that tools used by human beings don’t exist in a vacuum. They exist in a coherent system, and that system involves language: the language that we use to describe the tools and understand how we’re using them; the methodology; and of course the training and education around how we learn to use those tools. And I think that as a tool maker it’s really important to think about each of those pieces of an overarching coherent system, and imagine how they’re all going to work together and fit into an individual’s life and beyond: you know, at the level of a community and a society.

Ariel: I want to expand on some of this just a little bit. You mentioned this idea of making sure that the technology tool is being used for people and not just for the profit of the company, and that that’s closely connected to making sure that people are literate about the technology. One, I just want to confirm that that is actually what you were saying. And, two — I mean, one of the reasons I want to confirm this is because that is my own concern — that technology is too focused on making profit, and not enough people really understand what’s happening. My question to you, then, is how do we educate people? How do we get them more involved?

Gaia: I think for me, my favorite types of tools are the kinds of tools that support us in developing our thinking and that help us accelerate our ability to learn. But I think that some of how we do this in our society is not just about creating new tools or getting trained on new tools, but really doesn’t have very much to do with technology at all. And that’s in our education system, teaching critical thinking. And teaching, starting at a young age, to not just accept information that is given to you wholesale, but really to examine the motivations and intentions and interests of the creator of that information, and the distributor of that information. And I think these are really just basic tools that we need as citizens in a technological society and in a democracy.

Ariel: That actually moves nicely to another question that I have. Well, I actually think the sentiment might not be quite as strong as it once was, but I do still hear a lot of people who sort of approach technology as the solution to all of today’s problems. And I’m personally a little bit skeptical that technology alone is enough. I think, again, it comes back to what you were talking about: it’s a tool, so we can use it, but it seems like there’s more that needs to be involved. I guess, how do you envision using technology as a tool while still incorporating some of these other aspects, like teaching critical thinking?

Gaia: You’re really hitting on sort of the core questions that are fundamental to creating the kind of society that we want to live in. And I think that we would do well to spend more time thinking deeply about these questions. I think technology can do really incredible, tremendous things in helping us solve problems and create new capabilities. But it also creates a new set of problems for us to engage with.

We’ve sort of coevolved with our technology. So it’s easy to point to things in the culture and say, “Well, this never would have happened without technology X.” And I think that’s true for things that are both good and bad. I think, again, it’s about taking a step back and taking a broader view, and really not just teaching critical thinking and critical analysis, but also systems level thinking. And understanding that we ourselves are complex systems, and we’re not perfect in the way that we perceive reality — we have cognitive biases, we cannot necessarily always trust our own perceptions. And I think that’s a lifelong piece of work that everyone can engage with, which is really about understanding yourself first. This is something that Yuval Noah Harari talked about in a couple of his recent books and articles that he’s been writing, which is: if we don’t do the work to really understand ourselves first and our own motivations and interests, and sort of where we want to go in the world, we’re much more easily co-opted and hackable by systems that are external to us.

There are many examples of recommendation algorithms and sentiment analysis — audience segmentation tools that companies are using to be able to predict what we want and present that information to us before we’ve had a chance to imagine that that is something we could want. And while that’s potentially useful and lucrative for marketers, the question is what happens when those tools are then utilized not just to sell us a better toothbrush on Amazon, but when it’s actually used in a political context. And so with the advent of these vast machine learning, reinforcement learning systems that can look at data and look at our behavior patterns and understand trends in our behavior and our interests, that presents a really huge issue if we are not ourselves able to pause and create a gap, and create a space between the information that’s being presented to us within the systems that we’re utilizing and really our own internal compass.

Ariel: You’ve said two things that I think are sort of interesting, especially when they’re brought together. And the first is this idea that we’ve coevolved with technology — which, I actually hadn’t thought of it in that phrase before, and I think it’s a really, really good description. But then when we consider that we’ve coevolved with technology, what does that mean in terms of knowing ourselves? And especially knowing ourselves as our biological bodies, and our limiting cognitive biases? I don’t know if that’s something that you’ve thought about much, but I think that combination of ideas is an interesting one.

Gaia: I mean, I know that I certainly already feel like I’m a cyborg. Part of knowing myself does involve understanding the tools that I use, which feel like extensions of myself. That kind of comes back to the idea of technology literacy, and systems literacy, and being intentional about the kinds of tools that I want to use. For me, my favorite types of tools are the kind that I think are very rare: the kind that support us in developing the capacity for long-term thinking, and in being true to the long-term intentions and goals that we set for ourselves.

Ariel: Can you give some examples of those?

Gaia: Yeah, I’ll give a couple examples. One example that’s sort of probably familiar to a lot of people listening to this comes from the book Ready Player One. And in this book the main character is interacting with his VR system that he sort of lives and breathes in every single day. And at a certain point the system asks him: do you want to activate your health module? I forgot exactly what it was called. And without giving it too much thought, he kind of goes, “Sure. Yeah, I’d like to be healthier.” And it instantiates a process whereby he’s not allowed to log into the OASIS without going through his exercise routine every morning. To me, what’s happening there is: there is a choice.

And it’s an interesting system design because he didn’t actually do that much deep thinking about, “Oh yeah, this is a choice I really want to commit to.” But the system is sort of saying, “We’re thinking through the way that your decision making process works, and we think that this is something you really do want to consider. And we think that you’re going to need about three months before you make a final decision as to whether this is something you want to continue with.”

So that three-month period, and I believe it was three months in the book, is what’s known as an akrasia horizon, which is a term that I learned through a different tool, a sort of real-life version of that, called Beeminder. The akrasia horizon is really a time period that’s long enough to circumvent a cognitive bias we have to prioritize the near term at the expense of the future. And in the case of the Ready Player One example, the near-term desire that would circumvent the future — his long-term health — is, “I don’t feel like working out today. I just want to get into my email, or I just want to play a video game right now.” And a very similar sort of setup is created in this tool Beeminder, which I love to use to support some goals that I want to make sure I’m really very motivated to meet.

So it’s a tool where you can put in your goals and you can track them, either yourself by entering the data manually, or by connecting to a number of different tracking capabilities like RescueTime and others. And if you don’t stay on track with your goals, they charge your credit card. It’s a very effective sort of motivating force. And so I have a nickname for these: I call these systems time bridges, which are really choices made by your long-term-thinking self that in some way supersede the gravitational pull toward mediocrity inherent in your short-term impulses.

It’s about experimenting too. And this is one particular system that creates consequences and accountability. And I love systems. If I don’t have systems in my life that help me organize the work that I want to do, I’m hopeless. That’s why I’m sort of an avid taster of different systems: I’ll try anything, and really collect and see what works. And I think that’s important. It’s a process of experimentation to see what works for you.
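As a rough sketch of the “time bridge” idea Gaia describes, here is a hypothetical, Beeminder-style commitment device (our illustration, not Beeminder’s actual API; the class, its methods, and the one-week horizon are all assumptions for the example): requests to ease a goal only take effect after the akrasia horizon has passed, so the long-term self’s commitment outlives the short-term impulse.

```python
# A hypothetical sketch of an "akrasia horizon": easing a commitment only takes
# effect after a delay, so today's impulse can't overrule last month's intention.
from datetime import date, timedelta

AKRASIA_HORIZON = timedelta(days=7)  # illustrative delay before goal changes apply

class CommitmentGoal:
    def __init__(self, name: str, units_per_day: float):
        self.name = name
        self.units_per_day = units_per_day
        self.pending = []  # (effective_date, new_rate) pairs waiting out the horizon

    def request_easier_rate(self, new_rate: float, today: date) -> None:
        """The short-term self can ask for less, but only past the horizon."""
        self.pending.append((today + AKRASIA_HORIZON, new_rate))

    def rate(self, today: date) -> float:
        """Apply any pending changes whose horizon has passed, then report the rate."""
        for effective, new_rate in self.pending:
            if today >= effective:
                self.units_per_day = new_rate
        self.pending = [(d, r) for d, r in self.pending if today < d]
        return self.units_per_day

goal = CommitmentGoal("exercise minutes", units_per_day=30)
goal.request_easier_rate(10, today=date(2019, 1, 1))
print(goal.rate(date(2019, 1, 2)))  # 30 -- still bound by the old commitment
print(goal.rate(date(2019, 1, 9)))  # 10 -- the easier rate finally takes effect
```

The delay itself is what does the work: the short-term self can still change the goal, just never in time for today’s temptation.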

Ariel: Let’s turn to Allison Duettmann now, for her take on how we can use technology to help us become better versions of ourselves and to improve our societal interactions.

Allison: I think there are a lot of technological tools that we can use to aid our reasoning and sense-making and coordination. So I think that technologies can be used to help with reasoning, for example, by mitigating trauma or bias, or by augmenting our intelligence. That’s the whole point of creating AI in the first place. Technologies can also be used to help with collective sense-making, for example with truth-finding and knowledge management, and I think hypertext and prediction markets — something that Anthony’s working on — are really worthy examples here. I also think technologies can be used to help with coordination. Mark Miller, who I’m currently writing a book with, likes to say that if you lower the risks of cooperation, you’ll get a more cooperative world. I think that most cooperative interactions may soon be digital.

Ariel: That’s sort of an interesting idea, that there’s risks to cooperation. Can you maybe expand on that a little bit more?

Allison: Yeah, sure. I think that most of our interactions are already digital ones, for some of us at least, and they will be more and more so in the future. So I think that one step to lowering the risks of cooperation is establishing cybersecurity, because this would decrease the risk of digital coercion. But I do think that’s only part of it, because rather than just freeing ourselves from the restraints that keep us from cooperating, we also need to equip ourselves with the tools to cooperate, right?

Ariel: Yes.

Allison: I think some of those may be smart contracts to allow individuals to credibly commit, but there may be others too. I just think that we have to realize that the same technologies that we’re worried about in terms of risks are also the ones that may augment our abilities to decrease those risks.

Ariel: One of the things that came to mind as you were talking about this, using technology to improve cooperation — when we look at the world today, technology isn’t spread across the globe evenly. People don’t have equal access to these tools that could help. Do you have ideas for how we address various inequality issues, I guess?

Allison: I think inequality is a hot topic to address. I’m currently writing a book with Mark Miller and Christine Peterson on a few strategies to strengthen civilization. In this book we outline a few paths to do so, but also potential positive outcomes. One of the outcomes that we’re outlining is a voluntary world in which all entities can cooperate freely with each other to realize their interests. It’s based on the premise that finding one utopia that works for everyone is hard, and perhaps impossible, but that in the absence of knowing what’s in everyone’s interest, we shouldn’t try to impose any interests by one entity — whether that’s an AI or an organization or a state — but should try to create a framework in which different entities with different interests, whether human or artificial, can pursue their interests freely by cooperating. And I think if you look at that strategy, it has worked pretty well so far. If you look at society right now, it’s really not perfect, but by allowing humans to cooperate freely and engage in mutually beneficial relationships, civilization already serves our interests quite well. It’s not perfect by far, I’m not saying that, but I think that as a whole, our civilization at least tends, imperfectly, to plan for Pareto-preferred paths. We have survived so far, and in better and better ways.

So a few of the ways that we propose to strengthen this highly involved process are general recommendations for solving coordination problems, and then a few more specific ideas on reframing certain risks. But I do think that enabling a voluntary world in which different entities can cooperate freely with each other is the best we can do, given our limited knowledge of what is in everyone’s interests.

Ariel: I find that interesting, because I hear lots of people focus on how great intelligence is, and intelligence is great, but it does often seem — and I hear other people say this — that cooperation is also one of the things that our species has gotten right. We fail at it sometimes, but it’s been one of the things, I think, that’s helped.

Allison: Yeah, I agree. I hosted an event last year at the Internet Archive on different definitions of intelligence, because in the paper that we wrote last year, we have this very grand, or broad, conception of intelligence, which includes civilization as an intelligence. So you may be asking yourself the question of what it means to be intelligent, and if what we care about is problem-solving ability, then I think that civilization certainly qualifies as a system that can solve more problems than any individual within it alone. So I do think this is part of the cooperative nature of the individual parts within civilization, and I don’t think that cooperation and intelligence are mutually exclusive at all. Marvin Minsky wrote this amazing book, Society of Mind, and much of it contains similar ideas.

Ariel: I’d like to take this idea and turn it around, and this is a question specifically for Max and Anthony: looking back at this past year, how has FLI helped foster cooperation and public engagement surrounding the issues we’re concerned about? What would you say were FLI’s greatest successes in 2018?

Anthony: Let’s see, 2018. What I’ve personally enjoyed the most, I would say, is seeing the technical research and nonprofit community really starting to get more engaged with state and federal governments. So for example, the Asilomar principles — which were generated at this nexus of business and nonprofit and academic thinkers about AI and related things — I think were great. But that conversation didn’t really include much from people in policy, and governance, and governments, and so on. So starting to see that thinking, and those recommendations, and those aspirations of the community of people who know about AI and are thinking hard about it and what it should do and what it shouldn’t do — seeing that start to come into the political sphere, and the government sphere, and the policy sphere, I think is really encouraging.

That seems to be happening in many places at some level. I think the local one that I’m excited about is the passage by the California legislature of a resolution endorsing the Asilomar principles. It felt really good to see that happen, and it was really encouraging that there were people in the legislature who — we didn’t go and lobby them to do that; they came to us and said, “This is really important. We want to do something.” And we worked with them to do that. That was super encouraging, because it really made it feel like there is a really open door, and there’s a desire in the policy world to do something. This thing is getting on people’s radar: there’s a huge transformation coming from AI.

They see that their responsibility is to do something about that. They don’t intrinsically know what they should be doing, they’re not experts in AI, they haven’t been following the field. So there needs to be that connection and it’s really encouraging to see how open they are and how much can be produced with honestly not a huge level of effort; Just communication and talking through things I think made a significant impact. I was also happy to see how much support there continues to be for controlling the possibility of lethal autonomous weapons.

The thing we’ve done this year, the lethal autonomous weapons pledge, I felt really good about the success of. So this was an idea for anybody who’s interested, but especially companies engaged in developing related technologies, drones, or facial recognition, or robotics, or AI in general — to get them to take that step themselves of saying, “No, we want to develop these technologies for good, and we have no interest in developing things that are going to be weaponized and used in lethal autonomous weapons.”

I think having a large number of people and corporations sign on to a pledge like that is useful not so much because they were planning to do all those things and now they signed a pledge, so they’re not going to do it anymore. I think that’s not really the model so much as it’s creating a social and cultural norm that these are things that people just don’t want to have anything to do with, just like biotech companies don’t really want to be developing biological weapons, they want to be seen as forces for good that are building medicines and therapies and treatments and things. Everybody is happy for biotech companies to be doing those things.

If biotech companies were building biological weapons also, you really start to wonder, “Okay, wait a minute, why are we supporting this? What are they doing with my information? What are they doing with all this genetics that they’re getting? What are they doing with the research that’s funded by the government? Do we really want to be supporting this?” So keeping that distinction in the industry between all the things that we all support — better technologies for helping people — versus the military applications, particularly in this rather destabilizing and destructive way: I think that is more the purpose — to really make clear that there are companies that are going to develop weapons for the military, and that’s part of the reality of the world.

We have militaries; We need, at the moment, militaries. I think I certainly would not advocate that the US should stop defending itself, or shouldn’t develop weapons, and I think it’s good that there are companies that are building those things. But there are very tricky issues when the companies building military weapons are the same companies that are handling all of the data of all of the people in the world or in the country. I think that really requires a lot of thought, how we’re going to handle it. And seeing companies engage with those questions and thinking about how are the technologies that we’re developing, how are they going to be used and for what purposes, and what purposes do we not want them to be used for is really, really heartening. It’s been very positive I think to see at least in certain companies those sort of conversations go on with our pledge or just in other ways.

You know, seeing companies come out with, “This is something that we’re really worried about. We’re developing these technologies, but we see that there could be major problems with them.” That’s very encouraging. I don’t think it’s necessarily a substitute for something happening at the regulatory or policy level, I think that’s probably necessary too, but it’s hugely encouraging to see companies being proactive about thinking about the societal and ethical implications of the technologies they’re developing.

Max: There are four things I’m quite excited about. One of them is that we managed to get so many leading companies and AI researchers and universities to pledge not to build lethal autonomous weapons, also known as killer robots. Second is that we were able to channel two million dollars, thanks to Elon Musk, to 10 research groups around the world to help figure out how to make artificial general intelligence safe and beneficial. Third is that the state of California decided to officially endorse the 23 Asilomar Principles. It’s really cool that these are getting taken more seriously now, even by policy makers. And the fourth is that we were able to track down the children of Stanislav Petrov in Russia, thanks to whom this year is not the 35th anniversary of World War III, and actually give them the appreciation we feel that they deserve.

I’ll tell you a little more about this one, because it’s something I think a lot of people still aren’t that aware of. On September 26th, 35 years ago, Stanislav Petrov was on shift and in charge of his Soviet early warning station, which showed five US nuclear missiles incoming, one after the other. Obviously not what he was hoping would happen at work that day, and a really horribly scary situation, where the natural response is to do what that system was built for: namely, warning the Soviet Union so that they would immediately strike back. And if that had happened, then thousands of mushroom clouds later, you know, you and I, Ariel, would probably not be having this conversation. Instead, he, mostly on gut instinct, came to the conclusion that there was something wrong and said, “This is a false alarm.” And we’re incredibly grateful for that level-headed action of his. He passed away recently.

His two children are living on very modest means outside of Moscow, and we felt that when someone does something like this, or in his case abstains from doing something, that future generations really appreciate, we should show our appreciation, so that others in his situation later on know that if they sacrifice themselves for the greater good, they will be appreciated. Or if they’re dead, their loved ones will be. So we organized a ceremony in New York City, invited them to it, bought air tickets for them, and so on. And in a very darkly humorous illustration of how screwed up relations are at the global level now, the US decided that the way to show appreciation for not having gotten nuked was to deny a visa to Stanislav’s son. So he could only join by Skype. Fortunately, his daughter was able to get a visa, even though the waiting period just to get a visa appointment in Moscow was 300 days. We had to fly her to Israel to get her the visa.

But she came, and it was her first time ever outside of Russia. She was super excited to come and see New York. It was very touching for me to see all the affection that the New Yorkers there beamed at her, and to see her reaction and her husband’s reaction, and to get to give her this $50,000 award, which for them was actually a big deal. Although it’s of course nothing compared to the value for the rest of the world of what their father did. And it was a very sobering reminder that we’ve had dozens of near misses where we almost had a nuclear war by mistake. And even though the newspapers usually make us worry about North Korea and Iran, of course by far the most likely way in which we might get killed by a nuclear explosion is that another stupid malfunction or error causes the US and Russia to start a war by mistake.

I hope that this ceremony, and the one we did the year before for the family of Vasili Arkhipov, can also help to remind people that, hey, what we’re doing here, having 14,000 hydrogen bombs and just relying on luck year after year, isn’t a sustainable long-term strategy, and we should get our act together and reduce nuclear arsenals down to the level needed for deterrence and focus our money on more productive things.

Ariel: So I wanted to just add a quick follow-up to that, because I had the privilege of attending the ceremony and I got to meet the Petrovs. And one of the things that I found most touching about meeting them was their own reaction to New York, which was in part just awe at the freedom that they felt. And I think this is sort of a US-centric version of hope, but it’s easy for us to get distracted by how bad things are because of what we see in the news, and it was a really nice reminder of how good things are too.

Max: Yeah. It’s very helpful to see things through other people’s eyes and in many cases, it’s a reminder of how much we have to lose if we screw up.

Ariel: Yeah.

Max: And how much we have that we should be really grateful for and cherish and preserve. It’s even more striking if you just look at the whole planet, you know, in a broader perspective. It’s a fantastic, fantastic place, this planet. There’s nothing else in the solar system even remotely this nice. So I think we have a lot to win if we can take good care of it and not ruin it. And obviously, the quickest way to ruin it would be to have an accidental nuclear war, which — it would be just by far the most ridiculously pathetic thing humans have ever done, and yet, this isn’t even really a major election issue. Most people don’t think about it. Most people don’t talk about it. This is, of course, the reason that we, with the Future of Life Institute, try to keep focusing on the importance of positive uses of technology, whether it be nuclear technology, AI technology, or biotechnology, because if we use it wisely, we can create such an awesome future, like you said: Take the good things we have, make them even better.

Ariel: So this seems like a good moment to introduce another guest, who just did a whole podcast series exploring existential risks relating to AI, biotech, nanotech, and all of the other technologies that could either destroy society or help us achieve incredible advances if we use them right.

Josh: I’m Josh Clark. I’m a podcaster. And I’m the host of a podcast series called the End of the World with Josh Clark.

Ariel: All right. I am really excited to have you on the show today because I listened to all of the End of the World. And it was great. It was a really, really wonderful introduction to existential risks.

Josh: Thank you.

Ariel: I highly recommend it to anyone who hasn’t listened to it. But now that you’ve just done this whole series about how things can go horribly wrong, I thought it would be fun to bring you on and talk about what you’re still hopeful for after having just done that whole series.

Josh: Yeah, I’d love that, because a lot of people are hesitant to listen to the series because they’re like, “Well, it’s got to be such a downer.” And I mean, it is heavy and it is kind of a downer, but there’s also a lot of hope that just kind of emerged naturally from researching this stuff. There is a lot of hope — it’s pretty cool.

Ariel: That’s good. That’s exactly what I want to hear. What prompted you to do that series, The End of the World?

Josh: Originally, it was just intellectual curiosity. I ran across a Bostrom paper in like 2005 or 6, my first one, and just immediately became enamored with the stuff he was talking about — it’s just baldly interesting. Like anyone who hears about this stuff can’t help but be interested in it. And so originally, the point of the podcast was, “Hey, everybody come check this out. Isn’t this interesting? There’s like, people actually thinking about this kind of stuff and talking about it.” And then as I started to interview some of the guys at the Future of Humanity Institute, started to read more and more papers and research further, I realized, wait, this isn’t just like, intellectually interesting. This is real stuff. We’re actually in real danger here.

And so as I was creating the series, I underwent this transition in how I saw existential risks, and then ultimately in how I saw humanity’s future, how I saw humanity, other people, and I kind of came to love the world a lot more than I did before. Not that I disliked the world or people or anything like that. But I really love people way more than I did before I started out, just because I see that we’re kind of close to the edge here. And so the point of why I made the series kind of underwent this transition too, and you can tell in the series itself, where it’s like: information, information, information. And then, now that you have bought into this, here’s how we do something about it.

Ariel: So you have two episodes that go into biotechnology and artificial intelligence, which are two — especially artificial intelligence — they’re both areas that we work on at FLI. And in them, what I thought was nice is that you do get into some of the reasons why we’re still pursuing these technologies, even though we do see these existential risks around them. And so, I was curious, as you were doing your research into the series, what did you learn about, where you were like, “Wow, that’s amazing, that I’m so psyched that we’re doing this, even though there are these risks.”

Josh: Basically everything I learned about. I had to learn particle physics to explain what’s going on in the Large Hadron Collider. I had to learn a lot about AI. I realized when I came into it that my grasp of AI was beyond elementary. And it’s not like I could actually put together an AGI myself from scratch or anything like that now, but I definitely know a lot more than I did before. With biotech in particular, there was a lot that I learned that I found particularly jarring, like the number of accidents that are reported every year, and then, more than that, the fact that not every lab in the world has to report accidents. I found that extraordinarily unsettling.

So kind of from start to finish, I learned a lot more than I knew going into it, which is actually one of the main reasons why it took me well over a year to make the series: I would start to research something and then I’d realize I needed to understand the fundamentals of it. So I’d go learn that, and then there’d be something else I had to learn first, before I could learn the next level up. So I kept having to regressively research, and I ended up learning quite a bit of stuff.

But I think, to answer your question, the thing that struck me the most was learning about particle physics: how tenuous our understanding of our existence is, but also just how much we’ve learned in just the last century or so, once we really dove into quantum physics and particle physics and just what we know about things. One of the things that just knocked my socks off was the idea that there’s no such thing as particles — like particles, as we think of them, are basically just shorthand. But the rest of the world outside of particle physics has said, “Okay, particles: there are protons and neutrons and all that stuff. There are electrons. And we understand that they all fit into this model, like a solar system. And that’s how atoms work.”

That is not at all how atoms work. A particle is just a packet of energetic vibrations, and everything that we experience and see and feel, and everything that goes on in the universe, is just the interaction of these energetic vibrations in force fields that are everywhere, at every point in space and time. And just understanding that on a really fundamental level changed my life: it changed the way that I see the universe and myself and everything.
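For readers who want the standard formula behind the picture Josh is describing, here is a sketch (ours, not from the episode) of the textbook statement: in quantum field theory, the free field operator is a superposition of vibrational modes, and a “particle” is one quantum of excitation of those modes.

$$\hat\phi(\mathbf{x}) = \int \frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2E_{\mathbf{p}}}}\left(\hat a_{\mathbf{p}}\,e^{i\mathbf{p}\cdot\mathbf{x}} + \hat a_{\mathbf{p}}^{\dagger}\,e^{-i\mathbf{p}\cdot\mathbf{x}}\right), \qquad \lvert \text{one particle} \rangle = \hat a_{\mathbf{p}}^{\dagger}\,\lvert 0 \rangle$$

Each mode behaves like a harmonic oscillator, and what we call a particle is a discrete packet of that oscillation: exactly the “pack of energetic vibrations” Josh describes.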

Ariel: I don’t even know where I want to go next with that. I’m going to come back to that because I actually think it connects really nicely to the idea of existential hope. But first I want to ask you a little bit more about this idea of getting people involved more. I mean, I’m coming at this from something of a bubble at this point where I am surrounded by people who are very familiar with the existential risks of artificial intelligence and biotechnology. But like you said, once you start looking at artificial intelligence, if you haven’t been doing it already, you suddenly realize that there’s a lot there that you don’t know.

Josh: Yeah.

Ariel: I guess I’m curious, now that you’ve done that, to what extent do you think everyone needs to? To what extent do you think that’s possible? Do you have ideas for how we can help people understand this more?

Josh: Yeah, you know, that really ties into taking on existential risks in general: just being an interested, curious person who dives into the subject and learns as much as you can. But at this moment in time, as I’m sure you know, that’s easier said than done. You really have to dedicate a significant portion of your life to spending time focusing on that one issue, whether it’s AI, biotech, particle physics, nanotech, whatever. You really have to immerse yourself in it, because the existential risks that we’re facing are not a general topic of national or global conversation, and certainly not the existential risks we’re facing from all the technology that everybody’s super happy we’re coming out with.

And I think that one of the first steps to actually taking on existential risks is for more and more people to start talking about it. Groups like yours, talking to the public, educating the public. I’m hoping that my series did something like that: arousing curiosity in people, but also raising awareness that these are real things, that these aren’t crackpots talking about this stuff. These are real, legitimate issues that are coming down the pike, that are being pointed out by real, legitimate scientists and philosophers and people who have given great thought to this. This isn’t a Chicken Little situation; This is quite real. I think if you can pique someone’s curiosity just enough that they stop and listen, and do a little research, it sinks in after a minute that this is real. And that, oh, this is something that they want to be a part of doing something about.

And so I think just getting people talking about that will, by proxy, interest other people who hear about it, and it will spread further and further out. And I think that that’s step one: just making it an okay thing to talk about, so you’re not nuts to raise this kind of stuff seriously.

Ariel: Well, I definitely appreciate you doing your series for that reason. I’m hopeful that that will help a lot.

Ariel: Now, Allison — you’ve got this website where, as I understand it, you’re trying to get more people involved in this idea that if we focus on these better ideals for the future, we stand a better shot at actually reaching them.

Allison: At ExistentialHope.com, I keep a map of reading, podcasts, organizations, and people that inspire an optimistic long-term vision for the future.

Ariel: You’re clearly doing a lot to try to get more people involved. What is it that you’re trying to do now, and what do you think we all need to be doing more of to get more people thinking this way?

Allison: I do think that it’s up to everyone, really, to try to, again, engage with the fact that we may not be doomed, and with what may be on the other side. What I’m trying to do with the website, at least, is generate common knowledge to catalyze more directed coordination toward beautiful futures. I think that there are a lot of projects out there that are really dedicated to identifying the threats to human existence, but very few offer guidance on how to influence that. So I think we should try to map the space of both peril and promise which lie before us, and we should really aim for this knowledge to empower each and every one of us to navigate toward the grand future.

For us, currently, on the website this involves orienting ourselves: collecting useful models, relevant podcasts, and organizations that generate new insights, and then trying to synthesize a map of where we came from, in a really kind of long perspective, and where we may go, and then which lenses of science and technology and culture are crucial to consider along the way. Then finally we would like to publish a living document that summarizes those models that are published elsewhere, to outline possible futures, and the idea is that this is a collaborative document. Even now, the website links to a host of different Google docs in which we’re trying to synthesize the current state of the art in the different focus areas. The idea is that this is collaborative. This is why it’s on Google docs, because everyone can just comment. And people do, and I think this should really be a collaborative effort.

Ariel: What are some of your favorite examples of content that, presumably, you’ve added to your website, that look at these issues?

Allison: There’s quite a host of things on there. I think a good start for people going to the website is just to go to the overview, because there I list my top-10 lists of short pieces and long pieces. But as for my personal favorites, as a starting ground: I really like the metaethics sequence by Eliezer Yudkowsky. It contains really good posts, like Existential Angst Factory and Reality as Fixed Computation. For me this is kind of like existentialism 2.0. You have to get your motivations and expectations right: what can I reasonably hope for? Then I think, relatedly, there’s also the Fun Theory sequence, also by Yudkowsky. That, together with, for example, Letter From Utopia by Nick Bostrom, or The Hedonistic Imperative by David Pearce, or the posts on Raikoth by Scott Alexander, is a really nice next step, because they actually lay out a few compelling positive versions of utopia.

Then if you want to get into the more nitty-gritty, there’s a longer section on civilization, its past and its future — so, what’s wrong and how to improve it. Here, Nick Bostrom wrote this piece on the future of human evolution, which lays out two suboptimal paths for humanity’s future, and interestingly enough they don’t involve extinction. A similar one, which probably many people are familiar with, is Scott Alexander’s Meditations On Moloch, and then some that people are less familiar with — Growing Children For Bostrom’s Disneyland. They are really interesting because they are other pieces of this type, sketching out competitive and selective pressures that lead toward races to the bottom, as negative futures which don’t involve extinction per se. I think the really interesting thing, then, is that even those futures are only bad if you think that the bottom is bad.

Next to them I list books: for example, Robin Hanson's The Age of Em, which argues that living at subsistence may not be terrible, and in fact is pretty much what most past lives, outside of the current dreamtime, have always involved. So those are two really different lenses for making sense of the same reality, and I personally found the contrast so intriguing that I hosted a salon last year with Paul Christiano, Robin Hanson, Peter Eckersley, and a few others, to map out what we may be racing toward and how bad those competitive equilibria actually are. I also link to those from the website.

To me it’s always interesting to map out one potentially possible future visions, and then try to find one either that contradicts or compliments it. I think having a good idea of an overview of those gives you a good map, or at least a space of possibilities.

Ariel: What do you recommend to people who are interested in trying to do more? How do you suggest they get involved?

Allison: One obvious thing would be commenting on the Google Docs, and I really encourage everyone to do that. Another would be to join the mailing list. You can indicate whether you just want updates, or whether you want to collaborate, in which case we may reach out to you. Or you can indicate that you're interested in meetups; those are only in San Francisco so far, but I'm hoping there may be others. I do think that currently the project is really in its infancy. We are relying on the community to help with this, so it should be a kind of collaborative vision.

I think one of the main things I'm hoping people can get out of it for now is some inspiration about where we may end up if we get it right, and about why work toward better futures, or even work toward preventing existential risks, is both possible and necessary. That's what the first section of the website, the vision section, is for.

Secondly, if you're already opted in, if you're already committed, I'm hoping the project can provide some orientation. If someone would like to help but doesn't really know where to start, the focus areas are an attempt to map out the different areas in which we need to make progress for better futures. Each area comes with an introductory text and with organizations working in that area that one can join or support, and the Future of Life Institute appears in a lot of those areas.

Then, finally, apart from inspiration or orientation, it's really a place for collaboration. The project is in its infancy, and everyone should contribute their favorite pieces on better futures.

Ariel: I’m really excited to see what develops in the coming year for existentialhope.com. And, naturally, I also want to hear from Max and Anthony about 2019. What are you looking forward to for FLI next year?

Max: For 2019 I'm looking forward to more constructive collaboration on many aspects of this quest for a good future for everyone on Earth. At the nerdy level, I'm looking forward to more collaboration on AI safety research, and also on ways of making the economy, which keeps growing thanks to AI, actually make everybody better off rather than making some people poorer and angrier. And at the most global level, I'm really looking forward to working harder to get past this outdated us-versus-them attitude that we still have between the US and China and Russia and other major powers. Many of our political leaders are so focused on a zero-sum-game mentality that they will happily run major risks of nuclear war and AI arms races and other outcomes where everybody would lose, instead of just realizing: hey, you know, we're actually in this together. What does it mean for America to win? It means that all Americans get better off. What does it mean for China to win? It means that the Chinese people all get better off. Those two things can obviously happen at the same time, as long as there's peace and technology just keeps improving life for everybody.

In practice, I’m very eagerly looking forward to seeing if we can get scientists from around the world — for example, AI researchers — to converge on certain shared goals that are really supported everywhere in the world, including by political leaders and in China and the US and Russia and Europe and so on, instead of just obsessing about the differences. Instead of thinking us versus them, it’s all of us on this planet working together against the common enemy, which is our own stupidity and the tendency to make bad mistakes, so that we can harness this powerful technology to create a future where everybody wins.

Anthony: I would say I'm looking forward to more of what we're doing now: thinking more about the futures that we do want. What exactly do those look like? Can we really think through pictures of the future that make sense to us, that are attractive and plausible yet aspirational, and in which we can identify things and systems and institutions that we can build now, toward the aim of getting us to those futures? So far there has been a lot of thinking about the major problems that might arise, and I think that's really, really important; that project is certainly not over, and it's not like we've avoided all of those pitfalls by any means. But I think it's important not just to avoid falling into the pit, but to actually have a destination that we'd like to get to: you know, the resort at the other end of the jungle, or whatever.

I find it a bit frustrating when people do what I'm doing now: they talk about talking about what we should and shouldn't do, but they don't actually talk about what we should and shouldn't do. I think the time has come to actually talk about it. It's like when there was the first use of CRISPR in an embryo that came to term, and everybody was saying, "Well, we need to talk about what we should and shouldn't do with this. We need to talk about that, we need to talk about it." Let's talk about it already.

So I’m excited about upcoming events that FLI will be involved in that are explicitly thinking about: let’s talk about what that future is that we would like to have and let’s debate it, let’s have that discussion about what we do want and don’t want, try to convince each other and persuade each other of different visions for the future. I do think we’re starting to actually build those visions for what institutions and structures in the future might look like. And if we have that vision, then we can think of what are the things we need to put in place to have that.

Ariel: So one of the reasons I wanted to bring Gaia on is that I'm working on a project with her (and it's her project) that uses a process known as worldbuilding to look at how we can move toward a better future for all. I was hoping you could describe this worldbuilding project that I'm attempting to help you with, or work on with you. What is worldbuilding, and how are you modifying it for your own needs?

Gaia: Yeah. Worldbuilding is a really fascinating set of techniques. It's a process that has its roots in narrative fiction; you can think, for example, of the entire complex world that J.R.R. Tolkien created for The Lord of the Rings series. And in more contemporary times, some spectacularly advanced worldbuilding is occurring in the gaming industry: huge connected systems of systems that underpin worlds in which millions of people today are playing, socializing, buying and selling goods, and engaging in an economy. These are vast online worlds that are not just contained on paper, as in a book, but are actually embodied in software. And over the last decade, worldbuilders have begun to formally bring these tools outside of the entertainment business, outside of narrative fiction and gaming and film and so on, and into society and communities. So I really define worldbuilding as a powerful act of creation.

And one of the reasons it is so powerful is that it really facilitates collaborative creation; it's a collaborative design practice. In my personal definition of worldbuilding, the way that I'm thinking of it and using it, it unfolds in four main stages. The first stage is developing a foundation of shared knowledge that's grounded in science, research, and relevant domain expertise. In the second phase, building on that foundation of knowledge, we engage in an exercise where we predict how the interconnected systems that have emerged in this knowledge database will evolve, and we imagine the state of their evolution at a specific point in the future. The third phase is really about capturing that state in all its complexity and making that information useful to the people who need to interface with it. That can take the form of interlinked databases, and particularly also of visualizations, which help make these abstract ideas feel more present and concrete. And then the fourth and final phase is utilizing the resulting world as a tool that can support scenario simulation, research, and development in many different areas, including public policy, media production, education, and product development.

I mentioned that these techniques are being brought outside of the realm of entertainment. So rather than just designing fantasy worlds for the sole purpose of containing narrative fiction and stories, these techniques are now being used with communities, Fortune 500 companies, foundations, NGOs, and others, to create plausible future worlds. It's fascinating to me to see how these are being used. For example, they're being used to reimagine the mission of an organization. They're being used to plan for the future, and to plan around a collective vision of that future. They're very powerful for developing new strategies, new programs, and new products. And to me, one of the most interesting applications is informing policy work. That's how I see worldbuilding.

Ariel: Are there any actual examples that you can give or are they proprietary?

Gaia: There are many examples that have created some really incredible outcomes. One of the first examples of worldbuilding that I ever learned about was a project done with a native Alaskan tribe. And the comments that came from the tribe about that experience were what really piqued my interest, because they said things like, "This enabled us to leapfrog over the barriers in our current thinking and imagine possibilities beyond what we had considered." This project brought together several dozen members of the community to engage in this collaborative design exercise, and to actually visualize and build out those systems and understand how they would be interconnected. And it ended up resulting in some really incredible things, like a partnership with MIT in which they brought a digital fabrication lab onto their reservation and created new education programs around digital design and digital fabrication for their youth. And there are a lot of other things still coming out of that particular worldbuild.

There are other examples where Fortune 500 companies are building out really detailed, long-term worldbuilds that are helping them stay relevant, and imagine how their business model is going to need to transform in order to adapt to really plausible, probable futures that are just around the corner.

Ariel: I want to switch now to what you specifically are working on. The project we're working on looks roughly 20 years into the future. And you've started walking through a couple of systems yourself while we've been working on it. I thought it might be helpful if you could walk through those steps with us, to help us understand how the process works.

Gaia: Maybe I’ll just take a quick step back, if that’s okay and just explain the worldbuild that we’re preparing for.

Ariel: Yeah. Please do.

Gaia: This is a project called Augmented Intelligence. The first Augmented Intelligence Summit is happening in March 2019. Our goal with this project is really to engage with and shift the culture, and also our mindset, about the future of artificial intelligence; to bring together a multidisciplinary group of leaders from government, academia, and industry; and to do a worldbuild focused on this idea: what does our future world look like with advanced AI deeply integrated into it? We want to go through the process of really imagining and predicting that world in a way that's just a bit further beyond the horizon that we normally see and talk about. That exercise is really where we get the training for long-term thinking and for systems-level thinking. And our hope is that the world that results will allow us to develop better intuitions, to experiment, to simulate scenarios, and to have a more attuned capacity to engage in many ways with this future, and ultimately to explore how we want to evolve our tools and our society to meet that challenge.

This really is a generative process: what will come out of it is a set of interconnected assets and systems that inhabit and embody a world. And this world should allow us to experiment, to simulate scenarios, and to develop a more attuned capacity to engage with the future, both on an intuitive level and in a more formal, structured way. Ultimately our goal is to use this tool to explore how we want to evolve as a society and as a community, and to let ideas emerge about what solutions and tools will be needed to adapt to that future. Our goal is really to bootstrap a steering mechanism that allows us to navigate more effectively toward outcomes that support human flourishing.

Ariel: I think that’s really helpful. I think an example to walk us through what that looks like would be helpful.

Gaia: Sure. You know, basically what would happen in a worldbuilding process is that you would have some constraints or some sort of seed information that you think is very likely — based on research, based on the literature, based on sort of the input that you’re getting from domain experts in that area. For example, you might say, “In the future we think that education is all going to happen in a virtual reality system that’s going to cover the planet.” Which I don’t think is actua