AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell

Stuart Russell is one of AI’s true pioneers and has been at the forefront of the field for decades. His expertise and forward thinking have culminated in his newest work, Human Compatible: Artificial Intelligence and the Problem of Control. The book is a cornerstone piece, alongside Superintelligence and Life 3.0, that articulates the civilization-scale problem we face of aligning machine intelligence with human goals and values. Not only is this a further articulation and development of the AI alignment problem, but Stuart also proposes a novel solution which bring us to a better understanding of what it will take to create beneficial machine intelligence.

 Topics discussed in this episode include:

  • Stuart’s intentions in writing the book
  • The history of intellectual thought leading up to the control problem
  • The problem of control
  • Why tool AI won’t work
  • Messages for different audiences
  • Stuart’s proposed solution to the control problem

Key points from Stuart: 

  •  “I think it was around 2013 that it really struck me that in fact we’d been thinking about AI the wrong way all together. The way we had set up the whole field was basically kind of a copy of human intelligence in that a human is intelligent, if their actions achieve their goals. And so a machine should be intelligent if its actions achieve its goals. And then of course we have to supply the goals in the form of reward functions or cost functions or logical goals statements. And that works up to a point. It works when machines are stupid. And if you provide the wrong objective, then you can reset them and fix the objective and hope that this time what the machine does is actually beneficial to you. But if machines are more intelligent than humans, then giving them the wrong objective would basically be setting up a kind of a chess match between humanity and a machine that has an objective that’s across purposes with our own. And we wouldn’t win that chess match.”
  • “So when a human gives an objective to another human, it’s perfectly clear that that’s not the sole life mission. So you ask someone to fetch the coffee, that doesn’t mean fetch the coffee at all costs. It just means on the whole, I’d rather have coffee than not, but you know, don’t kill anyone to get the coffee. Don’t empty out my bank account to get the coffee. Don’t trudge 300 miles across the desert to get the coffee. In the standard model of AI, the machine doesn’t understand any of that. It just takes the objective and that’s its sole purpose in life. The more general model would be that the machine understands that the human has internally some overall preference structure of which this particular objective fetch the coffee or take me to the airport is just a little local manifestation. And machine’s purpose should be to help the human realize in the best possible way their overall preference structure. If at the moment that happens to include getting a cup of coffee, that’s great or taking him to the airport. But it’s always in the background of this much larger preference structure that the machine knows and it doesn’t fully understand. One way of thinking about is to say that the standard model of AI assumes that the machine has perfect knowledge of the objective and the model I’m proposing assumes that the model has imperfect knowledge of the objective or partial knowledge of the objective. So it’s a strictly more general case.”
  • “The objective is to reorient the field of AI so that in future we build systems using an approach that doesn’t present the same risk as the standard model… That’s the message I think for the AI community is the first phase our existence maybe should come to an end and we need to move on to this other way of doing things. Because it’s the only way that works as machines become more intelligent. We can’t afford to stick with the standard model because as I said, systems with the wrong objective could have arbitrarily bad consequences.”

 

Important timestamps: 

0:00 Intro

2:10 Intentions and background on the book

4:30 Human intellectual tradition leading up to the problem of control

7:41 Summary of the structure of the book

8:28 The issue with the current formulation of building intelligent machine systems

10:57 Beginnings of a solution

12:54 Might tool AI be of any help here?

16:30 Core message of the book

20:36 How the book is useful for different audiences

26:30 Inferring the preferences of irrational agents

36:30 Why does this all matter?

39:50 What is really at stake?

45:10 Risks and challenges on the path to beneficial AI

54:55 We should consider laws and regulations around AI

01:03:54 How is this book differentiated from those like it?

 

Works referenced:

Human Compatible: Artificial Intelligence and the Problem of Control

Superintelligence

Life 3.0

Occam’s razor is insufficient to infer the preferences of irrational agents

Synthesizing a human’s preferences into a utility function with Stuart Armstrong

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, Spotify, SoundCloud, iTunes, Google Play, StitcheriHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas: Hey everyone, welcome back to the AI Alignment Podcast. I’m Lucas Perry and today we’ll be speaking with Stuart Russell about his new book, Human Compatible: Artificial Intelligence and The Problem of Control. Daniel Kahneman says “This is the most important book I have read in quite some time. It lucidly explains how the coming age of artificial super intelligence threatens human control. Crucially, it also introduces a novel solution and a reason for hope.”

Yoshua Bengio says that “This beautifully written book addresses a fundamental challenge for humanity: increasingly intelligent machines that do what we ask, but not what we really intend. Essential reading if you care about our future.”

I found that this book helped clarify both intelligence and AI to me as well as the control problem born of the pursuit of machine intelligence. And as mentioned, Stuart offers a reconceptualization of what it means to build beneficial and intelligent machine systems. That provides a crucial place of pivoting and how we ought to be building intelligent machines systems.

Many of you will already be familiar with Stuart Russell. He is a professor of computer science and holder of the Smith-Zadeh chair in engineering at the University of California, Berkeley. He has served as the vice chair of the World Economic Forum’s Council on AI and Robotics and as an advisor to the United Nations on arms control. He is an Andrew Carnegie Fellow as well as a fellow of the Association for The Advancement of Artificial Intelligence, the Association for Computing Machinery and the American Association for the Advancement of Science.

He is the author with Peter Norvig of the definitive and universally acclaimed textbook on AI, Artificial Intelligence: A Modern Approach. And so without further ado, let’s get into our conversation with Stuart Russell.

Let’s start with a little bit of context around the book. Can you expand a little bit on your intentions and background for writing this book in terms of timing and inspiration?

Stuart: I’ve been doing AI since I was in high school and for most of that time the goal has been let’s try to make AI better because I think we’ll all agree AI is mostly not very good. When we wrote the first edition of the textbook, we decided to have a section called, What If We Do Succeed? Because it seemed to me that even though everyone was working on making AI equivalent to humans or better than humans, no one was thinking about what would happen if that turned out to be successful.

So that section in the first edition in 94 was a little equivocal, let’s say, you know, we could lose control or we could have a golden age and let’s try to be optimistic. And then by the third edition, which was 2010 the idea that we could lose control was fairly widespread, at least outside the AI communities. People worrying about existential risk like Steve Omohundro, Eliezer Yudkowsky and so on.

So we included those a little bit more of that viewpoint. I think it was around 2013 that it really struck me that in fact we’d been thinking about AI the wrong way all together. The way we had set up the whole field was basically kind of a copy of human intelligence in that a human is intelligent, if their actions achieve their goals. And so a machine should be intelligent if its actions achieve its goals. And then of course we have to supply the goals in the form of reward functions or cost functions or logical goals statements. And that works up to a point. It works when machines are stupid. And if you provide the wrong objective, then you can reset them and fix the objective and hope that this time what the machine does is actually beneficial to you. But if machines are more intelligent than humans, then giving them the wrong objective would basically be setting up a kind of a chess match between humanity and a machine that has an objective that’s across purposes with our own. And we wouldn’t win that chess match.

So I started thinking about how to solve that problem. And the book is a result of the first couple of years of thinking about how to do it.

Lucas: So you’ve given us a short and concise history of the field of AI alignment and the problem of getting AI systems to do what you want. One of the things that I found so great about your book was the history of evolution and concepts and ideas as they pertain to information theory, computer science, decision theory and rationality. Chapters one through three you sort of move sequentially through many of the most essential concepts that have brought us to this problem of human control over AI systems.

Stuart: I guess what I’m trying to show is how ingrained it is in intellectual thought going back a couple of thousand years. Even in the concept of evolution, this notion of fitness, you know we think of it as an objective that creatures are trying to satisfy. So in the 20th century you had a whole lot of disciplines, economics developed around the idea of maximizing utility or welfare or profit depending on which branch you look at. Control theory is about minimizing a cost function, so the cost function described some deviation from ideal behavior and then you build systems that minimize the cost. Operations research, which is dynamic programming and Markov decision processes is all about maximizing the sum of rewards. And statistics if you set it up in general, is about minimizing an expected loss function.

So all of these disciplines have the same bug if you like. It’s a natural way to set things up, but in the long run we’ll just see it as a bad cramped way of doing engineering. And what I’m proposing in the book actually is a way of thinking about it that’s much more in a binary rather than thinking about the machine and it’s objective.

You think about this coupled system with humans or you know, it could be any entity that wants a machine to do something good for it or another system to do something good for it. And then the system itself, which is supposed to do something good for the human or whatever else it is that wants something good to happen. So this kind of coupled system, don’t really see that in the intellectual tradition. Maybe one exception that I know of, which is the idea of principle agent games in economics. So a principal might be an employer and the agent might be the employee. And then the game is how does the employer get the employee to do something that the employer actually wants them to do, given that the employee, the agent has their own utility function and would rather be sitting home drinking beers and watching football on the telly.

How do you get them to show up at work and do all kinds of things they wouldn’t normally want to do? The simplest way is you pay them. But you know, there’s all kinds of other ideas about incentive schemes and status and then various kinds of sanctions if people don’t show up and so on. So the economists study that notion, which is a coupled system where one entity wants to benefit from the behavior of another.

So that’s probably the closest example that we have. And then maybe in ecology, look at symbiotic species or something like that. But there’s not very many examples that I’m aware of. In fact, maybe I can’t think of any, where the entity that’s supposedly in control, namely us, is less intelligent than the entity that it’s supposedly controlling, namely the machine.

Lucas: So providing some framing and context here for the listener, the first part of your book, chapters one through three explores the idea of intelligence in humans and in machines. There you give this historical development of ideas and I feel that this history you give of computer science and the AI alignment problem really helps to demystify both the person and evolution as a process and the background behind this problem.

Your second part of your book, chapters four through six discusses some of the problems arising from imbuing machines with intelligence. So this is a lot of the AI alignment problem considerations. And then the third part, chapter seven through ten suggests a new way to think about AI, to ensure that machines remain beneficial to humans forever.

You’ve begun stating this problem and readers can see in chapters one through three that this problem goes back a long time, right? The problem with computer science at its inception was that definition that you gave that a machine is intelligent in so far as it is able to achieve its objectives. In reaction to this, you’ve developed cooperative inverse reinforcement learning and inverse reinforcement learning, which is sort of part of the latter stages of this book where you’re arguing for new definition that is more conducive to alignment.

Stuart: Yeah. In the standard model as I call it in the book, the humans specifies the objective and plugs it into the machine. If for example, you get in your self driving car and it says, “Where do you want to go?” And you say, “Okay, take me to the airport.” For current algorithms as we understand them, understand built on this kind of model, that objective becomes the sole life purpose of the vehicle. It doesn’t necessarily understand that in fact that’s not your sole life purpose. If you suddenly get a call from the hospital saying, oh, you know, your child has just been run over and is in the emergency room. You may well not want to go to the airport. Or if you get into a traffic jam and you’ve already missed the last flight, then again you might not want to go to the airport.

So when a human gives an objective to another human, it’s perfectly clear that that’s not the sole life mission. So you ask someone to fetch the coffee, that doesn’t mean fetch the coffee at all costs. It just means on the whole, I’d rather have coffee than not, but you know, don’t kill anyone to get the coffee. Don’t empty out my bank account to get the coffee. Don’t trudge 300 miles across the desert to get the coffee.

In the standard model of AI, the machine doesn’t understand any of that. It just takes the objective and that’s its sole purpose in life. The more general model would be that the machine understands that the human has internally some overall preference structure of which this particular objective fetch the coffee or take me to the airport is just a little local manifestation. And machine’s purpose should be to help the human realize in the best possible way their overall preference structure.

If at the moment that happens to include getting a cup of coffee, that’s great or taking him to the airport. But it’s always in the background of this much larger preference structure that the machine knows and it doesn’t fully understand. One way of thinking about is to say that the standard model of AI assumes that the machine has perfect knowledge of the objective and the model I’m proposing assumes that the model has imperfect knowledge of the objective or partial knowledge of the objective. So it’s a strictly more general case.

When the machine has partial knowledge of the objective there’s whole lot of new things that come into play that simply don’t arise when the machine thinks it knows the objective. For example, if the machine knows the objective, it would never ask permission to do an action. It would never say, you know, is it okay if I do this because it believes that it’s already extracted all there is to know about human preferences in the form of this objective. And so whatever plan it formulates to achieve the objective must be the right thing to do.

Whereas a machine that knows that it doesn’t know the full objective could say, well, given what I know, this action looks okay, but I want to check with the boss before going ahead because it might be that this plan actually violate some part of the human preference structure that it doesn’t know about. So you get machines that ask permission, you get machines that, for example, allow themselves to be switched off because the machine knows that it might do something that will make the human unhappy. And if the human wants to avoid that and switches the machine off, that’s actually a good thing. Whereas a machine that has a fixed objective would never want to be switched off because that guarantees that it won’t achieve the objective.

So in the new approach you have a strictly more general repertoire of behaviors that the machine can exhibit. The idea of inverse reinforcement learning is this is the way for the machine to actually learn more about what the human preference structure is. By observing human behavior, which could be verbal behavior, like, could you fetch me a cup of coffee? That’s a fairly clear indicator about your preference structure, but it could also be that you know, you ask a human question and the human doesn’t reply. Maybe the human’s mad at you and is unhappy about the line of questioning that you’re pursuing.

 So human behavior means everything humans do and have done in the past. So everything we’ve ever written down, every movie we’ve made, every television broadcast contains information about human behavior and therefore about human preferences. Inverse reinforcement learning really means how do we take all that behavior and learn human preferences from it?

Lucas: What can you say about how tool AI as a possible path to AI alignment fits in this schema where we reject the standard model, as you call it, in favor of this new one?

Stuart: Tool AI is a notion, oddly enough, it doesn’t really occur within the field of AI. It’s a phrase that came from people who are thinking from the outside about possible risks from AI. And what it seems to mean is the idea that rather than buildings general purpose intelligence systems. If you are building AI systems designed for some specific purpose, then that’s sort of innocuous and doesn’t present any risks. And some people argue that in fact if you just have a large collection of these innocuous application specific AI systems, then there’s nothing to worry about.

My experience of tool AI is that when you build applications specific systems, you can kind of do it in two ways. One is you kind of hack it. In other words, you figure out how you would do this task and then you write a whole bunch of very, very special purpose code. So, for example, if you were doing handwriting recognition, you might think, oh, okay, well in order to find an ‘S’ I have to look for a line that’s curvy and I follow the line and it has to have three bends, it has to be arranged this way. And you know, you write a whole bunch of tests to check each characteristic of an ad that it has all these characteristics and it doesn’t have any loops and this, that and the other. And then you see okay, that’s an S.

And that’s actually not the way that people went about the problem of handwriting recognition. The way that they did it was to develop machine learning systems that could take images of characters that were labeled and then train a recognizer that could recognize new instances of characters. And in fact, Yann LeCun at AT&T was doing a system that was designed to recognize words and figures on checks. So very, very, very application specific, very tooley and order to do that he invented convolutional neural networks. Which is what we now call deep learning.

So, out of this very, very narrow piece of tool AI came this very, very general technique. Which has solved or largely solved object recognition, speech recognition, machine translation, and some people argue will produce general purpose AI. So I don’t think there’s any safety to be found in focusing on tool AI.

The second point is that people feel that somehow to tool AI is not an agent. So an agent meaning a system that you can think of as perceiving the world and then taking actions. And again, I’m not sure that’s really true. So a Go program is an agent. It’s an agent that operates in a small world, namely the Go board, but it perceives the board, the move that’s made and it takes action.

It chooses what to do next in many applications like this, this is the really the only way to build an effective tool is that it should be an agent. If it’s a little vacuum cleaning robot or lawn mowing robot, certainly a domestic robot that’s supposed to keep your house clean and look after the dog while you’re out. There’s simply no way to build those kinds of systems except as agents and as we improve the capabilities of these systems, whether it’s for perception or planning and behaving in the real physical world. We’re effectively going to be creating general purpose intelligent agents. I don’t really see salvation in the idea that we’re just going to build applications specific tools.

Lucas: So that helps to clarify that tool AI do not get around this update that you’re trying to do with regards to the standard model. So pivoting back to intentions surrounding the book, if you could distill the core message or the central objective in writing this book, how would you say that?

Stuart: The objective is to reorient the field of AI so that in future we build systems using an approach that doesn’t present the same risk as a standard model. I’m addressing multiple audiences. That’s the message I think for the AI community is the first phase our existence maybe should come to an end and we need to move on to this other way of doing things. Because it’s the only way that works as machines become more intelligent. We can’t afford to stick with the standard model because as I said, systems with the wrong objective could have arbitrarily bad consequences.

Then the other audience is the general public, people who are interested in policy, how things are going to unfold in future and technology and so on. For them, I think it’s important to actually understand more about AI rather than just thinking of AI as this kind of magic juice that triples the value of your startup company. It’s a collection of technologies and those technologies have been built within a framework, the standard model that has been very useful and is shared with these other fields, economic, statistics, operations of search, control theory. But that model does not work as we move forward and we’re already seeing places where the failure of the model is having serious negative consequences.

One example would be what’s happened with social media. So social media algorithms, content selection algorithms are designed to show you stuff or recommend stuff in order to maximize click-through. Clicking is what generates revenue for the social media platforms. And so that’s what they tried to do and I almost said they want to show you stuff that you will click on. And that’s what you might think is the right solution to that problem, right? If you want to maximize, click-through, then show people stuff they want to click on and that sounds relatively harmless.

Although people have argued that this creates a filter bubble or a little echo chamber where you only see stuff that you like and you don’t see anything outside of your comfort zone. That’s true. It might tend to cause your interests to become narrower, but actually that isn’t really what happened and that’s not what the algorithms are doing. The algorithms are not trying to show you the stuff you like. They’re trying to turn you into predictable clickers. They seem to have figured out that they can do that by gradually modifying your preferences and they can do that by feeding you material. That’s basically, if you think of a spectrum of preferences, it’s to one side or the other because they want to drive you to an extreme. At the extremes of the political spectrum or the ecological spectrum or whatever image you want to look at. You’re apparently a more predictable clicker and so they can monetize you more effectively.

So this is just a consequence of reinforcement learning algorithms that optimize click-through. And in retrospect, we now understand that optimizing click-through was a mistake. That was the wrong objective. But you know, it’s kind of too late and in fact it’s still going on and we can’t undo it. We can’t switch off these systems because there’s so tied in to our everyday lives and there’s so much economic incentive to keep them going.

So I want people in general to kind of understand what is the effect of operating these narrow optimizing systems that pursue these fixed and incorrect objectives. The effect of those on our world is already pretty big. Some people argue that operation’s pursuing the maximization of profit have the same property. They’re kind of like AI systems. They’re kind of super intelligent because they think over long time scales, they have massive information, resources and so on. They happen to have human components, but when you put a couple of hundred thousand humans together into one of these corporations, they kind of have this super intelligent understanding, manipulation capabilities and so on.

Lucas: This is a powerful and important update for research communities. I want to focus here in a little bit on the core messages of the book as per each audience because I think you can say and clarify different things for different people. So for example, my impressions are that for sort of laypersons who are not AI researchers, the history of ideas that you give clarifies the foundations of many fields and how it has led up to this AI alignment problem. As you move through and past single agent cases to multiple agent cases where we give rise to game theory and decision theory and how that all affects AI alignment.

So for laypersons, I think this book is critical for showing the problem, demystifying it, making it simple, and giving the foundational and core concepts for which human beings need to exist in this world today. And to operate in a world where AI is ever becoming a more important thing.

And then for the research community, as you just discussed, it seems like this rejection of the standard model and this clear identification of systems with exogenous objectives that are sort of singular and lack context and nuance. That when these things optimize for their objectives, they run over a ton of other things that we care about. And so we have to shift from this understanding where the objective is something inside of the exogenous system to something that the system is uncertain about and which actually exists inside of the person.

And I think the last thing that I sort of saw was for people who are not AI researchers, it says, here’s this AI alignment problem. It is deeply interdependent and difficult. It requires economists and sociologists and moral philosophers. And for this reason too, it is important for you to join in to help. Do you have anything here you’d like to hit on or expand on or anything I might’ve gotten wrong?

Stuart: I think that’s basically right. One thing that I probably should clarify, and it comes maybe from the phrase value alignment. The goal is not to build machines whose values are identical to those of humans. In other words, it’s not to just put in the right objective because I actually believe that that’s just fundamentally impossible to do that. Partly because humans actually don’t know their own preference structure. There’s lots of things that we might have a future positive or a negative reaction to that we don’t yet know, lots of foods that we haven’t yet tried. And in the book I give the example of the durian fruit, which some people really love and some people find utterly disgusting, and I don’t know which I am because I’ve never tried it. So I’m genuinely uncertain about my own preference structure.

It’s really not going to be possible for machines to be built with the right objective built in. They have to know that they don’t know what the objective is. And it’s that uncertainty that creates this deferential behavior. It becomes rational for that machine to ask permission and to allow itself to be switched off, which as I said, are things that a standard model machine would never do.

The reason why psychology, economics, moral philosophy become absolutely central, is that these fields have studied questions of human preferences, human motivation, and also the fundamental question which machines are going to face, of how do you act on behalf of more than one person? The version of the problem where there’s one machine and one human is relatively constrained and relatively straightforward to solve, but when you get one machine and many humans or many machines and many humans, then all kinds of complications come in, which social scientists have studied for centuries. That’s why they do it, because there’s more than one person.

And psychology comes in because the process whereby the machine is going to learn about human preferences requires that there be some connection between those preferences and the behavior that humans exhibit, because the inverse reinforcement learning process involves observing the behavior and figuring out what are the underlying preferences that would explain that behavior, and then how can I help the human with those preferences.

Humans, surprise, surprise, are not perfectly rational. If they were perfectly rational, we wouldn’t need to worry about psychology; we would do all this just with mathematics. But the connection between human preferences and human behavior is extremely complex. It’s mediated by our whole cognitive structure, and is subject to lots of deviations from perfect rationality. One of the deviations is that we are simply unable, despite our best efforts, to calculate what is the right thing to do given our preferences.

Lee Sedol, I’m pretty sure wanted to win the games of Go that he was playing against AlphaGo, but he wasn’t able to, because he couldn’t calculate the winning move. And so if you observe his behavior and you assume that he’s perfectly rational, the only explanation is that he wanted to lose, because that’s what he did. He made losing moves. But actually that would be obviously a mistake.

So we have to interpret his behavior in the light of his cognitive limitations. That becomes then a matter of empirical psychology. What are the cognitive limitations of humans, and how do they manifest themselves in the kind of imperfect decisions that we make? And then there’s other deviations from rationality. We’re myopic, we suffer from weakness of will. We know that we ought to do this, that this is the right thing to do, but we do something else. And we’re emotional. We do things driven by our emotional subsystems, when we lose our temper for example, that we later regret and say, “I wish I hadn’t done that.”

 All of this is really important for us to understand going forward, if we want to build machines that can accurately interpret human behavior as evidence for underlying human preferences.

Lucas: You’ve touched on inverse reinforcement learning in terms of human behavior. Stuart Armstrong was on the other week, and I believe his claim was that you can’t infer anything about behavior without making assumptions about rationality and vice versa. So there’s sort of an incompleteness there. I’m just pushing here and wondering more about the value of human speech, about what our revealed preferences might be, how this fits in with your book and narrative, as well as furthering neuroscience and psychology, and how all of these things can decrease uncertainty over human preferences for the AI.

Stuart: That’s a complicated set of questions. I agree with Stuart Armstrong that humans are not perfectly rational. I’ve in fact written an entire book about that. But I don’t agree that it’s fundamentally impossible to recover information about preferences from human behavior. Let me give the kind of straw man argument. So let’s take Gary Kasparov: chess player, was world champion in the 1990s, some people would argue the strongest chess player in history. You might think it’s obvious that he wanted to win the games that he played. And when he did win, he was smiling, jumping up and down, shaking his fists in triumph. And when he lost, he behaved in a very depressed way, he was angry with himself and so on.

Now it’s entirely possible logically that in fact he wanted to lose every single game that he played, but his decision making was so far from rational that even though he wanted to lose, he kept playing the best possible move. So he’s got this completely reversed set of goals and a completely reversed decision making process. So it looks on the outside as if he’s trying to win and he’s happy when he wins. But in fact, he’s trying to lose and he’s unhappy when he wins, but his attempt to appear unhappy again is reversed. So it looks on the outside like he’s really happy because he keeps doing the wrong things, so to speak.

This is an old idea in philosophy. Donald Davidson calls it radical interpretation: that from the outside, you can sort of flip all the bits and come up with an explanation that’s sort the complete reverse of what any reasonable person would think the explanation to be. The problem with that approach is that it then takes away the meaning of the word “preference” altogether. For example, let’s take the situation where Kasparov can checkmate his opponent in one move, and it’s blatantly obvious and in fact, he’s taken a whole sequence of moves to get to that situation.

If in all such cases where there’s an obvious way to achieve the objective, he simply does something different, in other words, let’s say he resigns, so whenever he’s in a position with an obvious immediate win, he instantly resigns, then in what sense is it meaningful to say that Kasparov actually wants to win the game if he always resigns whenever he has a chance of winning?

You simply vitiate the entire meaning of the word “preference”. It’s just not correct to say that a person who always resigns whenever they have a chance of winning really wants to win games. You can then kind of work back from there. So by observing human behavior in situations where the decision is kind of an obvious one that doesn’t require a huge amount of calculation, then it’s reasonable to assume that the preferences are the ones that they reveal by choosing the obvious action. If you offer someone a lump of coal or a $1,000 bill and they choose a $1,000 bill, it’s unreasonable to say, “Oh, they really prefer the lump of coal, but they’re just really stupid, so they keep choosing the $1,000 dollar bill.” That would just be daft. So in fact it’s quite natural that we’re able to gradually infer the preferences of imperfect entities, but we have to make some assumptions that we might call minimal rationality, which is that in cases where the choice is obvious, people will generally tend to make the obvious choice.

Lucas: I want to be careful here about not misrepresenting any of Stuart Armstrong’s ideas. I think this is in relation to the work Occam’s Razor is Insufficient to Infer the Preferences of Irrational Agents, if you’re familiar with that?

Stuart: Yeah.

Lucas: So then everything you said still suffices. Is that the case?

Stuart: I don’t think we radically disagree. I think maybe it’s a matter of emphasis. How important is it to observe the fact that there is this possibility of radical interpretation? It doesn’t worry me. Maybe it worries him, but it doesn’t worry me because we do a reasonably good job of inferring each other’s preferences all the time by just ascribing at least a minimum amount of rationality in human decision making behavior.

This is why economists, the way they try to elicit preferences, is by offering you direct choices. They say, “Here’s two pizzas. Are you going to have a bubblegum and pineapple pizza, or you can have ham and cheese pizza. Which one would you like?” And if you choose the ham and cheese pizza, they’ll infer that you prefer the ham and cheese pizza, and not the bubblegum and pineapple one, as seems pretty reasonable.

There may be real cases where there is genuine ambiguity about what’s driving human behavior. I am certainly not pretending that human cognition is no mystery; it still is largely a mystery. And I think for the long term, it’s going to be really important to try to unpack some of that mystery. Horribly to me, the biggest deviation from rationality that humans exhibit is the fact that our choices are always made in the context of a whole hierarchy of commitments that effectively put us into what’s usually a much, much smaller decision-making situation than the real problem. So the real problem is I’m alive, I’m in this enormous world, I’m going to live for a few more decades hopefully, and then my descendants will live for years after that and lots of other people on the world will live for a long time. So which actions do I do now?

And I could do anything. I could continue talking to you and recording this podcast. I could take out my phone and start trading stocks. I could go out on the street and start protesting climate change. I could set fire to the building and claim the insurance payment, and so on and so forth. I could do a gazillion things. Anything that’s logically possible I could do. And I continue to talk in the podcast because I’m existing in this whole network and hierarchy of commitments. I agreed that we would do the podcast, and why did I do that? Well, because you asked me, and because I’ve written the book and why did I write the book and so on.

So there’s a whole nested collection of commitments, and we do that because otherwise we couldn’t possibly manage to behave successfully in the real world at all. The real decision problem is not, what do I say next in this podcast? It’s what motor control commands do I send to my 600 odd muscles in order to optimize my payoff for the rest of time until the heat death of the universe? And that’s completely and utterly impossible to figure out.

I always, and we always, exist within what I think Savage called a small world decision problem. We are aware only of a small number of options. So if you want to understand human behavior, you have to understand what are the commitments and what is the hierarchy of activities in which that human is engaged. Because otherwise you might be wondering, well why isn’t Stuart taking out his phone and trading stocks? But that would be a silly thing to wonder. It’s reasonable to ask, well why is he answering the question that way and not the other way?

Lucas: And so “AI, please fetch the coffee,” also exists in such a hierarchy. And without the hierarchy, the request is missing much of the meaning that is required for the AI to successfully do the thing. So it’s like an inevitability that this hierarchy is required to do things that are meaningful for people.

Stuart: Yeah, I think that’s right. Requests are a very interesting special case of behavior, right? They’re just another kind of behavior. But up to now, we’ve interpreted them as defining the objective for the machine, which is clearly not the case. And people have recognized this for a long time. For example, my late colleague Bob Wilensky had a project called the Unix Consultant, which was a natural language system, and it was actually built as an agent, that would help you with Unix stuff, so managing files on your desktop and so on. You could ask it questions like, “Could you make some more space on my disk?”, and the system needs to know that RM*, which means “remove all files”, is probably not the right thing to do, that this request to make space on the disk is actually part of a larger plan that the user might have. And for that plan, most of the other files are required.

So a more appropriate response would be, “I found these backup files that have already been deleted. Should I empty them from the trash?”, or whatever it might be. So in almost no circumstances would a request be taken literally as defining the sole objective. If you asked for a cup of coffee, what happens if there’s no coffee? Perhaps it’s reasonable to bring a cup of tea or “Would you like a can of Coke instead?”, and not to … I think in the book I had the example that you stop at a gas station in the middle of the desert, 250 miles from the nearest town and they haven’t got any coffee. The right thing to do is not to trundle off across the desert and come back 10 days later with coffee from a nearby town. But instead to ask, well, “There isn’t any coffee. Would you like some tea or some Coca-Cola instead?”

 This is very natural for humans and in philosophy of language, my other late colleague Paul Grice, was famous for pointing out that many statements, questions, requests, commands in language have this characteristic that they don’t really mean what they say. I mean, we all understand if someone says, “Can you pass the salt?”, the correct answer is not, “Yes, I am physically able to pass the salt.” He became an adjective, right? So we talk about Gricean analysis, where you don’t take the meaning literally, but you look at the context in which it was said and the motivations of the speaker and so on to infer what is a reasonable course of action when you hear that request.

Lucas: You’ve done a wonderful job so far painting the picture of the AI alignment problem and the solution for which you offer, at least the pivoting which you’d like the community to take. So for laypersons who might not be involved or experts in AI research, plus the AI alignment community, plus potential researchers who might be brought in by this process or book, plus policymakers who may also listen to it, what’s at stake here? Why does this matter?

Stuart: I think AI, for most of its history, has been an interesting curiosity. It’s a fascinating problem, but as a technology it was woefully lacking. And it has found various niches where it’s useful, even before the current incarnation in terms of deep learning. But if we assume that progress will continue and that we will create machines with general purpose intelligence, that would be roughly speaking, the biggest event in human history.

History, our civilization, is just a consequence of the fact that we have intelligence, and if we had a lot more, it would be a radical step change in our civilization. If these were possible at all, it would enable other inventions that people have talked about as possibly the biggest event in human history, for example, creating the ability for people to live forever or much, much longer life span than we currently have, or creating the possibility for people to travel faster than light so that we could colonize the universe.

If those are possible, then they’re going to be much more possible with the help of AI. If there’s a solution to climate change, it’s going to be much more possible to solve climate change with the help of AI. It’s this fact that AI in the form of general purpose intelligence systems is this kind of über technology that makes it such a powerful development if and when it happens. So the upside is enormous. And then the downside is also enormous, because if you build things that are more intelligent than you, then you face this problem. You’ve made something that’s much more powerful than human beings, but somehow you’ve got to make sure that it never actually has any power. And that’s not completely obvious how to do that.

The last part of the book is a proposal for how we could do that, how you could change this notion of what we mean by an intelligent system so that rather than copying this sort of abstract human model, this idea of rationality, of decision making in the interest, in the pursuit of one’s own objectives, we have this other kind of system, this sort of coupled binary system where the machine is necessarily acting in the service of human preferences.

If we can do that, then we can reap the benefits of arbitrarily intelligent AI. Then as I said, the upside would be enormous. If we can’t do that, if we can’t solve this problem, then there are really two possibilities. One is that we need to curtail the development of artificial intelligence and for all the reasons that I just mentioned, it’s going to be very hard because the upside incentive is so enormous. It would be very hard to stop research and development in AI.

The third alternative is that we create general purpose, superhuman intelligent machines and we lose control of them, and they’re pursuing objectives that are ultimately mistaken objectives. There’s tons of science fiction stories that tell you what happens next, and none of them are desirable futures for the human race.

Lucas: Can you expand upon what you mean by if we’re successful in the control/alignment problem, what “tremendous” actually means? What actually are the conclusions or what is borne out of the process of generating an aligned super intelligence from that point on until heat death or whatever else?

Stuart: Assuming that we have a general purpose intelligence that is beneficial to humans, then you can think about it in two ways. I already mentioned the possibility that you’d be able to use that capability to solve problems that we find very difficult, such as eternal life, curing disease, solving the problem of climate change, solving the problem of faster than light travel and so on. You might think of these as sort of the science fiction-y upside benefits. But just in practical terms, when you think about the quality of life for most people on earth, let’s say it leaves something to be desired. And you say, “Okay, would be a reasonable aspiration?”, and put it somewhere like the 90th percentile in the US. That would mean a ten-fold increase in GDP for the world if you brought everyone on earth up to what we call a reasonably nice standard of living by Western standards.

General purpose AI can do that in the following way, without all these science fiction inventions and so on. So just deploying the technologies and materials and processes that we already have in ways that are much, much more efficient and obviously much, much less labor intensive.

The reason that things cost a lot and the reason that people in poor countries can’t afford them … They can’t build bridges or lay railroad tracks or build hospitals because they’re really, really expensive and they haven’t yet developed the productive capacities to produce goods that could pay for all those things. The reason things are really, really expensive is because they have a very long chain of production in which human effort is involved at every stage. The money all goes to pay all those humans, whether it’s the scientists and engineers who designed the MRI machine or the people who worked on the production line or the people who worked mining the metals that go into making the MRI machine.

All the money is really paying for human time. If machines are doing every stage of the production process, then you take all of those costs out, and to some extent it becomes like a digital newspaper, in the sense that you can have as much of it as you want. It’s almost free to make new copies of a digital newspaper, and it would become almost free to produce the material goods and services that constitute a good quality of life for people. And at that point, arguing about who has more of it is like arguing about who has more digital copies of the newspaper. It becomes sort of pointless.

That has two benefits. One is everyone is relatively much better off, assuming that we can get politics and economics out of the way, and also there’s then much less incentive for people to go around starting wars and killing each other, because there isn’t this struggle which has sort of characterized most of human history. The struggle for power, wealth and access to resources and so on. There are other reasons people kill each other, religion being one of them, but it certainly I think would help if this source of competition and warfare were removed.

Lucas: These are very important short-term considerations and benefits from getting this control problem and this alignment problem correct. One thing that the superintelligence will hopefully also do is reduce existential risk to zero, right?  And so if existential risk is reduced to zero, then basically what happens is the entire cosmic endowment, some hundreds of thousands of galaxies, become unlocked to us. Perhaps some fraction of it would have to be explored first in order to ensure existential risk is pretty close to zero. I find your arguments are pragmatic and helpful for the common person about why this is important.

For me personally, and why I’m passionate about AI alignment and existential risk issues, is that the reduction of existential risk to zero and having an aligned intelligence that’s capable of authentically spreading through the cosmic endowment, to me seems to potentially unlock a kind of transcendent object at the end of time, ultimately influenced by what we do here and now, which is directed and created by coming to better know what is good, and spreading that.

What I find so beautiful and important and meaningful about this problem in particular, and why anyone who’s reading your book, why it’s so important for them for core reading, and reading for laypersons, for computer scientists, for just everyone, is that if we get this right, this universe can be maybe one of the universes and perhaps the multiverse, where something like the most beautiful thing physically possible could be made by us within the laws of physics. And that to me is extremely awe-inspiring.

Stuart: I think that human beings being the way they are, will probably find more ways to get it wrong. We’ll need more solutions for those problems and perhaps AI will help us solve other existential risks, and perhaps it won’t. The control problem I think is very important. There are a couple of other issues that I think we still need to be concerned with. Well, I don’t think we need to be concerned with all of them, but a couple of issues that I haven’t begun to address or solve … One of those is obviously the problem of misuse, that we may find ways to build beneficial AI systems that remain under control in a mathematically guaranteed way. And that’s great. But the problem of making sure that only those kinds of systems are ever built and used, that’s a different problem. That’s a problem about human motivation and human behavior, which I don’t really have a good solution to. It’s sort of like the malware problem, except much, much, much, much worse. If we do go ahead developing general purpose intelligence systems that are beneficial and so on, then, parts of that technology, the general purpose intelligent capabilities could be put into systems that are not beneficial as it were, that don’t have a safety catch. And that misuse problem. If you look at how well we’re doing with malware, you’d have to say, more work needs to be done. We’re kind of totally failing to control malware and the ability of people to inflict damage on others by uncontrolled software that’s getting worse. We need an international response and a policing response. Some people argue that, oh, it’s fine. The super intelligent AI that we build will make sure that other nefarious development efforts are nipped in the bud.

This doesn’t make me particularly confident. So I think that’s an issue. The third issue is, shall we say enfeeblement. This notion that if we develop machines that are capable of running every aspect of our civilization, then that changes the dynamic that’s been in place since the beginning of human history or pre history. Which is that for our civilization to continue, we have had to pass on our knowledge and our skills to the next generation. That people have to learn what it is that the human race knows over and over again in every generation, just to keep things going. And if you add it all up, if you look, there’s about a hundred odd billion people who’ve ever lived and they spend each about 10 years learning stuff on average. So that’s a trillion person years of teaching and learning to keep our civilization going. And there’s a very good reason why we’ve done that because without it, things would fall apart very quickly.

But that’s going to change. Now. We don’t have to put it into the heads of the next generation of humans. We can put it into the heads of the machines and they can take care of the civilization. And then you get this almost irreversible process of enfeeblement, where humans no longer know how their own civilization functions. They lose knowledge of science, of engineering, even of the humanities of literature. If machines are writing books and producing movies, then we don’t even need to learn that. You see this in E. M. Forster’s story, The Machine Stops from 1909 which is a very prescient story about a civilization that becomes completely dependent on its own machines. Or if you like something more recent in WALL-E the human race is on a, sort of a cruise ship in space and they all become obese and stupid because the machines look after everything and all they do is consume and enjoy. And that’s not a future that I would want for the human race.

And arguably the machines should say, this is not the future you want, tie your shoelaces, but we are these, shortsighted. We may effectively override what the machines are telling us and say, “No, no, you have to tie my shoe laces for me.” So I think this is a problem that we have to think about. Again, this is a problem for infinity. Once you turn things over to the machines, it’s practically impossible, I think, to reverse that process, we have to keep our own human civilization going in perpetuity and that requires a kind of a cultural process that I don’t yet understand how it would work, exactly.

Because the effort involved in learning, let’s say going to medical school, it’s 15 years of school and then college and then medical school and then residency. It’s a huge effort. It’s a huge investment and at some point the incentive to undergo that process will disappear. And so something else other than… So at the moment it’s partly money, partly prestige, partly a desire to be someone who is in a position to help others. So somehow we got to make our culture capable of maintaining that process indefinitely when many of the incentive structures that have kept it in place go away.

Lucas: This makes me wonder and think about how from an evolutionary cosmological perspective, how this sort of transition from humans being the most intelligent form of life on this planet to machine intelligence being the most intelligent form of life. How that plays out in the very longterm. If we can do thought experiments where we imagine if monkeys had been actually creating humans and then had created humans, what the role of the monkey would still be.

Stuart: Yep. But we should not be creating the machine analog of humans, I.E. autonomous entities pursuing their own objectives. So we’ve pursued our objectives pretty much at the expense of the monkeys and the gorillas and we should not be producing machines that play an analogous role. That would be a really dumb thing to do.

Lucas: That’s an interesting comparison because the objectives of the human are exogenous to the monkey and that’s the key issue that you point out. If the monkey had been clever and had been able to control evolution, then they would have set the human uncertain as to the monkey’s preferences and then had him optimize those.

Stuart: Yeah, I mean they could imagine creating a race of humans that were intelligent but completely subservient to the interests of the monkeys. Assuming that they solved the enfeeblement problem and the misuse problem, then they’d be pretty happy with the way things turned out. I don’t see any real alternative. So Samuel Butler in 1863 wrote a book about a society that faces the problem of superintelligent machines and they take the other solution, which is actually to stop. They see no alternative but to just ban the construction of intelligent machines altogether. In fact, they ban all machines and in Frank Herbert’s Dune, the same thing. They have a catastrophic war in which humanity just survives in its conflict with intelligent machines. And then from then on, all intelligent machines, in fact, all computers are banned altogether. I can’t see that that’s a plausible direction, but it could be that we decide at some point that we cannot solve the control problem or we can’t solve the misuse problem or we can’t solve the enfeeblement problem.

And we decided that it’s in our best interests to just not go down this path at all. To me that just doesn’t feel like a possible direction. Things can change if we start to see bigger catastrophes. I think the click through catastrophe is already pretty big and it results from very, very simple minded algorithms that know nothing about human cognition or politics or anything else. They’re not even explicitly trying to manipulate us. It’s just, that’s what the code does in a very simple minded way. So we could imagine bigger catastrophes happening that we survived by the skin of our teeth as happened in Dune for example. And then that would change the way people think about the problem. And we see this over and over again with nuclear power, with fossil fuels and so on that by large technology is always seen as beneficial and more technology is therefore more beneficial.

And we pushed your head often ignoring the people who say “But, but, but what about this drawback? What about this drawback?” And maybe that starting to change with respect to fossil fuels. Several countries have now decided since Chernobyl and Fukushima to ban nuclear power, the EU has much stronger restriction on genetically modified foods than a lot of other countries, so there are pockets where people have pushed back against technological progress and said, “No, not all technology is good and not all uses of technology are good and so we need to exercise a choice.” But the benefits of AI are potentially so enormous. It’s going to take a lot to undo this forward progress.

Lucas: Yeah, absolutely. Whatever results from earth originating intelligent life at the end of time, that thing is up to us to create. I’m quoting you here, you say, “A compassionate and jubilant use of humanity’s cosmic endowment sounds wonderful, but we also have to reckon with the rapid rate of innovation in the malfeasance sector, ill intentioned people are thinking up new ways to misuse AI so quickly that this chapter is likely to be outdated even before an attains printed form. Think of it not as depressing reading. However, but as a call to act before it’s too late.”

Thinking about this and everything you just touched on. There’s obviously a ton for us to get right here that needs to be gotten right and it’s a question and problem for everyone in the human species to have a voice in.

Stuart: Yeah. I think we really need to start considering the possibility that there ought to be a law against it. For a long time the IT industry almost uniquely has operated in a completely unregulated way. The car industry for example, cars have to follow various kinds of design and safety rules. You have to have headlights and turn signals and brakes and so on. A car that’s designed in an unsafe way gets taken off the market, but software can do pretty much whatever it wants.

Every license agreement that you sign whenever you buy or use software tells you that it doesn’t matter what their software does. The manufacturer is not responsible for anything and so on. And I think it’s a good idea to actually take legislative steps, regulatory steps just to get comfortable with the idea that yes, I see we maybe do need regulation. San Francisco, for example, has banned the use of facial recognition in public or for policing. California has a ban on the impersonation of human beings by AI systems. I think that ban should be pretty much universal. But in California it’s primary area of applicability is in persuading people to vote in any particular direction in an election. So it’s a fairly narrow limitation. But when you think about it, why would you want to allow AI systems to impersonate human beings so that in other words, the human who’s in conversation, believes that if they’re talking to another human being, that they owe that other human being a whole raft of respect, politeness, all kinds of obligations that are involved in interacting with other humans.

But you don’t owe any of those things to an AI system. And so why should we allow people to effectively defraud humans by convincing them that in fact they’re engaged with another human when they aren’t? So I think it would be a good idea to just start things off with some basic common sense rules. I think the GDPR rule that says that you can’t use an algorithm to make a decision that has a significant legal effect on a person. So you can’t put them in jail simply as a result of an algorithm, for example. You can’t fire them from a job simply as a result of an algorithm. You can use the algorithm to advise, but a human has to be involved in the decision and the person has to be able to query the decision and ask for the reasons and in some sense have a right of appeal.

So these are common sense rules that almost everyone would agree with. And yet certainly in the U.S., there’s reluctance to put them into effect. And I think going forward, if we want to have safe AI systems, there’s at least going to be a role for regulations. There should also be standards as in I triple E standards. There should also be professional codes of conduct. People should be trained in how to recognize potentially unsafe designs for AI systems, but there should, I think, be a role for regulation where at some point you would say, if you want to put an AI system on the internet, for example, just as if you want to put software into the app store, it has to pass a whole bunch of checks to make sure that it’s safe to make sure that it won’t wreak havoc. So, we better start thinking about that. I don’t know yet what that regulation should say, but we shouldn’t be in principle opposed to the idea that such regulations might exist at some point.

Lucas: I basically agree that these regulations should be implemented today, but they seem pretty temporary or transient as the uncertainty in the AI system for the humans’ objective function or utility function decreases. So they become more certain about what we want. At some point it becomes unethical to have human beings governing these processes instead of AI systems. Right? So if we have timelines from AI researchers that range from 50 to a hundred years for AGI, we could potentially see laws and regulations like this go up in the next five to 10 and then disappear again somewhere within the next hundred to 150 years max.

Stuart: That’s an interesting viewpoint. And I think we have to be a little careful because autonomy is part of our preference structure. So although one might say, okay, know who gets to run the government? Well self, evidently it’s possible that machines could do a better job than the humans we currently have that would be better only in a narrow sense that maybe it would reduce crime, maybe it would increase economic output, we’d have better health outcomes, people would be more educated than they would with humans making those decisions, but there would be a dramatic loss in autonomy. And autonomy is a significant part of our preference structure. And so it isn’t necessarily the case that the right solution is that machines should be running the government. And this is something that the machines themselves will presumably recognize and this is the reason why parents at some point tell the child, “No, you have to tie your own shoe laces.” Because they want the child to develop autonomy.

The same thing will be true. The machines want humans to retain autonomy. As I said earlier, with respect to enfeeblement, right? It’s this conflict between our longterm best interest and our short term-ism in the choices that we tend to make. It’s always easier to say, “Oh no, I can’t be bothered at the time I shoelaces. Please could you do it?” But if you keep doing that, then the longterm consequences are bad. We have to understand how autonomy, which includes machines not making decisions, folds into our overall preference structure. And up to now there hasn’t been much of a choice, at least in the global sense. Of course it’s been humans making the decisions, although within any local context it’s only a subset of humans who are making the decisions and a lot of other people don’t have as much autonomy. To me, I think autonomy is a really important currency that to the extent possible, everyone should have as much of it as possible.

Lucas: I think you really hit the nail on the head. The problem is where autonomy fits in the hierarchy of our preferences and meta preferences. For me, it seems more instrumental than being an end goal in itself. Now this is an empirical question across all people where autonomy fits in their preference hierarchies and whether it’s like a terminal value or not, and whether under reflection and self idealization, our preferences distill into something else or not. Autonomy could possibly but not necessarily be an end goal. In so far as that it simply provides utility for all of our other goals. Because without autonomy we can’t act on what we think will best optimize our own preferences and end values. So definitely a lot of questions there. The structure of our preference hierarchy will certainly dictate, it seems, the longterm outcome of humanity and how enfeeblement unfolds.

Stuart: The danger would be that we misunderstand the entire nature of the human preference hierarchy. So sociologists and others have talked about the hierarchy of human needs in terms of food, shelter, physical security and so on. But they’ve always kind of assumed that you are a human being and therefore you’re the one deciding stuff. And so they tend not to think so much about fundamental properties of the ability to make your own choices for good or ill. And science fiction writers have had a field day with this. Pointing out that machines that do what you want are potentially disastrous because you lose the freedom of choice.

One could imagine that if we formulate things not quite right and the effect of the algorithms that we build is to make machines that don’t value autonomy in the right way or don’t have it folded into the overall preference structure in the right way, that we could end up with a subtle but gradual and very serious loss of autonomy in a way that we may not even notice as it happens. Like the slow boiling frog. If we could look ahead a hundred years and see how things turn out, he would say, “Oh my goodness, that is a terrible mistake”. We’re going to make sure that that doesn’t happen. So I think we need to be pretty careful. And again this is where we probably need the help of philosophers to make sure that we keep things straight and understand how these things fit together.

Lucas: Right, so seems like we simply don’t understand ourselves. We don’t know the hierarchy of our preferences. We don’t really know what preferences exactly are. Stuart Armstrong talks about how we haven’t figured out the symbol grounding problem. So there are issues with even understanding how preferences relate to one another ultimately and how the meaning there is born. And we’re building AI systems which will be more capable than us. Perhaps they will be conscious. You have a short subchapter I believe on that or at least on how you’re not going to talk about consciousness.

Stuart: Yeah. I have a paragraph saying I have nothing to say.

Lucas: So potentially these things will also be moral patients and we don’t know how to get them to do the things that we’re not entirely sure that we want them to do. So how would you differentiate this book from Superintelligence or Life 3.0 or other books on the AI alignment problem. And superintelligence in this space.

Stuart: I think the two major differences are one, I believe that to understand this whole set of issues or even just to understand what’s happening with AI and what’s going to happen, you have to understand something about AI. And I think that Superintelligence and Life 3.0 are to some extent, easier to grasp. If you already understand quite a bit about AI. And if you don’t, then it’s quite difficult to get as much out of those books as is in there. I think they are full of interesting points and ideas, but those points and ideas are easier to get out if you understand AI. So I wanted people to understand AI, understand, not just it as a technology, right? You could talk about how deep learning works, but that’s not the point. The point is really what is intelligence and how have we taken that qualitative understanding of what that means and turned it into this technical discipline where the standard model is machines that achieve fixed objectives.

And then the second major difference is that I’m proposing a solution for at least one of the big failure modes of AI. And as far as I can tell, that solution, I mean, it’s sort of mentioned in some ways in Superintelligence, I think the phrase there is normative uncertainty, but it has a slightly different connotation. And partly that’s because this approach of inverse reinforcement learning is something that we’ve actually worked on at Berkeley for a little over 20 years. It wasn’t invented for this purpose, but it happens to fit this purpose and then the approach of how we solve this problem is fleshed out in terms of understanding that it’s this coupled system between the human that has the preferences and the machine that’s trying to satisfy those preferences and doesn’t know what they are. So I think that part is different. That’s not really present in those other two books.

It certainly shares, I think the desire to convince people that this is a serious issue. I think both Superintelligence and Life 3.0 do a good job of that Superintelligence is sort of a bit more depressing. It’s such a good job of convincing you that things can go South, so many ways that you almost despair. Life 3.0 is a bit more cheerful. And also I think Life 3.0 does a good job of asking you what you want the outcome to be. And obviously you don’t want it to be catastrophic outcomes where we’re all placed in concrete coffins with heroin drips as Stuart Armstrong likes to put it.

But there are lots of other outcomes which are the ones you want. So I think that’s an interesting part of that book. And of course Max Tegmark, the author of Life 3.0 is a physicist. So he has lots of amazing stuff about the technologies of the future, which I don’t have so much. So those are the main differences. I think that wanting to convey the essence of intelligence, how that notion has developed, how is it really an integral part of our whole intellectual tradition and our technological society and how that model is fundamentally wrong and what’s the new model that we have to replace it with.

Lucas: Yeah, absolutely. I feel that you help to clarify intelligence for me, the history of intelligence from evolution up until modern computer science problems. I think that you really set the AI alignment problem up well resulting from there being intelligences and multi-agent scenarios, trying to do different things, and then you suggest a solution, which we’ve discussed here already. So thanks so much for coming on the podcast, Stuart, your book is set for release on October 8th?

Stuart: That’s correct.

Lucas: Great. We’ll include links for that in the description. Thanks so much for coming on.

 If you enjoyed this podcast, please subscribe. Give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI alignment series.

End of recorded material

AI Alignment Podcast: Synthesizing a human’s preferences into a utility function with Stuart Armstrong

In his Research Agenda v0.9: Synthesizing a human’s preferences into a utility function, Stuart Armstrong develops an approach for generating friendly artificial intelligence. His alignment proposal can broadly be understood as a kind of inverse reinforcement learning where most of the task of inferring human preferences is left to the AI itself. It’s up to us to build the correct assumptions, definitions, preference learning methodology, and synthesis process into the AI system such that it will be able to meaningfully learn human preferences and synthesize them into an adequate utility function. In order to get this all right, his agenda looks at how to understand and identify human partial preferences, how to ultimately synthesize these learned preferences into an “adequate” utility function, the practicalities of developing and estimating the human utility function, and how this agenda can assist in other methods of AI alignment.

Topics discussed in this episode include:

  • The core aspects and ideas of Stuart’s research agenda
  • Human values being changeable, manipulable, contradictory, and underdefined
  • This research agenda in the context of the broader AI alignment landscape
  • What the proposed synthesis process looks like
  • How to identify human partial preferences
  • Why a utility function anyway?
  • Idealization and reflective equilibrium
  • Open questions and potential problem areas

Last chance to take a short (4 minute) survey to share your feedback about the podcast.

 

Key points from Stuart: 

  • “There are two core parts to this research project essentially. The first part is to identify the humans’ internal models, figure out what they are, how we use them and how we can get an AI to realize what’s going on. So those give us the sort of partial preferences, the pieces from which we build our general preferences. The second part is to then knit all these pieces together into an overall preference for any given individual in a way that works reasonably well and respects as much as possible the person’s different preferences, meta-preferences and so on. The second part of the project is the one that people tend to have strong opinions about because they can see how it works and how the building blocks might fit together and how they’d prefer that it would be fit together in different ways and so on but in essence, the first part is the most important because that fundamentally defines the pieces of what human preferences are.”
  • “So, when I said that human values are contradictory, changeable, manipulable and underdefined, I was saying that the first three are relatively easy to deal with but that the last one is not. Most of the time, people have not considered the whole of the situation that they or the world or whatever is confronted with. No situation is exactly analogous to another, so you have to try and fit it in to different categories. So if someone dubious gets elected in a country and starts doing very authoritarian things, does this fit in the tyranny box which should be resisted or does this fit in the normal process of democracy box in which case it should be endured and dealt with through democratic means. What’ll happen is generally that it’ll have features of both, so it might not fit comfortably in either box and then there’s a wide variety for someone to be hypocritical or to choose one side or the other but the reason that there’s such a wide variety of possibilities is because this is a situation that has not been exactly confronted before so people don’t actually have preferences here. They don’t have a partial preference over this situation because it’s not one that they’ve ever considered… I’ve actually argued at some point in the research agenda that this is an argument for insuring that we don’t go too far from the human baseline normal into exotic things where our preferences are not well-defined because in these areas, the chance that there is a large negative seems higher than the chance that there’s a large positive… So, when I say not go too far, I don’t mean not embrace a hugely transformative future. I’m saying not embrace a hugely transformative future where our moral categories start breaking down.”
  • “One of the reasons to look for a utility function is to look for something stable that doesn’t change over time and there is evidence that consistency requirements will push any form of preference function towards a utility function and that if you don’t have a utility function, you just lose value. So, the desire to put this into a utility function is not out of an admiration for utility functions per se but our desire to get something that won’t further change or won’t further drift in a direction that we can’t control and have no idea about. The other reason is that as we start to control our own preferences better and have a better ability to manipulate our own minds, we are going to be pushing ourselves towards utility functions because of the same pressures of basically not losing value pointlessly.”
  • “Reflective equilibrium is basically you refine your own preferences, make them more consistent, apply them to yourself until you’ve reached a moment where your meta-preferences and your preferences are all smoothly aligned with each other. What I’m doing is a much more messy synthesis process and I’m doing it in order to preserve as much as possible of the actual human preferences. It is very easy to reach reflective equilibrium by just, for instance, having completely flat preferences or very simple preferences, these tend to be very reflectively in equilibrium with itself and pushing towards this thing is a push towards, in my view, excessive simplicity and the great risk of losing valuable preferences. The risk of losing valuable preferences seems to me a much higher risk than the gain in terms of simplicity or elegance that you might get. There is no reason that the kludgey human brain and it’s mess of preferences should lead to some simple reflective equilibrium. In fact, you could say that this is an argument against reflexive equilibrium because it means that many different starting points, many different minds with very different preferences will lead to similar outcomes which basically means that you’re throwing away a lot of the details of your input data.”
  • “Imagine that we have reached some positive outcome, we have got alignment and we haven’t reached it through a single trick and we haven’t reached it through the sort of tool AIs or software as a service or those kinds of approaches, we have reached an actual alignment. It, therefore, seems to me all the problems that I’ve listed or almost all of them will have had to have been solved, therefore, in a sense, much of this research agenda needs to be done directly or indirectly in order to achieve any form of sensible alignment. Now, the term directly or indirectly is doing a lot of the work here but I feel that quite a bit of this will have to be done directly.”

 

Important timestamps: 

0:00 Introductions 

3:24 A story of evolution (inspiring just-so story)

6:30 How does your “inspiring just-so story” help to inform this research agenda?

8:53 The two core parts to the research agenda 

10:00 How this research agenda is contextualized in the AI alignment landscape

12:45 The fundamental ideas behind the research project 

15:10 What are partial preferences? 

17:50 Why reflexive self-consistency isn’t enough 

20:05 How are humans contradictory and how does this affect the difficulty of the agenda?

25:30 Why human values being underdefined presents the greatest challenge 

33:55 Expanding on the synthesis process 

35:20 How to extract the partial preferences of the person 

36:50 Why a utility function? 

41:45 Are there alternative goal ordering or action producing methods for agents other than utility functions?

44:40 Extending and normalizing partial preferences and covering the rest of section 2 

50:00 Moving into section 3, synthesizing the utility function in practice 

52:00 Why this research agenda is helpful for other alignment methodologies 

55:50 Limits of the agenda and other problems 

58:40 Synthesizing a species wide utility function 

1:01:20 Concerns over the alignment methodology containing leaky abstractions 

1:06:10 Reflective equilibrium and the agenda not being a philosophical ideal 

1:08:10 Can we check the result of the synthesis process?

01:09:55 How did the Mahatma Armstrong idealization process fail? 

01:14:40 Any clarifications for the AI alignment community? 

 

Works referenced:

Research Agenda v0.9: Synthesising a human’s preferences into a utility function 

Some Comments on Stuart Armstrong’s “Research Agenda v0.9” 

Mahatma Armstrong: CEVed to death 

The Bitter Lesson 

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, Spotify, SoundCloud, iTunes, Google Play, StitcheriHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas: Hey everyone and welcome back to the AI Alignment Podcast at the Future of Life Institute. I’m Lucas Perry and today we’ll be speaking with Stuart Armstrong on his Research Agenda version 0.9: Synthesizing a human’s preferences into a utility function. Here Stuart takes us through the fundamental idea behind this research agenda, what this process of synthesizing human preferences into a utility function might look like, key philosophical and empirical insights needed for progress, how human values are changeable, manipulable, under-defined and contradictory, how these facts affect generating an adequate synthesis of human values, where this all fits in the alignment landscape and how it can inform other approaches to aligned AI systems.

If you find this podcast interesting or useful, consider sharing it with friends, on social media platforms, forums or anywhere you think it might be found valuable. I’d also like to put out a final call for this round of SurveyMonkey polling and feedback, so if you have any comments, suggestions or any other thoughts you’d like to share with me about the podcast, potential guests or anything else, feel free to do so through the SurveyMonkey poll link attached to the description of wherever you might find this podcast. I’d love to hear from you. There also seems to be some lack of knowledge regarding the pages that we create for each podcast episode. You can find a link to that in the description as well and it contains a summary of the episode, topics discussed, key points from the guest, important timestamps if you want to skip around, works referenced, as well as a full transcript of the audio in case you prefer reading.

Stuart Armstrong is a researcher at the Future of Humanity Institute who focuses on the safety and possibilities of artificial intelligence, how to define the potential goals of AI and map humanities partially defined values into it and the longterm potential for intelligent life across the reachable universe. He has been working with people at FHI and other organizations such as DeepMind to formalize AI desiderata in general models so the AI designers can include these safety methods in their designs. His collaboration with DeepMind on “Interruptability” has been mentioned in over 100 media articles. Stuart’s past research interests include comparing existential risks in general, including their probability and their interactions, anthropic probability, how the fact that we exist affects our probability estimates around that key fact, decision theories that are stable under self-reflection and anthropic considerations, negotiation theory and how to deal with uncertainty about your own preferences, computational biochemistry, fast ligand screening, parabolic geometry and his Oxford DPhil was on the holonomy of projective and conformal Cartan geometries and so without further ado or pretenses that I know anything about the holonomy of projective and conformal Cartan geometries, I give you Stuart Armstrong.

We’re here today to discuss your research agenda version 0.9: Synthesizing a human’s preferences into a utility function. One wonderful place for us to start here would be with this sort of story of evolution, which you call an inspiring just so story, and so starting this, I think it would be helpful for us contextualizing sort of the place of the human and what the human is as we sort of find ourselves here at the beginning of this value alignment problem. I’ll go ahead and read there here for listeners to begin developing a historical context and narrative.

So, I’m quoting you here. You say, “This is the story of how evolution created humans with preferences and what the nature of these preferences are. The story is not true in the sense of accurate. Instead, it is intended to provide some inspiration as to the direction of this research agenda. In the beginning, evolution created instinct driven agents. These agents have no preferences or goals nor do they need any. They were like Q-learning agents. They knew the correct action to take in different circumstances but that was it. Consider baby turtles that walk towards the light upon birth because traditionally, the sea was lighter than the land. Of course, this behavior fails them in the era of artificial lighting but evolution has a tiny bandwidth, acting once per generation, so it created agents capable of planning, of figuring out different approaches rather than having to follow instincts. This was useful especially in varying environments and so evolution off-loaded a lot of it’s job onto the planning agents.”

“Of course, to be of any use, the planning agents need to be able to model their environment to some extent or else their plans can’t work and had to have preferences or else every plan was as good as another. So, in creating the first planning agents, evolution created the first agents with preferences. Of course, evolution is messy, undirected process, so the process wasn’t clean. Planning agents are still riven with instincts and the modeling of the environment is situational, used for when it was needed rather than some consistent whole. Thus, the preferences of these agents were underdefined and some times contradictory. Finally, evolution created agents capable of self-modeling and of modeling other agents in their species. This might have been because of competitive social pressures as agents learned to lie and detect lying. Of course, this being evolution, the self and other modeling took the form of kludges built upon spandrels, built upon kludges and then arrived humans, who developed norms and norm violations.”

“As a side effect of this, we started having higher order preferences as to what norms and preferences should be but instincts and contradictions remained. This is evolution after all, and evolution looked upon this hideous mess and saw that it was good. Good for evolution that is but if we want it to be good for us, we’re going to need to straighten out this mess somewhat.” Here we arrive, Stuart, in the human condition after hundreds of millions of years of evolution. So, given the story of human evolution that you’ve written here, why were you so interested in this story and why were you looking into this mess to better understand AI alignment and development this research agenda?

Stuart: This goes back to a paper that I co-wrote for NuerIPS It basically develops the idea of inverse reinforcement learning or more broadly, can you infer what the preferences of an agent are just by observing their behavior. Humans are not entirely rational, so the question I was looking at is can you simultaneously infer the rationality and the preferences of an agent by observing their behavior. It turns out to be mathematically completely impossible. We can’t infer the preferences without making assumptions about the rationality and we can’t infer the rationality without making assumptions about the preferences. This is a rigorous result, so my looking at human evolution is to basically get around this result, in a sense, to make the right assumptions so that we can extract actual human preferences since we can’t just do it by observing behavior. We need to dig a bit deeper.

Lucas: So, what have you gleaned then from looking at this process of human evolution and seeing into how messy the person is?

Stuart: Well, there’s two key insights here. The first is that I located where human preferences reside or where we can assume that human preferences reside and that’s in the internal models of the humans, how we model the world, how we judge, that was a good thing or I want that or ooh, I’d be really embarrassed about that, and so human preferences are defined in this project or at least the building blocks of human preferences are defined to be in these internal models that humans have with the labeling of states of outcomes as good or bad. The other point to bring about evolution is that since it’s not anything like a clean process, it’s not like we have one general model with clearly labeled preferences and then everything else flows from that. It is a mixture of situational models in different circumstances with subtly different things labeled as good or bad. So, as I said to you in preferences are contradictory, changeable, manipulable and underdefined.

So, there are two core parts to this research project essentially. The first part is to identify the humans’ internal models, figure out what they are, how we use them and how we can get an AI to realize what’s going on. So those give us the sort of partial preferences, the pieces from which we build our general preferences. The second part is to then knit all these pieces together into an overall preference for any given individual in a way that works reasonably well and respects as much as possible the person’s different preferences, meta-preferences and so on.

The second part of the project is the one that people tend to have strong opinions about because they can see how it works and how the building blocks might fit together and how they’d prefer that it would be fit together in different ways and so on but in essence, the first part is the most important because that fundamentally defines the pieces of what human preferences are.

Lucas: Before we dive into the specifics of your agenda here, can you contextualize it within evolution of your thought on AI alignment and also how it fits within the broader research landscape?

Stuart: So, this is just my perspective on what the AI alignment landscape looks like. There are a collection of different approaches addressing different aspects of the alignment problem. Some of them, which MIRI is working a lot on, are technical things of how to ensure stability of goals and other similar thoughts along these lines that should be necessary for any approach. Others are developed on how to make the AI safe either indirectly or make itself fully aligned. So, the first category you have things like software as a service. Can we have super intelligent abilities integrated in a system that doesn’t allow for say super intelligent agents with pernicious goals.

Others that I have looked into in the past are things like low impact agents or oracles, which again, the idea is we have a superintelligence, we cannot align it with human preferences, yet we can use it to get some useful work done. Then there are the approaches, which aim to solve the whole problem and get actual alignment, what used to be called the friendly AI approach. So here, it’s not an AI that’s constrained in any ways, it’s an AI that is intrinsically motivated to do the right thing. There are a variety of different approaches to that, some more serious than others. Paul Christiano has an interesting variant on that, though it’s hard to tell, I would say, his in a bit of a mixture of value alignment and constraining what the AI can do in a sense, but it is very similar and so this is of that last type, of getting the aligned, the friendly AI, the aligned utility function.

In that area, there are what I would call the ones that sort of rely on indirect proxies. This is the ideas of you put Nick Bostrom in a room for 500 years or a virtual version of that and hope that you get something aligned at the end of that. There are direct approaches and this is the basic direct approach, doing everything the hard way in a sense but defining everything that needs to be defined so that the AI can then assemble an aligned preference function from all the data.

Lucas: Wonderful. So you gave us a good summary earlier of the different parts of this research agenda. Would you like to expand a little bit on the “fundamental idea” behind this specific research project?

Stuart: There are two fundamental ideas that are not too hard to articulate. The first is that though our revealed preferences could be wrong though our stated preferences could be wrong, what our actual preferences are at least in one moment is what we model inside our head, what we’re thinking of as the better option. We might lie, as I say, in politics or in a court of law or just socially but generally, when we know that we’re lying, it’s because there’s a divergence between what we’re saying and what we’re modeling internally. So, it is this internal model, which I’m identifying as the place where our preferences lie and then all the rest of it, the whole convoluted synthesis project is just basically how do we take these basic pieces and combine them in a way that does not seem to result in anything disastrous and that respects human preferences and meta-preferences and this is a key thing, actually reaches a result. That’s why the research project is designed for having a lot of default actions in a lot of situations.

Like if the person does not have strong meta-preferences, then there’s a whole procedure of how you combine say preferences about the world and preferences about your identity are, by default, combined in a different way if you would want GDP to go up, that’s a preference about the world. If you yourself would want to believe something or believe only the truth, for example, that’s a preference about your identity. It tends to be that identity preferences are more fragile, so the default is that preferences about the world are just added together and this overcomes most of the contradictions because very few human preferences are exactly anti-aligned whereas identity preferences are combined in a more smooth process so that you don’t lose too much on any of them. But as I said, these are the default procedures, and they’re all defined so that we get an answer but there’s also large abilities for the person’s meta-preferences to override the defaults. Again, precautions are taken to ensure that an answer is actually reached.

Lucas: Can you unpack what partial preferences are? What you mean by partial preferences and how they’re contextualized within human mental models?

Stuart: What I mean by partial preference is mainly that a human has a small model of part of the world like let’s say they’re going to a movie and they would prefer to invite someone they like to go with them. Within this mental model, there is the movie, themselves and the presence or absence of the other person. So, this is a very narrow model of reality, virtually the entire rest of the world and, definitely, the entire rest of the universe does not affect this. It could be very different and not change anything of this. So, this is what I call a partial preference. You can’t go from this to a general rule of what the person would want to do in every circumstance but it is a narrow valid preference. Partial preferences refers to two things, first of all, that it doesn’t cover all of our preferences and secondly, the model in which it lives only covers a narrow slice of the world.

You can make some modifications to this. This is the whole point of the second section that if the approach works, variations on the synthesis project should not actually result in results that are disastrous at all. If the synthesis process being changed a little bit would result in a disaster, then something has gone wrong with the whole approach but you could, for example, add restrictions like looking for consistent preferences but I’m starting with basically the fundamental thing is there is this mental model, there is an unambiguous judgment that one thing is better than another and then we can go from there in many ways. A key part of this approach is that there is no single fundamental synthesis process that would work, so it is aiming for an adequate synthesis rather than an idealized one because humans are a mess of contradictory preferences and because even philosophers have contradictory meta-preferences within their own minds and with each other and because people can learn different preferences depending on the order in which information is presented to them, for example.

Any method has to make a lot of choices, and therefore, I’m writing down explicitly as many of the choices that have to be made as I can so that other people can see what I see the processes entailing. I am quite wary of things that look for reflexive self-consistency because in a sense, if you define your ideal system as one that’s reflexively self-consistent, that’s a sort of local condition in a sense that the morality judges itself by its own assessment and that means that you could theoretically wander arbitrarily far in preference space before you hit that. I don’t want something that is just defined by this has reached reflective equilibrium, this morality synthesis is now self-consistent, I want something that is self-consistent and it’s not too far from where it started. So, I prefer to tie things much more closely to actual human preferences and to explicitly aim for a synthesis process that doesn’t wander too far away from them.

Lucas: I see, so the starting point is the evaluative moral that we’re trying to keep it close to?

Stuart: Yes, I don’t think you can say that any human preference synthesized is intrinsically wrong as long as it reflects some of the preferences that were inputs into it. However, I think you can say that it is wrong from the perspective of the human that you started with if it strongly contradicts what they would want. Disagreements from my starting position is something which I take to be very relevant to the ultimate outcome. There’s a bit of a challenge here because we have to avoid say preferences which are based on inaccurate facts. So, some of the preferences are inevitably going to be removed or changed just because they’re based on factually inaccurate beliefs. Some other processes of trying to make consistent what is sort of very vague will also result in some preferences being moved beyond. So, you can’t just say the starting person has veto power over the final outcome but you do want to respect their starting preferences as much as you possibly can.

Lucas: So, reflecting here on the difficulty of this agenda and on how human beings contain contradictory preferences and models, can you expand a bit how we contain these internal contradictions and how this contributes to the difficulty of the agenda?

Stuart: I mean humans contain many contradictions within them. Our mood shifts. We famously are hypocritical in favor of ourselves and against the foibles of others, we basically rewrite narratives to allow ourselves to always be heroes. Anyone who’s sort of had some experience of a human has had knowledge of when they’ve decided one way or decided the other way or felt that something was important and something else wasn’t and often, people just come up with a justification for what they wanted to do anyway, especially if they’re in a social situation, and then some people can cling to this justification and integrate that into their morality while behaving differently in other ways. The easiest example are sort of political hypocrites. The anti-gay preacher who sleeps with other men is a stereotype for a reason but it’s not just a sort of contradiction at that level. It’s that basically most of the categories in which we articulate our preferences are not particularly consistent.

If we throw a potentially powerful AI in this, which could change the world drastically, we may end up with things across our preferences. For example, suppose that someone created or wanted to create a subspecies of human that was bred to be a slave race. Now, this race did not particularly enjoy being a slave race but they wanted to be slaves very strongly. In this situation, a lot of our intuitions are falling apart because we know that slavery is almost always involuntary and is backed up by coercion. We also know that even though our preferences and our enjoyments do sometimes come apart, they don’t normally come apart that much. So, we’re now confronted by a novel situation where a lot of our intuitions are pushing against each other.

You also have things like nationalism for example. Some people have strong nationalist sentiments about their country and sometimes their country changes and in this case, what seemed like a very simple, yes, I will obey the laws of my nation, for example, becomes much more complicated as the whole concept of my nation starts to break down. This is the main way that I see preferences to being underdefined. They’re articulated in terms of concepts which are not universal and which bind together many, many different concepts that may come apart.

Lucas: So, at any given moment, like myself at this moment, the issue is that there’s a large branching factor of how many possible future Lucases there can be. At this time, currently and maybe a short interval around this time as you sort of explore in your paper, the sum total of my partial preferences and the partial world models in which these partial preferences are contained. The expression of these preferences and models can be expressed differently and sort of hacked and changed based off how questions are asked, the order of questions. I am like a 10,000-faced thing which I can show you one of my many faces depending on how you push my buttons and depending on all of the external input that I get in the future, I’m going to express and maybe become more idealized in one of many different paths. The only thing that we have to evaluate which of these many different paths I would prefer is what I would say right now, right?

Say my core value is joy or certain kinds of conscious experiences over others and all I would have for evaluating this many branching thing is say this preference now at this time but that could be changed in the future, who knows? I will create new narratives and stories that justify the new person that I am and that makes sense of the new values and preferences that I have retroactively, like something that I wouldn’t actually have approved of now but my new, maybe more evil version of myself would approve and create a new narrative retroactively. Is this sort of helping to elucidate and paint the picture of why human beings are so messy?

Stuart: Yes, we need to separate that into two. The first is that our values can be manipulated by other humans as they often are and by the AI itself during the process but that can be combated to some extent. I have a paper that may soon come out on how to reduce the influence of an AI over a learning process that it can manipulate. That’s one aspect. The other aspect is when you are confronted by a new situation, you can go in multiple different directions and these things are just not defined. So, when I said that human values are contradictory, changeable, manipulable and underdefined, I was saying that the first three are relatively easy to deal with but that the last one is not.

Most of the time, people have not considered the whole of the situation that they or the world or whatever is confronted with. No situation is exactly analogous to another, so you have to try and fit it in to different categories. So if someone dubious gets elected in a country and starts doing very authoritarian things, does this fit in the tyranny box which should be resisted or does this fit in the normal process of democracy box in which case it should be endured and dealt with through democratic means. What’ll happen is generally that it’ll have features of both, so it might not fit comfortably in either box and then there’s a wide variety for someone to be hypocritical or to choose one side or the other but the reason that there’s such a wide variety of possibilities is because this is a situation that has not been exactly confronted before so people don’t actually have preferences here. They don’t have a partial preference over this situation because it’s not one that they’ve ever considered.

How they develop one is due to a lot as you say, the order in which information is presented, which category it seems to most strongly fit into and so on. We are going here for very mild underdefinedness. The willing slave race was my attempt to push it out a bit further into something somewhat odd and then if you consider a powerful AI that is able to create vast numbers of intelligent entities, for example, and reshape society, human bodies and human minds in hugely transformative ways, we are going to enter sort of very odd situations where all our starting instincts are almost useless. I’ve actually argued at some point in the research agenda that this is an argument for insuring that we don’t go too far from the human baseline normal into exotic things where our preferences are not well-defined because in these areas, the chance that there is a large negative seems higher than the chance that there’s a large positive.

Now, I’m talking about things that are very distant in terms of our categories, like the world of Star Trek is exactly the human world from this perspective because even though they have science fiction technology, all of the concepts and decisions they are articulated around concepts that we’re very familiar with because it is a work of fiction addressed to us now. So, when I say not go too far, I don’t mean not embrace a hugely transformative future. I’m saying not embrace a hugely transformative future where our moral categories start breaking down.

Lucas: In my mind, there’s two senses. There’s the sense in which we have these models for things and we have all of these necessary and sufficient conditions for which something can be pattern matched to some sort of concept or thing and we can encounter situations where there’re conditions for many different things being included in the context in a new way which makes it so that the thing like goodness or justice is underdefined in the slavery case because we don’t really know initially whether this thing is good or bad. I see this underdefined in this sense. The other sense is maybe the sense in which my brain is a neural architectural aggregate of a lot of neurons and the sum total of its firing statistics and specific neural pathways can be potentially identified as containing preferences and models somewhere within there. So is it also true to say that it’s underdefined in the sense that the human as not a thing in the world but as a process in the world largely constituted of the human brain, even within that process, it’s underdefined where in the neural firing statistics or the processing of the person there could ever be something called a concrete preference or value?

Stuart: I would disagree that it is underdefined in the second sense.

Lucas: Okay.

Stuart: In order to solve the second problem, you need to solve the symbol grounding problem for humans. You need to show that the symbols or the neural pattern firing or the neuron connection or something inside the brain corresponds to some concepts in the outside world. This is one of my sort of side research projects. When I say side research project, I mean I wrote a couple of blog posts on this pointing out how I might approach it and I point out that you can do this in a very empirical way. If you think that a certain pattern of neural firing refers to say a rabbit, you can see whether this thing firing in the brain is that predictive of say a rabbit in the outside world or predictive of this person is going to start talking about rabbits soon.

In model theory, the actual thing that gives meaning to the symbols is sort of beyond the scope of the math theory but if you have a potential connection between the symbols and the outside world, you can check whether this theory is a good one or a terrible one. If you say this corresponds to hunger and yet that thing only seems to trigger when someone’s having sex, for example, we can say, okay, your model that this corresponds to hunger is terrible. It’s wrong. I cannot use it for predicting that the person will eat in the world but I can use it for predicting that they’re having sex. So, if I model this as connected with sex, this is a much better grounding of that symbol. So using methods like this and there’re some subtleties I also address Quine’s “gavagai” and connect it to sort of webs of connotation and concepts that go together but the basic idea is to empirically solve the symbol grounding problem for humans.

When I say that things are underdefined, I mean that they are articulated in terms of concepts that are underdefined across all possibilities in the world, not that these concepts could be anything or we don’t know what they mean. Our mental models correspond to something. It’s a collection of past experience and the concepts in our brain are tying together a variety of experiences that we’ve had. They might not be crisp. They might not be well-defined even if you look at say the totality of the universe but they correspond to something, to some repeated experience, some concepts to some thought process that we’ve had and that we’ve extracted this idea from. When we do this in practice, we are going to inject some of our own judgements into it and since humans are so very similar in how we interpret each other and how we decompose many concepts, it’s not necessarily particularly bad that we do so, but I strongly disagree that these are arbitrary concepts that are going to be put in by hand. They are going to be in the main identified via once you have some criteria for tracking what happens in the brain, comparing it with the outside world and those kinds of things.

My concept, maybe a cinema is not an objectively well-defined fact but what I think of as a cinema and what I expect in a cinema and what I don’t expect in a cinema, like I expect it to go dark and a projector and things like that. I don’t expect that this would be in a completely open space in the Sahara Desert under the sun with no seats and no sounds and no projection. I’m pretty clear that one of these things is a lot more of a cinema than the other.

Lucas: Do you want to expand here a little bit about this synthesis process?

Stuart: The main idea is to try and ensure that no disasters come about and the main thing that could lead to a disaster is the over prioritization of certain preferences over others. There are other avenues to disaster but this seems to be the most obvious. The other important part of the synthesis process is that it has to reach an outcome, which means that a vague description is not sufficient, so that’s why it’s phrased in terms of this is the default way that you synthesize preferences. This way may be modified by certain meta-preferences. The meta-preferences have to be reducible to some different way of synthesizing the preferences.

For example, the synthesis is not particularly over-weighting long-term preferences versus short term preferences. It would prioritize long-term preferences but not exclude short term ones. So, I want to be thin is not necessarily prioritizing over that’s a delicious piece of cake that I’d like to eat right now, for example, but human meta-preferences often prioritize long-term preferences over short term ones, so this is going to be included and this is going to change the default balance towards long-term preferences.

Lucas: So, powering the synthesis process, how are we to extract the partial preferences and their weights from the person?

Stuart: That’s, as I say, the first part of the project and that is a lot more empirical. This is going to be a lot more looking at what neuroscience says, maybe even what algorithm theory says or what modeling of algorithms say and about what’s physically going on in the brain and how this corresponds to internal mental models. There might be things like people noting down what they’re thinking, correlating this with changes in the brain and this is a much more empirical aspect to the process that could be carried out essentially independently from the synthesis product.

Lucas: So, a much more advanced neuroscience would be beneficial here?

Stuart: Yes, but even without that, it might be possible to infer some of these things indirectly via the AI and if the AI accounts well for uncertainties, this will not result in disasters. If it knows that we would really dislike losing something of importance to our values, even if it’s not entirely sure what the thing of importance is, it will naturally, with that kind of motivation, act in a cautious way, trying to preserve anything that could be valuable until such time as it figures out better what we want in this model.

Lucas: So, in section two of your paper, synthesizing the preference utility function, within this section, you note that this is not the only way of constructing the human utility function. So, can you guide us through this more theoretical section, first discussing what sort of utility function and why a utility function in the first place?

Stuart: One of the reasons to look for a utility function is to look for something stable that doesn’t change over time and there is evidence that consistency requirements will push any form of preference function towards a utility function and that if you don’t have a utility function, you just lose value. So, the desire to put this into a utility function is not out of an admiration for utility functions per se but our desire to get something that won’t further change or won’t further drift in a direction that we can’t control and have no idea about. The other reason is that as we start to control our own preferences better and have a better ability to manipulate our own minds, we are going to be pushing ourselves towards utility functions because of the same pressures of basically not losing value pointlessly.

You can kind of see it in some investment bankers who have to a large extent, constructed their own preferences to be expected money maximizers within a range and it was quite surprising to see but human beings are capable of pushing themselves towards that and this is what repeated exposure to different investment decision tends to do to you and it’s the correct thing to do in terms of maximizing the money and this is the kind of thing that general pressure on humans combined with human’s ability to self-modify, which we may develop in the future, so all this is going to be pushing us towards a utility function anyway, so we may as well go all the way and get the utility function directly rather than being pushed into it.

Lucas: So, is the view here that the reason why we’re choosing utility functions even when human beings are very far from being utility functions is that when optimizing our choices in mundane scenarios, it’s pushing us in that direction anyway?

Stuart: In part. I mean utility functions can be arbitrarily complicated and can be consistent with arbitrarily complex behavior. A lot of when people think of utility functions, they tend to think of simple utility functions and simple utility functions are obviously simplifications that don’t capture everything that we value but complex utility functions can capture as much of the value as we want. What tends to happen is that when people have say, inconsistent preferences, that they are pushed to make them consistent by the circumstances of how things are presented, like you might start with the chocolate mousse but then if offered a trade for the cherry pie, go for the cherry pie and then if offered a trade for the maple pie, go for the maple pie but then you won’t go back to the chocolate or even if you do, you won’t continue going around the cycle because you’ve seen that there is a cycle and this is ridiculous and then you stop it at that point.

So, what we decide when we don’t have utility functions tends to be determined by the order in which things are encountered and under contingent things and as I say, non-utility functions tend to be intrinsically less stable and so can drift. So, for all these reasons, it’s better to nail down a utility function from the start so that you don’t have the further drift and your preferences are not determined by the order in which you encounter things, for example.

Lucas: This is though in part thus a kind of normative preference then, right? To use utility functions in order not to be pushed around like that. Maybe one can have the meta-preferences for their preferences to be expressed in the order in which they encounter things.

Stuart: You could have that strong meta-preference, yes, though even that can be captured by a utility function if you feel like doing it. Utility functions can capture pretty much any form of preferences, even the ones that seem absurdly inconsistent. So, we’re not actually losing anything in theory by insisting that it should be a utility function. We may be losing things in practice in the construction of that utility function. I’m just saying if you don’t have something that is isomorphic with a utility function or very close to that, your preferences are going to drift randomly affected by many contingent factors. You might want that, in which case, you should put it in explicitly rather than implicitly and if you put it in explicitly, it can be captured by a utility function that is conditional on the things that you see, in the order in which you see them, for example.

Lucas: So, comprehensive AI services and other tool-like AI approaches to AI alignment I suppose avoid some of the anxieties produced by a strong agential AIs with utility functions. Are there alternative goal ordering or action producing methods in agents other than utility functions that may have the properties that we desire of utility functions or is the category of utility functions just so large that it encapsulates much of what is just mathematically rigorous and simple?

Stuart: I’m not entirely sure. Alternative goal structures tend to be quite ad hoc and limited in my practical experience whereas utility functions or reward functions which may or may not be isomorphic do seem to be universal. There are possible inconsistencies within utility functions themselves if you get a self-referential utility function including your own preferences, for example, but MIRI’s work should hope to clarify those aspects. I came up with an alternative goal structure which is basically an equivalence class of utility functions that are not equivalent in terms of utility and this could successfully model an agent’s who’s preferences were determined by the order in which things were chosen but I put this together as a toy model or as a thought experiment. I would never seriously suggest building that. So, it just seems that for the moment, most non-utility function things are either ad hoc or under-defined or incomplete and that most things can be captured by utility functions, so the things that are not utility functions all seem at the moment to be flawed and the utility functions seem to be sufficiently versatile to capture anything that you would want.

This may mean by the way that we may lose some of the elegant properties of utility function that we normally assume like deontology can be captured by a utility function that assigns one to obeying all the rules and zero to violating any of them and this is a perfectly valid utility function, however, there’s not much in terms of expected utility in terms of this. It behaves almost exactly like a behavioral constraint, never choose any option that is against the rules. That kind of thing, even though it’s technically a utility function, might not behave the way that we’re used to utility functions behaving in practice. So, when I say that it should be captured as a utility function, I mean formally it has to be defined in this way but informally, it may not have the properties that we informally expect of utility functions.

Lucas: Wonderful. This is a really great picture that you’re painting. Can you discuss extending and normalizing the partial preferences? Take us through the rest of section two on synthesizing to a utility function.

Stuart: The extending is just basically you have, for instance, a preference of going to the cinema this day with that friend versus going to the cinema without that friend. That’s an incredibly narrow preference, but you also have preferences about watching films in general, being with friends in general, so these things should be combined in as much as they can be into some judgment of what you like to watch, who you like to watch with and under what circumstances. That’s the generalizing. The extending is basically trying to push these beyond the typical situations. So, if there was a sort of virtual reality, which really gave you the feeling that other people were present with you, which current virtual reality doesn’t tend to, then would this count as being with your friend. What level of interaction would be required for it to count as being with your friend? Well, that’s some of the sort of extending.

The normalizing is just basically the fact that utility functions are defined up to scaling, up to multiplying by some positive real constant. So, if you want to add utilities together or combine them in a smooth-min or combine them in any way, you have to scale the different preferences and there are various ways of doing this. I fail to find an intrinsically good way of doing it that has all the nice formal properties that you would want but there are a variety of ways that can be done, all of which seem acceptable. The one I’m currently using is the mean max normalization, which is that the best possible outcome gets a utility of one, and the average outcome gets a utility of zero. This is the scaling.

Then the weight of these preferences is just how strongly you feel about it. Do you have a mild preference for going to the cinema with this friend? Do you have an overwhelming desire for chocolate? Once they’ve normalized, you weigh them, and you combine them.

Lucas: Can you take us through the rest of section two here, if there’s anything else here that you think is worth mentioning?

Stuart: I’d like to point out that this is intended to work with any particular human being that you point the process at, so there are a lot of assumptions that I made from my non-moral realist, worried about over simplification and other things. The idea is that if people have strong meta-preferences themselves, these will overwhelm the default decisions that I’ve made but if people don’t have strong meta-preferences, then they are synthesized in this way in the way which I feel is the best to not lose any important human value. There are also judgements about what would constitute a disaster or how we might judge this to have gone disastrously wrong, those are important and need to be sort of fleshed out a bit more because many of them can’t be quite captured within this system.

The other thing is that the outcomes may be very different. To choose a silly example, if you are 50% total utilitarian versus 50% average utilitarian or if you’re 45%, 55% either way, the outcomes are going to be very different because the pressure on the future is going to be different and because the AI is going to have a lot of power, it’s going to result in very different outcomes but from our perspective where if we put 50/50 total utilitarianism and average utilitarianism, we’re not exactly 50/50 most of the time. We’re kind of … Yeah, they’re about the same. So, 45, 55, should not result in a disaster if 50/50 doesn’t.

So, even though from the perspective of these three mixes, 45/55, 50/50, 55/45, these three mixes will look at something that optimizes one of the other two mixes and say that is very bad from my perspective, however, more human perspective, we’re saying all of them are pretty much okay. Well, we would say none of them are pretty much okay because they don’t incorporate many other of our preferences but the idea is that when we get all the preferences together, it shouldn’t matter a bit if it’s a bit fuzzy. So even though the outcome will change a lot if we shift it a little bit, the quality of the outcome shouldn’t change a lot and this is connected with a point that I’ll put up in section three that uncertainties may change the outcome a lot but again, uncertainties should not change the quality of the outcome and the quality of the outcome is measured in a somewhat informal way by our current preferences.

Lucas: So, moving along here into section three, what can you tell us about the synthesis of the human utility function in practice?

Stuart: So, first of all, there’s … Well, let’s do this project, let’s get it done but we don’t have perfect models of the human brain, we haven’t grounded all the symbols, what are we going to do with the great uncertainties. So, that’s arguing that even with the uncertainties, this method is considerably better than nothing and you should expect it to be pretty safe and somewhat adequate even with great uncertainties. The other part is I’m showing how thinking in terms of the human mental models can help to correct and improve some other methods like revealed preferences, our stated preferences, or the locking the philosopher in a box for a thousand years. All methods fail and we actually have a pretty clear idea when they fail, revealed preferences fail because we don’t model bounded rationality very well and even when we do, we know that sometimes our preferences are different from what we reveal. Stated preferences fail in situations where there’s strong incentives not to tell the truth, for example.

We could deal with these by sort of adding all the counter examples of the special case or we could add the counter examples as something to learn from or what I’m recommending is that we add them as something to learn from while stating that the reason that this is a counter example is that there is a divergence between whatever we’re measuring and the internal model of the human. The idea being that it is a lot easier to generalize when you have an error theory rather than just lists of error examples.

Lucas: Right and so there’s also this point of view here that you’re arguing that this research agenda and perspective is also potentially very helpful for things like corrigibility and low impact research and Christiano’s distillation and amplification, which you claim all seem to be methods that require some simplified version of the human utility function. So any sorts of conceptual insights or systematic insights which are generated through this research agenda in your view seem to be able to make significant contributions to other research agendas which don’t specifically take this lens?

Stuart: I feel that even something like corrigibility can benefit from this because in my experience, things like corrigibility, things like low impact have to define to some extent what is important and what can be categorized as unimportant. A low impact AI cannot be agnostic about our preferences, it has to know that a nuclear war is a high impact thing whether or not we’d like it whereas turning on an orange light that doesn’t go anywhere is a low impact thing, but there’s no real intrinsic measure by which one is high impact and the other is low impact. Both of them have ripples across the universe. So, I think I phrased it as Hitler, Gandhi and Thanos all know what a low impact AI is, all know what an oracle AI is, or know the behavior to expect from it. So, it means that we need to get some of the human preferences in, the bit that tells us that nuclear wars are high impact but we don’t need to get all of it in because since so many different humans will agree on it, you don’t need to capture any of their individual preferences.

Lucas: So, it’s applicable to these other methodologies and it’s also your belief and I’m quoting you here, you say that, “I’d give a 10% chance of it being possible this way, meaning through this research agenda and a 95% chance that some of these ideas will be very useful for other methods of alignment.” So, just adding that here as your credences for the skillfulness of applying insights from this research agenda to other areas of AI alignment.

Stuart: In a sense, you could think of this research agenda in reverse. Imagine that we have reached some outcome that isn’t some positive outcome, we have got alignment and we haven’t reached it through a single trick and we haven’t reached it through the sort of tool AIs or software as a service or those kinds of approaches, we have reached an actual alignment. It, therefore, seems to me all the problems that I’ve listed or almost all of them will have had to have been solved, therefore, in a sense, much of this research agenda needs to be done directly or indirectly in order to achieve any form of sensible alignment. Now, the term directly or indirectly is doing a lot of the work here but I feel that quite a bit of this will have to be done directly.

Lucas: Yeah, I think that that makes a lot of sense. It seems like there’s just a ton about the person that is just confused and difficult to understand what we even mean here in terms of our understanding of the person and also broader definitions included in alignment. Given this optimism that you’ve stated here surrounding the applicability of this research agenda on synthesizing a humans’ preferences into a utility function, what can you say about the limits of this method? Any pessimism to inject here?

Stuart: So, I have a section four, which is labeled as the things that I don’t address. Some of these are actually a bit sneaky like the section on how to combine the preferences of different people because if you read that section, it basically lays out ways of combining different people’s preferences. But I’ve put it in that to say I don’t want to talk about this issue in the context of this research agenda because I think this just diverts from the important work here, and there are a few of those points but some of them are genuine things that I think are problems and the biggest is the fact that there is a sort of informal Godel statement in humans about their own preferences. How many people would accept a computer synthesis of their preferences and say yes, that is my preferences, especially when they can explore it a bit and find the counter intuitive bits? I expect humans in general to reject the AI assigned synthesis no matter what it is, pretty much just because it was synthesized and then given to them, I expect them to reject or want to change it.

We have a natural reluctance to accept the judgment of other entities about our own morality and this is a perfectly fine meta-preference that most humans have and I think all humans have to some degree and I have no way of capturing it within the system because it’s basically a Godel statement in a sense. The best synthesis process is the one that wasn’t used. The other thing is that people want to continue with moral learning and moral improvement and I’ve tried to decompose moral learning and more improvements into different things and show that some forms of moral improvements and moral learning will continue even when you have a fully synthesized utility function but I know that this doesn’t capture everything of what people mean by this and I think it doesn’t even capture everything of what I would mean by this. So, again, there is a large hole in there.

There are some other holes of the sort of more technical nature like infinite utilities, stability of values and a bunch of other things but conceptually, I’m the most worried about these two aspects, the fact that you would reject what values you were assigned and the fact that you’d want to continue to improve and how do we define continuing improvement that isn’t just the same as well your values may drift randomly.

Lucas: What are your thoughts here? Feel free to expand on both the practical and theoretical difficulties of applying this across humanity and aggregating it into a single human species wide utility function.

Stuart: Well, the practical difficulties are basically politics, how to get agreements between different groups. People might want to hang onto their assets or their advantages. Other people might want sort of stronger equality. Everyone will have broad principles to appeal to. Basically, there’s going to be a lot of fighting over the different weightings of individual utilities. The hope there is that, especially with a powerful AI, that the advantage might be sufficiently high that it’s easier to do something where everybody gains even if the gains are uneven than to talk about how to divide a fixed sized pie. The theoretical issue is mainly what do we do with anti-altruistic preferences. I’m not talking about selfish preferences, those are very easy to deal with. That’s just basically competition for the utility, for the resources, for the goodness but actual anti-altruistic utilities so, someone who wants harm to befall other people and also to deal with altruistic preferences because you shouldn’t penalize people for having altruistic preferences.

You should, in a sense, take out the altruistic preferences and put that in the humanity one and allow their own personal preferences some extra weight, but anti-altruistic preferences are a challenge especially because it’s not quite clear where the edge is. Now, if you want someone to suffer, that’s an anti-altruistic preference. If you want to win a game and part of your enjoyment of the game is that other people lose, where exactly does that lie and that’s a very natural preference. You might become a very different person if you didn’t get some at least mild enjoyment from other people losing or from the status boost there is a bit tricky. You might sort of just tone them down so that mild anti-altruistic preferences are perfectly fine, so if you want someone to lose to your brilliant strategy at chess, that’s perfectly fine but if you want someone to be dropped slowly into a pit of boiling acid, then that’s not fine.

The other big question is population ethics. How do we deal with new entities and how do we deal with other conscious or not quite conscious animals around the world, so who gets to count as a part of the global utility function?

Lucas: So, I’m curious to know about concerns over aspects of this alignment story or any kind of alignment story involving lots of leaky abstractions, like in Rich Sutton’s short essay called The Bitter Lesson, he discusses how the bitter lesson of computer science is how leveraging computation over human domain-specific ingenuity has broadly been more efficacious for breeding very powerful results. We seem to have this tendency or partiality towards trying to imbue human wisdom or knowledge or unique techniques or kind of trickery or domain-specific insight into architecting the algorithm and alignment process in specific ways whereas maybe just throwing tons of computation at the thing has been more productive historically. Do you have any response here for concerns over concepts being leaky abstractions, or the categories in which you use to break down human preferences, not fully capturing what our preferences are?

Stuart: Well, in a sense that’s part of the research project and part of the reasons why I warned against going to distant words where in my phrasing, the web of connotations break down, in your phrasing the abstractions become too leaky and this is also part of why even though the second part is done as if this is the theoretical way of doing it, I also think there should be a lot of experimental aspect to it to test where this is going, where it goes surprisingly wrong or surprisingly right, the second part, though it’s presented as just this is basically the algorithm, it should be tested and checked and played around with to see how it goes. For The Bitter Lesson, the difference here I think is that in the case of The Bitter Lesson, we know what we’re trying to do.

We have objectives whether it’s winning at a game, whether it’s classifying images successfully, whether it’s classifying some other feature successfully, we have some criteria for the success of it. The constraints I’m putting in by hand are not so much trying to put in the wisdom of the human or the wisdom of the Stuart. There’s some of that but it’s to try and avoid disasters and the disasters cannot be just avoided with more data. You can get to many different points from the data and I’m trying to carve away lots of them. Don’t oversimplify, for example. So, to go back to The Bitter Lesson, you could say that you can tune your regularizer and what I’m saying is have a very weak regularizer, for example and this is not something that The Bitter Lesson applies to because in the real world, on the problems where The Bitter Lesson applies, you can see whether hand tuning the regularizer works because you can check what the outcome is and compare it with what you want.

Since you can’t compare it with what you want, because if we knew what we wanted we’d kind of have it solved, what I’m saying here is don’t put a strong regularizer for these reasons. The data can’t tell me that I need a stronger regularizer because the data has no opinion if you want on that. There is no ideal outcome to compare with. There might be some problems but the problems like if our preferences do not look like my logic or like our logic, this points towards the method failing, not towards the method’s needing more data and less restrictions.

Lucas: I mean I’m sure part of this research agenda is also further clarification and refinement of the taxonomy and categories used, which could potentially be elucidated by progress in neuroscience.

Stuart: Yes, and there’s a reason that this is version 0.9 and not yet version 1. I’m getting a lot of feedback and going to refine it before trying to put it out as version 1. It’s in alpha or in beta at the moment. It’s a prerelease agenda.

Lucas: Well, so hopefully this podcast will spark a lot more interest and knowledge about this research agenda and so hopefully we can further contribute to bettering it.

Stuart: When I say that this is in alpha or in beta, that doesn’t mean don’t criticize it, do criticize it and especially if these can lead to improvements but don’t just assume that this is fully set in stone yet.

Lucas: Right, so that’s sort of framing this whole conversation in the light of epistemic humility and willingness to change. So, two more questions here and then we’ll wrap up. So, reflective equilibrium, you say that this is not a philosophical ideal, can you expand here about your thoughts on reflective equilibrium and how this process is not a philosophical ideal?

Stuart: Reflective equilibrium is basically you refine your own preferences, make them more consistent, apply them to yourself until you’ve reached a moment where your meta-preferences and your preferences are all smoothly aligned with each other. What I’m doing is a much more messy synthesis process and I’m doing it in order to preserve as much as possible of the actual human preferences. It is very easy to reach reflective equilibrium by just, for instance, having completely flat preferences or very simple preferences, these tend to be very reflectively in equilibrium with itself and pushing towards this thing is a push towards, in my view, excessive simplicity and the great risk of losing valuable preferences. The risk of losing valuable preferences seems to me a much higher risk than the gain in terms of simplicity or elegance that you might get. There is no reason that the kludgey human brain and it’s mess of preferences should lead to some simple reflective equilibrium.

In fact, you could say that this is an argument against reflexive equilibrium because it means that many different starting points, many different minds with very different preferences will lead to similar outcomes which basically means that you’re throwing away a lot of the details of your input data.

Lucas: So, I guess two things, one is that this process clarifies and improves on incorrect beliefs in the person but it does not reflect what you or I might call moral wrongness, so like if some human is evil, then the synthesized human utility function will reflect that evilness. So, my second question here is, an idealization process is very alluring to me. Is it possible to synthesize the human utility function and then run it internally on the AI and then see what we get in the end and then check if that’s a good thing or not?

Stuart: Yes, in practice, this whole thing, if it works, is going to be very experimental and we’re going to be checking the outcomes and there’s nothing wrong with sort of wanting to be an idealized version of yourself. What I have, especially if it’s just one idealized, it’s the version where you are the idealized version of the idealized version of the idealized version of the idealized version, et cetera, of yourself where there is a great risk of losing yourself and the inputs there. This is where I had the idealized process where I started off wanting to be more compassionate and spreading my compassion to more and more things at each step, eventually coming to value insects as much as humans and then at the next step, value rocks as much as humans and then removing humans because of the damage that they can do to mountains, that was a process or something along the lines of what I can see if you are constantly idealizing yourself without any criteria for stop idealizing now or you’ve gone too far from where you started.

Your ideal self is pretty close to yourself. The triple idealized version of your idealized, idealized self or so on, starts becoming pretty far from your starting point and this is the sort of areas where I fear over-simplicity or trying to get to reflective equilibrium at the expense of other qualities and so on, these are the places where I fear this pushes towards.

Lucas: Can you make more clear what failed in our view in terms of that idealization process where Mahatma Armstrong turns into a complete negative utilitarian?

Stuart: It didn’t even turn into a negative utilitarian, it just turned into someone that valued rocks as much as they valued humans and therefore eliminated humans on utilitarian grounds in order to preserve rocks or to preserve insects if you wanted to go down one level of credibility. The point of this is this was the outcome of someone that wants to be more compassionate, continuously wanting to make more compassionate versions of themselves that still want to be more compassionate and so on. It went too far from where it had started. It’s one of many possible narratives but the point is the only way of resisting something like that happening is to tie the higher levels to the starting point. A better thing might say I want to be what myself would think is good and what my idealized self would think was good and what the idealized, idealized self would think was good and so on. So that kind of thing could work but just idealizing without ever tying it back to the starting point, to what compassion meant for the first entity, not what it meant for the nth entity is the problem that I see here.

Lucas: If I think about all possible versions of myself across time and I just happen to be one of them, this just seems to be a meta-preference to bias towards the one that I happen to be at this moment, right?

Stuart: We have to make a decision as to what preferences to take and we may as well take now because if we try and take into account our future preferences, we are starting to come a cropper with the manipulable aspect of our preferences. The fact that these could be literally anything. There is a future Stuart who is probably a Nazi because you can apply a certain amount of pressure to transform my preferences. I would not want to endorse their preferences now. There are future Stuarts who are saints, whose preferences I might endorse. So, if we’re deciding which future preferences that we’re accepting, we have to decide it according to criteria and criteria that at least are in part of what we have now.

We could sort of defer to our expected future selves if we sort of say I expect a reasonable experience of the future, define what reasonable means and then average out our current preferences with our reasonable future preferences if we can define what we mean by reasonable then, then yes, we can do this. This is our sole way of doing things and if we do it this way, it will most likely be non-disastrous. If doing the synthesis process with our current preference is non-disastrous then doing it with the average of our future reasonable preferences is also going to be non-disastrous. This is one of the choices that you could choose to put into the process.

Lucas: Right, so we can be mindful here that we’ll have lots of meta-preferences about the synthesis process itself.

Stuart: Yes, you can put it as a meta-preference or you can put it explicitly in the process if that’s a way you would prefer to do it. The whole process is designed strongly around get an answer from this process, so the, yes, we could do this, let’s see if we can do it for one person over a short period of time and then we can talk about how we might take into account considerations like that, including as I say, this might be in the meta-preferences themselves. This is basically another version of moral learning. We’re kind of okay with our values shifting but not okay with our values shifting arbitrarily. We really don’t want our values to completely flip from what we have now, though some aspects we’re more okay with them changing. This is part of the complicated how do you do moral learning.

Lucas: All right, beautiful, Stuart. Contemplating all this is really quite fascinating and I just think in general, humanity has a ton more thinking to do and self-reflection in order to get this process really right and I think that this conversation has really helped elucidate that to me and all of my contradictory preferences and my multitudes within the context of my partial and sometimes erroneous mental models, reflecting on that also has me feeling maybe slightly depersonalized and a bit ontologically empty but it’s beautiful and fascinating. Do you have anything here that you would like to make clear to the AI alignment community about this research agenda? Any last few words that you would like to say or points to clarify?

Stuart: There are people who disagree with this research agenda, some of them quite strongly and some of them having alternative approaches. I like that fact that they are researching other alternatives. If they disagree with the agenda and want to engage with it, the best engagement that I could see is pointing out why bits of the agenda are unnecessary or how alternate solutions could work. You could also point out that maybe it’s impossible to do it this way, which would also be useful but if you think you have a solution or the sketch of a solution, then pointing out which bits of the agenda you solve otherwise would be a very valuable exercise.

Lucas: In terms of engagement, you prefer people writing responses on the AI Alignment forum or Lesswrong

Stuart: Emailing me is also fine. I will eventually answer every non-crazy email.

Lucas: Okay, wonderful. I really appreciate all of your work here on this research agenda and all of your writing and thinking in general. You’re helping to create beautiful futures with AI and you’re much appreciated for that.

If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

End of recorded material

FLI Podcast: Beyond the Arms Race Narrative: AI & China with Helen Toner & Elsa Kania

Discussions of Chinese artificial intelligence frequently center around the trope of a U.S.-China arms race. On this month’s FLI podcast, we’re moving beyond the arms race narrative and taking a closer look at the realities of AI in China and what they really mean for the United States. Experts Helen Toner and Elsa Kania, both of Georgetown University’s Center for Security and Emerging Technology, discuss China’s rise as a world AI power, the relationship between the Chinese tech industry and the military, and the use of AI in human rights abuses by the Chinese government. They also touch on Chinese-American technological collaboration, technological difficulties facing China, and what may determine international competitive advantage going forward. 

Topics discussed in this episode include:

  • The rise of AI in China
  • The escalation of tensions between U.S. and China in the AI realm 
  • Chinese AI Development plans and policy initiatives
  • The AI arms race narrative and the problems with it 
  • Civil-military fusion in China vs. U.S.
  • The regulation of Chinese-American technological collaboration
  • AI and authoritarianism
  • Openness in AI research and when it is (and isn’t) appropriate
  • The relationship between privacy and advancement in AI 

References discussed in this episode include:

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloudiTunesGoogle Play and Stitcher.

Ariel Conn: Hi everyone and welcome to another episode of the FLI podcast! I’m your host Ariel Conn. Now, by sheer coincidence, Lucas and I both brought on guests to cover the same theme this month, and that is AI and China. Fortunately, AI and China is a huge topic with a lot to cover. For this episode, I’m pleased to have Helen Toner and Elsa Kania join the show. We will be discussing things like the Beijing AI Principles, why the AI arms race narrative is problematic, civil-military fusion in China versus in the US, the use of AI in human rights abuses, and much more.

Helen is Director of Strategy at Georgetown’s Center for Security and Emerging Technology. She previously worked as a Senior Research Analyst at the Open Philanthropy Project, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing for nine months, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI. Helen holds a Bachelor of Science and a Diploma in Languages from the University of Melbourne.

Elsa is a Research Fellow also at Georgetown’s CSET, and she is also a PhD student in Harvard University’s Department of Government. Her research focuses on Chinese military innovation and technological development.

Elsa and Helen, thank you so much for joining us.

Helen Toner: Great to be here.

Elsa Kania: Glad to be here.

Ariel Conn: So, I have a lot of questions for you about what’s happening in China with AI, and how that’s impacting U.S. China relations. But before I dig into all of that, I want to actually start with some of the more recent news, which is the Beijing principles that came out recently. I was actually surprised because they seem to be some of the strongest principles about artificial intelligence that I’ve seen, and I was wondering if you both could comment on your own reactions to those principles.

Elsa Kania: I was encouraged to see these principles released, and I think it is heartening to see greater discussion of AI ethics in China. At the same time, I’m not convinced that these are necessarily strong in the sense of not clear as to what the mechanism for enforcement would be, and I think that this is not unique to China, but I think often the articulation of principles can be a means of burnishing the image, whether of a company or a country, with regard to its intentions in AI.

Although it’s encouraging to hear a commitment to use AI to do good, and for humanity, and control risks, these are very abstract statements, and some of them are rather starkly at odds with realities of how we know AI is being abused by the Chinese government today for purposes that reinforce the coercive capacity of the state: including censorship, surveillance; prominently in Xinjiang where facial recognition has been racially targeted against ethnic minorities, against the backdrop of the incarceration and imprisonment of upwards of a million — by some estimates — Uyghurs in Xinjiang.

So, I think it’s hard not to feel a degree of cognitive dissonance when reading these principles. And again I applaud those involved in the process for their efforts and for continuing to move this conversation forward in China; But again, I’m skeptical that this espoused commitment to certain ethics will necessarily constrain the Chinese government from using AI in ways that it appears to be deeply committed to do so for reasons of concerns about social stability and state security.

Ariel Conn: So one question that I have is, did the Chinese government actually sign on to these principles? Or is it other entities that are involved?

Elsa Kania: So the Beijing AI principles were launched in some association with the Ministry of Science and Technology for China. So, certainly the Chinese government, actually initially in its New Generation AI Development Plan back in the summer of 2017, had committed to trying to lead and engage with issues of legal, ethical, and regulatory frameworks for artificial intelligence. And I think it is telling that these have been released in English; And to some degree part of the audience for these principles is international, against the backdrop of a push for the Chinese government to promote international cooperation in AI.

And the launch of a number of world AI conferences and attempts to really engage with the international community, again, are encouraging in some respects — but also there can be a level of inconsistency. And I think a major asymmetry is the fact that these principles, and many initiatives in AI ethics in China, are shaped by the government’s involvement. And it’s hard to imagine the sort of open exchange among civil society and different stakeholders that we’ve seen in the United States, and globally, happen in China, given the role of the government. I think it’s telling at the same time that the preamble for the Beijing AI principles talks about the construction of a human community with a shared future, which is a staple in Xi Jinping’s propaganda, and a concept that really encapsulates Chinese ambitions to shape the future course of global governance.

So again, I think I’m heartened to see greater discussion of AI ethics in China. But I think the environment in which these conversations are happening — as well as of course the constraints from any meaningful enforcement, or alteration of the government’s current trajectory in AI — makes me skeptical in some respects. I hope that I am wrong, and I hope that we will see this call to use AI for humanity, and to be diverse and inclusive, start to shape the conversation. So, it will be interesting to see whether we see indicators of results, or impact from these principles going forward.

Helen Toner: Yeah. I think that’s exactly right. And in particular, the release of these principles I think made clear a limitation of this kind of document in general. This was one of a series of sets of principles like this that have been released by a number of different organizations. And the fact of seeing principles like this that look so good on paper, in contrast with some of the behavior that Elsa described from the Chinese government, I think really puts into stark relief the limitations of well-meaning, nice sounding ideas like this that really have no enforcement mechanism.

Ariel, you asked about whether the Chinese government had signed onto these, and as Elsa described, there was certainly government involvement here. But just because there is some amount of the government giving, or some part of the Chinese government giving its blessing to the principles, does not imply that there are any kind of enforcement mechanisms, or any kind of teeth to a document of this kind.

Elsa Kania: And certainly that’s not unique to China. And I think there have been questions of whether corporate AI principles, whether from American or Chinese companies, are essentially intended for public relations purposes, or will actually shape the company’s decision making. So, I think it’s really important to move these conversations forward on ethics. At the same time, it will be interesting to see how principles translate into practice, or perhaps in some cases don’t.

Ariel Conn: So I want to backtrack a little bit to where some of the discussion about China’s development of AI started, at least from more Western perspectives. My understanding is that seeing AlphaGo beat Lee Sedol led to something of a rallying cry — I don’t know if that’s quite the right phrase — but that that sort of helped trigger the Chinese government to say, “We need to be developing this a lot stronger and faster.” Is that the case? Or what’s been sort of the trajectory of AI development in China?

Elsa Kania: I think it depends on how far back you want to go historically.

Ariel Conn: That’s fair.

Elsa Kania: I think in recent history certainly AlphaGo was a unique moment — both as an indication of how rapidly AI was progressing, given that experts had not anticipated an AI could win the game of Go for another 10, perhaps 15 years — and also in the context of how the Chinese government, and even the Chinese military, saw this as an indication of the capabilities of American artificial intelligence, including the relevance of the capacities for tactics and strategizing, command decision making in a military context. 

At the same time of course I think another influence in 2016 appears to have been the U.S. government’s emphasis on AI at the time, including a plan for research and development that may have received more attention in Beijing than it did in Washington in some respects, because this does appear to have been one of the factors that inspired China’s New Generation AI Development Plan, launched the following year. 

But I think if we’re looking at the history of AI in China, we can trace it back much further: even some linkages to the early history of cybernetics and systems engineering. And there are honestly some quite interesting episodes early on, because during the Cold War, artificial intelligence could be a topic that had some ideological undertones and underpinnings — including how the Soviet Union saw AI in system science, and some of the critiques of this as revisionism.

And then there is even an interesting detour in the 80s or so: when Qian Xuesen, a prominent strategic scientist in China’s nuclear weapons program, saw AI as entangled with an interest in parapsychology — including exceptional human body functions such as the capacity to recognize characters with your ears. There was a craze for ESP in China in the 80s, and actually received some attention in scientific literature as well: There was an interesting conflation of artificial intelligence and special functions that became the subject of some ideological debate in which Qian Xuesen was an advocate essentially of ESP in ways that undermined early AI development in China.

And other academic rivals in the Chinese Academy of Sciences argued in favor of AI as a discipline of emerging science relative to the pseudoscience that human special functions turned out to be, and this became a debate of some ideological importance as well against the backdrop of questions of arbitrating what science was, and how the Chinese Communist Party tried to sort of shape science. 

I think that does go to illustrate that although a lot of the headlines about China’s rise in AI are much more recent, not only state support for research, but also the significant increasing in publications far predates this attention, and really can be traced to some degree to the 90s, and especially from the mid 2000s onward.

Helen Toner: I’ll just add as well that if we’re thinking about what it is that caused this surge in Western interest in Chinese AI, I think a really important part of the backdrop is the shift in U.S. defense thinking to move away from thinking primarily about terrorism, and non-state actors as the primary threat to U.S. security, and shifting towards thinking about near-peer adversaries — so primarily China and Russia — which is a recent change in U.S. doctrine. And I think that is also an important factor in understanding why Chinese interest and success in AI has become such an important sort of conspicuous part of the discussion.

Elsa Kania: There’s also been really a recalibration of assessments of the state of technology and innovation in China, from often outright skepticism and dismissal that China could innovate to sometimes now a course correction towards the opposite extreme; and now anxieties that China may be beating us in the “race for AI” or 5G — even quantum computing has provoked a lot of concern. So, I think on one hand it is long overdue that U.S. policy makers and the American National Security community take seriously what are quite real and rapid advances in science and technology in China.

At the same time I think sometimes this reaction has resulted in more inflated assessments that have provoked concerns about the notion of an arms race, which I think is really wrong and misleading framing of this when we’re talking about a general purpose technology that has such a range of applications, and for which the economic and societal impacts may be more significant than the military applications in the near-term, which I say is an analyst who focuses on military issues.

Ariel Conn: I want to keep going with this idea of the fear that’s sort of been developing in the U.S. in response to China’s developments. And I guess I first started seeing it a lot more when China released their Next Generation Artificial Intelligence Plan — I believe that’s the one that said by 2030 they wanted to dominate in AI.

Helen Toner: That’s right.

Ariel Conn: So I’d like to hear both of your thoughts on that. But I’m also sort of interested in — to me it seemed like that plan came out in part as a response to what they were seeing from the US, and then the U.S. response to this is to — maybe panic is a little bit extreme, but possibly overreact to the Chinese plan — and maybe they didn’t overreact, that might be incorrect. But it seems like we’re definitely seeing an escalation occurring.

So let’s start by just talking about what that plan said, and then I want to dive into this idea of the escalation, and maybe how we can look at that problem, or address it, or consider it.

Elsa Kania: So, I’d been certainly looking at a lot of different plans and policy initiatives for the 13th Five-Year Plan period, which is 2016 to 2020, and I had noticed when this New Generation AI Development Plan came out; and initially it was only available in Chinese. A couple of us, after we’d come across it initially, had organized to work on a translation of it, and to this day that’s still the only unofficial English translation of this plan available. So far as I can tell the Chinese government itself never actually translated that plan. And in that regard, it does not appear to have been intended for an international audience in the way that, for instance, the Beijing AI Principles were.

So, I think that some of the rhetoric in the plan that rightly provoked concerns — calling for China to lead the world in AI and be a premier global innovation center for artificial intelligence — is striking, but is consistent with S&T plans that often call for China to seize the strategic commanding heights of innovation, and future advantage. So I think that a lot of the signaling about the strategic importance of AI to some degree was intended for an internal audience, and certainly we’ve seen a powerful response in terms of plans and policies launched across all elements of the Chinese government, and at all levels of government including a number of cities and provinces.

I do think it was highly significant in reflecting how the Chinese government saw AI as really a critical strategic technology to transform the Chinese economy, and society, and military — though that’s discussed in less detail in the plan.

But there is also an open acknowledgement in the plan that China still sees itself as well behind the U.S. in some respects. So, I think the ambitions and the resources and policy support across all levels of government that this plan has catalyzed are extremely significant, and I think do merit some concern, but I think some of the rhetoric about an AI race, or arms race — clearly there is competition in this domain. But I do think the plan should be placed in the context of an overall drive by the Chinese government to escape the middle income trap, and sustain economic growth at a time when it’s slowing and looking to AI as an important instrument to advance these national objectives.

Helen Toner: I also think there is something kind of amusing that happened where, as Elsa said earlier, it seems like one driver of the creation of this plan was that China saw the U.S. government under the Obama administration in 2016 run a series of events and then put together a white paper about AI, and a federal R&D plan. And China’s response to this was to think, “Oh, we should really put together our own strategy, since the U.S.has one.” And then somehow with the change in administrations, and the time that had elapsed, there suddenly became this narrative of, “Oh no, China has an AI strategy and the U.S. doesn’t have one; So now we have to have one because they have one.” And that was a little bit farcical to be honest. And I think has now died down after, I believe it’s called the American AI Initiative that President Trump released. But that was amusing to watch while it was happening.

Elsa Kania: I hope that the concerns over the state of AI in China can provoke concerns that motivate productive responses. I agree that sometimes the debate has focused too much on the notion of what it would mean to have an AI strategy, or concerns about the plan as sort of one of the most tangible manifestations of these ambitions. But I do think there are reasons for concern that the U.S. has really not recognized the competitive challenge, and sometimes still seems to take for granted American leadership in emerging technologies for which the landscape does remain much more contested.

Helen Toner: For sure.

Ariel Conn: Do you feel like we’re starting to see de-escalation then — that people are starting to maybe change their rhetoric about making sure someone’s ahead, or who’s ahead, or all that type of lingo? Or do you think we are still seeing this escalation that is perhaps being reported in the press still?

Helen Toner: I think there is still a significant amount of concern. Perhaps one shift that we’ve seen a little bit — and Elsa I’d be curious if you agree — is that I think around the time that the Next Generation Plan was released, and attention was starting to turn to China, there began to be a bit of a narrative of, “Not only is China trying to catch up with the U.S. and making progress in catching up with the U.S. but perhaps has already surpassed the U.S. and is perhaps already clearly ahead in AI research globally.” That’s an extremely difficult thing to measure, but I think some of the arguments that were made to say that were not as well backed up as they could have been.

Maybe one thing that I’ve observed over the last six or 12 months is a little bit of a rebalancing in thinking. It’s certainly true that China is investing very heavily in this, and is trying really hard; And it’s certainly true that they are seeing some results from that, but it’s not at all clear that they have already caught up with the U.S. in any meaningful way, or are surpassing it. Of course, it depends how you slice up the space, and whether you’re looking more at fundamental research, or applied research, or so on. But that might be one shift we’ve seen a little bit.

Elsa Kania: I agree. I think there has continued to be a recalibration of assessments, and even a rethinking of the notion of what leading in AI even means. And I used to be asked the question all the time of who was winning the race, or even arms race, for AI. And often I would respond by breaking down the question, asking, “Well what do you mean by who?” Because the answer will differ depending on whether we’re talking about American and Chinese companies, relative to how do we think about aggregating China and the United States as a whole when it comes to AI research — particularly considering the level of integration and interdependence between American and Chinese innovation ecosystems. What do we mean by winning in this context? How do we think about the metrics, or even desired end states? Is this a race to develop something akin to artificial general intelligence? Or is this a rivalry to see which nation can best leverage AI for economic and societal development across the board?

And then again, why do we continue to talk about this as a race? I think that is a metaphor in framing that does readily come to mind and can be catchy. And as someone who looks at the military dimension of this quite frequently, I often find myself explaining why I don’t think “arms race” is an appropriate conceptualization either. Because this is a technology that will have a range of applications across different elements of the military enterprise — and that does have great promise for providing decisive advantage in the future of warfare, and yet we’re not talking about a single capability or weapon systems, but rather something that is much more general purpose, and that is fairly nascent in its development.

So, AI does factor into this overall U.S.-China military competition that is much more complex and amorphous than the notion of an arms race to develop killer robots would imply. Because certainly there are autonomous weapons development underway in the U.S. and China today; and I think that is quite concerning from the perspective of thinking about the future military balance, or how the U.S. and Chinese militaries might be increasing the risks of a crisis, and considerations of how to mitigate those concerns and reinforce strategic stability.

So hopefully there is starting to be greater questioning of some of these more simplistic framings, often in headlines, often in some of the more sensationalist statements out there. I don’t believe China is yet an AI superpower, but clearly China is an AI powerhouse.

Ariel Conn: Somewhat recently there was an op ed by Peter Thiel in which he claims that China’s tech development is naturally a part of the military. There’s also this idea that I think comes from China of military-civil fusion. And I was wondering if you could go into the extent to which China’s AI development is naturally a part of their military, and the extent to which companies and research institutes are able to differentiate their work from military applications.

Elsa Kania: All right. So, the article in question did not provide a very nuanced discussion of these issues. And to start I would say that it is hardly surprising that the Chinese military is apparently enthusiastic about leveraging artificial intelligence. China’s new national defense white paper, titled “China’s National Defense in the New Era,” talked about advances in technologies like big data, cloud computing, artificial intelligence, quantum information, as significant at a time when the character of warfare is evolving — what is known as today’s informatized warfare, towards future intelligentized warfare, in which some of these emerging technologies, namely artificial intelligence, could be integrated into the system of systems for future conflict.

And the Chinese military is pursuing this notion of military intelligentization, which essentially involves looking to leverage AI for a range of military applications. At the same time, I see military-civil fusion as a concept and strategy to remain quite aspirational in some respects.

There’s also a degree of irony, I’d argue, that much of what China is attempting to achieve through military-civil fusion is inspired by dynamics and processes that they have seen be successful in the American defense innovation ecosystem. I think sometimes there is this tendency to talk about military-civil fusion as this exotic or uniquely Chinese approach, when in fact there are certain aspects of it that are directly mimicking, or responding to, or learning from what the U.S. has had within our ecosystem for a much longer history. And China’s trying to create this more rapidly and more recently. 

So, the delta of increase, perhaps, and the level of integration between defense, academic, and commercial developments, may be greater. But I think the actual results so far are more limited. And again it is significant, and there are reasons for concern. We are seeing a greater and greater blurring of boundaries between defense and commercial research, but the fusion is again much more aspirational, as opposed to the current state of play.

Helen Toner: I’ll add as well, returning to that specific op ed when Thiel mentioned military-civil fusion, he actually linked to an article by a colleague of Elsa’s and mine, Lorand Laskai, where he wrote about military-civil fusion, and Lorand straight up said that Thiel had clearly not read the article, based on the way that he described military-civil fusion.

Ariel Conn: Well, that’s reassuring.

Elsa Kania: We are seeing militaries around the world, the U.S. and China among them, looking to build bridges to the private sector, and deepening cooperation with commercial enterprises. And I think it’s worth thinking about the factors that could provide a potential advantage; or for militaries that are looking to increase their capacity as organizations to leverage these technologies — this is an important dimension of that. And I think we are seeing some major progress in China in terms of new partnerships, including initiatives at the local level, new parks, new joint laboratories. But I do think, as with the overall status of China’s AI plan, there’s a lot of activity and a lot of investment. But the results are harder to ascertain at this point.

And again, I think it also does speak to questions of ethics in the sense that we have in the U.S. seen very open debate about companies and concerns, particularly of their employees, about whether they should or should not be working with the military or government on different projects. And I remain skeptical that we could see comparable debates or conversations happening in China, or that a Chinese company would outright say no to the government. I think certainly some companies may resist on certain points, or at the margins, especially when they have commercial interests that differ from the priorities of the government. But I do think the political economy of this ecosystem as a whole is very distinct.

And again I’m skeptical that if the employees of a Chinese company had moral qualms about working with the Chinese military, they’d have the freedom to organize, and engage in activism to try to change that.

Ariel Conn: I’d like to go into that a little bit more, because there’s definitely concerns that get raised that we have companies in the U.S. that are rejecting contracts with the U.S. government for fear that their work will be militarized, while at the same time — as you said — companies in China may not have that luxury. But then there’s also instances where you have say Google in China doing research, and so does that mean that Google is essentially working with the Chinese military and not the U.S. military? I think there’s a lot of misunderstanding about what the situation actually is there. I was wondering if you could both go into that a little bit.

Helen Toner: Yeah. I think this is a refrain that comes up a lot in DC as, “Well, look at how Google withdrew from its contract to work on Project Maven,” which is a Department of Defense Initiative looking at tagging overhead imagery, “So clearly U.S. companies aren’t willing to work with the U.S. government, But on the other hand they are still working in China. And as we all know, research in China is immediately used by the Chinese military, so therefore, they’re aiding the Chinese military even though they’re not willing to aid the U.S. military.” And I do think this is highly oversimplified description, and pretty incorrect.

So, a couple elements here. One is that I think the Google Project Maven decision seems to have been pretty unique. We haven’t really seen it repeated by other companies. Google continues to work with the U.S. military and the U.S. government in some other ways — for example working on DARPA projects, and working on other projects; And other U.S. companies are also very willing to work with the U.S. government including really world-leading companies. A big example right now is Amazon and Microsoft bidding on this JEDI contract, which is to provide cloud computing services to the Pentagon. So, I think on the one hand, this claim that U.S. companies are unwilling to work with the U.S. military is a vast overgeneralization.

And then on the other hand, I think I would point back to what Elsa was saying about the state of military-civil fusion in China, and the extent to which it makes sense or doesn’t make sense to say that any research done in China is immediately going to be incorporated into Chinese military technologies. I definitely wouldn’t say there is nothing to be concerned about here. But I think that the simplified refrain is not very productive.

Elsa Kania: With regard to some of these controversies, I do continue to believe that having these open debates, and the freedom that American companies and researchers have, is a strength of our system. I don’t think we should envy the state of play in China, where we have seen the Chinese Communist Party become more and more intrusive with regard to its impositions upon the tech sector, and I think there may be costs in terms of the long-term trajectory of innovation in China.

And with regard to the particular activities of American companies in China, certainly there have been some cases where companies have engaged in projects, or with partners, that I think are quite problematic. And one of the most prominent examples of that recently has been Google’s involvement in Dragonfly — creating a censored search engine — which was thoroughly condemned, including because of its apparent inconsistency with their principles. So, I do think there are concerns not only of values but also of security when it comes to American companies and universities that are engaged in China, and it’s never quite a black and white issue or distinction.

So for instance in the case of Google, their presence in China in terms of research does remain fairly limited. There have been a couple of cases where papers published in collaboration between a Google researcher and a Chinese colleague involve topics that are quite sensitive and evidently not the best topic on which to be collaborating, in my opinion — such as target recognition. There’s also been concerns over research on facial recognition, given the known abuse of that technology by the Chinese government. 

I think that also when American companies or universities partner or coauthor with Chinese counterparts, especially those that are linked to or are outright elements of the Chinese military — such as the National University of Defense Technology, which has been quite active in overseas collaborations — I do think that there should be some red lines. I don’t think the answer is “no American companies or universities should do any work on AI in China.” I think that would actually be damaging to American innovation, and I think some of the criticisms of Google have been unfair in that regard, because I do think that a more nuanced conversation is really critical going forward to think about the risks and how to get policy right.

Ariel Conn: So I want to come back to this idea of openness in a minute, but first I want to stick with some pseudo-military concerns. Maybe this is more reflective of what I’m reading, but I seem to see a lot more concern being raised about military applications of AI in China, and some concerns obviously about AI use with their humanitarian issues are starting to come to the surface. In light of some recent events especially like what we’re seeing in Hong Kong, and then with the Uyghurs, should we be worrying more about how China is using AI for what we perceive as human rights abuses?

Elsa Kania: That is something that greatly concerns me, particularly when it comes to the gravity of the atrocities in Xinjiang. And certainly there are very low tech coercive elements to how the Chinese government is essentially trying to re-engineer an entire population in ways that have been compared by experts as tantamount to a cultural genocide, and the creation of concentration camps — and beyond that, the pervasiveness of biometrics and surveillance enabled by facial recognition, and the creation of new software programs to better aggregate big data about individuals. I think all of that paints a very dark picture of ways in which artificial intelligence can enable authoritarianism, and can reinforce the Chinese government’s capability to repress its own population in ways that in some cases can become pervasive in day to day life.

And I’d say that having been to Beijing recently, surveillance is kind of like air pollution. It is pervasive, in terms of the cameras you see out on the streets. It is inescapable in a sense, and it is something that the average person or citizen in China can do very little about. I think of course this is not quite a perfect panopticon yet; Elements of this remain a work in progress. But I do think that the overall trajectory of these developments is deeply worrying in terms of human rights abuses, and yet it’s not as much of a feature of conversations in AI ethics in China. But I think it does overshadow some of the more positive aspects of what the Chinese government is doing with AI, like in health care and education, that this is also very much a reality.

And I think when it comes to the Chinese military’s interest in AI, it is quite a complex landscape of research and development and experimentation. To my knowledge it does not appear that the Chinese military is yet at the stage of deploying all that much in the way of AI: again very active efforts and long term development of weapons systems — including cruise missiles, hypersonics, a range of unmanned systems across all domains with growing degrees of autonomy, unmanned underwater vehicles and submarines, progress in swarming that has been prominently demonstrated, scavenger robots in space as a covert counter-space capability, human machine integration or interaction.

But I think that the translation of some of these initial stages of military innovation into future capabilities will be challenging for the PLA in some respects. There could be ways in which the Chinese military has advantages relative to the U.S., given apparent enthusiasm and support from top-level leadership at the level of Xi Jinping himself, and several prominent generals, who have been advocating for and supporting investments in these future capabilities.

But I do think that we’re really just at the start of seeing what AI will mean for the future of military affairs, and future of warfare. But when it comes to developments underway in China, particularly in the Chinese defense industry, I think the willingness of Chinese companies to export drones, robotic systems — many of which again have growing levels of autonomy, or at least are advertised as such — is also concerning from the perspective of other militaries that will be acquiring these capabilities and could use them in ways that violate human rights. 

But I do think there are concerns how the Chinese military would use its own capabilities. The export of some of these weapons systems going forward, as well as the potential use of made-in-China technologies by non-state actors and terrorist organizations, as we’ve already seen with the use of drones made by DJI by ISIS, or Daesh, in Syria, including as improvised IEDs. So there are no shortage of reasons for concerns, but I’ll stop there for now.

Ariel Conn: Helen, did you have anything you wanted to add?

Helen Toner: I think Elsa said it well. I would just reiterate that I think the ways that we’re starting to see China incorporating AI into its larger surveillance state, and methods of domestic control, are extremely concerning.

Ariel Conn: There’s debate I think about how open AI companies and researchers should be about their technology. But we sort of have a culture of openness in AI. And so I’m sort of curious: how is that being treated in China? Does it seem like that can actually help mitigate some of the negative applications that we see of AI? Or does it help enable the Chinese or anyone else to develop AI in non-beneficial ways that we are concerned about? What’s the role of openness in this?

Elsa Kania: I think openness is vital to innovation, and I hope that can be sustained — even as we are seeing greater concerns about the misuse or transfer of these technologies. I think that the level of openness and integration between the American and Chinese innovation ecosystems is useful in the sense that it does provide a level of visibility, or awareness, or sort of a shared understanding of the state of research. But I think at the same time there are reasons to have some thought-through parameters on that openness, or again — whether from the perspective of ethics or security — ways that having better guidelines or frameworks for how to engage, I think, will be important in order to sustain that openness and engagement.

I think that having better guardrails, and how to think about where openness is warranted, and when there should be at the very least common sense, and hopefully some rigorous consideration of these concerns, will be important. And then also another dimension of openness is thinking about when to release, or publish, or make available certain research, or even the tools underlying those advances; and when it’s better to keep more information proprietary. And I think the greater concern there, beyond the U.S.-China relationship, may be the potential for misuse or exploitation of these technologies by non-state actors, or terrorist organizations, even high end criminal organizations. I think the openness of the AI field is really critical. But I also think to sustain that, it will be important to think very carefully through some of these potential negative externalities across the board.

Helen Toner: One element that makes it extra complicated here in terms of openness and collaboration between U.S. and Chinese researchers: so much of the work that is going on there is really quite basic research — work on computer vision, or on speech recognition, or things of that nature. And that kind of research can be used for so many things, including both harmful, oppressive applications, as well as many much more acceptable applications. I think it’s really difficult to think through how to think about openness in that context.

So, one thing I would love to see is more information being made available to researchers. For example, I do think that any researcher who is working with a Chinese individual, or company, or organization should be aware of what is going on in Xinjiang, and should be aware of the governance practices that are common in China. And it would be great if there were more information available on specific institutions, and how they’re connected to various practices, and so on. That would be a good step towards helping non-Chinese researchers understand what kinds of situations they might be getting themselves involved in.

Ariel Conn: Do you get the sense that AI researchers are considering how some of their work can be applied in these situations where human rights abuses are taking place? I mean, I think we’re starting to see that more, but I guess maybe how much do you feel like you’re seeing that vs. how much more do you think AI researchers need to be making themselves aware?

Helen Toner: I think there’s a lot of interest and care among many AI researchers in how their work will be used, and in making the world a better place, and so on. And I think things like Google’s withdrawal from Project Maven, and also the pressure that was put on Google when it was leaked that it was working on a censored search engine to be used in China: I think those are both evidence of the level of, I guess, caring that is there. But I do think that there could be more awareness of specific issues that are going on in China. I think the situation in Xinjiang is gradually becoming more widely known, but I wouldn’t be surprised if it wasn’t something that plenty of AI researchers had come across. I think it’s a matter of pairing that interest in how their work might be used with information about what is going on, and what might happen in the future.

Ariel Conn: One of the things that I’ve also read, and I think both of you addressed this in works of yours that I was looking at: there’s this concern that China obviously has a lot more people, their privacy policies aren’t as strict, and so they have a lot more access to big data, and that that could be a huge advantage for them. Reading some of your work, it sounded like maybe that wasn’t quite the advantage that people worry about, at least yet. And I was hoping you could explain a little bit about technological difficulties that they might be facing even if they do have more data.

Helen Toner: For sure. I think there are quite a few different ways in which this argument is weaker than it might appear at first. So, I think there are many reasons to be concerned about the privacy implications of China’s data practices. Certainly having spent time in China, it’s very clear that the instant messages you’re sending, for example, are not only being read by you; That’s certainly concerning from that perspective. But if we’re talking about whether data will give them an advantage in developing AI, think there are a few different reasons to be a little bit skeptical.

One reason, which I think you alluded to, is simply whether they can make use of this data that they’re collecting. There was some reporting, I believe, last year coming out of Tencent, talking about ways in which data was very siloed inside the company, and it’s notoriously difficult. The joke among the data scientists is that when you’re trying to solve some problem with data, you spend the first 90% of your time just cleaning and structuring the data, and then only the last 10 percent actually solving the problem. So, that’s the sort of logistical or practical issue that you mentioned.

Other issues are things like: the U.S. doesn’t have as large a population as China, but U.S. companies have much greater international reach. So, they often have as many, if not more, users compared with Chinese companies. Even more importantly, I think, are two extra issues — one of which being that for most AI applications, the kind of data that will be useful in training a given model needs to be relevant to the problem that model is solving. So, if you have lots of data about Chinese customers’ purchases on Taobao, which is Chinese Amazon, then you’re going to be really good at predicting what kind of purchases Chinese consumers will make on Taobao. But that’s not going to help you with, for example, the kind of overhead imagery analysis that Project Maven was targeting, and things like this.

So that’s one really fundamental problem, I think, is this matter of data primarily being useful for training systems that are solving problems that are very related to the data that you have. And then a second really fundamental issue is thinking about how important it is or isn’t to have pre-gathered data in order to train a given model. And so, something that I think is left out of a lot of conversations on this issue is the fact that many types of models — notably, reinforcement learning models — can often be trained on what is referred to as synthetic data, which basically means data that you generate during the experiment — as opposed to requiring a pre gathered data set that you are training your model on.

So, an example of this would be AlphaGo, that we mentioned before. The original AlphaGo was first trained on human games, and then fine tuned from there. But AlphaGo Zero, which was released subsequently, did not actually need any pre-collected data, and instead just used computation to simulate games and play against itself, and thereby learn how to play the game even better than AlphaGo, which was trained on human data. So, I think there are all manner of reasons to be a little bit skeptical of this story that China has some fundamental advantage in access to data.

Elsa Kania: Those are all great points, and I would just add that I think this is particularly true when we look at the apparent disparities in access to data between China’s commercial ecosystem and the Chinese military. As Helen mentioned, much of that data generated from China’s mobile ecosystem will have very little relevance if you are looking to build advanced weapon systems, and the critical question going forward, or the much more relevant concern, will be the Chinese military’s capacity as an organization to improve its management and employment of its own data, while also gaining access to other relevant sources of data and looking to leverage simulations, even war gaming, as techniques to generate more data of relevance to training AI systems for military purposes.

So, the notion that data is the new oil I think is at best a massive oversimplification, given this is much more a complex landscape; And access to and use of, even labeling of data become very practical measures that militaries, among other bureaucracies, will have to grapple with as they think about how to develop AI that is trained particularly for the missions they have in mind.

Ariel Conn: So, does it seem fair to say then that it’s perfectly reasonable for Western countries to maintain, and possibly even develop, stricter privacy laws and still remain competitive?

Helen Toner: I think absolutely. The idea that one would need to reduce privacy controls in order to keep up with some volume of data that needs to be collected in order to be competitive in AI fundamentally misunderstands how AI research works. And I think also misunderstands the ways that Western companies will stay competitive; I think it’s not an accident that WeChat, for example, the most popular messaging app in China has really struggled to spread beyond China, the Chinese diaspora. I would posit that a significant part of that is the fact that it’s clear that messages on that app are going to the Chinese government. So, I think U.S. and other Western companies should be wary of sacrificing the kinds of features and functionalities that are based in the values that we hold dear.

Elsa Kania: I’d just add that I think there’s often this framing of a dichotomy between privacy and advancement in AI — and as Helen said, I think that there are ways to reconcile our priorities and our values in this context. And I think the U.S. government can also do much more when it comes to better leveraging data that it does have available, and making it more open for research purposes while focusing on privacy in the process. Exploitation of data should not come at the expense of privacy or be seen as at odds with advancement.

Helen Toner: And I’ll also add as well that we’re seeing advancements in various technologies that make it possible to utilize data without invading the privacy of the holder of that data. So, these are things like differential privacy, multi-party computation, a number of other related techniques that make it possible to securely and privately make use of data for improving goals without exposing the individual data of any particular user.

Ariel Conn: I feel like that in and of itself is another podcast topic.

Helen Toner: I agree.

Ariel Conn: The last question I have is: what do you think is most important for people to know and consider when looking at Chinese AI development and the Western concerns about it?

Elsa Kania: The U.S. in many respects does remain in a fairly advantageous position. However, I worry we may erode our own advantages if we don’t recognize what they are. And I think it does come down to the fact that the openness of the American innovation ecosystem, including our welcome to students and scholars from all over the world, has been critical to progress in science in the United States. And I think it’s really vital to sustain that. I think between the United States and China today, the critical determinant of competitive advantage going forward will be talent. I think there are many ways that China continues to struggle and is lagging behind its access to human capital resources — though there are some major policy initiatives underway from the Chinese Ministry of Education, significant expansions of the use of AI in and for education.

So, I think that as we think about relative trajectories in the long term, it will be important to think about talent, and how this is playing out in a very complex and often very integrated landscape between the U.S. and China. And I’ve said it before, and I’ll say it again: I think in the United States it is encouraging that the Department of Defense has a strategy for AI and is thinking very carefully about the ethics and opportunities it provides. I hope that the U.S. Department of Education, and that states and cities across the U.S., will also start to think more about what AI can do in terms of opportunities, in terms of more personalized and modernized approaches to education in the 21st century.

Because I think again, although I’m someone who as an analyst looks more at the military elements of this question, I think talent and education are foundational to everything. And some of what the Chinese government is doing with exploring the potential of AI in education are things that I wish the U.S. government would consider pursuing equally actively — though with greater concern to privacy and to the well-being of students. I don’t think we should necessarily envy or look to emulate many elements of China’s approach, but I think on talent and education it’s really critical for the U.S. to think about that as a main frontier of competition and to sustain openness to students and scientists from around the world, which requires thinking about some of these tricky issues of immigration that have become politicized to a level that is unfortunate and risks damaging our overall innovation ecosystem, not to mention the well-being and opportunities of those who can sometimes get caught in this crossfire in terms of the geopolitics and politics.

Helen Toner: I’d echo what Elsa said. I think in a nutshell what I would recommend for those interested in thinking about China’s prospects in AI is to be less concerned about how much data they have access to, or about the Chinese government and its plans being a well-oiled machine that works perfectly on the first try — and to pay more attention to, on the one hand, the willingness of the Chinese Communist Party to use extremely oppressive measures, and on the other hand, to pay more attention to the question of human capital and talent in AI development, and to focus more on how the U.S. can do better at attracting and retaining top talent — which has historically been something the U.S. has done really well, but for a variety of reasons has perhaps started to slide a little bit in recent years.

Ariel Conn: All right. Well, thank you both so much for joining this month. This was really interesting for me.

Elsa Kania: Thank you so much. Enjoyed the conversation, and certainly much more to discuss on these fronts.

Helen Toner: Thanks so much for having us.

 

 

New Report: Don’t Be Evil – A Survey of the Tech Sector’s Stance on Lethal Autonomous Weapons

As we move towards a more automated world, tech companies are increasingly faced with decisions about how they want — and don’t want — their products to be used. Perhaps most critically, the sector is in the process of negotiating its relationship to the military, and to the development of lethal autonomous weapons in particular. Some companies, including industry leaders like Google, have committed to abstaining from building weapons technologies; Others have wholeheartedly embraced military collaboration. 

In a new report titled “Don’t Be Evil,” Dutch advocacy group Pax evaluated the involvement of 50 leading tech companies in the development of military technology. They sent out a survey asking companies about their current activities and their policies on autonomous weapons, and used each company’s responses to categorize it as “best practice,” “medium concern,” or “high concern.” Categorizations were based on 3 criteria:  

  • Is the company developing technology that could be relevant in the context of lethal autonomous weapons?
  • Does the company work on relevant military projects?
  • Has the company committed to not contribute to the development of lethal autonomous weapons? 

“Best practice” companies are those with explicit policies that ensure their technology will not be used for lethal autonomous weapons. Companies categorized as “medium concern” are those currently working on military applications of relevant technology but who responded that they were not working on autonomous weapons; or companies who are not known to be working on military applications of technology but who did not respond to the survey. “High concern” companies are those working on military applications of relevant technology who did not respond to the survey. 

The report makes several recommendations for how companies can prevent their products from contributing to the development of lethal autonomous weapons. It suggests that companies make a public commitment not to contribute; that they establish clear company policies reiterating such a commitment and providing concrete implementation measures; and that they inform employees about the work they are doing and allow open discussion around any concerns. 

Pax identifies six sectors considered relevant to autonomous weapons: big tech, AI software and system integration, autonomous (swarming) aerial systems, hardware, pattern recognition, and ground robots. The report is organized into these categories, and then subdivided further by country and product. We’ve instead listed the companies in alphabetical order. Find basic information about all 50 companies in the chart, and read more about a select group below.

CompanyHQRelevant TechnologyRelevant Military/Security ProjectsConcern Level
AerialXCanadaCounter-drone systemsDroneBulletHigh
AiroboticsIsraelAutonomous dronesBorder security patrol botsMedium
Airspace SystemsUSCounter-drone systemsAirspace interceptor High
AlibabaChinaAI chips; facial recognitionMedium
AmazonUSCloud; drones; facial and speech recognitionJEDI; Rekognition High
Anduril IndustriesUSAI platformsProject Maven; LatticeHigh
Animal DynamicsUKAutonomous dronesSkeeterBest practice
AppleUSComputers; facial and speech recognitionMedium
Arbe RoboticsIsraelAutonomous vehiclesBest practice
ATOSFranceAI architecture; cyber security; data ManagementMedium
BaiduChinaDeep learning; pattern recognitionMedium
Blue Bear SystemsUKUnmanned maritime and aerial systemsProject Mosquito/LANCAHigh
CambriconChinaAI chipsMedium
Citadel DefenseUSCounter-drone systemsTitanHigh
ClarifaiUSFacial recognitionProject MavenHigh
Cloudwalk TechnologyChinaFacial recognitionMedium
Corenova TechnologiesUSAutonomous swarming systemsHiveDefense; OFFSETHigh
DeepGlintChinaFacial recognitionMedium
DiboticsFranceAutonomous navigation; drones‘Generate’Medium
EarthCubeFranceMachine learning‘Algorithmic warfare tools of the future’High
FacebookUSSocial media; pattern recognition; virtual RealityMedium
General RoboticsIsraelGround robotsDogoBest practice
GoogleUSAI architecture; social media; facial recognitionBest practice
Heron SystemsUSAI software; machine learning; drone applications‘Solutions to support tomorrow’s military aircraft’High
HiveMapperUSPattern recognition; mappingHiveMapper appBest practice
IBMUSAI chips; cloud; super computers; facial recognitionNuclear testing super computers; ex-JEDIMedium
InnovizIsraelAutonomous vehiclesMedium
IntelUSAI chips; UASDARPA HIVEHigh
MegviiChinaFacial recognition Medium
MicrosoftUSCloud; facial recognitionHoloLens; JEDIHigh
MontvieuxUKData analysis; deep learning ‘Revolutionize human information relationship for defence’High
NaverS. Korea‘Ambient Intelligence’; autonomous robots; machine vision systemsMedium
NeuralaUSDeep learning neural network softwareTarget identification software for military drones Medium
OracleUSCloud; AI infrastructure; big dataEx-JEDI High
Orbital InsightUSGeospatial analyticsMedium
PalantirUSData analyticsDCGS-AHigh
PerceptoIsraelAutonomous drones Medium
Roboteam IsraelUnmanned systems; AI softwareSemi-autonomous military UGVsHigh
SamsungS. KoreaComputers and AI platformsMedium
SenseTimeChinaComputer vision; deep learningSenseFace; SenseTotem for police use High
Shield AIUSAutonomous (swarming) dronesNovaHigh
SiemensGermanyAI; AutomationKRNS; TRADESMedium
SoftbankJapanTelecom; RoboticsBest practice
SparkCognitionUSAI systems; swarm technology‘Works across defense and national security space in the U.S.’High
SynthesisBelarusAI- and cloud-based applications; pattern recognitionKipodHigh
Taiwan SemiconductorTaiwanAI chipsMedium
TencentChinaAI applications; cloud; ML; pattern recognitionMedium
TharsusUKRoboticsMedium
VisionLabsRussiaVisual recognitionBest practice
YituChinaFacial recognitionPolice use High

Company names are colored to indicate concern level: best practicemedium concernhigh concern.

AerialX

  • Developing the DroneBullet, a kamikaze drone that can autonomously identify, track, and attack a target drone
  • Working to modify DroneBullet “for a warhead-equipped loitering munition system”

Airobotics

  • In response to survey, stated that its “drone system has nothing to do with weapons and related industries
  • Has clear links to military and security business; announced a Homeland Security and Defense division and an initiative to perform emergency services in 2017
  •  Involved in border security, in particular US-Mexico border; provides patrol bots
  • Co-founder has stated that it will not add weapons to its drones

Airspace Systems

  • Utilizes AI and advanced robotics for airspace security solutions, including “long-range detection, instant identification, and autonomous mitigation—capture and safe removal of unauthorized or malicious drones”
  • Developed Airspace Interceptor, a fully autonomous system that can capture target drones, in collaboration with US Department of Defense

Alibaba

  • China’s largest online shopping company
  • Recently invested in seven research labs that will focus on areas including AI, machine learning, network security, and natural language processing
  • Established a semiconductor subsidiary, Pingtouge, in September 2018
  • Major investor in tech sector, including in Megvii and SenseTime 

Amazon

  • Likely winner of JEDI contract, a US military project that will serve as universal data infrastructure linking Pentagon and soldiers in the field
  • Developed Rekognition program, used by police; testing by ACLU revealed that nearly 40 percent of false matches involved people of color
  • CEO has stated, “If big tech companies are going to turn their back on the U.S. Department of Defense, this country is going to be in trouble.” 
  • Received backlash since exposure of partnership with government agencies, including ICE
  • Has since proposed guidelines for responsible use of tech

Anduril Industries

  • Co-founded by a former intelligence official
  • Has vocally supported stronger ties between tech sector and Pentagon: “AI has paradigm-shifting potential to be a force-multiplier […] it will provide better outcomes faster, a recipe for success in combat.”
  • Involved in Project Maven
  • Has offered support for the Pentagon’s newly formed Joint Artificial Intelligence Center
  • Developed the Lattice, an autonomous system that provides soldiers with a view of the front line and can be used to identify targets and direct unmanned vehicles into combat; has been used to catch border crossers
  • Co-founder has stated that Anduril is “deployed at several military bases. We’re deployed in multiple spots along the U.S. border […] We’re deployed around some other infrastructure I can’t talk about.”

Animal Dynamics

  • Spin-off company originating in Oxford University’s Zoology Department
  • Develops unmanned aerial vehicles
  • Stork, a paraglider with autonomous guidance and navigation, has received interest from both military and humanitarian aid/disaster relief organizations
  • Skeeter, “disruptive drone technology,” was developed with funding from UK government’s Defense Science and Technology Laboratory
  • In March 2019, took over software developer Accelerated Dynamics, which has developed ADx autonomous flight-control software
  • Use of ADx with Skeeter allows it to be operated in a swarm configuration, which has military applications
  • In response to survey, CEO stated that “we will not weaponize or provide ‘kinetic’ functionality to the products we make,” and that “legislating against harmful uses for autonomy is an urgent and necessary matter for government and the legislative framework to come to terms with.”

Arbe Robotics

  • Began in military and homeland security sectors but has moved to cars
  • In response to survey, stated that it “will sign agreements with customers that would confirm that they are not using our technology for military use.”

Baidu

  • Largest provider of the Chinese-language Internet search services
  • Highly committed to artificial intelligence and machine learning and is exploring applications for facial recognition technology
  • Opened Silicon Valley AI research lab in 2013, where it has been heavily investing in AI applications; has scaled down this research since US-China trade war
  • In charge of China’s Engineering Laboratory for Deep Learning Technologies, established March 2017
  • Will contribute to National Engineering Laboratory for Brain-Inspired Intelligence Technology and Applications

Blue Bear Systems

  • Research company involved in all aspects of unmanned systems and autonomy, including big data, AI, electronic warfare, and swarming systems
  • In March 2019, consortium it headed was awarded UK Ministry of Defense contract worth GBP 2.5 million to develop drone swarm technology

Citadel Defense

  • “Protects soldiers from drone attacks and surveillance in enemy combat” and “creates a force multiplier for Warfighters that enables them to get more done with the same or fewer resources”
  • Contracted by US Air Force to provide systems that can defeat weaponized drones and swarms
  • Developed autonomous counter-drone system called Titan

Corenova Technologies

  •  Offers “military-grade solutions to secure autonomous operations,” according to website
  • Developed HiveDefense, “an evolving swarm of self-learning bots”
  • Works with DARPA on OFFSET, facilitating unmanned missions without human control

Dibotics

  • Works on autonomous navigation
  • Supported by Generate, a program for French defense start-ups
  • Founder/CEO signed FLI’s 2017 open letter to the UN

EarthCube

  • “Developing monitoring solutions based on an automated analysis of geospatial information”
  • Has been described as “conceiving of the algorithmic warfare tools of the future.”
  • CEO has stated, “With the emergence of new sensors—whether they are satellite, UAV or plane—we have seen here a great opportunity to close the gap between AI in the lab and Activity Based Intelligence (ABI) in the field.”

General Robotics

  • Robotics company focused on defense and security
  • Founder previously worked in Israeli defense ministry’s R & D authority
  • Supplies “advanced robotics systems to counter-terrorist units worldwide,” many of which are designed for “urban warfare”
  • Developed Dogo, said to be “the world’s first inherently armed tactical combat robot,” but controlled remotely rather than autonomously
  • In response to survey, CEO stated that “our position is not to allow lethal autonomous weapons without human supervision and human final active decision […] In general, our systems are designed to provide real-time high quality information and to present it to a trained human operator in an intuitive manner; this insures better decision making by the human and thereby better results with less casualties.” 

Heron Systems

  • Provides “leading-edge solutions for national security customers”
  • States that its mission is “to strengthen America’s defense by providing innovative laboratory testing and simulation solutions”

Hivemapper

  • Software provides mapping, visualization, and analytic tools; uses video footage to generate instant detailed 3-D maps and detect changes; could potentially be used by Air Force to model bombing
  • Founder/CEO has stated that he “believes Silicon Valley and the US government have to work together to maintain America’s technological edge—lest authoritarian regimes that don’t share the US values catch up.”
  • Founder/CEO signed FLI’s 2015 open letter; In his survey response, he stated that “we absolutely want to see a world where humans are in control and responsible for all lethal decisions.”  

IBM

  • Bid for Jedi contract and failed to qualify 
  • Actively working towards producing “next-generation artificial intelligence chips” for which it is building a new AI research center; over the next 10 years, expect to improve AI computing by 1,000 times
  • Long history of military contracting, including building supercomputers for nuclear weapons research and simulations
  • Currently involved in augmented military intelligence research for US Marine Corps
  • 3 dozen staff members, including Watson design lead and VP of Cognitive Computing at IBM Research, signed a 2015 open letter calling for a ban on lethal autonomous weapons
  • Developed Diversity in Faces dataset using information from Flickr images; claims the project will reduce bias in facial recognition; Dataset available to companies and universities linked to military and law enforcement around the world
  • In response to survey, confirmed it is not currently developing lethal autonomous weapons systems

Innoviz

  • Produces laser-based radar for cars
  • Founded by former members of the IDF’s elite technological unit, but does not currently appear to be developing military applications

Intel

  • Develops various AI technologies, including specific solutions, software, and hardware, which it provides to governments
  • Selected by DARPA in 2017 to collaborate on DARPA HIVE, a data-handling and computing platform utilizing AI and ML
  • Announced in 2018 that it will work with DARPA on developing “the design tools and integration standards required to develop modular electronic systems”
  • Has invested significantly in unmanned aerial vehicles and flight control technology

Megvii

  • AI provider known for facial recognition software Face++
  • Reportedly uses facial scans from a Ministry of Public Security photo database that contains files on nearly every Chinese citizen
  • Has stated, “We want to build the eyes and brain of the city, to help police analyze vehicles and people to an extent beyond what is humanly possible.”

Microsoft

  • Competing with Amazon for JEDI contract
  • Published “The Future Computed” in 2018, which defines core principles necessary for the development of beneficial AI 
  • According to employees, “With JEDI, Microsoft executives are on track to betray these principles in exchange for short-term profits.” 
  • Company position on lethal autonomous weapons systems unclear
  • First tech giant to call for regulations to limit use of facial recognition technology

Montvieux

  • Developing a military decision-making tool that uses deep learning-based neural networks to assess complex data
  • Receives funding from the UK government

Neurala

  • Sells AI technology that can run on light devices and helps drones, robots, cars, and consumer electronics analyze their environments and make decisions
  • Military applications are a key focus
  • Works with a broad range of clients including the US Air Force, Motorola, and Parrot
  • Co-founder/CEO signed FLI’s 2017 open letter to the UN

Oracle

  • Provides database software and technology, cloud-engineered systems, and enterprise software products
  • Website states, “Oracle helps modern defense prepare for dynamic mission objectives”
  • Bid for JEDI contract and failed to qualify; filed several complaints, in part related to Pentagon’s decision to use only one vendor

Palantir

  • Data-analysis company founded in 2004 by Trump advisor; has roots in CIA-backed In-q-Tel venture capital organization
  • Producer of “Palantir Intelligence,” a tool for analyzing data that is used throughout the intelligence community
  • Has developed predictive policing technology used by law enforcement around the US
  • In 2016, won a Special Operations Command contract worth USD 222 million for a technology 
  • In March 2019, won a US Army contract worth over USD 800 million to build the Distributed Common Ground System, an analytical systems for use by soldiers in combat zones

Percepto

  • Developed the Sparrow, an autonomous patrol drone with security applications
  • Focuses explicitly on industrial, rather than military or border security, applications
  • In response to survey, stated “Since we develop solutions to the industrial markets, addressing security, safety, and operational needs, the topic of lethal weapon[s] is completely out of the scope of our work”

Roboteam

  • Founded by two former Israeli military commanders with “access to the Israel Defense Forces as our backyard for testing”
  • Specifically serves military markets, including the Pentagon
  • Developed Artificial Intelligence Control Unit (AI-CU), which brings autonomous navigation, facial recognition, and other AU-enabled capabilities to control and operation of unmanned systems and payloads
  • Exposure of links to Chinese investment firm FengHe Fund Management appears to have cost them a series of US Army robotics contracts last year

Samsung

  • One of world’s largest tech companies
  • Developing AI technologies to be applied to all its products and services in order to retain its hold on telephone/computer market
  • Samsung Techwin, Samsung’s military arm known for SG1A Sentry robot, was sold in 2014

SenseTime

  • Major competitor of Megvii
  • Sells software that recognizes objects and people
  • Various Chinese police departments use its SenseTotem and SenseFace systems to analyze video and make arrests
  • Valued at USD 4.5 billion, it is “the world’s most valuable AI start-up” and receives about two-fifths of its revenue from government contracts
  • In November 2017, sold its 51 percent stake in Tangli Technology, a “smart-policing” company it helped found

Shield AI

  • States that its “mission is to protect service members and innocent civilians with artificially intelligent systems”
  • Makes systems based on Hivemind, AI that enables robots to “learn from their experiences”
  • Developed Nova, a “combat proven” robot that autonomously searches buildings while streaming video and generating maps
  • Works with Pentagon and Department of Homeland Security “to enable fully autonomous unmanned systems that dramatically reduce risk and enhance situational awareness in the most dangerous situations.

Siemens

  • Europe’s largest industrial manufacturing conglomerate
  • Known for medical diagnostics equipment (CT scanners), energy equipment (turbines, generators), and trains
  • Produces MindSphere, a cloud-based system that helps enable the US of AI in industry
  • In 2013, won a USD 2.2 million military research contract with Carnegie Mellon University and HRL Laboratories to develop improved intelligence tools
  • Collaborating with DARPA on the TRAnsformative DESign (TRADES) program
  • In response to survey, stated: “Siemens in not active in this business area. Where we see a potential risk that components or technology or financing may be allocated for a military purpose, Siemens performs a heightened due diligence. […] All our activities are guided by our Business Conduct Guidelines that make sure that we follow high ethical standards and implement them in our everyday business. We also work on responsible AI principles which we aim to publish later this year.”

SoftBank

  • Invests in AI technology through its USD 100 billion Vision Fund, including BrainCorp, NVIDIA, and Slack Technologies; Owns some 30 percent of Alibaba
  • Works in partnership with Saudi Arabia’s sovereign wealth fund and is part of Saudi strategy for diversifying away from oil
  • In 2017, took over Boston Dynamics and Schaft, both connected with DARPA
  • Developed the humanoid Pepper robot
  • In response to survey, stated, “We do not have a weapons business and have no intention to develop technologies that could be used for military purposes”

SparkCognition

  • Collaborates “with the world’s largest organizations that power, finance, and defend our society to uncover their highest potential through the application of AI technologies.”
  • Has attracted interest from former and current Pentagon officials, several of whom serve on the board or as advisors
  • Works “across the national security space—including defense, homeland security intelligence, and energy—to streamline every step of their operations”; has worked with the British Army on military AI applications
  • Founder/CEO has stated that he believes restrictions on autonomous weapons would stifle progress and innovation

Synesis

  • Developed Kipod, a video analytics platform used by law enforcement agencies, governments, and private security organizations to find faces, license plates, object features, and behavioral events
  • In use by law enforcement in Belarus, Russia, Kazakhstan, and Azerbaijan

Tencent

  • China’s biggest social media company
  • Created Miying platform to assist doctors with disease screening and more
  •  Focused on research in machine learning, speech recognition, natural language processing, and computer vision
  • Developing practical AI applications in online games, social media, and cloud services
  • Investing in autonomous vehicle AI technologies
  • Has described its relationship to public in terms of a social contract: “Billions of users have entrusted us with their personal sensitive information; this is the reason we must uphold our integrity above the requirements of the law.”

VisionLabs

  • Developed Luna, software package that helps businesses verify and identify customers based on photos or videos
  • Partners with more than 10 banks in Russia and the Commonwealth of Independent States (CIS)
  • In response to survey, stated that they “explicitly prohibit the use of VisionLabs technology for military applications. This is a part of our contracts. We also monitor the results/final solution developed by our partners.”

Yitu

  • Developed “Intelligent Service Platform,” an algorithm that covers facial recognition, vehicle identification, text recognition, target tracking, and feature-based image retrieval
  • Its DragonFly Eye System can reportedly identify a person from a nearly two-billion-photo database within seconds
  • Technology utilized by numerous public security bureaus
  • In February 2018, supplied Malaysia’s police with facial recognition technologies; partners with local governments and other organizations in Britain

AI Alignment Podcast: China’s AI Superpower Dream with Jeffrey Ding

“In July 2017, The State Council of China released the New Generation Artificial Intelligence Development Plan. This policy outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030. This officially marked the development of the AI sector as a national priority and it was included in President Xi Jinping’s grand vision for China.” (FLI’s AI Policy – China page) In the context of these developments and an increase in conversations regarding AI and China, Lucas spoke with Jeffrey Ding from the Center for the Governance of AI (GovAI). Jeffrey is the China lead for GovAI where he researches China’s AI development and strategy, as well as China’s approach to strategic technologies more generally. 

Topics discussed in this episode include:

  • China’s historical relationships with technology development
  • China’s AI goals and some recently released principles
  • Jeffrey Ding’s work, Deciphering China’s AI Dream
  • The central drivers of AI and the resulting Chinese AI strategy
  • Chinese AI capabilities
  • AGI and superintelligence awareness and thinking in China
  • Dispelling AI myths, promoting appropriate memes
  • What healthy competition between the US and China might look like

You can take a short (3 minute) survey to share your feedback about the podcast here.

 

Key points from Jeffrey: 

  • “Even if you don’t think Chinese AI capabilities are as strong as have been hyped up in the media and elsewhere, important actors will treat China as either a bogeyman figure or as a Sputnik type of wake-up call motivator… other key actors will leverage that as a narrative, as a Sputnik moment of sorts to justify whatever policies they want to do. So we want to understand what’s happening and how the conversation around what’s happening in China’s AI development is unfolding.”
  • “There certainly are differences, but we don’t want to exaggerate them. I think oftentimes analysis of China happens in a vacuum where it’s like, ‘Oh, this only happens in this mysterious far off land, we call China and it doesn’t happen anywhere else.’ Shoshana Zuboff has this great book on Surveillance Capitalism that shows how the violation of privacy is pretty extensive on the US side, not only from big companies but also from the national security apparatus. So I think a similar phenomenon is taking place with the social credit system. Jeremy Dom at Yale laws China Center has put it really nicely where he says that, ‘We often project our worst fears about technology in AI onto what’s happening in China, and we look through a glass darkly and we unleash all of our anxieties on what’s happening on to China without reflecting on what’s happening here in the US, what’s happening here in the UK.'”
  • “I think we have to be careful about which historical analogies and memes we choose. So ‘arms race’ is a very specific call back to cold war context, where there’s almost these discrete types of missiles that we are racing Soviet Union on and discrete applications that we can count up; Or even going way back to what some scholars call the first industrial arms race in the military sphere over steam power boats between Britain and France in the late 19th century. And all of those instances you can count up. France has four iron clads, UK has four iron clads; They’re racing to see who can build more. I don’t think there’s anything like that. There’s not this discreet thing that we’re racing to see who can have more of. If anything, it’s about a competition to see who can absorb AI advances from abroad better, who can diffuse them throughout the economy, who can adopt them in a more sustainable way without sacrificing core values. So that’s sort of one meme that I really want to dispel. Related to that, assumptions that often influence a lot of our discourse on this is techno-nationalist assumption, which is this idea that technology is contained within national boundaries and that the nation state is the most important actor –– which is correct and a good one to have and a lot of instances. But there are also good reasons to adopt techno-globalist assumptions as well, especially in the area of how fast technologies diffuse nowadays and also how much underneath this national level competition, firms from different countries are working together and make standards alliances with each other. So there’s this undercurrent of techno-globalism, where there are people flows, idea flows, company flows happening while the coverage and the sexy topic is always going to be about national level competition, zero sum competition, relative games rhetoric. So you’re trying to find a balance between those two streams.”
  • “I think currently a lot of people in the US are locked into this mindset that the only two players that exist in the world are the US and China. And if you look at our conversation, right, oftentimes I’ve displayed that bias as well. We should probably have talked a lot more about China-EU or China-Japan corporations in this space and networks in this space because there’s a lot happening there too. So a lot of US policy makers see this as a two-player game between the US and China. And then in that sense, if there’s some cancer research project about discovering proteins using AI that may benefit China by 10 points and benefit the US only by eight points, but it’s going to save a lot of people from cancer  –– if you only care about making everything about maintaining a lead over China, then you might not take that deal. But if you think about it from the broader landscape of it’s not just a zero sum competition between US and China, then your kind of evaluation of those different point structures and what you think is rational will change.”

 

Important timestamps: 

0:00 intro 

2:14 Motivations for the conversation

5:44 Historical background on China and AI 

8:13 AI principles in China and the US 

16:20 Jeffrey Ding’s work, Deciphering China’s AI Dream 

21:55 Does China’s government play a central hand in setting regulations? 

23:25 Can Chinese implementation of regulations and standards move faster than in the US? Is China buying shares in companies to have decision making power? 

27:05 The components and drivers of AI in China and how they affect Chinese AI strategy 

35:30 Chinese government guidance funds for AI development 

37:30 Analyzing China’s AI capabilities 

44:20 Implications for the future of AI and AI strategy given the current state of the world 

49:30 How important are AGI and superintelligence concerns in China?

52:30 Are there explicit technical AI research programs in China for AGI? 

53:40 Dispelling AI myths and promoting appropriate memes

56:10 Relative and absolute gains in international politics 

59:11 On Peter Thiel’s recent comments on superintelligence, AI, and China 

1:04:10 Major updates and changes since Jeffrey wrote Deciphering China’s AI Dream 

1:05:50 What does healthy competition between China and the US look like? 

1:11:05 Where to follow Jeffrey and read more of his work

 

Works referenced 

Deciphering China’s AI Dream

FLI AI Policy – China page

ChinAI Newsletter

Jeff’s Twitter

Previous podcast with Jeffrey

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, Spotify, SoundCloud, iTunes, Google Play, StitcheriHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. More works from GovAI can be found here.

 

Lucas Perry: Hello everyone and welcome back to the AI Alignment Podcast at The Future of Life Institute. I’m Lucas Perry and today we’ll be speaking with Jeffrey Ding from The Future of Humanity Institute on China and their efforts to be the leading AI Superpower by 2030. In this podcast, we provide a largely descriptive account of China’s historical technological efforts, their current intentions and methods for pushing Chinese AI Success, some of the foundational AI principles being called for within China; We cover the drivers of AI progress, the components of success, China’s strategies born of these variables; We also assess China’s current and likely future AI capabilities, and the consequences of all this tied together. The FLI AI Policy China page, and Jeffrey Ding’s publication Deciphering China’s AI Dream are large drivers of this conversation, and I recommend you check them out.

If you find this podcast interesting or useful, consider sharing it with friends on social media platforms, forums, or anywhere you think it might be found valuable. As always, you can provide feedback for me by following the SurveyMonkey link found in the description of wherever you might find this podcast. 

Jeffrey Ding specializes in AI strategy and China’s approach to strategic technologies more generally. He is the China lead for the Center for the Governance of AI. There, Jeff researches China’s development of AI and his work has been cited in the Washington Post, South China Morning Post, MIT Technological Review, Bloomberg News, Quartz, and other outlets. He is a fluent Mandarin speaker and has worked at the US Department of State and the Hong Kong Legislative Council. He is also reading for a PhD in international relations as a Rhodes scholar at the University of Oxford. And so without further ado, let’s jump into our conversation with Jeffrey Ding.

Let’s go ahead and start off by providing a bit of the motivations for this conversation today. So why is it that China is important for AI alignment? Why should we be having this conversation? Why are people worried about the US-China AI Dynamic?

Jeffrey Ding: Two main reasons, and I think they follow an “even if” structure. The first reason is China is probably second only to the US in terms of a comprehensive national AI capabilities measurement. That’s a very hard and abstract thing to measure. But if you’re taking which countries have the firms on the leading edge of the technology, the universities, the research labs, and then the scale to lead in industrial terms and also in potential investment in projects related to artificial general intelligence. I would put China second only to the US, at least in terms of my intuition and sort of my analysis that I’ve done on the subject.

The second reason is even if you don’t think Chinese AI capabilities are as strong as have been hyped up in the media and elsewhere, important actors will treat China as either a bogeyman figure or as a Sputnik type of wake-up call motivator. And you can see this in the rhetoric coming from the US especially today, and even in areas that aren’t necessarily connected. So Axios had a leaked memo from the US National Security Council that was talking about centralizing US telecommunication services to prepare for 5G. And in the memo, one of the justifications for this was because China is leading in AI advances. The memo doesn’t really tie the two together. There are connections –– 5G may empower different AI technologies –– but that’s a clear example of how even if Chinese capabilities in AI, especially in projects related to AGI, are not as substantial as has been reported, or we think, other key actors will leverage that as a narrative, as a Sputnik moment of sorts to justify whatever policies they want to do. So we want to understand what’s happening and how the conversation around what’s happening in China’s AI development is unfolding.

Lucas Perry: So the first aspect being that they’re basically the second most powerful AI developer. And we can get into later their relative strength to the US; I think that in your estimation, they have about half as much AI capability relative to the United States. And here, the second one is you’re saying –– and there’s this common meme in AI Alignment about how avoiding races is important because in races, actors have incentives to cut corners in order to gain decisive strategic advantage by being the first to deploy advanced forms of artificial intelligence –– so there’s this important need, you’re saying, for actually understanding the relationship and state of Chinese AI Development to dispel inflammatory race narratives?

Jeffrey Ding: Yeah, I would say China’s probably at the center of most race narratives when we talk about AI arms races and the conversation in at least US policy-making circles –– which is what I follow most, US national security circles –– has not talked necessarily about AI as a decisive strategic advantage in terms of artificial general intelligence, but definitely in terms of decisive strategic advantage and who has more productive power, military power. So yeah, I would agree with that.

Lucas Perry: All right, so let’s provide a little bit more historical background here, I think, to sort of contextualize why there’s this rising conversation about the role of China in the AI space. So I’m taking this here from the FLI AI Policy China page: “In July of 2017, the State Council of China released the New Generation Artificial Intelligence Development Plan. And this was an AI research strategy policy to build a domestic AI industry worth nearly $150 billion in the next few years” –– again, this was in 2017 –– “and to become a leading AI power by 2030. This officially marked the development of the AI sector as a national priority, and it was included in President Xi Jinping’s grand vision for China.” And just adding a little bit more color here: “given this, the government expects its companies and research facilities to be at the same level as leading countries like the United States by 2020.” So within a year from now –– maybe a bit ambitious, given your estimation that they have is about half as much capability as us.

But continuing this picture I’m painting: “five years later, it calls for breakthroughs in select disciplines within AI” –– so that would be by 2025. “That will become a key impetus for economic transformation. And then in the final stage, by 2030, China is intending to become the world’s premier artificial intelligence innovation center, which will in turn foster a new national leadership and establish the key fundamentals for an economic great power,” in their words. So there’s this very clear, intentional stance that China has been developing in the past few years.

Jeffrey Ding: Yeah, definitely. And I think it was Jess Newman who put together the AI policy in China page –– did a great job. It’s a good summary of this New Generation AI Development Plan issued in July 2017 and I would say the plan was more reflective of momentum that was already happening at the local level with companies like Baidu, Tencent, Alibaba, making the shift to focus on AI as a core part of their business strategy. Shenzhen, other cities, had already set up their own local funds and plans, and this was an instance of the Chinese national government, in the words of I think Paul Triolo and some other folks at New America, “riding the wave,” and kind of joining this wave of AI development.

Lucas Perry: And so adding a bit more color here again: there’s also been developments in principles that are being espoused in this context. I’d say probably the first major principles on AI were developed at the Asilomar Conference, at least those pertaining to AGI. In June 2019, the New Generation of AI Governance Expert Committee released principles for next-generation artificial intelligence governance, which included tenants like harmony and friendliness and fairness and justice, inclusiveness and sharing, open cooperation, shared responsibility, and agile governance. 

And then also in May of 2019 the Beijing AI Principles were released. That was by a multi-stakeholder coalition, including the Beijing Academy of Artificial Intelligence, a bunch of top universities in China, as well as industrial firms such as Baidu, Alibaba, and Tencent. And these 15 principles, among other things, called for “the construction of a human community with a shared future and the realization of beneficial AI for humankind in nature.” So it seems like principles and intentions are also being developed similarly in China that sort of echo and reflect many of the principles and intentions that have been developing in the states.

Jeffrey Ding: Yeah, I think there’s definitely a lot of similarities, and I think it’s not just with this recent flurry of AI ethics documents that you’ve done a good job of summarizing. It dates back to even the plan that we were just talking about. If you read the July 2017 New Generation AI Plan carefully, there’s a lot of sections devoted to AI ethics, including some sections that are worried about human robot alienation.

So, depending on how you read that, you could read that as already anticipating some of the issues that could occur if human goals and AI goals do not align. Even back in March, I believe, of 2018, a lot of government bodies came together with companies to put out a white paper on AI standardization, which I translated for New America. And in that, they talk about AI safety and security issues, how it’s important to ensure that the design goals of AI are consistent with the interests, ethics, and morals of most humans. So a lot of these topics, I don’t even know if they’re western topics. These are just basic concepts: We want systems to be controllable and reliable. And yes, those have deeper meanings in the sense of AGI, but that doesn’t mean that some of these initial core values can’t be really easily applied to some of these deeper meanings that we talk about when we talk about AGI ethics.

Lucas Perry: So with all of the animosity and posturing and whatever that happens between the United States and China, these sort of principles and intentions which are being developed, at least in terms of AI –– both of them sort of have international intentions for the common good of humanity; At least that’s what is being stated in these documents. How do you think about the reality of the day-to-day combativeness and competition between the US and China in relation to these principles which strive towards the deployment of AI for the common good of humanity more broadly, rather than just within the context of one country?

Jeffrey Ding: It’s a really good question. I think the first point to clarify is these statements don’t have teeth behind them unless they’re enforced, unless there’s resources dedicated to funding research on these issues, to track 1.5, track 2 diplomacy, technical meetings between researchers. These are just statements that people can put out and they don’t have teeth unless they’re actually enforced. Oftentimes, we know it’s the case. Firms like Google and Microsoft, Amazon, will put out principles about facial recognition or what their ethical stances are, but behind the scenes they’ll chase profit motives and maximize shareholder value. And I would say the same would take place for Tencent, Baidu, Alibaba. So I want to clarify that, first of all. The competitive dynamics are real: It’s partly not just an AI story, it’s a broader story of China’s rise. I’ve come from international relations background, so I’m a PhD student at Oxford studying that, and there’s a big debate in the literature about what happens when a rising power challenges an established power. And oftentimes frictions result, and it’s about how to manage these frictions without leading to accidents, miscalculation, arms races. And that’s the tough part of it.

Lucas Perry: So it seems –– at least for a baseline, thinking that we’re still pretty early in the process of AI alignment or this long-term vision we have –– it seems like at least there is theoretically some shared foundational principles reflective across both the cultures. Again, these Beijing AI Principles also include focus on benefiting all of humanity and the environment; serving human values such as privacy, dignity, freedom, autonomy and rights; continuous focus on AI safety and security; inclusivity, openness; supporting international cooperation; and avoiding a malicious AI race. So the question now simply seems: implementation of these shared principles, ensuring that they manifest.

Jeffrey Ding: Yeah. I don’t mean to be dismissive of these efforts to create principles that were at least expressing the rhetoric of planning for all of humanity. I think there’s definitely a lot of areas of US-China cooperation in the past that have also echoed some of these principles: bi-lateral cooperation on climate change research; there’s a good nuclear safety cooperation module; different centers that we’ve worked on. But at the same time, I also think that even with that list of terms you just mentioned, there are some differences in terms of how both sides understand different terms.

So with privacy in the Chinese context, it’s not necessarily that Chinese people or political actors don’t care about privacy. It’s that privacy might mean more of privacy as an instrumental right, to ensure your financial data doesn’t get leaked, you don’t lose all your money; to ensure that your consumer data is protected from companies; but not necessarily in other contexts where privacy is seen as an intrinsic right, as a civil right of sorts, where it’s also about an individual’s protection from government surveillance. That type of protection is not caught up in conversations about privacy in China as much.

Lucas Perry: Right, so there are going to be implicitly different understandings about some of these principles that we’ll have to navigate. And again, you brought up privacy as something –– and this has been something people have been paying more attention to, as there has been kind of this hype and maybe a little bit of hysteria over the China social crediting system, and plenty of misunderstanding around that.

Jeffrey Ding: Yeah, and this ties into a lot of what I’ve been thinking about lately, which is there certainly are differences, but we don’t want to exaggerate them. I think oftentimes analysis of China happens in a vacuum where it’s like, “Oh, this only happens in this mysterious far off land we call China and it doesn’t happen anywhere else.” Shoshana Zuboff has this great book on surveillance capitalism that shows how the violation of privacy is pretty extensive on the US side, not only from big companies but also from the national security apparatus.

So I think a similar phenomenon is taking place with the social credit system. Jeremy Dom at Yale Law’s China Center has put it really nicely where he says that, “We often project our worst fears about technology in AI onto what’s happening in China, and we look through a glass darkly and we unleash all of our anxieties on what’s happening onto China without reflecting on what’s happening here in the US, what’s happening here in the UK.”

Lucas Perry: Right. I would guess that generally in human psychology it seems easier to see the evil in the other rather than in the self.

Jeffrey Ding: Yeah, that’s a little bit out of range for me, but I’m sure there’s studies on that.

Lucas Perry: Yeah. All right, so let’s get in here now to your work on deciphering China’s AI dream. This is a work that you’d published in 2018 and in this work you divided up into these four different sections. First you work on context, then you discuss components, then you discuss capabilities, and then you discuss consequences all in relation to AI in China. Would you like to just sort of unpack the structuring?

Jeffrey Ding: Yeah, this was very much just a descriptive paper. I was just starting out researching this area and I just had a bunch of basic questions. So question number one for context: what is the background behind China’s AI Strategy? How does it compare to other countries’ plans? How does it compare to its own past science and technology plans? The second question was, what are they doing in terms of pushing forward drivers of AI Development? So that’s the component section. The third question is, how well are they doing? It’s about assessing China’s AI capabilities. And then the fourth is, so what’s it all mean? Why does it matter? And that’s where I talk about the consequences and the potential implications of China’s AI ambitions for issues related to AI Safety, some of the AGI issues we’ve been talking about, national security, economic development, and social governance.

Lucas Perry: So let’s go ahead and move sequentially through these. We’ve already here discussed a bit of context about what’s going on in China in terms of at least the intentional stance and the development of some principles. Are there any other key facets or areas here that you’d like to add about China’s AI strategy in terms of its past science and technology? Just to paint a picture for our listeners.

Jeffrey Ding: Yeah, definitely. I think two past critical technologies that you could look at are the plans to increase China’s space industry, aerospace sector; and then also biotechnology. So in each of these other areas there was also a national level strategic plan; An agency or an office was set up to manage this national plan; Substantial funding was dedicated. With the New Generation AI Plan, there was also a sort of implementation office set up across a bunch of the different departments tasked with implementing the plan.

AI was also elevated to the level of a national strategic technology. And so what’s different between these two phases? Because it’s debatable how successful the space plan and the biotech plans have been. What’s different with AI is you already had big tech giants who are pursuing AI capabilities and have the resources to shift a lot of their investments toward the AI space, independent of government funding mechanisms: companies like Baidu, Tencent, Alibaba, even startups that have really risen like SenseTime. And you see that reflected in the type of model.

It’s no longer the traditional national champion model where the government almost builds a company from the ground up, maybe with the help of like international financers and investors. Now it’s a national team model where they ask for the support of these leading tech giants, but it’s not like these tech giants are reliant on the government for subsidies or funding to survive. They are already flourishing firms that have international presence.

The other bit of context I would just add is that if you look at the New Generation Plan, there’s a lot of terms that are related to manufacturing. And I mentioned in Deciphering China’s AI Dream, how there’s a lot of connections and callbacks to manufacturing plans. And I think this is key because it’s one aspect of China’s strive for AI as they want to escape the middle income trap and kind of get to those higher levels of value-add in the manufacturing chain. So I want to stress that as a key point of context.

Lucas Perry: So the framing here is the Chinese government is trying to enable companies which already exist and already are successful. And this stands in contrast to the US and the UK where it seems like the government isn’t even part of a teamwork effort.

Jeffrey Ding: Yeah. So maybe a good comparison would be how technical standards develop, which is an emphasis of not only this deciphering China dream paper but a lot of later work. So I’m talking about technical standards, like how do you measure the accuracy of facial recognition systems and who gets to set those measures, or product safety standards for different AI applications. And in many other countries, including the US, the process for that is much more decentralized. It’s largely done through industry alliances. There is the NIST, which is a body under the Department of Commerce in the US that helps coordinate that to some extent, but not nearly as much as what happens in China with the Standards Administration Commission (SAC), I believe. There, it’s much more of a centralized effort to create technical standards. And there are pros and cons to both.

With the more decentralized approach, you minimize the risks of technological lock-in by setting standards too early, and you let firms have a little bit more freedom, competition as well. Whereas having a more centralized top-down effort might lead to earlier harmonization on standards and let you leverage economies of scale when you just have more interoperable protocols. That could help with data sharing, help with creating stable test bed for different firms to compete and measure stuff I was talking about earlier, like algorithmic accuracy. So there are pros and cons of the two different approaches. But I think yeah, that does flush out how the relationship between firms and the government differs a little bit, at least in the context of standards setting.

Lucas Perry: So on top of standards setting, would you say China’s government plays more of a central hand in the regulation as well?

Jeffrey Ding: That’s a good question. It probably differs in terms of what area of regulation. So I think in some cases there’s a willingness to let companies experiment and then put down regulations afterward. So this is the classic example with mobile payments: There was definitely a gray space as to how these platforms like Alipay, WeChat Pay were essentially pushing into a gray area of law in terms of who could handle this much money that’s traditionally in the hands of the banks. Instead of clamping down on it right away, the Chinese government kind of let that play itself out, and then once these mobile pay platforms got big enough that they’re holding so much capital and have so much influence on the monetary stock, they then started drafting regulations for them to be almost treated as banks. So that’s an example of where it’s more of a hands-off approach.

In AI, folks have said that the US and China are probably closer in terms of their approach to regulation, which is much more hands-off than the EU. And I think that’s just a product partly of the structural differences in the AI ecosystem. The EU has very few big internet giants and AI algorithm firms, so they have more of an incentive to regulate other countries’ big tech giants and AI firms.

Lucas Perry: So two questions are coming up. One is, is there sufficiently more unity and coordination in the Chinese government such that when standards and regulations, or decisions surrounding AI, need to be implemented that they’re able to move, say, much quicker than the United States government? And the second thing was, I believe you mentioned also that the Chinese government is also trying to find ways of using potential government money for buying up shares in these companies and try to gain decision making power.

Jeffrey Ding: Yeah, I’ll start with the latter. The reference is to the establishment of special management shares: so these would be almost symbolic, less than 1% shares in a company so that they could maybe get a seat on the board –– or another vehicle is through the establishment of party committees within companies, so there’s always a tie to party leadership. I don’t have that much more insight into how these work. I think probably it’s fair to say that the day-to-day and long-term planning decisions of a lot of these companies are mostly just driven by what their leadership wants, not necessarily what the party leaders want, because it’s just very hard to micromanage these billion dollar giants.

And that was part of a lot of what was happening with the reform of the state-owned enterprise sector, where, I think it was the SAC –– there are a lot of acronyms –– but this was the body in control of state-owned enterprises and they significantly cut down the number of enterprises that they directly oversee and sort of focused on the big ones, like the big banks or the big oil companies.

To your first point on how smooth policy enforcement is, this is not something I’ve studied that carefully. I think to some extent there’s more variability in terms of what the government does. So I read somewhere that if you look at the government relations departments of Chinese big tech companies versus US big tech companies, there’s just a lot more on the Chinese side –– although that might be changing with recent developments in the US. Two cases I’m thinking of right now are the Chinese government worrying about addictive games and then issuing the ban against some games including Tencent’s PUBG, which has wrecked Tencent’s game revenues and was really hurtful for their stock value.

So that’s something that would be very hard for the US government to be like, “Hey, this game is banned.” At the same time, there’s a lot of messiness with this, which is why I’m pontificating and equivocating and not really giving you a stable answer, because local governments don’t implement things that well. There’s a lot of local center attention. And especially with technical stuff –– this is the case of the US as well –– there’s just not as much technical talent in the government. So with a lot of these technical privacy issues, it’s very hard to develop good regulations if you don’t actually understand the tech. So what they’ve been trying to do is audit privacy policies of different social media tech companies and they started with 10 of the biggest and have tried to audit them. So I think it’s very much a developing process in both China and the US.

Lucas Perry: So you’re saying that the Chinese government, like the US, lacks much scientific or technical expertise? I had some sort of idea in my head that many of the Chinese mayors or other political figures actually have engineering degrees or degrees in science.

Jeffrey Ding: That’s definitely true. But I mean, by technical expertise I mean something like what the US government did with the digital service corps, where they’re getting people who have worked in the leading edge tech firms to then work for the government. That type of stuff would be useful in China.

Lucas Perry: So let’s move on to the second part, discussing components. And here you relate the key features of China’s AI strategy to the drivers of AI development, and here the drivers of AI development you say are hardware in the form of chips for training and executing AI algorithms, data as an input for AI Algorithms, research and algorithm development –– so actual AI researchers working on the architectures and systems through which the data will be put, and then the commercial AI ecosystems, which I suppose support and feed these first three things. What can you say about the state of these components in China and how it affects China’s AI strategy?

Jeffrey Ding: I think the main thing that I want to emphasize here that a lot of this is the Chinese government is trying to fill in some of the gaps, a lot of this is about enabling people, firms that are already doing the work. One of the gaps is private firms tend to under-invest in basic research or will under-invest in broader education because they don’t get a capture all those gains. So the government tries to support not only AI as a national level discipline but also to construct AI institutes, help fund talent programs to bring back the leading researchers from overseas. So that’s one part of it. 

The second part of it, which I did not talk about that much in the report in this section but I’ve recently researched more and more about, is that where the government is more actively driving things is when they are the final end client. So this is definitely the case in the surveillance industry space: provincial-level public security bureaus are working with companies in both hardware, data, research and development and the whole security systems integration process to develop more advanced high tech surveillance systems.

Lucas Perry: Expanding here, there’s also this way of understanding Chinese AI strategy as it relates to previous technologies and how it’s similar or different. Ways in which it’s similar involve strong degree of state support and intervention, transfer of both technology and talent, and investment in long-term whole-of-society measures; I’m quoting you here.

Jeffrey Ding: Yeah.

Lucas Perry: Furthermore, you state that China is adopting a catch-up approach in the hardware necessary to train and execute AI algorithms. This points towards an asymmetry, that most of the chip manufacturers are not in China and they have to buy them from Nvidia. And then you go on to mention about how access to large quantities of data is an important driver for AI systems and that China’s data protectionism favors Chinese AI companies and accessing data from China’s large domestic market, but it also detracts from cross-border pooling of data.

Jeffrey Ding: Yeah, and just to expand on that point, there’s been good research out of folks at DigiChina, which is a New America Institute, that looks at the cybersecurity law –– and we’re still figuring out how that’s going to be implemented completely, but the original draft would have prevented companies from taking data that was collected inside of China and taking it outside of China.

And actually these folks at DigiChina point out how some of the major backlash to this law didn’t just come from US multinational incorporations but also Chinese multinationals. That aspect of data protectionism illustrates a key trade-off: on one sense, countries and national security players are valuing personal data almost as a national security asset for the risk of blackmail or something. So this is the whole Grindr case in the US where I think Grindr was encouraged or strongly encouraged by the US government to find a non-Chinese owner. So that’s on one aspect you want to protect personal information, but on the other hand, free data flows are critical to spurring gains and innovation as well for some of these larger companies.

Lucas Perry: Is there an interest here to be able to sell their data to other companies abroad? Is that why they’re against this data protectionism in China?

Jeffrey Ding: I don’t know that much about this particular case, but I think Alibaba and Tencent have labs all around the world. So they might want to collate their data together, so they were worried that the cybersecurity law would affect that.

Lucas Perry: And just highlighting here for the listeners that access to large amounts of high quality data is extremely important for efficaciously training models and machine learning systems. Data is a new, very valuable resource. And so you go on here to say, I’m quoting you again, “China’s also actively recruiting and cultivating talented researchers to develop AI algorithms. The state council’s AI plan outlines a two pronged gathering and training approach.” This seems to be very important, but it also seems like from your report that China’s losing AI talent to America largely. What can you say about this?

Jeffrey Ding: Often the biggest bottleneck cited to AI development is lack of technical talent. That gap will eventually be filled just based on pure operations in the market, but in the meantime there has been a focus on AI talent, whether that’s through some of these national talent programs, or it also happens through things like local governments offering tax breaks for companies who may have headquarters around the world.

For example, Jingchi which is an autonomous driving startup, they had I think their main base in California or one of their main bases in California; But then Shenzhen or Guangzhou, I’m not sure which local government it was, they gave them basically free office space to move one of their bases back to China and that brings a lot of talented people back. And you’re right, a lot of the best and brightest do go to US companies as well, and one of the key channels for recruiting Chinese students are big firms setting up offshore research and development labs like Microsoft Research Asia in Beijing.

And then the third thing I’ll point out, and this is something I’ve noticed recently when I was doing translations from science and tech media platforms that are looking at the talent space in particular: They’ve pointed out that there’s sometimes a tension between the gathering and the training planks. So there’ve been complaints from domestic Chinese researchers, so maybe you have two super talented PhD students. One decides to stay in China, the other decides to go abroad for their post-doc. And oftentimes the talent plans –– the recruiting, gathering plank of this talent policy –– will then favor the person who went abroad for the post-doc experience over the person who stayed in China, and they might be just as good. So then that actually creates an incentive for more people to go abroad. There’s been good research that a lot of the best and brightest ended up staying abroad; The stay rates, especially in the US for Chinese PhD students in computer science fields, are shockingly high.

Lucas Perry: What can you say about Chinese PhD student anxieties with regards to leaving the United States to go visit family in China and come back? I’ve heard that there may be anxieties about not being let back in given that their research has focused on AI and that there’s been increasing US suspicions of spying or whatever.

Jeffrey Ding: I don’t know how much of it is a recent development but I think it’s just when applying for different stages of the path to permanent residency –– whether it’s applying for the H-1B visa or if you’re in the green card pipeline –– I’ve heard just secondhand that they avoid traveling abroad or going back to visit family just to kind of show commitment that they’re residing here in the US. So I don’t know how much of that is recent. My dad actually, he started out as a PhD student in math at University of Iowa before switching to computer science and I remember we had a death in the family and he couldn’t go back because it was so early on in his stay. So I’m sure it’s a conflicted situation for a lot of Chinese international students in the US.

Lucas Perry: So moving along here and ending this component section, you also say here –– and this kind of goes back to what we were discussing earlier about government guidance funds –– Chinese government is also starting to take a more active role in funding AI ventures, helping to grow the fourth driver of AI development, which again is the commercial AI ecosystems, which support and are the context for hardware data and research on algorithm development. And so the Chinese government is disbursing funds through what are called Government Guidance Funds or GGFs, set up by local governments and state owned companies. And the government has invested more than a billion US dollars on domestic startups. This seems to be in clear contrast with how America functions on this, with much of the investments shifting towards healthcare and AI as the priority areas in the last two years.

Jeffrey Ding: Right, yeah. So the GGFs are an interesting funding vehicle. The China Money Network, which has I think the best English language coverage of these vehicles, say that they may be history’s greatest experiment in using state capitol to reshape a nation’s economy. These essentially are Public Private Partnerships, PPPs, which do exist across the world, in the US. And the idea is basically the state seeds and anchors these investment vehicles and then they partner with private capital to also invest in startups, companies that the government thinks either are supporting a particular policy initiative or are good for overall development.

A lot of this is hard to decipher in terms of what the impact has been so far, because publicly available information is relatively scarce. I mentioned in my report that these funds haven’t had a successful exit yet, which means that maybe just they need more time. I think there’s also been some complaints that the big VCs –– whether it’s Chinese VCs or even international VCs that have a Chinese arm –– they much prefer to just to go it on their own rather than be tied to all the strings and potential regulations that come with working with the government. So I think it’s definitely a case of time will tell, and also this is a very fertile research area that I know some people are looking into. So be on the lookout for more conclusive findings about these GGFs, especially how they relate to the emerging technologies.

Lucas Perry: All right. So we’re getting to your capabilities section, which assesses the current state of China’s AI capabilities across the four drivers of AI development. Here you’re constructing an AI Potential Index, which is an index for the potentiality of, say, a country, based off these four variables, to be able to create successful AI products. So based on your research, you give China an AI Potential Index score of 17, which is about half of the US’s AI Potential Index score of 33. And so you state here that what is sort of essential to draw from this finding is the relative scale, or at least the proportionality, between China and the US. So the conclusion which we can try to draw from this is that China trails the US in every driver except for access to data, and that on all of these dimensions China is about half as capable as the US.

Jeffrey Ding: Yes, so the AIPI, the AI Potential Index, was definitely just meant as a first cut at developing a measure for which we can make comparative claims. I think at the time, and even now, I think we just throw around things like, “who is ahead in AI?” I was reading this recent Defense One article that was like, “China’s the world leader in GANs,” G-A-Ns, Generative Adversarial Networks. That’s just not even a claim that is coherent. Are you the leader at developing the talent who is going to make advancement to GANs? Are you the leader at applying and deploying GANs in the military field? Are you the leader in producing the most publications related to GANs?

I think that’s what was frustrating me about the conversation and net assessment of different countries’ AI capabilities, so that’s why I tried to develop a more systematic framework which looked at the different drivers, and it was basically looking at what is the potential of country’s AI capabilities based on their marks across these drivers.

Since then, probably the main thing that I’ve done update this was in my written testimony before the US China Economic and Security Review Commission, where I kind of switch up a little bit how I evaluate the current AI capabilities of China and the US. Basically there’s this very fuzzy concept of national AI capabilities that we throw around and I slice it up into three cross-sections. The first is, let’s look at what the scientific and technological inputs and outputs different countries are putting into AI. So that’s: how many publications are coming out of this country in Europe versus China versus US? How many outputs also in the sense of publications or inputs in the sense of R&D investments? So let’s take a look at that. 

The second slice is, let’s not just say AI. I think every time you say AI it’s always better to specify subtypes, or at least in the second slice I look at different layers of the AI value chain: foundational layers, technological layers, and the application layer. So, for example, foundation layers may be who is leading in developing the AI open source software that serves as the technological backbone for a lot of these AI applications and technologies? 

And then the third slice that I take is different sub domains of AI –– so computer vision, predictive intelligence, natural language processing, et cetera. And basically my conclusion: I throw a bunch of statistics in this written testimony out there –– some of it draws from this AI potential index that I put out last year –– and my conclusion is that China is not poised to overtake the US in the technology domain of AI; Rather the US maintains structural advantages in the quality of S and T inputs and outputs, the fundamental layers of the AI value chain, and key sub domains of AI.

So yeah, this stuff changes really fast too. I think a lot of people are trying to put together more systemic ways of measuring these things. So Jack Clark at openAI; projects like the AI index out of Stanford University; Matt Sheehan recently put out a really good piece for MacroPolo on developing sort of a five-dimensional framework for understanding data. So in this AIPI first cut, my data indicator is just a very raw who has more mobile phone users, but that obviously doesn’t matter for who’s going to lead in autonomous vehicles. So having finer grained understanding of how to measure different drivers will definitely help this field going forward.

Lucas Perry: What can you say about symmetries or asymmetries in terms of sub-fields in AI research like GANs or computer vision or any number of different sub-fields? Can we expect very strong specialties to develop in one country rather than another, or there to be lasting asymmetries in this space, or does research publication subvert this to some extent?

Jeffrey Ding: I think natural language processing is probably the best example because everyone says NLP, but then you just have that abstract word and you never dive into, “Oh wait, China might have a comparative advantage in Chinese language data processing, speech recognition, knowledge mapping,” which makes sense. There is just more of an incentive for Chinese companies to put out huge open source repositories to train automatic speech recognition.

So there might be some advantage in Chinese language data processing, although Microsoft Research Asia has very strong NOP capabilities as well. Facial recognition, maybe another area of comparative advantage: I think in my testimony I cite that China has published 900 patents in this sub domain in 2017; In that same year less than 150 patents related to facial recognition were filed in the US. So that could be partly just because there’s so much more of a fervor for surveillance applications, but in other domains such as the larger scale business applications the US probably possesses a decisive advantage. So autonomous vehicles are the best example of that: In my opinion, Google’s Waymo, GM’s Cruise are lapping the field.

And then finally in my written testimony I also try to look at military applications, and I find one metric that puts the US as having more than seven times as many military patents filed with the terms “autonomous” or “unmanned” in the patent abstract in the years 2003 to 2015. So yeah, that’s one of the research streams I’m really interested in, is how can we have more fine grain metrics that actually put into context China’s AI development, and that way we can have a more measured understanding of it.

Lucas Perry: All right, so we’ve gone into length now providing a descriptive account of China and the United States and key descriptive insights of your research. Moving into consequences now, I’ll just state some of these insights which you bring to light in your paper and then maybe you can expand on them a bit.

Jeffrey Ding: Sure.

Lucas Perry: You discuss the potential implications of China’s AI dream for issues of AI safety and ethics, national security, economic development, and social governance. The thinking here is becoming more diversified and substantive, though you claim it’s also too early to form firm conclusions about the long-term trajectory of China’s AI development; This is probably also true of any other country, really. You go on to conclude that a group of Chinese actors is increasingly engaged with issues of AI safety and ethics. 

A new book has been authored by Tencent’s Research Institute, and it includes a chapter in which the authors discuss the Asilomar Principles in detail and call for  strong regulations and controlling spells for AI. There’s also this conclusion that military applications of AI could provide a decisive strategic advantage in international security. The degree to which China’s approach to military AI represents a revolution in military affairs is an important question to study, to see how strategic advantages between the United States and China continue to change. You continue by elucidating how the economic benefit is the primary and immediate driving force behind China’s development of AI –– and again, I think you highlighted this sort of manufacturing perspective on this.

And finally, China’s adoption of AI Technologies could also have implications for its mode of social governance. For the state council’s AI plan, you state, “AI will play an irreplaceable role in maintaining social stability, an aim reflected in local level integrations of AI across a broad range of public services, including judicial services, medical care, and public security.” So given these sort of insights that you’ve come to and consequences of this descriptive picture we’ve painted about China and AI, is there anything else you’d like to add here?

Jeffrey Ding: Yeah, I think as you are laying out those four categories of consequences, I was just thinking this is what makes this area so exciting to study because if you think about it, each four of those consequences map out onto four research fields: AI ethics and safety, which with benevolent AI efforts, stuff that FLI is doing, the broader technology studies, critical technologies studies, technology ethics field; then in the social governance space, AI as a tool of social control: what are the social aftershocks of AI’s economic implications? You have this entire field of democracy studies or studies of technology and authoritarianism; and the economic benefits, you have this entire field of innovation studies: how do we understand the productivity benefits of general purpose technologies? And of course with AI as a revolution in military affairs, you have this whole field of security studies that is trying to understand what are the implications of new emerging technologies for national security? 

So it’s easy to start delineating these into their separate containers. I think what’s hard, especially for those of us are really concerned about that first field, AI ethics and safety, and the risks of AGI arms races, is a lot of other people are really, really concerned about those other three fields. And how do we tie in concepts from those fields? How do we take from those fields, learn from those fields, shape the language that we’re using to also be in conversation with those fields –– and then also see how those fields may actually be in conflict with some of what our goals are? And then how do we navigate those conflicts? How do we prioritize different things over others? It’s an exciting but daunting prospect ahead.

Lucas Perry: If you’re listening to this and are interested in becoming an AI researcher in terms of the China landscape, we need you. There’s a lot of great and open research questions here to work on.

Jeffrey Ding: For sure. For sure.

Lucas Perry: So I’ve extracted some insights from previous podcasts you did –– I can leave a link for that in the page for this podcast –– so I just want to kind of rapid fire these as points that I thought were interesting that we may or may not have covered here. You point out a language asymmetry: The best Chinese AI researchers read English and Chinese, whereas the western researchers generally cannot do this. You have a newsletter called China AI with 1A; Your newsletter attempts to correct for this as you translate important Chinese tech-related things into English. I suggest everyone follow that if you’re interested in continuing to track China and AI. There is more international cooperation on research at international conferences –– this is a general trend that you point out: Some top Chinese AI conferences are English only. Furthermore, I believe that you claim that the top 10% of AI research is still happening in America and the UK. 

Another point which I think that you’ve brought up is that China is behind on military AI uses. I’m also interested here just to see if you can expand a little bit more on it, but that China and AI safety and superintelligence is also something interesting to hear a little bit more about because on this podcast we often take the lens of long-term AI issues and AGI and super intelligence. So I think you mentioned that the Nick Bostrom of China is Professor, correct me if I get this wrong, Jao ting Wang. And also I’m curious here if you might be able to expand on how large or serious this China superintelligence FLI/FHI vibe is and what the implications of this are, and if there are any orgs in China that are explicitly focused on this. I’m sorry if this is a silly question, but are there like nonprofits in China in the same way that there are in the US? How does that function? Is China on the brink of having an FHI or FLI or MIRI or anything like this?

Jeffrey Ding: So a lot to untangle there and all really good questions. First, just to clarify, yeah, there are definitely nonprofits, non-governmental organizations. In recent years there has been some pressure on international nongovernmental organizations, nonprofit organizations, but there’s definitely nonprofits. One of the open source NLP initiatives I mentioned earlier, the Chinese language Corpus, was put together by a nonprofit online organization called AIShell Foundation, and they put together AIShell-1, AIShell-2, which are the largest open source speech Corpus available for Mandarin speech recognition.

I haven’t really followed up on Jao ting Wang. He’s a philosopher at the Chinese Academy of Social Sciences. The sort of “Nick Bostrom of China” label was more of a newsletter headline to get people to read, but he does devote a lot of time and thinking to the long-term risks of AI. Another professor at Nanjing University by the name of Zhi-Hua Zhou, he’s published articles about the need to not even touch some of what he calls strong AI. These were published in a pretty influential publication outlet by the Chinese Computer Federation, which brings together a lot of the big name computer scientists. So there’s definitely conversations about this happening. Whether there is an FHI, FLI equivalent, let’s say probably not, at least not yet.

Peking University may be developing something in this space. Berggruen Institute is also I think looking at some related issues. There’s probably a lot of stuff happening in Hong Kong as well; Maybe we just haven’t looked hard enough. I think the biggest difference is there’s definitely not something on the level of a DeepMind or OpenAI, because even the firms with the best general AI capabilities –– DeepMind and OpenAI almost like these unique entities where profits and stocks don’t matter.

So yeah, definitely some differences, but honestly I updated significantly once I started reading more, and nobody had really looked at this Zhi-Hua Zhou essay before we went looking and found it. So maybe there are a lot of these organizations and institutions out there but we just need to look harder.

Lucas Perry: So on this point of there not being OpenAI or DeepMind equivalents, are there any research organizations or departments explicitly focused on the mission of creating artificial general intelligence or superintelligence safely scalable machine learning systems that could go from now until infinity? Or is this just more like scattered researchers?

Jeffrey Ding: I think it’s how you define an AGI project. Like what you just said is probably a good tight definition. I know Seth Baum, he’s done some research tracking AGI projects and he says that there are six in China. I would say probably the only ones that come close are, I guess Tencent says it’s one of their missions streams to develop artificial general intelligence; horizon robotics, which is actually like a chip company, they also state it as one of their objectives. It depends also on how much you think work on neuroscience related pathways into AGI count or not. So there’s probably some Chinese Academy of Science labs working on whole brain emulation or kind of more brain inspired approaches to AGI, but definitely not anywhere to the level of DeepMind, OpenAI.

Lucas Perry: All right. So there are some myths in table one of your paper which you demystify. Three of these are: China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; And there is little to no discussion of issues of AI ethics and safety in China. And then maybe lastly I might add, if you might be able to add to it, that there is just to begin with an AI arms race between the US and China.

Jeffrey Ding: Yeah, I think that’s a good addition. I think we have to be careful about which historical analogies and memes we choose. So “arms race” is a very specific call back to cold war context, where there’s almost these discrete types of missiles that we are racing Soviet Union on and discrete applications that we can count up; Or even going way back to what some scholars call the first industrial arms race in the military sphere over steam power boats between Britain and France in the late 19th century. And all of those instances you can count up. France has four iron clads, UK has four iron clads; They’re racing to see who can build more. I don’t think there’s anything like that. There’s not this discreet thing that we’re racing to see who can have more of. If anything, it’s about a competition to see who can absorb AI advances from abroad better, who can diffuse them throughout the economy, who can adopt them in a more sustainable way without sacrificing core values.

So that’s sort of one meme that I really want to dispel. Related to that, assumptions that often influence a lot of our discourse on this is techno-nationalist assumption, which is this idea that technology is contained within national boundaries and that the nation state is the most important actor –– which is correct and a good one to have and a lot of instances. But there are also good reasons to adopt techno-globalist assumptions as well, especially in the area of how fast technologies diffuse nowadays and also how much underneath this national level competition, firms from different countries are working together and make standards alliances with each other. So there’s this undercurrent of techno-globalism, where there are people flows, idea flows, company flows happening while the coverage and the sexy topic is always going to be about national level competition, zero sum competition, relative games rhetoric. So you’re trying to find a balance between those two streams.

Lucas Perry: What can you say about this sort of reflection on zero sum games versus healthy competition and the properties of AI and AI research? I’m seeking clarification on this secondary framing that we can take on a more international perspective about deployment and implementation of AI research and systems rather than, as you said, this sort of techno-nationalist one.

Jeffrey Ding: Actually, this idea comes from my supervisor: Relative gains make sense if there’s only two players involved, just from a pure self-interest maximizing standpoint. But once you introduce three or more players, relative gains doesn’t make as much sense as optimizing for absolute gains. So maybe one way to explain this is to take the perspective of a European country –– let’s say Germany –– and you are working on an AI project with China or some other country that maybe the US is pressuring you not to work with; You’re working with Saudi Arabia or China on some project and it’s going to benefit China 10 arbitrary points and it’s going to benefit Germany eight arbitrary points versus if you didn’t choose to cooperate at all.

So in that sense, Germany, the rational actor, would take that deal. You’re not just caring about being better than China; From a German perspective, you care about maintaining leadership in the European Union, providing health benefits to your citizens, continuing to power your economy. So in that sense you would take the deal even though China benefits a little bit more, relatively speaking. 

I think currently a lot of people in the US are locked into this mindset that the only two players that exist in the world are the US and China. And if you look at our conversation, right, oftentimes I’ve displayed that bias as well. We should probably have talked a lot more about China-EU or China-Japan cooperation in this space and networks in this space because there’s a lot happening there too. So a lot of US policy makers see this as a two-player game between the US and China. And then in that sense, if there’s some cancer research project about discovering proteins using AI that may benefit China by 10 points and benefit the US only by eight points, but it’s going to save a lot of people from cancer  –– if you only care about making everything about maintaining a lead over China, then you might not take that deal. But if you think about it from the broader landscape of it’s not just a zero sum competition between US and China, then your kind of evaluation of those different point structures and what you think is rational will change.

Lucas Perry: So as there’s more actors, is the idea here that you care more about absolute gains in the sense that these utility points or whatever can be translated into decisive strategic advantages like military advantages?

Jeffrey Ding: Yeah, I think that’s part of it. What I was thinking along that example is basically 

if you as Germany don’t choose to cooperate with Saudi Arabia or work on this joint research project with China then the UK or some other countries just going to swoop in. And that possibility doesn’t exist in the world where you’re just thinking about two players. There’s a lot of different ways to fit these sort of formal models, but that’s probably the most simplistic way of explaining it.

Lucas Perry: Okay, cool. So you’ve spoken a bit here on important myths that we need to dispel or memes that we need to combat. And recently Peter Thiel has been on a bunch of conservative platforms, and he also wrote an op-ed, basically fanning the flames of AGI as a military weapon, AI as a path to superintelligence and, “Google campuses have lots of Chinese people on them who may be spies,” and that Google is actively helping China with AI military technology. In terms of bad memes and myths to combat, what are your thoughts here?

Jeffrey Ding: There’s just a lot of things that Thiel gets wrong. I’m mostly kind of just confused because he is one of the original founders of OpenAI, he’s funded other institutions, really concerned about AGI safety, really concerned about race dynamics –– and then in the middle of this piece, he first says AI is a military technology, then he goes back to saying AI is dual use in the middle, and then he says this ambiguity is “strangely missing from the narrative that pits a monolithic AI against all of humanity.” He out of anyone should know that these conversations about the risks of AGI, why are you attacking this straw man in the form of a terminator AI meme? Especially, you’re funding a lot of the organizations that are worried about the risks of AGI for all of humanity. 

The other main thing that’s really problematic is if you’re concerned about the US military advantage, that more than ever is rooted on our innovation advantage. It’s not about spinoff from military innovation to civilian innovation, which was the case in the days of US tech competition against Japan. It’s more the case of spin on, where innovations are happening in the commercial sector that are undergirding the US military advantage.

And this idea of painting Google as anti-American for setting up labs in China is so counterproductive. There are independent Google developer conferences all across China just because so many Chinese programmers want to use Google tools like TensorFlow. It goes back to the fundamental AI open source software I was talking about earlier that lets Google expand its talent pool: People want to work on Google products; They’re more used to the framework of Google tools to build all these products. Google’s not doing this out of charity to help the Chinese military. They’re doing this because the US has a flawed high-skilled immigration system, so they need to go to other countries to get talent. 

Also, the other thing about the piece is he cites no empirical research on any of these fronts, when there’s this whole globalization of innovation literature that backs up empirically a lot of what I’m saying. And then I’ve done my own empirical research on Microsoft Research Asia, which as we’ve mentioned is their second biggest lab overall, it’s based in Beijing. I’ve tracked their PhD Fellowship Program: This basically gives people at Chinese PhD programs, you get a full scholarship and you just do an internship at Microsoft Research Asia for one of the summers. And then we track their career trajectories, and a lot of them end up coming to the US or working for Microsoft Research Asia in Beijing. And the ones that come to the US don’t just go to Microsoft: They go to Snapchat or Facebook or other companies. And it’s not just about the people: As I mentioned earlier, we have this innovation centrism about who produces the technology first, but oftentimes it’s about who diffuses and adopts the technology first. And we’re not always going to be the first on the scene, so we have to be able to adopt and diffuse technologies that are invented first in other areas. And these overseas labs are some of our best portals into understanding what’s happening in these other areas. If we lose them, it’s another form of asymmetry because Chinese AI companies are going abroad and expanding. 

I honestly, I’m just really confused about what the point of this piece was and to be honest, it’s kind of sad because this is not what Thiel researches every day. So he’s obviously picking up bits and pieces from the narrative frames that are dominating our conversation. And it’s actually probably a structural stain on how we’ve allowed the discourse to have so many of these bad problematic memes, and we need more people calling them out actively, doing the heart to heart conversations behind the scenes to get people to change their minds or have productive constructive conversations about these.

And the last thing I’ll point out here is there’s this zombie Cold War mentality that still lingers today, and I think the historian Walter McDougall was really great in calling this out, where he talks about we paint this other, this enemy, and we use it to justify sacrifices in human values to drive society to its fullest technological potential. And that often comes with sacrificing human values like privacy, equality, freedom of speech. And I don’t want us to compete with China over who can build better tools to sensor, repress, and surveil dissidents and minority groups, right? Let’s see who can build the better, I don’t know, industrial internet of things or build better privacy preserving algorithms that are going to sustain a more trustworthy AI ecosystem.

Lucas Perry: Awesome. So just moving along here as we’re making it to the end of our conversation: What are updates you’ve had or major changes since you’ve written Deciphering China’s AI Dreams, since it has been a year?

Jeffrey Ding: Yeah, I mentioned some of the updates in the capability section. The consequences, I mean I think those are still the four main big issues, all of them tied to four different literature bases. The biggest change would probably be in the component section. I think when I started out, I was pretty new in this field, I was reading a lot of literature from the China watching community and also a lot from Chinese comparative politics or articles about China, and so I focused a lot on government policies. And while I think the party and the government are definitely major players, I think I probably overemphasized the importance of government policies versus what is happening at the local level.

So if I were to go back and rewrite it, I would’ve looked a lot more at what is happening at the local level, given more examples of AI firms, like iFlytek I think is a very interesting under-covered firm, and how they are setting up research institutes with a university in Chung Cheng very similar to the industry- academia style collaborations in the US, basically just ensuring that they’re able to train the next generation of talent. They have relatively close ties to the state as well, I think controlling shares or a large percentage of shares owned by state-owned vehicles. So I probably would have gone back and looked at some of these more under-covered firms and localities and looked at what they were doing rather than just looking at the rhetoric coming from the central government.

Lucas Perry: Okay. What does it mean for there to be healthy competition between the United States and China? What is an ideal AI research and political situation? What are the ideal properties of the relations the US and China can have on the path to superintelligence?

Jeffrey Ding: Yeah.

Lucas Perry: Solve AI Governance for me, Jeff!

Jeffrey Ding: If I could answer that question, I think I could probably retire or something. I don’t know.

Lucas Perry: Well, we’d still have to figure out how to implement the ideal governance solutions.

Jeffrey Ding: Yeah. I think one starting point is on the way to more advanced AI systems, we have to stop looking at AI as if it’s like this completely special area with no analogs, because even though there are unique aspects of AI –– like their autonomous intelligence systems, a possibility of the product surpassing human level intelligence, or the process surpassing human level intelligence –– we can learn a lot from past general purpose technologies like steam, electricity, the diesel engine. And we can learn about a lot of competition in past strategic industries like chips, steel.

So I think probably one thing that we can distill from some of this literature is there are some aspects of AI development that are going to be more likely to lead to race dynamics than others. So one cut that you could take are industries where it’s likely that there are only going to be two or three, four or five major players –– so it might be the case that capital costs, the upstart costs, the infrastructure costs of autonomous vehicles requires that there are going to be only one or two players across the world. And that is like, hey, if you’re a national government who’s thinking strategically, you might really want to have a player in that space, so that might incentivize more competition. Whereas in other fields, maybe there’s just going to be a lot more competition or less need for relative gain, zero sum thinking. So like neural machine translation, that could be a case of something that just almost becomes like a commodity. 

So then there are things we can think about in those fields where there’s only going to be four or five players or three or four players. Can we maybe balance it out so that at least one is from the two major powers or is the better approach to, I don’t know, enact global competition, global antitrust policy to kind of ensure that there’s always going to be a bunch of different players from a bunch of different countries? So those are some of the things that come to mind that I’m thinking about, but yeah, this is definitely something where I claim zero credibility relative to others who are thinking about it.

Lucas Perry: Right. Well unclear anyone has very good answers here. I think my perspective, to add at least one frame on it, is that given the dual use nature of many of the technologies like computer vision and like embedded robot systems and developing autonomy and image classification –– all of these different AI specialty subsystems can be sort of put together in arbitrary ways. So in terms of autonomous weapons, FLI’s position is, it’s important to establish international standards around the appropriate and beneficial uses of these technologies.

Image classification, as people already know, can be used for discrimination or beneficial things. And the technologies can be aggregated to make anything from literal terminator swarm robots to lifesaving medical treatments. So the relation between the United States and China can be made more productive if clear standards based on the expression of the principles we enumerated earlier could be created. And given that, then we might be taking some paths towards a beneficial beautiful future of advanced AI systems.

Jeffrey Ding: Yeah, no, I like that a lot. And some of the technical standards documents I’ve been translating: I definitely think in the short-term, technical standards are a good way forward, sort of solve the starter pack type of problems before AGI. Even some Chinese white papers on AI standardization have put out the idea of ranking the intelligence level of different autonomous systems –– like an autonomous car might be more than a smart speaker or something: Even that is a nice way to kind of keep track of the progress, is continuities in terms of intelligence explosions and trajectories in the space. So yeah, I definitely second that idea. Standardization efforts, autonomous weapons regulation efforts, as serving as the building blocks for larger AGI safety issues.

Lucas Perry: I would definitely like to echo this starter pack point of view. There’s a lot of open questions about the architectures or ways in which we’re going to get to AGI, about how the political landscape and research landscape is going to change in time. But I think that we already have enough capabilities and questions that we should really be considering where we can be practicing and implementing the regulations and standards and principles and intentions today in 2019 that are going to lead to robustly good futures for AGI and superintelligence.

Jeffrey Ding: Yeah. Cool.

Lucas Perry: So Jeff, if people want to follow you, what is the best way to do that?

Jeffrey Ding: You can hit me up on Twitter, I’m @JJDing99; Or I put out a weekly newsletter featuring translations on AI related issues from Chinese media, Chinese scholars and that’s China AI Newsletter, C-H-I-N-A-I. if you just search that, it should pop up.

Lucas Perry: Links to those will be provided in the description of wherever you might find this podcast. Jeff, thank you so much for coming on and thank you for all of your work and research and efforts in this space, for helping to create a robust and beneficial future with AI.

Jeffrey Ding: All right, Lucas. Thanks. Thanks for the opportunity. This was fun.

Lucas Perry: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

End of recorded material

How Can AI Systems Understand Human Values?

Machine learning (ML) algorithms can already recognize patterns far better than the humans they’re working for. This allows them to generate predictions and make decisions in a variety of high-stakes situations. For example, electricians use IBM Watson’s predictive capabilities to anticipate clients’ needs; Uber’s self-driving system determines what route will get passengers to their destination the fastest; and Insilico Medicine leverages its drug discovery engine to identify avenues for new pharmaceuticals. 

As data-driven learning systems continue to advance, it would be easy enough to define “success” according to technical improvements, such as increasing the amount of data algorithms can synthesize and, thereby, improving the efficacy of their pattern identifications. However, for ML systems to truly be successful, they need to understand human values. More to the point, they need to be able to weigh our competing desires and demands, understand what outcomes we value most, and act accordingly. 

Understanding Values

In order to highlight the kinds of ethical decisions that our ML systems are already contending with, Kaj Sotala, a researcher in Finland working for the Foundational Research Institute, turns to traffic analysis and self-driving cars. Should a toll road be used in order to shave five minutes off the commute, or would it be better to take the longer route in order to save money? 

Answering that question is not as easy as it may seem. 

For example, Person A may prefer to take a toll road that costs five dollars if it will save five minutes, but they may not want to take the toll road if it costs them ten dollars. Person B, on the other hand, might always prefer taking the shortest route regardless of price, as they value their time above all else. 

In this situation, Sotala notes that we are ultimately asking the ML system to determine what humans value more: Time or money. Consequently, what seems like a simple question about what road to take quickly becomes a complex analysis of competing values. “Someone might think, ‘Well, driving directions are just about efficiency. I’ll let the AI system tell me the best way of doing it.’ But another person might feel that there is some value in having a different approach,” he said. 

While it’s true that ML systems have to weigh our values and make tradeoffs in all of their decisions, Sotala notes that this isn’t a problem at the present juncture. The tasks that the systems are dealing with are simple enough that researchers are able to manually enter the necessary value information. However, as AI agents increase in complexity, Sotala explains that they will need to be able to account for and weigh our values on their own. 

Understanding Utility-Based Agents

When it comes to incorporating values, Sotala notes that the problem comes down to how intelligent agents make decisions. A thermostat, for example, is a type of reflex agent. It knows when to start heating a house because of a set, predetermined temperature — the thermostat turns the heating system on when it falls below a certain temperature and turns it off when it goes above a certain temperature. Goal-based agents, on the other hand, make decisions based on achieving specific goals. For example, an agent whose goal is to buy everything on a shopping list will continue its search until it has found every item.

Utility-based agents are a step above goal-based agents. They can deal with tradeoffs like the following: Getting milk is more important than getting new shoes today. However, I’m closer to the shoe store than the grocery store, and both stores are about to close. I’m more likely to get the shoes in time than the milk.” At each decision point, goal-based agents are presented with a number of options that they must choose from. Every option is associated with a specific “utility” or reward. To reach their goal, the agents follow the decision path that will maximize the total rewards. 

From a technical standpoint, utility-based agents rely on “utility functions” to make decisions. These are formulas that the systems use to synthesize data, balance variables, and maximize rewards. Ultimately, the decision path that gives the most rewards is the one that the systems are taught to select in order to complete their tasks. 

While these utility programs excel at finding patterns and responding to rewards, Sotala asserts that current utility-based agents assume a fixed set of priorities. As a result, these methods are insufficient when it comes to future AGI systems, which will be acting autonomously and so will need a more sophisticated understanding of when humans’ values change and shift.

For example, a person may always value taking the longer route to avoid a highway and save money, but not if they are having a heart attack and trying to get to an emergency room. How is an AI agent supposed to anticipate and understand when our values of time and money change? This issue is further complicated because, as Sotala points out, humans often value things independently of whether they have ongoing, tangible rewards. Sometimes humans even value things that may, in some respects, cause harm. Consider an adult who values privacy but whose doctor or therapist may need access to intimate and deeply personal information — information that may be lifesaving. Should the AI agent reveal the private information or not?

Ultimately, Sotala explains that utility-based agents are too simple and don’t get to the root of human behavior. “Utility functions describe behavior rather than the causes of behavior….they are more of a descriptive model, assuming we already know roughly what the person is choosing.” While a descriptive model might recognize that passengers prefer saving money, it won’t understand why, and so it won’t be able to anticipate or determine when other values override “saving money.”

An AI Agent Creates a Queen

At its core, Sotala emphasizes that the fundamental problem is ensuring that AI systems are able to uncover the models that govern our values. This will allow them to use these models to determine how to respond when confronted with new and unanticipated situations. As Sotala explains, “AIs will need to have models that allow them to roughly figure out our evaluations in totally novel situations, the kinds of value situations where humans might not have any idea in advance that such situations might show up.”

In some domains, AI systems have surprised humans by uncovering our models of the world without human input. As one early example, Sotala references research with “word embeddings” where an AI system was tasked with classifying sentences as valid or invalid. In order to complete this classification task, the system identified relationships between certain words. For example, as the AI agent noticed a male/female dimension to words, it created a relationship that allowed it to get from “king” to “queen” and vice versa.

Since then, there have been systems which have learned more complex models and associations. For example, OpenAI’s recent GPT-2 system has been trained to read some writing and then write the kind of text that might follow it. When given a prompt of “For today’s homework assignment, please describe the reasons for the US Civil War,” it writes something that resembles a high school essay about the US Civil War. When given a prompt of “Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry,” it writes what sounds like Lord of the Rings-inspired fanfiction, including names such as Aragorn, Gandalf, and Rivendell in its output.

Sotala notes that in both cases, the AI agent “made no attempt of learning like a human would, but it tried to carry out its task using whatever method worked, and it turned out that it constructed a representation pretty similar to how humans understand the world.” 

There are obvious benefits to AI systems that are able to automatically learn better ways of representing data and, in so doing, develop models that correspond to humans’ values. When humans can’t determine how to map, and subsequently model, values, AI systems could identify patterns and create appropriate models by themselves. However, the opposite could also happen — an AI agent could construct something that seems like an accurate model of human associations and values but is, in reality, dangerously misaligned.

For instance, suppose an AI agent learns that humans want to be happy, and in an attempt to maximize human happiness, it hooks our brains up to computers that provide electrical stimuli that gives us feelings of constant joy. In this case, the system understands that humans value happiness, but it does not have an appropriate model of how happiness corresponds to other competing values like freedom. “In one sense, it’s making us happy and removing all suffering, but at the same time, people would feel that ‘no, that’s not what I meant when I said the AI should make us happy,’” Sotala noted.  

Consequently, we can’t rely on an agent’s ability to uncover a pattern and create an accurate model of human values from this pattern. Researchers need to be able to model human values, and model them accurately, for AI systems. 

Developing a Better Definition

Given our competing needs and preferences, it’s difficult to model the values of any one person. Combining and agreeing on values that apply universally to all humans, and then successfully modeling them for AI systems, seems like an impossible task. However, several solutions have been proposed, such as inverse reinforcement learning or attempting to extrapolate the future of humanity’s moral development. Yet, Sotala notes that these solutions fall short. As he articulated in a recent paper, “none of these proposals have yet offered a satisfactory definition of what exactly human values are, which is a serious shortcoming for any attempts to build an AI system that was intended to learn those values.” 

In order to solve this problem, Sotala developed an alternative, preliminary definition of human values, one that might be used to design a value learning agent. In his paper, Sotala argues that values should be defined not as static concepts, but as variables that are considered separately and independently across a number of situations in which humans change, grow, and receive “rewards.” 

Sotala asserts that our preferences may ultimately be better understood in terms of evolutionary theory and reinforcement learning. To justify this reasoning, he explains that, over the course of human history, people evolved to pursue activities that are likely to lead to certain outcomes — outcomes that tended to improve our ancestors’ fitness. Today, he notes that human still prefer those outcomes, even if they no longer maximize our fitness. In this respect, over time, we also learn to enjoy and desire mental states that seem likely to lead to high-reward states, even if they do not. 

So instead of a particular value directly mapping onto a rewards, our preferences map onto our expectation of rewards.

Sotala claims that the definition is useful when attempting to program human values into machines, as value learning systems informed by this model of human psychology would understand that new experiences can change which states a person’s brain categorizes as “likely to lead to reward.” Summing Sotala’s work, the Machine Intelligence Research Institute outlined  the benefits to this framing. “Value learning systems that take these facts about humans’ psychological dynamics into account may be better equipped to take our likely future preferences into account, rather than optimizing for our current preferences alone,” they said.

This form of modeling values, Sotala admits, is not perfect. First, the paper is only a preliminary stab at defining human values, which still leaves a lot of details open for future research. Researchers still need to answer empirical questions related to things like how values evolve and change over time. And once all the empirical questions are answered, researchers need to contend with the philosophical questions that don’t have an objective answer, like how those values should be interpreted and how they should guide an AGI’s decision-making.

When addressing these philosophical questions, Sotala notes that the path forward may simply be to get as much of a consensus as possible. “I tend to feel that there isn’t really any true fact of which values are correct and what would be the correct way of combining them,” he explains. “Rather than trying to find an objectively correct way of doing this, we should strive to find a way that as many people as possible could agree on.”

Since publishing this paper, Sotala has been working on a different approach for modeling human values, one that is based on the premise of viewing humans as multiagent systems. This approach has been published as a series of Less Wrong articles. There is also a related, but separate, research agenda by Future of Humanity Institute’s Stuart Armstrong, which focuses on synthesizing human preferences into a more sophisticated utility function.

AI Alignment Podcast: On the Governance of AI with Jade Leung

In this podcast, Lucas spoke with Jade Leung from the Center for the Governance of AI (GovAI). GovAI strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. The center focuses on the political challenges arising from transformative AI, and they seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, where her research focuses on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict.

Topics discussed in this episode include:

  • The landscape of AI governance
  • GovAI’s research agenda and priorities
  • Aligning government and companies with ideal governance and the common good
  • Norms and efforts in the AI alignment community in this space
  • Technical AI alignment vs. AI Governance vs. malicious use cases
  • Lethal autonomous weapons
  • Where we are in terms of our efforts and what further work is needed in this space

You can take a short (3 minute) survey to share your feedback about the podcast here.

Important timestamps: 

0:00 Introduction and updates

2:07 What is AI governance?

11:35 Specific work that Jade and the GovAI team are working on

17:21 Windfall clause

21:20 Policy advocacy and AI alignment community norms and efforts

27:22 Moving away from short-term vs long-term framing to a stakes framing

30:44 How do we come to ideal governance?

40:22 How can we contribute to ideal governance through influencing companies and government?

48:12 US and China on AI

51:18 What more can we be doing to positively impact AI governance?

56:46 What is more worrisome, malicious use cases of AI or technical AI alignment?

01:01:19 What is more important/difficult, AI governance or technical AI alignment?

01:03:49 Lethal autonomous weapons

01:09:49 Thinking through tech companies in this space and what we should do

 

Two key points from Jade: 

“I think one way in which we need to rebalance a little bit, as kind of an example of this is, I’m aware that a lot of the work, at least that I see in this space, is sort of focused on very aligned organizations and non-government organizations. So we’re looking at private labs that are working on developing AGI. And they’re more nimble. They have more familiar people in them, we think more similarly to those kinds of people. And so I think there’s an attraction. There’s really good rational reasons to engage with the folks because they’re the ones who are developing this technology and they’re plausibly the ones who are going to develop something advanced.

“But there’s also, I think, somewhat biased reasons why we engage, is because they’re not as messy, or they’re more familiar, or we see more value aligned. And I think this early in the field, putting all our eggs in a couple of very, very limited baskets, is plausibly not that great a strategy. That being said, I’m actually not entirely sure what I’m advocating for. I’m not sure that I want people to go and engage with all of the UN conversations on this because there’s a lot of noise and very little signal. So I think it’s a tricky one to navigate, for sure. But I’ve just been reflecting on it lately, that I think we sort of need to be a bit conscious about not group thinking ourselves into thinking we’re sort of covering all the basis that we need to cover.”

 

“I think one thing I’d like for people to be thinking about… this short term v. long term bifurcation. And I think a fair number of people are. And the framing that I’ve tried on a little bit is more thinking about it in terms of stakes. So how high are the stakes for a particular application area, or a particular sort of manifestation of a risk or a concern.

“And I think in terms of thinking about it in the stakes sense, as opposed to the timeline sense, helps me at least try to identify things that we currently call or label near term concerns, and try to filter the ones that are worth engaging in versus the ones that maybe we just don’t need to engage in at all. An example here is that basically I am trying to identify near term/existing concerns that I think could scale in stakes as AI becomes more advanced. And if those exist, then there’s really good reason to engage in them for several reasons, right?…Plausibly, another one would be privacy as well, because I think privacy is currently a very salient concern. But also, privacy is an example of one of the fundamental values that we are at risk of eroding if we continue to deploy technologies for other reasons : efficiency gains, or for increasing control and centralizing of power. And privacy is this small microcosm of a maybe larger concern about how we could possibly be chipping away at these very fundamental things which we would want to preserve in the longer run, but we’re at risk of not preserving because we continue to operate in this dynamic of innovation and performance for whatever cost. Those are examples of conversations where I find it plausible that there are existing conversations that we should be more engaged in just because those are actually going to matter for the things that we call long term concerns, or the things that I would call sort of high stakes concerns.”

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, Spotify, SoundCloud, iTunes, Google Play, StitcheriHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. Key works mentioned in this podcast can be found here 

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry. And today, we will be speaking with Jade Leung from the Center for the Governance of AI, housed at the Future of Humanity Institute. Their work strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. They focus on the political challenges arising from transformative AI, and seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, and her research work focusing on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict.

In this episode, we discuss GovAI’s research agenda and priorities, the landscape of AI governance, how we might arrive at ideal governance, the dynamics and roles of both companies and states within this space, how we might be able to better align private companies with what we take to be ideal governance. We get into the relative importance of technical AI alignment and governance efforts on our path to AGI, we touch on lethal autonomous weapons, and also discuss where we are in terms of our efforts in this broad space, and what work we might like to see more of.

As a general bit of announcement, I found all the feedback coming in through the SurveyMonkey poll to be greatly helpful. I’ve read through all of your comments and thoughts, and am working on incorporating feedback where I can. So for the meanwhile, I’m going to leave the survey up, and you’ll be able to find a link to it in a description of wherever you might find this podcast. Your feedback really helps and is appreciated. And, as always, if you find this podcast interesting or useful, consider sharing with others who might find it valuable as well. And so, without further ado, let’s jump into our conversation with Jade Leung.

So let’s go ahead and start by providing a little bit of framing on what AI governance is, the conceptual landscape that surrounds it. What is AI governance, and how do you view and think about this space?

Jade: I think the way that I tend to think about AI governance is with respect to how it relates to the technical field of AI safety. In both fields, the broad goal is how humanity can best navigate our transition towards a world with advanced AI systems in it. The technical AI safety agenda and the kind of research that’s being done there is primarily focused on how do we build these systems safely and well. And the way that I think about AI governance with respect to that is broadly everything else that’s not that. So that includes things like the social, political, economic context that surrounds the way in which this technology is developed and built and used and employed.

And specifically, I think with AI governance, we focus on a couple of different elements of it. One big element is the governance piece. So what are the kinds of norms and institutions we want around a world with advanced AI serving the common good of humanity. And then we also focus a lot on the kind of strategic political impacts and effects and consequences of the route on the way to a world like that. So what are the kinds of risks, social, political, economic? And what are the kinds of impacts and effects that us developing it in sort of sub-optimal ways could have on the various things that we care about.

Lucas: Right. And so just to throw out some other cornerstones here, because I think there’s many different ways of breaking up this field and thinking about it, and this sort of touches on some of the things that you mentioned. There’s the political angle, the economic angle. There’s the military. There’s the governance and the ethical dimensions.

Here on the AI Alignment Podcast, before we’ve, at least breaking down the taxonomy sort of into the technical AI alignment research, which is getting machine systems to be aligned with human values and desires and goals, and then the sort of AI governance, the strategy, the law stuff, and then the ethical dimension. Do you have any preferred view or way of breaking this all down? Or is it all just about good to you?

Jade: Yeah. I mean, there are a number of different ways of breaking it down. And I think people also mean different things when they say strategy and governance and whatnot. I’m not particular excited about getting into definitional debates. But maybe one way of thinking about what this word governance means is, at least I often think of governance as the norms, and the processes, and the institutions that are going to, and already do, shape the development and deployment of AI. So I think a couple of things that are work underlining in that, I think there’s … The word governance isn’t just specifically government and regulations. I think that’s a specific kind of broadening of the term, which is worth pointing out because that’s a common misconception, I think, when people use the word governance.

So when I say governance, I mean governance and regulation, for sure. But I also mean what are other actors doing that aren’t governance? So labs, researchers, developers, NGOs, journalists, et cetera, and also other mechanisms that aren’t regulation. So it could be things like reputation, financial flows, talent flows, public perception, what’s within and outside the opportune window, et cetera. So there’s a number of different levers I think you can pull if you’re thinking about governance.

It’s probably worth also pointing out, I think, when people say governance, a lot of the time people are talking about the normative side of things, so what should it look like, and how could be if it were good? A lot of governance research, at least in this space now, is very much descriptive. So it’s kind of like what’s actually happening, and trying to understand the landscape of risk, the landscape of existing norms that we have to work with, what’s a tractable way forward with existing actors? How do you model existing actors in the first place? So a fair amount of the research is very descriptive, and I would qualify that as AI governance research, for sure.

Other ways of breaking it down are, according to the research done that we put out, is one option. So that kind of breaks it down into firstly understanding the technological trajectory, so that’s understanding where this technology is likely to go, what are the technical inputs and constraints, and particularly the ones that have implications for governance outcomes. This looks like things like modeling AI progress, mapping capabilities, involves a fair amount of technical work.

And then you’ve got the politics cluster, which is probably where a fair amount of the work is at the moment. This is looking at political dynamics between powerful actors. So, for example, my work is focusing on big firms and government and how they relate to each other, but also includes how AI transforms and impacts political systems, both domestically and internationally. This includes the cluster around international security and the race dynamics that fall into that. And then also international trade, which is a thing that we don’t talk about a huge amount, but politics also includes this big dimension of economics in it.

And then the last cluster is this governance cluster, which is probably the most normative end of what we would want to be working on in this space. This is looking at things like what are the ideal institutions, infrastructure, norms, mechanisms that we can put in place now/in the future that we should be aiming towards that can steer us in robustly good directions. And this also includes understanding what shapes the way that these governance systems are developed. So, for example, what roles does the public have to play in this? What role do researchers have to play in this? And what can we learn from the way that we’ve governed previous technologies in similar domains, or with similar challenges, and how have we done on the governance front on those bits as well. So that’s another way of breaking it down, but I’ve heard more than a couple of ways of breaking this space down.

Lucas: Yeah, yeah. And all of them are sort of valid in their own ways, and so we don’t have to spend too much time on this here. Now, a lot of these things that you’ve mentioned are quite macroscopic effects in the society and the world, like norms and values and developing a concept of ideal governance and understanding actors and incentives and corporations and institutions and governments. Largely, I find myself having trouble developing strong intuitions about how to think about how to impact these things because it’s so big it’s almost like the question of, “Okay, let’s figure out how to model all of human civilization.” At least all of the things that matter a lot for the development and deployment of technology.

And then let’s also think about ideal governance, like what is also the best of all possible worlds, based off of our current values, that we would like to use our model of human civilization to bring us closer towards? So being in this field, and exploring all of these research threads, how do you view making progress here?

Jade: I can hear the confusion in your voice, and I very much resonate with it. We’re sort of consistently confused, I think, at this place. And it is a very big, both set of questions, and a big space to kind of wrap one’s head around. I want to emphasize that this space is very new, and people working in this space are very few, at least with respect to AI safety, for example, which is still a very small section that feels as though it’s growing, which is a good thing. We are at least a couple of years behind, both in terms of size, but also in terms of sophistication of thought and sophistication of understanding what are more concrete/sort of decision relevant ways in which we can progress this research. So we’re working hard, but it’s a fair ways off.

One way in which I think about it is to think about it in terms of what actors are making decisions now/in the near to medium future, that are the decisions that you want to influence. And then you sort of work backwards from that. I think at least, for me, when I think about how we do our research at the Center for the Governance of AI, for example, when I think about what is valuable for us to research and what’s valuable to invest in, I want to be able to tell a story of how I expect this research to influence a decision, or a set of decisions, or a decision maker’s priorities or strategies or whatever.

Ways of breaking that down a little bit further would be to say, you know, who are the actors that we actually care about? One relatively crude bifurcation is focusing on those who are in charge of developing and deploying these technologies, firms, labs, researchers, et cetera, and then those who are in charge of sort of shaping the environment in which this technology is deployed, and used, and is incentivized to progress. So that’s folks who shape the legislative environment, folks who shape the market environment, folks who shape the research culture environment, and expectations and whatnot.

And with those two sets of decision makers, you can then boil it down into what are the particular decisions they are in charge of making that you can decide you want to influence, or try to influence, by providing them with research insights or doing research that will in some down shoot way, affect the way they think about how these decisions should be made. And a very, very concrete example would be to pick, say, a particular firm. And they have a set of priorities, or a set of things that they care about achieving within the lifespan of that firm. And they have a set of strategies and tactics that they intend to use to execute on that set of priorities. So you can either focus on trying to shift their priorities towards better directions if you think they’re off, or you can try to point out ways in which their strategies could be done slightly better, e.g. they be coordinating more with other actors, or they should be thinking harder about openness in their research norms. Et cetera, et cetera.

Well, you can kind of boil it down to the actor level and the decision specific level, and get some sense of what it actually means for progress to happen, and for you to have some kind of impact with this research. One caveat with this is that I think if one takes this lens on what research is worth doing, you’ll end up missing a lot of valuable research being done. So a lot of the work that we do currently, as I said before, is very much understanding what’s going on in the first place. What are the actual inputs into the AI production function that matter and are constrained and are bottle-necked? Where are they currently controlled? A number of other things which are mostly just descriptive I can’t tell you with which decision I’m going to influence by understanding this. But having a better baseline will inform better work across a number of different areas. I’d say that this particular lens is one way of thinking about progress. There’s a number of other things that it wouldn’t measure, that are still worth doing in this space.

Lucas: So it does seem like we gain a fair amount of tractability by just thinking, at least short term, who are the key actors, and how might we be able to guide them in a direction which seems better. I think here it would also be helpful if you could let us know, what is the actual research that you, and say, Allan Dafoe engage in on a day to day basis. So there’s analyzing historical cases. I know that you guys have done work with specifying your research agenda. You have done surveys of American attitudes and trends on opinions on AI. Jeffrey Ding has also released a paper on deciphering China’s AI dream, tries to understand China’s AI strategy. You’ve also released on the malicious use cases of artificial intelligence. So, I mean, what is it like being Jade on a day to day trying to conquer this problem?

Jade: The specific work that I’ve spent most of my research time on to date sort of falls into the politics/governance cluster. And basically, the work that I do is centered on the assumption that there are things that we can learn from a history of trying to govern strategic general purpose technologies well. And if you look at AI, and you believe that it has certain properties that make it strategic, strategic here in the sense that it’s important for things like national security and economic leadership of nations and whatnot. And it’s also general purpose technology, in that it has the potential to do what GPTs do, which is to sort of change the nature of economic production, push forward a number of different frontiers simultaneously, enable consistent cumulative progress, change course of organizational functions like transportation, communication, et cetera.

So if you think that AI looks like strategic general purpose technology, then the claim is something like, in history we’ve seen a set of technology that plausibly have the same traits. So the ones that I focus on are biotechnology, cryptography, and aerospace technology. And the question that sort of kicked off this research is, how have we dealt with the very fraught competition that we currently see in the space of AI when we’ve competed across these technologies in the past. And the reason why there’s a focus on competition here is because, I think one important thing that characterizes a lot of the reasons why we’ve got a fair number of risks in the AI space is because we are competing over it. “We” here being very powerful nations, very powerful firms, and the reason why competition is an important thing to highlight is that it exacerbates a number of risks and it causes a number of risks.

So when you’re in a competitive environment, actors were normally incentivized to take larger risks than they otherwise would rationally do. They are largely incentivized to not engage in the kind of thinking that is required to think about public goods governance and serving the common benefit of humanity. And they’re more likely to engage in thinking about, is more about serving parochial, sort of private, interests.

Competition is bad for a number of reasons. Or it could be bad for a number of reasons. And so the question I’m asking is, how have we competed in the past? And what have been the outcomes of those competitions? Long story short, so the research that I do is basically I dissect these cases of technology development, specifically in the US. And I analyze the kinds of conflicts, and the kinds of cooperation that have existed between the US government and the firms that were leading technology development, and also the researcher communities that were driving these technologies forward.

Other pieces of research that are going on, we have a fair number of our researcher working on understanding what are the important inputs into AI that are actually progressing us forward. How important is compute relative to algorithmic structures, for example? How important is talent, with respect to other inputs? And then the reason why that’s important to analyze and useful to think about is understanding who controls these inputs, and how they’re likely to progress in terms of future trends. So that’s an example of the technology forecasting work.

In the politics work, we have a pretty big chunk on looking at the relationship between governments and firms. So this is a big piece of work that I’ve been doing, along with a fair amount of others, understanding, for example, if the US government wanted to control AI R&D, what are the various levers that they have available, that they could use to do things like seize patents, or control research publications, or exercise things like export controls, or investment constraints, or whatnot. And the reason why we focus on that is because my hypothesis is that ultimately, ultimately you’re going to start to see states get much more involved. At the moment, you’re currently in this period of time wherein a lot of people describe it as very private sector driven, and the governments are behind, I think, and history would also suggest that the state is going to be involved much more significantly very soon. So understanding what they could do, and what their motivations are, are important.

And then, lastly, on the governance piece, a big chunk of our work here is specifically on public opinions. So you’ve mentioned this before. But basically, we have a big substantial chunk of our work, consistently, is just understanding what the public thinks about various issues to do with AI. So recently, we published a report of the recent set of surveys that we did surveying the American public. And we asked them a variety of different questions and got some very interesting answers.

So we asked them questions like: What risks do you think are most important? Which institution do you trust the most to do things with respect to AI governance and development? How important do you think certain types of governance challenges are for American people? Et cetera. And the reason why this is important for the governance piece is because governance ultimately needs to have sort of public legitimacy. And so the idea was that understanding how the American public thinks about certain issues can at least help to shape some of the conversation around where we should be headed in governance work.

Lucas: So there’s also been work here, for example, on capabilities forecasting. And I think Allan and Nick Bostrom also come at these from slightly different angles sometimes. And I’d just like to explore all of these so we can get all of the sort of flavors of the different ways that researchers come at this problem. Was it Ben Garfinkel who did the offense-defense analysis?

Jade: Yeah.

Lucas: So, for example, there’s work on that. That work was specifically on trying to understand how the offense-defense bias scales as capabilities change. This could have been done with nuclear weapons, for example.

Jade: Yeah, exactly. That was an awesome piece of work by Allan and Ben Garfinkel, looking at this concept of the offense-defense balance, which exists for weapon systems broadly. And they were sort of analyzing and modeling. It’s a relatively theoretical piece of work, trying to model how the offense-defense balance changes with investments. And then there was a bit of a investigation there specifically on how we could expect AI to affect the offense-defense balance in different types of contexts. The other cluster work, which I failed to mention as well, is a lot of our work on policy, specifically. So this is where projects like the windfall clause would fall in.

Lucas: Could you explain what the windfall clause is, in a sentence or two?

Jade: The windfall clause is an example of a policy lever, which we think could be a good idea to talk about in public and potentially think about implementing. And the windfall clause is an ex-ante voluntary commitment by AI developers to distribute profits from the development of advanced AI for the common benefit of humanity. What I mean by ex-ante is that they commit to it now. So an AI developer, say a given AI firm, will commit to, or sign, the windfall clause prior to knowing whether they will get to anything like advanced AI. And what they commit to is saying that if I hit a certain threshold of profits, so what we call windfall profit, and the threshold is very, very, very high. So the idea is that this should only really kick in if a firm really hits the jackpot and develops something that is so advanced, or so transformative in the economic sense, that they get a huge amount of profit from it at some sort of very unprecedented scale.

So if they hit that threshold of profit, this clause will kick in, and that will commit them to distributing their profits according to some kind of pre-committed distribution mechanism. And the idea with the distribution mechanism is that it will redistribute these products along the lines of ensuring that sort of everyone in the world can benefit from this kind of bounty. There’s a lot of different ways in which you could do the distribution. And we’re about to put out the report which outlines some of our thinking on it. And there are many more ways in which it could be done besides from what we talk about.

But effectively, what you want in a distribution mechanism is you want it to be able to do things like rectify inequalities that could have been caused in the process of developing advanced AI. You want it to be able to provide a financial buffer to those who’ve been thoughtlessly unemployed by the development of advanced AI. And then you also want it to do somewhat positive things too. So it could be, for example, that you distribute it according to meeting the sustainable development goals. Or it could be redistributed according to a scheme that looks something like the UBI. And that transitions us into a different type of economic structure. So there are various ways in which you could play around with it.

Effectively, the windfall clause is starting a conversation about how we should be thinking about the responsibilities that AI developers have to ensure that if they do luck out, or if they do develop something that is as advanced as some of what we speculate we could get to, there is a responsibility there. And there also should be a committed mechanism there to ensure that that is balanced out in a way that reflects the way that we want this value to be distributed across the world.

And that’s an example of the policy lever that is sort of uniquely concrete, in that we don’t actually do a lot of concrete research. We don’t do much policy advocacy work at all. But to the extent that we want to do some policy advocacy work, it’s mostly with the motivation that we want to be starting important conversations about robustly good policies that we could be advocating for now, that can help steer us in better directions.

Lucas: And fitting this into the research threads that we’re talking about here, this goes back to, I believe, Nick Bostrom’s Superintelligence. And so it’s sort of predicated on more foundational principles, which can be attributed to before the Asilomar Conference, but also the Asilomar principles which were developed in 2017, that the benefits of AI should be spread widely, and there should be abundance. And so then there becomes these sort of specific policy implementations or mechanisms by which we are going to realize these principles which form the foundation of our ideal governance.

So Nick has sort of done a lot of this work on forecasting. The forecasting in Superintelligence was less about concrete timelines, and more about the logical conclusions of the kinds of capabilities that AI will have, fitting that into our timeline of AI governance thinking, with ideal governance at the end of that. And then behind us, we have history, which we can, as you’re doing yourself, try to glean more information about how what you call general purpose technologies affect incentives and institutions and policy and law and the reaction of government to these new powerful things. Before we brought up the windfall clause, you were discussing policy at FHI.

Jade: Yeah, and one of the reasons why it’s hard is because if we put on the frame that we mostly make progress by influencing decisions, we want to be pretty certain about what kinds of directions we want these decisions to go, and what we would want these decisions to be, before we engage in any sort of substantial policy advocacy work to try to make that actually a thing in the real world. I am very, very hesitant about our ability to do that well, at least at the moment. I think we need to be very humble about thinking about making concrete recommendations because this work is hard. And I also think there is this dynamic, at least, in setting norms, and particularly legislation or regulation, but also just setting up institutions, in that it’s pretty slow work, but it’s very path dependent work. So if you establish things, they’ll be sort of here to stay. And we see a lot of legacy institutions and legacy norms that are maybe a bit outdated with respect to how the world has progressed in general. But we still struggle with them because it’s very hard to get rid of them. And so the kind of emphasis on humility, I think, is a big one. And it’s a big reason why basically policy advocacy work is quite slim on the ground, at least in the moment, because we’re not confident enough in our views on things.

Lucas: Yeah, but there’s also this tension here. The technology’s coming anyway. And so we’re sort of on this timeline to get the right policy stuff figured out. And here, when I look at, let’s just take the Democrats and the Republicans in the United States, and how they interact. Generally, in terms of specific policy implementation and recommendation, it just seems like different people have various dispositions and foundational principles which are at odds with one another, and that policy recommendations are often not substantially tested, or the result of empirical scientific investigation. They’re sort of a culmination and aggregate of one’s very broad squishy intuitions and modeling or the world, and different intuitions one has. Which is sort of why, at least at the policy level, seemingly in the United States government, it seems like a lot of the conversation is just endless arguing that gets nowhere. How do we avoid that here?

Jade: I mean, this is not just specifically an AI governance problem. I think we just struggle with this in general as we try to do governance and politics work in a good way. It’s a frustrating dynamic. But I think one thing that you said definitely resonates and that, a bit contra to what I just said. Whether we like it or not, governance is going to happen, particularly if you take the view that basically anything that shapes the way this is going to go, you could call governance. Something is going to fill the gap because that’s what humans do. You either have the absence of good governance, or you have somewhat better governance if you try to engage a little bit. There’s definitely that tension.

One thing that I’ve recently been reflecting on, in terms of things that we under-prioritize in this community, because it’s sort of a bit of a double-edged sword of being very conscientious about being epistemically humble and being very cautious about things, and trying to be better calibrated and all of that, which are very strong traits of people who work in this space at the moment. But I think almost because of those traits, too, we undervalue, or we don’t invest enough time or resource in just trying to engage in existing policy discussions and existing governance institutions. And I think there’s also an aversion to engaging in things that feel frustrating and slow, and that’s plausibly a mistake, at least in terms of how much attention we pay to it because in the absence of our engagement, the things still going to happen anyway.

Lucas: I must admit that as someone interested in philosophy I’ve resisted for a long time now, the idea of governance in AI at least casually in favor of nice calm cool rational conversations at tables that you might have with friends about values, and ideal governance, and what kinds of futures you’d like. But as you’re saying, and as Alan says, that’s not the way that the world works. So here we are.

Jade: So here we are. And I think one way in which we need to rebalance a little bit, as kind of an example of this is, I’m aware that a lot of the work, at least that I see in this space, is sort of focused on very aligned organizations and non-government organizations. So we’re looking at private labs that are working on developing AGI. And they’re more nimble. They have more familiar people in them, we think more similarly to those kinds of people. And so I think there’s an attraction. There’s really good rational reasons to engage with the folks because they’re the ones who are developing this technology and they’re plausibly the ones who are going to develop something advanced.

But there’s also, I think, somewhat biased reasons why we engage, is because they’re not as messy, or they’re more familiar, or we feel more value aligned. And I think this early in the field, putting all our eggs in a couple of very, very limited baskets, is plausibly not that great a strategy. That being said, I’m actually not entirely sure what I’m advocating for. I’m not sure that I want people to go and engage with all of the UN conversations on this because there’s a lot of noise and very little signal. So I think it’s a tricky one to navigate, for sure. But I’ve just been reflecting on it lately, that I think we sort of need to be a bit conscious about not group thinking ourselves into thinking we’re sort of covering all the bases that we need to cover.

Lucas: Yeah. My view on this, and this may be wrong, is just looking at the EA community, and the alignment community, and all that they’ve done to try to help with AI alignment. It seems like a lot of talent feeding into tech companies. And there’s minimal efforts right now to engage in actual policy and decision making at the government level, even for short term issues like disemployment and privacy and other things. The AI alignment is happening now, it seems.

Jade: On the noise to signal point, I think one thing I’d like for people to be thinking about, I’m pretty annoyed at this short term v. long term bifurcation. And I think a fair number of people are. And the framing that I’ve tried on a little bit is more thinking about it in terms of stakes. So how high are the stakes for a particular application area, or a particular sort of manifestation of a risk or a concern.

And I think in terms of thinking about it in the stakes sense, as opposed to the timeline sense, helps me at least try to identify things that we currently call or label near term concerns, and try to filter the ones that are worth engaging in versus the ones that maybe we just don’t need to engage in at all. An example here is that basically I am trying to identify near term/existing concerns that I think could scale in stakes as AI becomes more advanced. And if those exist, then there’s really good reason to engage in them for several reasons, right? One is this path dependency that I talked about before, so norms that you’re developing around, for example, privacy or surveillance. Those norms are going to stick, and the ways in which we decide we want to govern that, even with narrow technologies now, those are the ones we’re going to inherit, grandfather in, as we start to advance this technology space. And then I think you can also just get a fair amount of information about how we should be governing the more advanced versions of these risks or concerns if you engage earlier.

I think there are actually probably, even just off the top off of my head, I can think of a couple which seemed to have scalable stakes. So, for example, a very existing conversation in the policy space is about this labor displacement problem and automation. And that’s the thing that people are freaking out about now, is the extent that you have litigation and bills and whatnot being passed, or being talked about at least. And you’ve got a number of people running on political platforms on the basis of that kind of issue. And that is both an existing concern, given automation to date. But it’s also plausibly a huge concern as this stuff is more advanced, to the point of economic singularity, if you wanted to use that term, where you’ve got vast changes in the structure of the labor market and the employment market, and you can have substantial transformative impacts on the ways in which humans engage and create economic value and production.

And so existing automation concerns can scale into large scale labor displacement concerns, can scale into pretty confusing philosophical questions about what it means to conduct oneself as a human in a world where you’re no longer needed in terms of employment. And so that’s an example of a conversation which I wish more people were engaged in right now.

Plausibly, another one would be privacy as well, because I think privacy is currently a very salient concern. But also, privacy is an example of one of the fundamental values that we are at risk of eroding if we continue to deploy technologies for other reasons : efficiency gains, or for increasing control and centralizing of power. And privacy is this small microcosm of a maybe larger concern about how we could possibly be chipping away at these very fundamental things which we would want to preserve in the longer run, but we’re at risk of not preserving because we continue to operate in this dynamic of innovation and performance for whatever cost. Those are examples of conversations where I find it plausible that there are existing conversations that we should be more engaged in just because those are actually going to matter for the things that we call long term concerns, or the things that I would call sort of high stakes concerns.

Lucas: That makes sense. I think that trying on the stakes framing is helpful, and I can see why. It’s just a question about what are the things today, and within the next few years, that are likely to have a large effect on a larger end that we arrive at with transformative AI. So we’ve got this space of all these four cornerstones that you guys are exploring. Again, this has to do with the interplay and interdependency of technical AI safety, politics, policy of ideal governance, the economics, the military balance and struggle, and race dynamics all here with AI, on our path to AGI. So starting here with ideal governance, and we can see how we can move through these cornerstones, what is the process by which ideal governance is arrived at? How might this evolve over time as we get closer to superintelligence?

Jade: It may be a couple of thoughts, mostly about what I think a desirable process is that we should follow, or what kind of desired traits do we want to have in the way that we get to ideal governance and what ideal governance could plausibly look like. I think that’s to the extent that I maybe have thoughts about it. And they’re quite obvious ones, I think. Governance literature has said a lot about what consists of both morally sound, politically sound, socially sound governance processes or design of governance processes.

So those are things like legitimacy and accountability and transparency. I think there are some interesting debates about how important certain goals are, either as end goals or as instrumental goals. So for example, I’m not clear where my thinking is on how important inclusion and diversity is. As we’re aiming for ideal governance, so I think that’s an open question, at least in my mind.

There are also things to think through around what’s unique to trying to aim for ideal governance for a transformative general purpose technology. We don’t have a very good track record of governing general purpose technologies at all. I think we have general purpose technologies that have integrated into society and have served a lot of value. But that’s not for having had governance of them. I think we’ve been come combination of lucky and somewhat thoughtful sometimes, but not consistently so. If we’re staking the claim that AI could be a uniquely transformative technology, then we need to ensure that we’re thinking hard about the specific challenges that it poses. It’s a very fast-moving emerging technology. And governments historically has always been relatively slow at catching up. But you also have certain capabilities that you can realize by developing, for example, AGI or super intelligence, which governance frameworks or institutions have never had to deal with before. So thinking hard about what’s unique about this particular governance challenge, I think, is important.

Lucas: Seems like often, ideal governance is arrived at through massive suffering of previous political systems, like this form of ideal governance that the founding fathers of the United States came up with was sort of an expression of the suffering they experienced at the hands of the British. And so I guess if you track historically how we’ve shifted from feudalism and monarchy to democracy and capitalism and all these other things, it seems like governance is a large slowly reactive process born of revolution. Whereas, here, what we’re actually trying to do is have foresight and wisdom about what the world should look like, rather than trying to learn from some mistake or some un-ideal governance we generate through AI.

Jade: Yeah, and I think that’s also another big piece of it, is another way of thinking about how to get to ideal governance is to aim for a period of time, or a state of the world in which we can actually do the thinking well without a number of other distractions/concerns on the way. So for example, conditions that we want to drive towards would mean getting rid of things like the current competitor environment that we have, which for many reasons, some of which I mentioned earlier, it’s a bad thing, and it’s particularly counterproductive to giving us the kind of space and cooperative spirit and whatnot that we need to come to ideal governance. Because if you’re caught in this strategic competitive environment, then that makes a bunch of things just much harder to do in terms of aiming for coordination and cooperation and whatnot.

You also probably want better, more accurate, information out there, hence being able to think harder by looking at better information. And so a lot of work can be done to encourage more accurate information to hold more weight in public discussions, and then also encourage an environment that is genuine, epistemically healthy deliberation about that kind of information. All of what I’m saying is also not particularly unique, maybe, to ideal governance for AI. I think in general, you can sometimes broaden this discussion to what does it look like to govern a global world relatively well. And AI is one of the particular challenges that are maybe forcing us to have some of these conversations. But in some ways, when you end up talking about governance, it ends up being relatively abstract in a way, I think, ruins technology. At least in some ways there are also particular challenges, I think, if you’re thinking particularly about superintelligence scenarios. But if you’re just talking about governance challenges in general, things like accurate information, more patience, lack of competition and rivalrous dynamics and what not, that generally is kind of just helpful.

Lucas: So, I mean, arriving at ideal governance here, I’m just trying to model and think about it, and understand if there should be anything here that should be practiced differently, or if I’m just sort of slightly confused here. Generally, when I think about ideal governance, I see that it’s born of very basic values and principles. And I view these values and principles as coming from nature, like the genetics, evolution instantiating certain biases and principles and people that tend to lead to cooperation, conditioning of a culture, how we’re nurtured in our homes, and how our environment conditions us. And also, people update their values and principles as they live in the world and communicate with other people and engage in public discourse, even more foundational, meta-ethical reasoning, or normative reasoning about what is valuable.

And historically, these sort of conversations haven’t mattered, or they don’t seem to matter, or they seem to just be things that people assume, and they don’t get that abstract or meta about their values and their views of value, and their views of ethics. It’s been said that, in some sense, on our path to superintelligence, we’re doing philosophy on a deadline, and that there are sort of deep and difficult questions about the nature of value, and how best to express value, and how to idealize ourselves as individuals and as a civilization.

So I guess I’m just throwing this all out there. Maybe not necessarily we have any concrete answers. But I’m just trying to think more about the kinds of practices and reasoning that should and can be expected to inform ideal governance. Should meta-ethics matter here, where it doesn’t seem to matter in public discourse. I still struggle between the ultimate value expression that might be happening through superintelligence, and the tension between that, and how are public discourse functions. I don’t know if you have any thoughts here.

Jade: No particular thoughts, aside from to generally agree that I think meta-ethics is important. It is also confusing to me why public discourse doesn’t seem to track the things that seem important. This probably is something that we’ve struggled and tried to address in various ways before, so I guess I’m always cognizant of trying to learn from ways in which we’ve tried to improve public discourse and tried to create spaces for this kind of conversation.

It’s a tricky one for sure, and thinking about better practices is probably the main way at least in which I engage with thinking about ideal governance. It’s often the case that people, when they look at the cluster of ideal governance work though like, “Oh, this is the thing that’s going to tell us what the answer is,” like what’s the constitution that we have to put in place, or whatever it is.

At least for me, the maun chunk of thinking is mostly centered around process, and it’s mostly centered around what constitutes a productive optimal process, and some ways of answering this pretty hard question. And how do you create the conditions in which you can engage with that process without being distracted or concerned about things like competition? Those are kind of the main ways in which it seems obvious that we can fix the current environment so that we’re better placed to answer what is a very hard question.

Lucas: Coming to mind here is also, is this feature that you pointed out, I believe, that ideal governance is not figuring everything out in terms of our values, but rather creating the kind of civilization and space in which we can take the time to figure out ideal governance. So maybe ideal governance is not solving ideal governance, but creating a space to solve ideal governance.

Usually, ideal governance has to do with modeling human psychology, and how to best to get human being to produce value and live together harmoniously. But when we introduce AI, and human beings become potentially obsolete, then ideal governance potentially becomes something else. And I wonder, if the role of, say, experimental cities with different laws, policies, and governing institutions might be helpful here.

Jade: Yeah, that’s an interesting thought. Another thought that came to mind as well, actually, is just kind of reflecting on how ill-equipped I feel thinking about this question. One funny trait of this field is that you have a slim number of philosophers, but specially in the AI strategy and safety space, it’s political scientists, international relations people, economists, and engineers, and computer scientists thinking about questions that other spaces have tried to answer in different ways before.

So when you mention psychology, that’s an example. Obviously, philosophy has something to say about this. But there’s also a whole space of people have thought about how we govern things well across a number of different domains, and how we do a bunch of coordination and cooperation better, and stuff like that. And so it makes me reflect on the fact that there could be things that we already have learned that we should be reflecting a little bit more on which we currently just don’t have access to because we don’t necessarily have the right people or the right domains of knowledge in this space.

Lucas: Like AI alignment has been attracting a certain crowd of researchers, and so we miss out on some of the insights that, say, psychologists might have about ideal governance.

Jade: Exactly, yeah.

Lucas: So moving along here, from ideal governance, assuming we can agree on what ideal governance is, or if we can come to a place where civilization is stable and out of existential risk territory, and where we can sit down and actually talk about ideal governance, how do we begin to think about how to contribute to AI governance through working with or in private companies and/or government.

Jade: This is a good, and quite large, question. I think there are a couple of main ways in which I think about productive actions that either companies or governments can take, or productive things we can do with both of these actors to make them more inclined to do good things. On the point of other companies, the primary thing I think that is important to work on, at least concretely in the near term, is to do something like establish the norm and expectation that as developers of this important technology that will have a large plausible impact on the world, they have a very large responsibility proportional to their ability to impact the development of this technology. By making the responsibility something that is tied to their ability to shape this technology, I think that as a foundational premise or a foundational axiom to hold about why private companies are important, that can get us a lot of relatively concrete things that we should be thinking about doing.

The simple way of saying its is something like if you are developing the thing, you’re responsibly for thinking about how that thing is going to affect the world. And establishing that, I think is a somewhat obvious thing. But it’s definitely not how the private sector operates at the moment, in that there is an assumed limited responsibility irrespective of how your stuff is deployed in the world. What that actually means can be relatively concrete. Just looking at what these labs, or what these firms have the ability to influence, and trying to understand how you want to change it.

So, for example, internal company policy on things like what kind of research is done and invested in, and how you allocate resources across, for example, safety and capabilities research, what particular publishing norms you have, and considerations around risks or benefits. Those are very concrete internal company policies that can be adjusted and shifted based on one’s idea of what they’re responsible for. The broad thing, I think, to try to steer them in this direction of embracing, acknowledging, and then living up this greater responsibility, as an entity that is responsible for developing the thing.

Lucas: How would we concretely change the incentive structure of a company who’s interested in maximizing profit towards this increased responsibility, say, in the domains that you just enumerated.

Jade: This is definitely probably one of the hardest things about this claim being translated into practice. I mean, it’s not the first time we’ve been somewhat upset at companies for doing things that society doesn’t agree with. We don’t have a great track record of changing the way that industries or companies work. That being said, I think if you’re outside of the company, there are particularly levers that one can pull that can influence the way that a company is incentivized. And then I think we’ve also got examples of us being able to use these levers well.

The fact that companies are constrained by the environment that a government creates, and governments also have the threat of things like regulation, or the threat of being able to pass certain laws or whatnot, which actually the mere threat, historically, has done a fair amount in terms of incentivizing companies to just step up their game because they don’t want regulation to kick in, which isn’t conducive to what they want to do, for example.

Users of the technology is a pretty classic one. It’s a pretty inefficient one, I think, because you’ve got to coordinate many, many different types of users, and actors, and consumers and whatnot, to have an impact on what companies are incentivized to do. But you have seen environmental practices in other types of industries that have been put in place as standards or expectations that companies should abide by because consumers across a long period of time have been able to say, “I disagree with this particular practice.” That’s an example of a trend that has succeeded.

Lucas: That would be like boycotting or divestment.

Jade: Yeah, exactly. And maybe a slightly more efficient one is focusing on things like researchers and employees. That is, if you are a researcher, if you’re an employee, you have levers over the employer that you work for. They need you, and you need them, and there’s that kind of dependency in that relationship. This is all a long way of saying that I think, yes, I agree it’s hard to change incentive structures of any industry, and maybe specifically so in this case because they’re very large. But I don’t think it’s impossible. And I think we need to think harder about how to use those well. I think the other thing that’s working in our favor in this particular case is that we have a unique set of founders or leaders of these labs or companies that have expressed pretty genuine sounding commitments to safety and to cooperativeness, and to serving the common good. It’s not a very robust strategy to rely on certain founders just being good people. But I think in this case, it’s kind of working in our favor.

Lucas: For now, yeah. There’s probably already other interest groups who are less careful, who are actually just making policy recommendations right now, and we’re broadly not in on the conversation due to the way that we think about the issue. So in terms of government, what should we be doing? Yeah, it seems like there’s just not much happening.

Jade: Yeah. So I agree there isn’t much happening, or at least relative to how much work we’re putting into trying to understand and engage with private labs. There isn’t much happening with government. So I think there needs to be more thought put into how we do that piece of engagement. I think good things that we could be trying to encourage more governments to do, for one, investing in productive relationships with the technical community, and productive relationships with the researcher community, and with companies as well. At least in the US, it’s pretty adversarial between Silicon Valley firms and DC.

And that isn’t good for a number of reasons. And one very obvious reason is that there isn’t common information or common understand of what’s going on, what the risks are, what the capabilities are, et cetera. One of the main critiques of governments is that they’re ill-equipped in terms of access to knowledge, and access to expertise, to be able to appropriately design things like bills, or things like pieces of legislation or whatnot. And I think that’s also something that governments should take responsibility for addressing.

So those are kind of law hanging fruit. There’s a really tricky balance that I think governments will need to strike, which is the balance between avoiding over-hasty ill-informed regulation. A lot of my work looking at history will show that the main ways in which we’ve achieved substantial regulation is as a result of big public, largely negative events to do with the technology screwing something up, or the technology causing a lot of fear, for whatever reasons. And so there’s a very sharp spike in public fear or public concern, and then the government then kicks into gear. And I think that’s not a good dynamic in terms of forming nuanced well-considered regulation and governance norms. Avoiding the outcome is important, but it’s also important that governments do engage and track how this is going, and particularly track where things like company policy and industry-wide efforts are not going to be sufficient. So when do you start translating some of the more soft law, if you will, into actual hard law.

That will be a very tricky timing question, I think, for governments to grapple with. But ultimately, it’s not sufficient to have companies governing themselves. You’ll need to be able to consecrate it into government backed efforts and initiatives and legislation and bills. My strong intuition is that it’s not quite the right time to roll out object level policies. And so the main task for governments will be just to position themselves to do that well when the time is right.

Lucas: So what’s coming to my mind here is I’m thinking about YouTube compilations of congressional members of the United States and senators asking horrible questions to Mark Zuckerberg and the CEO of, say, Google. They just don’t understand the issues. The United States is currently not really thinking that much about AI, and especially transformative AI. Whereas, China, it seems, has taken a step in this direction and is doing massive governmental investments. So what can we say about this assuming difference? And the question is, what are governments to do in this space? Different governments are paying attention at different levels.

Jade: Some governments are more technological savvy than others, for one. So I pushed back on the US not … They’re paying attention on different things. So, for example, the Department of Commerce put out a notice to the public indicating that they’re exploring putting in place export controls on a cluster of emerging technologies, including a fair number of AI relevant technologies. The point of export controls is to do something like ensure that adversaries don’t get access to critical technologies that, if they do, then that could undermine national security and/or domestic industrial base. The reasons why export controls are concerning is because they’re a) a relatively outdated tool. They used to work relatively well when you were targeting specific kind of weapons technologies, or basically things that you could touch and see. And the restriction of them from being on the market by the US means that a fair amount of it won’t be able to be accessed by other folks around the world. And you’ve seen export controls be increasingly less effective the more that we’ve tried to apply to things like cryptography, where it’s largely software based. And so trying to use export controls, which are applied at the national border, is a very tricky thing to make effective.

So you have the US paying attention to the fact that they think that AI is a national security concern, at least in this respect, enough to indicate that they’re interested in exploring export controls. I think it’s unlikely that export controls are going to be effective at achieving the goals that the US want to pursue. But I think export controls is also indicative of a world that we don’t want to slide in, which is a world where you have rivalrous economic blocks, where you’re sort of protecting your own base, and you’re not contributing to the kind of global commons of progressing this technology.

Maybe it goes back to what we were saying before, in that if you’re not engaged in the governance, the governance is going to happen anyway. This is an example of activity is going to happen anyway. I think people assume now, probably rightfully so, that the US government is not going to be very effective because they are not technically literate. In general, they are sort of relatively slow moving. They’ve got a bunch of other problems that they need to think about, et cetera. But I don’t think it’s going to take very, very long for the US government to start to seriously engage. I think the thing that is worth trying to influence is what they do when they start to engage.

If I had a policy in mind that I thought was robustly good that the US government should pass, then that would be the more proactive approach. It seems possible that if we think about this hard enough, there could be robustly good things that the US government could do, that could be good to be proactive about.

Lucas: Okay, so there’s this sort of general sense that we’re pretty heavy on academic papers because we’re really trying to understand the problem, and the problem is so difficult, and we’re trying to be careful and sure about how we progress. And it seems like it’s not clear if there is much room, currently, for direct action, given our uncertainty about specific policy implementations. There are some shorter term issues. And sorry to say shorter term issues. But, by that, I mean automation and maybe lethal autonomous weapons and privacy. These things, we have a more clear sense of, at least about potential things that we can start doing. So I’m just trying to get a sense here from you, on top of these efforts to try to understand the issues more, and on top of these efforts, for example, like 80,000 Hours has contributed. And by working to place aligned persons in various private organizations, what else can we be doing? What would you like to see more being done on here?

Jade: I think this is on top of just more research. But that would be the first thing that comes to mind, is people thinking hard about it seems like a thing that I want a lot more of, in general. But on top of that, what you mentioned, I think, the placing people, that maybe fits into this broader category of things that seems good to do, which is investing in building our capacity to influence the future. That’s quite a general statement. But something like it takes a fair amount of time to build up influence, particularly in certain institutions, like governments, like international institutions, et cetera. And so investing in that early seems good. And doing things like trying to encourage value aligned sensible people to climb the ladders that they need to climb in order to get to positions of influence, that generally seems like a good and useful thing.

The other thing that comes to mind as well is putting out more accurate information. One specific version of things that we could do here is, there is currently a fair number of inaccurate, or not well justified memes that are floating around, that are informing the way that people think. For example, the US and China are in a race. Or a more nuanced one is something like, inevitably, you’re going to have a safety performance trade off. And those are not great memes, in the sense that they don’t seem to be conclusively true. But they’re also not great in that they put you in a position of concluding something like, “Oh, well, if I’m going to invest in safety, I’ve got to be an altruist, or I’m going to trade off my competitive advantage.”

And so identifying what those bad ones are, and countering those, is one thing to do. Better memes could be something like those are developing this technology are responsible for thinking through its consequences. Or something even as simple as governance doesn’t mean government, and it doesn’t mean regulation. Because I think you’ve got a lot of firms who are terrified of regulation. And so they won’t engage in this governance conversation because of it. So there could be some really simple things I think we could do, just to make the public discourse both more accurate and more conducive to things being done that are good in the future.

Lucas: Yeah, here I’m also just seeing the tension here between the appropriate kinds of memes that inspire, I guess, a lot of the thinking within the AI alignment community, and the x-risk community, versus what is actually useful or spreadable for the general public, adding in here ways in which accurate information can be info-hazardy. I think broadly in our community, the common good principle, and building an awesome future for all sentient creatures, and I am curious to know how spreadable those memes are.

Jade: Yeah, the spreadability of memes is a thing that I want someone to investigate more. The things that make things not spreadable, for example, are just things that are, at a very simple level, quite complicated to explain, or are somewhat counterintuitive so you can’t pump the intuition very easily. Particularly things that require you to decide that one set of values that you care about, that’s competing against another set of values. Any set of things that brings nationalism against cosmopolitanism, I think, is a tricky one, because you have some subset of people. The ones that you and I talk to the most are very cosmopolitan. But you also have a fair amount of people who care about the common good principle, in some sense, but also care about their nation in a fairly large sense as well.

So there are things that make certain memes less good or less spreadable. And one key thing will be to figure out which ones are actually good in the true sense, and good in the pragmatic to spread sense.

Lucas: Maybe there’s a sort of research program here, where psychologists and researchers can explore focus groups on the best spreadable memes, which reflect a lot of the core and most important values that we see within AI alignment, and EA, and x-risk.

Jade: Yeah, that could be an interesting project. I think also in AI safety, or in the AI alignment space, people are framing safety in quite different ways. One framing of that, which like it’s a part of what it means to be a good AI person, is to think about safety. That’s an example of one that I’ve seen take off a little bit more lately because that’s an explicit act of trying to mainstream the thing. That’s a meme, or an example of a framing, or a meme, or whatever you want to call it. And you know there are pros and cons of that. The pros would be, plausibly, it’s just more mainstream. And I think you’ve seen evidence of that be the case because more people are more inclined to say, “Yeah, I agree. I don’t want to build a thing that kills me if I want it to get coffee.” But you’re not going to have a lot of conversations about maybe the magnitude of risks that you actually care about. So that’s maybe a con.

There’s maybe a bunch of stuff to do in this general space of thinking about how to better frame the kind of public facing narratives of some of these issues. Realistically, memes are going to fill the space. People are going to talk about it in certain ways. You might as well try to make it better, if it’s going to happen.

Lucas: Yeah, I really like that. That’s a very good point. So let’s talk here a little bit about technical AI alignment. So in technical AI alignment, the primary concerns are around the difficulty of specifying what humans actually care about. So this is like capturing human values and aligning with our preferences and goals, and what idealized versions of us might want. So, so much of AI governance is thus about ensuring that this AI alignment process we engage in doesn’t skip too many corners. The purpose of AI governance is to decrease risks, to increase coordination, and to do all of these other things to ensure that, say, the benefits of AI are spread widely and robustly, that we don’t get locked into any negative governance systems or value systems, and that this process of bringing AIs in alignment with the good doesn’t have researchers, or companies, or governments skipping too many corners on safety. In this context, and this interplay between governance and AI alignment, how much of a concern are malicious use cases relative to the AI alignment concerns within the context of AI governance?

Jade: That’s a hard one to answer, both because there is a fair amount of uncertainty around how you discuss the scale of the thing. But also because I think there are some interesting interactions between these two problems. For example, if you’re talking about how AI alignment interacts with this AI governance problem. You mentioned before AI alignment research is, in some ways, contingent on other things going well. I generally agree with that.

For example, it depends on AI safety taking place in research cultures and important labs. It requires institutional buy-in and coordination between institutions. It requires this mitigation of race dynamics so that you can actually allocate resources towards AI alignment research. All those things. And so in some ways, that particular problem being solved is contingent on us doing AI governance well. But then, also to the point of how big is malicious use risk relative to AI alignment, I think in some ways that’s hard to answer. But in some ideal world, you could sequence the problems that you could solve. If you solve the AI alignment problem first, then AI governance research basically becomes a much narrower space, addressing how an aligned AI could still cause problems because we’re not thinking about the concentration of power, the concentration of economic gains. And so you need to think about things like the windfall clause, to distribute that, or whatever it is. And you also need to think about the transition to creating an aligned AI, and what could be messy in that transition, how you avoid public backlash so that you can actually see the fruits of you having solved this AI alignment problem.

So that becomes more the kind of nature of the thing that AI governance research becomes, if you assume that you’ve solved the AI alignment problem. But if we assume that, in some world, it’s not that easy to solve, and both problems are hard, then I think there’s this interaction between the two. In some ways, it becomes harder. In some ways, they’re dependent. In some ways, it becomes easier if you solve bits of one problem.

Lucas: I generally model the risks of malicious use cases as being less than the AI alignment stuff.

Jade: I mean, I’m not sure I agree with that. But two things I could say to that. I think, one, intuition is something like you have to be a pretty awful person to really want to use a very powerful system to cause terrible ends. And it seems more plausible that people will just do it by accident, or unintentionally, or inadvertently.

Lucas: Or because the incentive structures aren’t aligned, and then we race.

Jade: Yeah. And then the other way to sort of support this claim is, if you look at biotechnology and bio-weapons, specifically, bio-security/bio-terrorism issues, so like malicious use equivalent. Those have been far less, in terms of frequency, compared to just bio-safety issues, which are the equivalent of accident risks. So people causing unintentional harm because we aren’t treating biotechnology safely, that’s cause a lot more problems, at least in terms of frequency, compared to people actually just trying to use it for terrible means.

Lucas: Right, but don’t we have to be careful here with the strategic properties and capabilities of the technology, especially in the context in which it exists? Because there’s nuclear weapons, which are sort of the larger more absolute power imbuing technology. There has been less of a need for people to take bio-weapons to that level. You know? And also there’s going to be limits, like with nuclear weapons, on the ability of a rogue actor to manufacture really effective bio-weapons without a large production facility or team of research scientists.

Jade: For sure, yeah. And there’s a number of those considerations, I think, to bear in mind. So it definitely isn’t the case that you haven’t seen malicious use in bio strictly because people haven’t wanted to do it. There’s a bunch of things like accessibility problems, and tacit knowledge that’s required, and those kinds of things.

Lucas: Then let’s go ahead and abstract away malicious use cases, and just think about technical AI alignment, and then AI/AGI governance. How do you see the relative importance of AI and AGI governance, and the process of AI alignment that we’re undertaking? Is solving AI governance potentially a bigger problem than AI alignment research, since AI alignment research will require the appropriate political context to succeed? On our path to AGI, we’ll need to mitigate a lot of the race conditions and increase coordination. And then even after we reach AGI, the AI governance problem will continue, as we sort of explored earlier that we need to be able to maintain a space in which humanity, AIs, and all earth originating sentient creatures are able to idealize harmoniously and in unity.

Jade: I both don’t think it’s possible to actually assess them at this point, in terms of how much we understand this problem. I have a bias towards saying that AI governance is the harder problem because I’m embedded in it and see it a lot more. And maybe ways to support that claim are things we’ve talked about. So AI alignment going well, or happening at all, is sort of contingent on a number of other factors that AI governments are trying to solve, so social political economic context needs to be right in order for that to actually happen, and then in order for that to have an impact.

There are some interesting things that are made maybe easier by AI alignment being solved, or somewhat solved, if you are thinking about the AI governance problem. In fact, it’s just like a general cluster of AI being safer and more robust and more transparent, or whatever, makes certain AI governance challenges just easier. The really obvious example here that comes to mind is the verification problem. The inability to verify what certain systems are designed to do and will do causes a bunch of governance problems. Like, arms control agreements are very hard. Establishing trust between parties to cooperate and coordinate is very hard.

If you happen to be able to solve some of those problems in the process of trying to tackle this AI alignment problem. And that makes AI governance a little bit easier. I’m not sure which direction it cashes out, in terms of which problem is more important. I’m certain that there are interactions between the two, and I’m pretty certain that one depends on the other, to some extent. So it becomes imminently really hard to govern the thing, if you can’t align the thing. But it also is probably the case that by solving some of the problems in one domain, you can help make the other problem a little bit tractable and easier.

Lucas: So now I’d like to get into lethal autonomous weapons. And we can go ahead and add whatever caveats are appropriate here. So in terms of lethal autonomous weapons, some people think that there are major stakes here. Lethal autonomous weapons are a major AI enabled technology that’s likely to come on the stage soon, as we make some moderate improvements to already existing technology, and then package it all together into the form of a lethal autonomous weapon. Some take the view that this is a crucial moment, or that there are high stakes here to get such weapons banned. The thinking here might be that by demarcating unacceptable uses of AI technology, such as for autonomously killing people, and by showing that we are capable of coordinating on this large and initial AI issue, that we might be taking the first steps in AI alignment, and the first steps in demonstrating our ability to take the technology and its consequences seriously.

And so we mentioned earlier how there’s been a lot of thinking, but not much action. This seems to be an initial place where we can take action. We don’t need to keep delaying our direction action and real world participation. So if we can’t get a ban on autonomous weapons, maybe it would seem that we have less hope for coordinating on more difficult issues. And so the lethal autonomous weapons may exacerbate global conflict by increasing skirmishing at borders, decrease the cost of war, dehumanize killing, taking the human element out of death, et cetera.

And other people disagree with this. Other people might argue that banning lethal autonomous weapons isn’t necessary in the long game. It’s not, as we’re framing it, a high stakes thing. Just because this sort of developmental step in this technology is not really crucial for coordination, or for political military stability. Or that coordination later would be born of other things, and that this would just be some other new military technology without much impact. So curious here, to gather what your views, or the views of FHI, or the Center for the Governance of AI, might have on autonomous weapons. Should there be a ban? Should the AI alignment community be doing more about this? And if not, why?

Jade: In terms of caveats, I’ve got a lot of them. So I think the first one is that I’ve not read up on this issue at all, followed it very loosely, but not nearly closely enough, that I feel like I have a confident well-informed opinion.

Lucas: Can I ask why?

Jade: Mostly because of bandwidth issues. It’s not because I have categorized them ahead of something not worth engaging in. I’m actually pretty uncertain about that. The second caveat is, definitely don’t claim to speak on behalf of anyone but myself in this case. The Center for the Governance of AI, we don’t have a particular position on this, nor the FHI.

Lucas: Would you say that this is because the Center for the Governance of AI, would it be for bandwidth issues again? Or would it be because it’s de-prioritized.

Jade: The main thing is bandwidth. Also, I think the main reason why it’s probably been de-prioritized, at least subconsciously, has been the framing of sort of focusing on things that are neglected by folks around the world. It seems like there are people at least with sort of somewhat good intentions tentatively engaged in the LAWS (lethal autonomous weapons) discussion. And so within that frame, I think de-prioritization because it’s not obviously neglected compared to other things that aren’t getting any focus at all.

With those things in mind, I could see a pretty decent case for investing more effort in engaging in this discussion, at least compared to what we currently have. I guess it’s hard to tell, compared to alternatives of how we could be spending those resources, giving it’s such a resource constrained space, in terms of people working in AI alignment, or just bandwidth, in terms of this community in general. So briefly, I think we’ve talked about this idea that there’s this fair amount of path dependency in the way that institutions and norms are built up. And if this is one of the first spaces, with respect to AI capabilities, where we’re going to be getting or driving towards some attempt at international norms, or establishing international institutions that could govern this space, then that’s going to be relevant in a general sense. And specifically, it’s going to be relevant for sort of defense and security related concerns in the AI space.

And so I think you both want to engage because there’s an opportunity to seed desirable norms and practices and process and information. But you also possibly want to engage because there could be a risk that bad norms are established. And so it’s important to engage, to prevent it going down something which is not a good path in terms of this path dependency.

Another reason I think that is maybe worth thinking through, in terms of making a case for engaging more, is that applications of AI in the military and defense spaces, possibly one of the most likely to cause substantial disruption in the near-ish future, and could be an example of something that I call the high stakes concerns in the future. And you can talk about AI and its impact on various aspects of the military domain, where it could have substantial risks. So, for example, in cyber escalation, or destabilizing nuclear security. Those would be examples where military and AI come together, and you can have bad outcomes that we do actually really care about. And so for the same reason, engaging specifically in any discussion that is touching on military and AI concerns, could be important.

And then the last one that comes to mind is the one that you mentioned. This is an opportunity to basically practice doing this coordination thing. And there are various things that are worth practicing or attempting. For one, I think even just observing how these discussions pan out is going to tell you a fair amount about how important actors think about the trade offs of using AI versus sort of going towards more safe outcomes or governance processes. And then our ability corral interest around good values or appropriate norms, or whatnot, that’s a good test of our ability to generally coordinate when we have some of those trade offs around, for example, military advantage versus safety. It gives you some insight into how we could be dealing with similarly shaped issues.

Lucas: All right. So let’s go ahead and bring it back here to concrete actionable real world things today, and understanding what’s actually going on outside of the abstract thinking. So I’m curious to know here more about private companies. At least, to me, they largely seem to be agents of capitalism, like we said. They have a bottom line that they’re trying to meet. And they’re not ultimately aligned with pro-social outcomes. They’re not necessarily committed to ideal governance, but perhaps forms of governance which best serve them. And as we sort of feed aligned people into tech companies, how should we be thinking about their goals, modulating their incentives? What does DeepMind really want? Or what can we realistically expect from key players? And what mechanisms, in addition to the windfall clause, can we use to sort of curb the worst aspects of profit-driven private companies?

Jade: If I knew what DeepMind actually wanted, or what Google actually thought, we’d be in a pretty different place. So a fair amount of what we’ve chatted through, I would echo again. So I think there’s both the importance of realizing that they’re not completely divorced from other people influencing them, or other actors influencing them. And so just thinking hard about which levers are in place already that actually constrain the action of companies, is a pretty good place to start, in terms of thinking about how you can have an impact on their activities.

There’s this common way of talking about big tech companies, which is they can do whatever they want, and they run the world, and we’ve got no way of controlling them. Reality is that they are consistently constrained by a fair number of things. Because they are agents of capitalism, as you described, and because they have to respond to various things within that system. So we’ve mentioned things before, like governments have levers, consumers have levers, employees have levers. And so I think focusing on what those are is a good place to start. Anything that comes to mind is, there’s something here around taking a very optimistic view of how companies could behave. Or at least this is the way that I prefer to think about it, is that you both need to be excited, and motivated, and think that companies can change and create the conditions in which they can. But one also then needs to have a kind of hidden clinic, in some ways.

On both of these, I think the first one, I really want the public discourse to turn more towards the direction of, if we assume that companies want to have the option of demonstrating pro-social incentives, then we should do things like ensure that the market rewards them for acting in pro-social ways, instead of penalizing their attempts at doing so, instead of critiquing every action that they take. So, for example, I think we should be making bigger deals, basically, of when companies are trying to do things that at least will look like them moving in the right direction, as opposed to immediately critiquing them as ethics washing, or sort of just paying lip service to the thing. I want there to be more of an environment where, if you are a company, or you’re a head of a company, if you’re genuinely well-intentioned, you feel like your efforts will be rewarded, because that’s how incentive structures work, right?

And then on the second point, in terms of being realistic about the fact that you can’t just wish companies into being good, that’s when I think the importance of things like public institutions and civil society groups become important. So ensuring that there are consistent forms of pressure, and keep making sure that they feel like their actions are being rewarding if pro-social, but also that there are ways of spotting in which they can be speaking as if they’re being pro-social, but acting differently.

So I think everyone’s kind of basically got a responsibility here, to ensure that this goes forward in some kind of productive direction. I think it’s hard. And we said before, you know, some industries have changed in the past successfully. But that’s always been hard, and long, and messy, and whatnot. But yeah, I do think it’s probably more tractable than the average person would think, in terms of influencing these companies to move in directions that are generally just a little bit more socially beneficial.

Lucas: Yeah. I mean, also the companies were generally made up of fairly reasonable well-intentioned people. I’m not all pessimistic. There are just a lot of people who sit at desks and have their structure. So yeah, thank you so much for coming on, Jade. It’s really been a pleasure. And, yeah.

Jade: Likewise.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

End of recorded material

ICLR Safe ML Workshop Report

This year the ICLR conference hosted topic-based workshops for the first time (as opposed to a single track for workshop papers), and I co-organized the Safe ML workshop. One of the main goals was to bring together near and long term safety research communities.

The workshop was structured according to a taxonomy that incorporates both near and long term safety research into three areas — specification, robustness, and assurance.

Specification: define the purpose of the system

  • Reward hacking
  • Side effects
  • Preference learning
  • Fairness

Robustness: design system to withstand perturbations

  • Adaptation
  • Verification
  • Worst-case robustness
  • Safe exploration

Assurance: monitor and control system activity

  • Interpretability
  • Monitoring
  • Privacy
  • Interruptibility

We had an invited talk and a contributed talk in each of the three areas.

Talks

In the specification area, Dylan Hadfield-Menell spoke about formalizing the value alignment problem in the Inverse RL framework.

David Krueger presented a paper on hidden incentives for the agent to shift its task distribution in the meta-learning setting.

In the robustness area, Ian Goodfellow argued for dynamic defenses against adversarial examples and encouraged the research community to consider threat models beyond small perturbations within a norm ball of the original data point.

Avraham Ruderman presented a paper on worst-case analysis for discovering surprising behaviors (e.g. failing to find the goal in simple mazes).

In the assurance area, Cynthia Rudin argued that interpretability doesn’t have to trade off with accuracy (especially in applications), and that it is helpful for solving research problems in all areas of safety.

Beomsu Kim presented a paper explaining why adversarial training improves the interpretability of gradients for deep neural networks.

Panels

The workshop panels discussed possible overlaps between different research areas in safety and research priorities going forward.

In terms of overlaps, the main takeaway was that advancing interpretability is useful for all safety problems. Also, adversarial robustness can contribute to value alignment – e.g. reward gaming behaviors can be viewed as a system finding adversarial examples for its reward function. However, there was a cautionary point that while near- and long-term problems are often similar, solutions might not transfer well between these areas (e.g. some solutions to near-term problems might not be sufficiently general to help with value alignment).

The research priorities panel recommended more work on adversarial examples with realistic threat models (as mentioned above), complex environments for testing value alignment (e.g. creating new structures in Minecraft without touching existing ones), fairness formalizations with more input from social scientists, and improving cybersecurity.

Papers

Out of the 35 accepted papers, 5 were on long-term safety / value alignment, and the rest were on near-term safety. Half of the near-term paper submissions were on adversarial examples, so the resulting pool of accepted papers was skewed as well: 14 on adversarial examples, 5 on interpretability, 3 on safe RL, 3 on other robustness, 2 on fairness, 2 on verification, and 1 on privacy. Here is a summary of the value alignment papers:

Misleading meta-objectives and hidden incentives for distributional shift by Krueger et al shows that RL agents in a meta-learning context have an incentive to shift their task distribution instead of solving the intended task. For example, a household robot whose task is to predict whether its owner will want coffee could wake up its owner early in the morning to make this prediction task easier. This is called a ‘self-induced distributional shift’ (SIDS), and the incentive to do so is a ‘hidden incentive for distributional shift’ (HIDS). The paper demonstrates this behavior experimentally and shows how to avoid it.

How useful is quantilization for mitigating specification-gaming? by Ryan Carey introduces variants of several classic environments (Mountain Car, Hopper and Video Pinball) where the observed reward differs from the true reward, creating an opportunity for the agent to game the specification of the observed reward. The paper shows that a quantilizing agent avoids specification gaming and performs better in terms of true reward than both imitation learning and a regular RL agent on all the environments.

Delegative Reinforcement Learning: learning to avoid traps with a little help by Vanessa Kosoy introduces an RL algorithm that avoids traps in the environment (states where regret is linear) by delegating some actions to an external advisor, and achieves sublinear regret in a continual learning setting. (Summarized in Alignment Newsletter #57)

Generalizing from a few environments in safety-critical reinforcement learning by Kenton et al investigates how well RL agents avoid catastrophes in new gridworld environments depending on the number of training environments. They find that both model ensembling and learning a catastrophe classifier (used to block actions) are helpful for avoiding catastrophes, with different safety-performance tradeoffs on new environments.

Regulatory markets for AI safety by Clark and Hadfield proposes a new model for regulating AI development where regulation targets are required to choose regulatory services from a private market that is overseen by the government. This allows regulation to efficiently operate on a global scale and keep up with the pace of technological development and better ensure safe deployment of AI systems. (Summarized in Alignment Newsletter #55)

The workshop got a pretty good turnout (around 100 people). Thanks everyone for participating, and thanks to our reviewers, sponsors, and my fellow organizers for making it happen!

(Cross-posted from the Deep Safety blog.)

FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi

As we grapple with questions about AI safety and ethics, we’re implicitly asking something else: what type of a future do we want, and how can AI help us get there?

In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let’s build our technology around our visions for the future.

Topics discussed in this episode include:

  • Hopes for the future of AI
  • AI-human collaboration
  • AI’s influence on art and creativity
  • The UN AI for Good Summit
  • Gaps in AI safety
  • Preparing AI for uncertainty
  • Holding AI accountable

Publications and resources discussed in this episode include:

Ariel: Hello and welcome to another episode of the FLI podcast. I’m your host Ariel Conn, and today we’ll be looking at how to address safety and ethical issues surrounding artificial intelligence, and how we can implement safe and ethical AIs both now and into the future. Joining us this month are Ashley Llorens and Francesca Rossi who will talk about what they’re seeing in academia, industry, and the military in terms of how AI safety is already being applied and where the gaps are that still need to be addressed.

Ashley is the Founding Chief of the Intelligent Systems Center at the John Hopkins Applied Physics Laboratory where he directs research and development in machine learning, robotics, autonomous systems, and neuroscience all towards addressing national and global challenges. He has served on the Defense Science Board, the Naval Studies Board of the National Academy of Sciences, and the Center for a New American Security’s AI task force. He is also a voting member of the Recording Academy, which is the organization that hosts the Grammy Awards, and I will definitely be asking him about that later in the show.

Francesca is the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab. She is an advisory board member for FLI, a founding board member for the Partnership on AI, a deputy academic director of the Leverhulme Centre for the Future of Intelligence, a fellow with AAAI and EurAI (that’s e-u-r-a-i), and she will be the general chair of AAAI in 2020. She was previously Professor of Computer Science at the University of Padova in Italy, and she’s been president of IJCAI and the editor-in-chief of the Journal of AI Research. She is currently joining us from the United Nations AI For Good Summit, which I will also ask about later in the show.

So Ashley and Francesca, thank you so much for joining us today.

Francesca: Thank you.

Ashley: Glad to be here.

Ariel: Alright. The first question that I have for both of you, and Ashley, maybe I’ll direct this towards you first: basically, as you look into the future and you look at artificial intelligence becoming more of a role in our everyday lives — before we look at how everything could go wrong, what are we striving for? What do you hope will happen with artificial intelligence and humanity?

Ashley: My perspective on AI is informed a lot by my research and experiences at the Johns Hopkins Applied Physics Lab, which I’ve been at for a number of years. My earliest explorations had to do with applications of artificial intelligence to robotics systems, in particular underwater robotics systems, systems where signal processing and machine learning are needed to give the system situational awareness. And of course, light doesn’t travel very well underwater, so it’s an interesting task to make a machine see with sound for all of its awareness and all of its perception.

And in that journey, I realized how hard it is to have AI-enabled systems capable of functioning in the real world. That’s really been a personal research journey that’s turned into an institution-wide research journey for Johns Hopkins APL writ large. And we’re a large not-for-profit R & D organization that does national security, space exploration, and health. We’re about 7,000 folks or so across many different disciplines, but many scientists and engineers working on those kinds of problems — we say critical contributions to critical challenges.

So as I look forward, I’m really looking at AI-enabled systems, whether they’re algorithmic in cyberspace or they’re real-world systems that are really able to act with greater autonomy in the context of these important national and global challenges. So for national security: to have robotic systems that can be where people don’t want to be, in terms of being under the sea or even having a robot go into a situation that could be dangerous so a person doesn’t have to. And to have that system be able to deal with all the uncertainty associated with that.

You look at future space exploration missions where — in terms of AI for scientific discovery, we talk a lot about that — imagine a system that can perform science with greater degrees of autonomy and figure out novel ways of using its instruments to form and interrogate hypotheses when billions of miles away. Or in health applications where we can have systems more ubiquitously interpreting data and helping us to make decisions about our health to increase our lifespan, or health span as they say.

I’ve been accused of being a techno-optimist, I guess. I don’t think technology is the solution to everything, but it is my personal fascination. And in general, just having this AI capable of adding value for humanity in a real world that’s messy and sloppy and uncertain.

Ariel: Alright. Francesca, you and I have talked a bit in the past, and so I know you do a lot of work with AI safety and ethics. But I know you’re also incredibly hopeful about where we can go with AI. So if you could start by talking about some of the things that you’re most looking forward to.

Francesca: Sure. Partially focused on the need of developing autonomous AI systems that can act where humans cannot go, for example, and that’s definitely very, very important. I would like to focus more on the need also of AI systems that can actually work together with humans, augmenting our own capabilities to make decisions or to function in our work environment or in our private environment. That’s the focus of and the purpose of AI that I see, that I work on, and I focus on what are the challenges in making this system really work well with humans.

This means of course that while it may seem that in some sense it’s easier to develop an AI system that works together with humans because there is complementarity — some things are made by humans, some things are made by the machine. But actually, there are several additional challenges because you want these two entities, the human and the machine, to actually become a real team and work together and collaborate together to achieve a certain goal. You want these machines to be able to communicate, interact in a very natural way with human beings and you want these machines to be not just reactive to commands, but also proactive at trying to understand what the human being needs in that moment, in that context in order to provide all the information and knowledge that it needs from the data that surrounds whatever task is going to be addressed.

That’s the focus also of what IBM Business Model is, because of course IBM releases AI to be used in other companies so that their professional people can use it to do better the job that they’re doing. And it has many, many different interesting research directions. The one that I’m mostly focused on is around value alignment. How do you make sure that these systems know and are aware of the values that they should follow and of the ethical principles that they should follow, while trying to help human beings do whatever they need to do? And there are many ways that you can do that and many ways to model them to reason with these ethical principles and so on.

Being here in Geneva at AI For Good, I mean, in general, I think that here for example the emphasis is — and rightly so — about the sustainable development goals of the UN: these 17 goals that define a vision of the future, the future that we want. And we’re trying to understand how we can leverage technologies such as AI to achieve that vision. The vision can be slightly nuanced and different, but to me, the development of advanced AI is not the end goal, but is only a way to get to the vision of the future that I have. And so, to me, this AI For Good Summit and the 17 sustainable development goals define a vision of the future that is important to have when one has in mind how to improve technology.

Ariel: For listeners who aren’t as familiar with the sustainable development goals, we can include links to what all of those are in the podcast description.

Francesca: I was impressed at this AI For Good Summit. This Summit started three years ago with kind of 400 people. Then last year it was like 500 people, and this year there are 3,200 registered participants. To really give you an idea of how more and more everybody’s interested into these subjects.

Ariel: Have you also been equally impressed by the topics that are covered?

Francesca: Well, I mean, it started today. So I just saw in the morning there are five different parallel sessions that will go throughout the following two days. One is AI education and learning. One is health and wellbeing. One is AI human dignity and inclusive society. One is scaling AI for good. And one is AI for space. These five themes will go throughout two days together with many other smaller ones. But for what I’ve seen this morning, it’s really a very high level of the discussion. It’s going to be very impactful. Each event is unique, has its own specificity, but this event is unique because it’s focused on a vision of the future, which in this case are the sustainable development goals.

Ariel: Well, I’m really glad that you’re there. We’re excited to have you there. And so, you’re talking about moving towards futures where we have AIs that can do things that either humans can’t do or don’t want to do or isn’t safe, visions where we can achieve more because we’re working with AI systems as opposed to just humans trying to do things alone. But we still have to get to those points where this is being implemented safely and ethically.

I’ll come back to the question of what we’re doing right so far, but first, what do you see as the biggest gaps in AI safety and ethics? And this is a super broad question, but looking at it with respect to, say, the military or industry or academia. What are some of the biggest problems you see in terms of us safely applying AI to solve problems?

Ashley: It’s a really important question. My answer is going to center around uncertainty and dealing with that in the context of the operation of the system, and let’s say the implementation or the execution of the ethics of the system as well. But first, backing up to Francesca’s comment, I just want to emphasize this notion of teaming and really embrace this narrative in my remarks here.

I’ve heard it said before that every machine is part of some human workflow. I think a colleague Matt Johnson at the Florida Institute for Human and Machine Cognition says that, which I really like. And so, just to make clear, whether we’re talking about the cognitive enhancements, an application of AI where maybe you’re doing information retrieval, or even a space exploration example, it’s always part of a human-machine team. In the space exploration example, the scientists and the engineers are on the earth, maybe many light hours away, but the machines are helping them do science. But at the end of the day, the scientific discovery is really happening on earth with the scientists. And so, whether it’s a machine operating remotely or by cognitive assistance, it’s always part of a human-machine team. That’s just something I wanted to amplify that Francesca said.

But coming back to the gaps, a lot of times I think what we’re missing in our conversations is getting some structure around the role of uncertainty in these agents that we’re trying to create that are going to help achieve that bright future that Francesca was referring to. To help us think about this at APL, we think about agents as needing to perceive, decide, act in teams. This is a framework that just helps us understand these general capabilities that we’ll need and to start thinking about the role of uncertainty, and then combinations of learning and reasoning that would help agents to deal with that. And so, if you think about an agent pursuing goals, the first thing it has to do is get an understanding of the world states. This is this task of perception.

We often talk about, well, if an agent sees this or that, or if an agent finds itself in this situation, we want it to behave this way. Obviously, the trolley problem is an example we revisit often. I won’t go into the details there, but the question is, I think, given some imperfect observation of the world, how does the structure of that uncertainty factor into the correct functioning of the agent in that situation? And then, how does that factor into the ethical, I’ll say, choices or data-driven responses that an agent might have to that situation?

Then we talk about decision making. An agent has goals. In order to act on its goals, it has to decide about how certain sequences of actions would affect future states of the world. And then again how, in the context of an uncertain world, is the agent going to go about accurately evaluating possible future actions when it’s outside of a gaming environment, for example. How does uncertainty play into that and its evaluation of possible actions? And then in the carrying out of those actions, there may be physical reasoning, geometric reasoning that has to happen. For example, if an agent is going to act in a physical space, or reasoning about a cyber-physical environment where there’s critical infrastructure that needs to be protected or something like that.

And then finally, to Francesca’s point, the interactions, or the teaming with other agents that may be teammates or actually may be adversarial. And so, how does the reasoning about what my teammates might be intending to do, what the state of my teammates might be in terms of cognitive load if it’s a human teammate, what might the intent of adversarial agents be in confounding or interfering with the goals of the human-machine team?

And so, to recap a little bit, I think this notion of machines dealing with uncertainty in real world situations is one of the key challenges that we need to deal with over the coming decades. And so, I think having more explicit conversations about how uncertainty manifests in these situations, how you deal with it in the context of the real world operation of an AI-enabled system, and then how we give structure to the uncertainty in a way that should inform our ethical reasoning about the operation of these systems. I think that’s a very worthy area of focus for us over the coming decades.

Ariel: Could you walk us through a specific example of how an AI system might be applied and what sort of uncertainties it might come across?

Ashley: Yeah, sure. So think about the situation where there’s a dangerous environment, let’s say, in a policing action or in a terrorist situation. Hey, there might be hostiles in this building, and right now a human being might have to go into that building to investigate it. We’ll send a team of robots in there to do the investigation of the building to see if it’s safe, and you can think about that situation as analogous for a number of possible different situations.

And now, let’s think about the state of computer vision technology, where straight pattern recognition is hopefully a fair characterization of the state of the art, where we know we can very accurately recognize objects from a given universe of objects in a computer vision feed, for example. Well, now what happens if these agents encounter objects from outside of that universe of training classes? How can we start to bound the performance of the computer vision algorithm with respect to objects from unknown classes? You can start to get a sense from that progression, just from the perception part of that problem, from recognize, of these 200 possible objects, tell me which class it comes from, to having to do vision type tasks in environments that would present many new and novel objects that they may have to perceive and reason about.

You can think about that perception task now as extending to agents that might be in that environment and trying to ascertain from partial observations of what the agents might look like, partial observations of the things they might be doing to try to have some assessment of this is a friendly agent or this is an unfriendly agent, to reasoning about affordances of objects in the environment that might present our systems with ways of dealing with those agents that conform to ethical principles.

That was not a very, very concrete example, but hopefully starts to get one level deeper into the kinds of situations we want to put systems into and the kinds of uncertainty that might arise.

Francesca: To tie to what Ashley just said, we definitely need a lot more ways to have realistic simulations of what can happen in real life. So testbeds, sandboxes, that is definitely needed. But related to that, there is also this ongoing effort — which has already resulted in tools and mechanisms, but many people are still working on it — which is to understand better the error landscape that the machine learning approach may have. We know machine learning always has a small percentage of error in any given situation and that’s okay, but we need to understand what’s the robustness of the system in terms of that error, and also we need to understand the structure of that error space because this information can inform us on what are the most appropriate or less appropriate use cases for the system.

Of course, going from there, this understanding of the error landscape is just one aspect of the need for transparency on the capabilities and limitations of the AI systems when they are deployed. It’s a challenge that spans from academia or research centers to, of course, the business units and the companies developing and delivering AI systems. So that’s why at IBM we are working a lot on this issue of collecting information during the development and the design phase around the properties of the systems, because we think that understanding of this property is very important to really understand what should or should not be done with the system.

And then, of course, there is, as you know, a lot of work around understanding other properties of the system. Like, fairness is one of the values that we may want to inject, but of course it’s not as simple as it looks because there are many, many definitions of fairness and each one is more appropriate or less appropriate in certain scenarios and certain tasks. It is important to identify the right one at the beginning of the design and the development process, and then to inject mechanisms to detect and mitigate bias according to that notion of fairness that we have decided is the correct one for that product.

And so, this brings us also to the other big challenge, which is to help developers understand how to define these notions, these values like fairness that they need to use in developing the system — how to define them not just by themselves within the tech company, but also communicating with the communities that are going to be impacted by these AI product, and that may have something to say on what is the right definition of fairness that they care about. That’s why, for example, another thing that we did, besides developing research and also products, but we also invest a lot in educating developers in trying to help them understand in their everyday jobs how to think about these issues, whether it’s fairness, robustness, transparency, and so on.

And so, we built this very small booklet — we call it the Everyday AI Ethics Guide for Designers and Developers — that raises a lot of questions that should be in their mind in their everyday job. Because as you know, for example, if you don’t think about bias or fairness during these development phases and you just check whether your product is fair or not or when it’s ready to be deployed, then you may discover that actually you need to start from scratch again if you discover that it doesn’t have the right notion of fairness.

Another effort that we really care a lot about in this effort to build teams of humans and machines is the issue of explainability, to make sure that it is possible to understand why these systems are recommending certain decisions. Explainability is something that, especially in this environment of teaming AI machines, is very important, because without this capability of AI systems of explaining why they are recommending certain decision, then the human being part of the team will not in the long run trust the AI system, so will not adopt it possibly. And so we will also lose the positive and beneficial effect of the AI system.

The last thing that I want to say is that this education of developers extends actually much beyond the developers to also the policy makers. That’s why it’s important to have a lot of interaction with policy makers that need to really be educated about the state of the art, about the challenges, about the limits of current AI, in order to understand how to best drive the current technology, to be more and more advanced, but also beneficial and driven towards the beneficial efforts. And what are the right mechanisms to drive the technology into the direction that we want? Still needs a lot more multi-stakeholder discussion to really achieve the best results, I think.

Ashley: Just picking up on a couple of those themes that Francesca raised: first, I just want to touch on simulations. At the applied physics laboratory, one of the core things we do is develop systems for the real world. And so, as the tools of artificial intelligence are evolving, the art and the science of systems engineering is starting to morph into this AI systems engineering regime. And we see simulation as key, more key than it’s ever been, into developing real world systems that are enabled by AI.

One of the things we’re really looking into now is what we call live virtual constructive simulations. These are simulations that you can do distributed learning for agents in a constructive mode where you have highly parallelized learning, but where you actually have links and hooks for live interactions with humans to get the human-machine teaming. And then finally, bridging the gap between simulation and real world where some of the agents represented in the context of the human-machine teaming functionality can be virtual and some can actually be represented by real systems in the real world. And so, we think that these kinds of environments, these live virtual constructive environments, will be important for bridging the gap from simulation to real.

Now, in the context of that is this notion of sharing information. If you think about the complexity of the systems that we’re building, and the complexity and the uncertainty of the real world conditions — whether that’s physical or cyber or what have you — it’s going to be more and more challenging for a single development team to analytically characterize the performance of the system in the context of real-world environment. And so, I think as a community we’re really doing science; We’re performing science, fielding these complex systems in these real-world environments. And so, the more we can make that a collective scientific exploration where we’re setting hypotheses, performing these experiments — these experiments of deploying AI in real world situations — the more quickly we’ll make progress.

And then, finally, I just wanted to talk about accountability, which I think builds on this notion of transparency and explainability. From what I can see — and this is something we don’t talk about enough, I think — is I think we need to change our notion of accountability when it comes to AI-enabled systems. I think our human nature is we want individual accountability for individual decisions and individual actions. If an accident happens, our whole legal system, our whole accountability framework is, “Well, tell me exactly what happened that time,” and I want to get some accountability based on that and I want to see something improve based on that. Whether it’s a plane crash or a car crash, or let’s say there’s corruption in a Fortune 500 company — we want see the CFO fired and we want to see a new person hired.

I think when you look at these algorithms, they’re driven by statistics, and the statistics that drive these models are really not well suited for individual accountability. It’s very hard to establish the validity of a particular answer or classification or something that comes out of the algorithm. Rather, we’re really starting to look at the performance of these algorithms over a period of time. It’s hard to say, “Okay, this AI-enabled system: tell me what happened on Wednesday,” or, “Let me hold you accountable for what happened on Wednesday.” And more so, “Let me hold you accountable for everything that you did during the month of April that resulted in this performance.”

And so, I think our notion of accountability is going to have to embrace this notion of ensemble validity, validity over a collection of activities, actions, decisions. Because right now, I think if you look at the underlying mathematical frameworks for these algorithms, they’re not well supported for this notion of individual accountability for decisions.

Francesca: Accountability is very important. It needs a lot more discussion. This is one of the topic also that we have been discussing in this initiative by the European Commission in defining the AI Ethics Guidelines for Europe, and accountability is one of the seven requirements. But it’s not easy to define what it means. What Ashley said is one possibility: Change our idea of accountability from one specific instance to over several instances. That’s one possibility, but I think that that’s something that needs a lot more discussion with several stakeholders.

Ariel: You’ve both mentioned some things that sound like we’re starting to move in the right direction. Francesca, you talked about getting developers to think about some of the issues like fairness and bias before they start to develop things. You talked about trying to get policy makers more involved. Ashley, you mentioned the live virtual simulations. Looking at where we are today, what are some of the things that you think have been most successful in moving towards a world where we’re considering AI safety more regularly, or completely regularly?

Francesca: First of all, we’ve gone a really long way in a relatively short period of time, and the Future of Life Institute has been instrumental in building the community, and everybody understands that the only approach to address this issue is a multidisciplinary, multi-stakeholder approach. The Future of Life Institute, with the first Puerto Rico conference, showed very clearly that this is the approach to follow. So I think that in terms of building the community that discusses and identifies the issues, I think we have done a lot.

I think that at this point, what we need is greater coordination and also redundancy removal among all these different initiatives. I think we have to find, as a community, the main issues and the main principles and guidelines that we think are needed for the development of more advanced forms of AI, starting from the current state of the art. If you look at the values, at these guidelines or lists of principles around AI ethics from the values initiatives, they are of course different from each other but they have a lot in common. So we really were able to identify these issues, and this identification of the main issues is important as we move forward to more advanced versions of AI.

And then, I think another thing that also we are doing in a rather successful though not complete way is trying to move from research to practice. From high level principles to concrete, develop, and deploy the products that can embed these principles and guidelines into not just the scientific papers that are published, but also into the platform, the services, and the tool kits that companies use with their clients. We needed an initial phase where there were high level discussions about guidelines and principles, but now we are in the second phase where these go and percolate down to the business units and to how products are built and deployed.

Ashley: Yeah, just building on some of Francesca’s comments, I’ve been very inspired by the work of the Future of Life Institute and the burgeoning, I’ll say, emerging AI safety community. Similar to Francesca’s comment, I think that the real frontier here is now taking a lot of that energy, a lot of that academic exploration, research, and analysis and starting to find the intersections of a lot of those explorations with the real systems that we’re building.

You’re definitely seeing within IBM, as Francesca mentioned, within Microsoft, within more applied R & D organizations like Johns Hopkins APL, where I am, internal efforts to try to bridge the gap. And what I really want to try to work to catalyze in the coming years is a broader, more community-wide intersection between the academic research community looking out over the coming centuries and the applied research community that’s looking out over the coming decades, and find the intersection there. How do we start to pose a lot of these longer term challenge problems in the context of real systems that we’re developing?

And maybe we get to examples. Let’s say, for ethics, beyond the trolley problem and into posing problems that are more real-world or closer, better analogies to the kinds of systems we’re developing, the kinds of situations they will find themselves in, and start to give structure to some of the underlying uncertainty. Having our debates informed by those things.

Ariel: I think that transitions really nicely to the next question I want to ask you both, and that is, over the next 5 to 10 years, what do you want to see out of the AI community that you think will be most useful in implementing safety and ethics?

Ashley: I’ll probably sound repetitive, but I really think focusing in on characterizing — I think I like the way Francesca put it — the error landscape of a system as a function of the complex internal states and workings of the system, and the complex and uncertain real-world environments, whether cyber or physical that the system will be operating in, and really get deeper there. It’s probably clear to anyone that works in the space that we really need to fundamentally advance the science and the technology. I’ll start to introduce the word now: trust, as it pertains to AI-enabled systems operating in these complex and uncertain environments. And again, starting to better ground some of our longer-term thinking about AI being beneficial for humanity and grounding those conversations into the realities of the technologies as they stand today and as we hope to develop and advance them over the next few decades.

Francesca: Trust means building trust in the technology itself — and so the things that we already mentioned like making sure that it’s fair, value aligned, robust, explainable — but also building trust in those that produce the technology. But then, I mean, this is the current topic: How do we build trust? Because without trust we’re not going to adopt the full potential of the beneficial effect of the technology. It makes sense to also think in parallel, and more in the long-term, what’s the right governance? What’s the right coordination of initiatives around AI and AI ethics? And this is already a discussion that is taking place.

And then, after governance and coordination, it’s also important with more and more advanced versions of AI, to think about our identity, to think about the control issues, to think in general about this vision of the future, the wellbeing of the people, of the society, of the planet. And how to reverse engineer, in some sense, from a vision of the future to what it means in terms of a behavior of the technology, behavior of those that produce the technology, and behavior of those that regulate the technology, and so on.

We need a lot more of this reverse engineering approach, where instead of starting from the current state of the art of the technology and saying, “Okay, these are the properties that I think I want in this technology: fairness, robustness, transparency, and so on, because otherwise I don’t like this technology to be deployed without these properties.” And then see what happens in the next version, more advanced version of the technology, and think about possibly new properties and so on. This is one approach, but the other approach is that, “Okay, this is the vision of life in, I don’t know, 50 years from now. How do I go from that to the kind of the technology, to the direction that I want to push the technology towards to achieve that vision?

Ariel: We are getting a little bit short on time, and I did want to follow up with Ashley about his other job. Basically, Ashley, as far as my understanding, you essentially have a side job as a hip hop artist. I think it would be fun to just talk a little bit in the last couple of minutes that we have about how both you and Francesca see artificial intelligence impacting these more creative fields. Is this something that you see as enhancing artists’ abilities to do more? Do you think there’s a reason for artists to be concerned that AI will soon be a competition for them? What are your thoughts for the future of creativity and AI?

Ashley: Yeah. It’s interesting. As you point out, over the last decade or so, in addition to furthering my career as an engineer, I’ve also been a hip hop artist and I’ve toured around the world and put out some albums.I think where we see the biggest impact of technology on music and creativity, I think, is, one, in the democratization of access to creation. Technology is a lot cheaper. Having a microphone and a recording setup or something like that, from the standpoint of somebody that does vocals like me, is much more accessible to many more people. And then, you see advances and — you know, when I started doing music I would print CDs and press vinyl. There was no iTunes. And just, iTunes has revolutionized how music is accessed by people, and more generally how creative products are accessed by people in streaming, etc. So I think looking backward, we’ve seen most of the impact of technology on those two things: access to the creation and then access to the content.

Looking forward, will those continue to be the dominant factors in terms of how technology is influencing the creation of music, for example? Or will there be something more? Will AI start to become more of a creative partner? We’ll see that and it will be evolutionary. I think we already see technology being a creative partner more and more so over time. A lot of the things that I studied in school — digital signal processing, frequency, selective filtering — a lot of those things are baked into the tools already. And just as we see AI helping to interpret other kinds of signal processing products like radiology scans, we’ll see more and more of that in the creation of music where an AI assistant — for example, if I’m looking for samples from other music — an AI assistant that can comb through a large library of music and find good samples for me. Just as we do with Instagram filters — an AI suggesting good filters for pictures I take on my iPhone — you can see in music AI suggesting good audio filters or good mastering settings or something, given a song that I’m trying to produce or goals that I have for the feel and tone of the product.

And so, already I think as an evolutionary step, not even a revolutionary step, AI becoming more present in the creation of music. I think maybe, as in other application areas, we may see, again, AI being more of a teammate, not only in the creation of the music, but in the playing of the music. I heard an article or a cast on NPR about a piano player that developed an AI accompaniment for himself. And so, as he played in a live show, for example, there would be an AI accompaniment and you could dial back the settings on it in terms of how aggressive it was in rhythm and time, where it situated with respect to the lead performer. Maybe in hip hop we’ll see AI hype men or AI DJs. It’s expensive to travel overseas, and so somebody like me goes overseas to do a show, and instead of bringing a DJ with me, I have an AI program that can select my tracks and add cuts at the right places and things like that. So that was a long-winded answer, but there’s a lot there. Hopefully that was addressing your question.

Ariel: Yeah, absolutely. Francesca, did you have anything you wanted to add about what you think AI can do for creativity?

Francesca: Yeah. I mean, of course I’m less familiar of what AI is already doing right now, but I am aware of many systems from companies into the space of delivering content or music or so on, systems where the AI part is helping humans develop their own creativity even farther. And as Ashley said, I mean, I hope that in the future AI can help us be more creative — even people that maybe are less able than Ashley to be creative themselves. And I hope that this will enhance the creativity of everybody, because this will enhance the creativity, yes, in hip hop or in making songs or in other things, but also I think it will help to solve some very fundamental problems because having a population which is more creative, of course, is more creative in everything.

So in general, I hope that AI will help us human beings be more creative in all aspects of our life besides entertainment — which is of course very, very important for all of us for the wellbeing and so on — but also in all the other aspects of our life. And this is the goal that I think — going to the beginning where I said AI’s purpose should be the one of enhancing our own capabilities. And of course, creativity is also a very important capability that human beings have.

Ariel: Alright. Well, thank you both so much for joining us today. I really enjoyed the conversation.

Francesca: Thank you.

Ashley: Thanks for having me. I really enjoyed it.

Ariel: For all of our listeners, if you have been enjoying this podcast, please take a moment to like it or share it and maybe even give us a good review. And we will be back again next month.

AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

Consciousness is a concept which is at the forefront of much scientific and philosophical thinking. At the same time, there is large disagreement over what consciousness exactly is and whether it can be fully captured by science or is best explained away by a reductionist understanding. Some believe consciousness to be the source of all value and others take it to be a kind of delusion or confusion generated by algorithms in the brain. The Qualia Research Institute takes consciousness to be something substantial and real in the world that they expect can be captured by the language and tools of science and mathematics. To understand this position, we will have to unpack the philosophical motivations which inform this view, the intuition pumps which lend themselves to these motivations, and then explore the scientific process of investigation which is born of these considerations. Whether you take consciousness to be something real or illusory, the implications of these possibilities certainly have tremendous moral and empirical implications for life’s purpose and role in the universe. Is existence without consciousness meaningful?

In this podcast, Lucas spoke with Mike Johnson and Andrés Gómez Emilsson of the Qualia Research Institute. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder. Mike is interested in neuroscience, philosophy of mind, and complexity theory.

Topics discussed in this episode include:

  • Functionalism and qualia realism
  • Views that are skeptical of consciousness
  • What we mean by consciousness
  • Consciousness and casuality
  • Marr’s levels of analysis
  • Core problem areas in thinking about consciousness
  • The Symmetry Theory of Valence
  • AI alignment and consciousness

You can take a short (3 minute) survey to share your feedback about the podcast here.

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can learn more about consciousness research at the Qualia Research InstituteMike‘s blog, and Andrés blog. You can listen to the podcast above or read the transcript below. Thanks to Ian Rusconi for production and edits as well as Scott Hirsh for feedback.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Andrés Gomez Emilsson and Mike Johnson from the Qualia Research Institute. In this episode, we discuss the Qualia Research Institute’s mission and core philosophy. We get into the differences between and arguments for and against functionalism and qualia realism. We discuss definitions of consciousness, how consciousness might be causal, we explore Marr’s Levels of Analysis, we discuss the Symmetry Theory of Valence. We also get into identity and consciousness, and the world, the is-out problem, what this all means for AI alignment and building beautiful futures.

And then end on some fun bits, exploring the potentially large amounts of qualia hidden away in cosmological events, and whether or not our universe is something more like heaven or hell. And remember, if you find this podcast interesting or useful, remember to like, comment, subscribe, and follow us on your preferred listening platform. You can continue to help make this podcast better by participating in a very short survey linked in the description of wherever you might find this podcast. It really helps. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder.

He is interested in neuroscience, philosophy of mind, and complexity theory. And so, without further ado, I give you Mike Johnson and Andrés Gomez Emilsson. So, Mike and Andrés, thank you so much for coming on. Really excited about this conversation and there’s definitely a ton for us to get into here.

Andrés: Thank you so much for having us. It’s a pleasure.

Mike: Yeah, glad to be here.

Lucas: Let’s start off just talking to provide some background about the Qualia Research Institute. If you guys could explain a little bit, your perspective of the mission and base philosophy and vision that you guys have at QRI. If you could share that, that would be great.

Andrés: Yeah, for sure. I think one important point is there’s some people that think that really what matters might have to do with performing particular types of algorithms, or achieving external goals in the world. Broadly speaking, we tend to focus on experience as the source of value, and if you assume that experience is a source of value, then really mapping out what is the set of possible experiences, what are their computational properties, and above all, how good or bad they feel seems like an ethical and theoretical priority to actually make progress on how to systematically figure out what it is that we should be doing.

Mike: I’ll just add to that, this thing called consciousness seems pretty confusing and strange. We think of it as pre-paradigmatic, much like alchemy. Our vision for what we’re doing is to systematize it and to do to consciousness research what chemistry did to alchemy.

Lucas: To sort of summarize this, you guys are attempting to be very clear about phenomenology. You want to provide a formal structure for understanding and also being able to infer phenomenological states in people. So you guys are realists about consciousness?

Mike: Yes, absolutely.

Lucas: Let’s go ahead and lay some conceptual foundations. On your website, you guys describe QRI’s full stack, so the kinds of metaphysical and philosophical assumptions that you guys are holding to while you’re on this endeavor to mathematically capture consciousness.

Mike: I would say ‘full stack’ talks about how we do philosophy of mind, we do neuroscience, and we’re just getting into neurotechnology with the thought that yeah, if you have a better theory of consciousness, you should be able to have a better theory about the brain. And if you have a better theory about the brain, you should be able to build cooler stuff than you could otherwise. But starting with the philosophy, there’s this conception of qualia of formalism; the idea that phenomenology can be precisely represented mathematically. You borrow the goal from Giulio Tononi’s IIT. We don’t necessarily agree with the specific math involved, but the goal of constructing a mathematical object that is isomorphic to a systems phenomenology would be the correct approach if you want to formalize phenomenology.

And then from there, one of the big questions in how you even start is, what’s the simplest starting point? And here, I think one of our big innovations that is not seen at any other research group is we’ve started with emotional valence and pleasure. We think these are not only very ethically important, but also just literally the easiest place to start reverse engineering.

Lucas: Right, and so this view is also colored by physicalism and quality of structuralism and valence realism. Could you explain some of those things in a non-jargony way?

Mike: Sure. Quality of formalism is this idea that math is the right language to talk about qualia in, and that we can get a precise answer. This is another way of saying that we’re realists about consciousness much as people can be realists about electromagnetism. We’re also valence realists. This refers to how we believe emotional valence, or pain and pleasure, the goodness or badness of an experience. We think this is a natural kind. This concept carves reality at the joints. We have some further thoughts on how to define this mathematically as well.

Lucas: So you guys are physicalists, so you think that basically the causal structure of the world is best understood by physics and that consciousness was always part of the game engine of the universe from the beginning. Ontologically, it was basic and always there in the same sense that the other forces of nature were already in the game engine since the beginning?

Mike: Yeah, I would say so. I personally like the frame of dual aspect monism, but I would also step back a little bit and say there’s two attractors in this discussion. One is the physicalist attractor, and that’s QRI. Another would be the functionalist/computationalist attractor. I think a lot of AI researchers are in this attractor and this is a pretty deep question of, if we want to try to understand what value is, or what’s really going on, or if we want to try to reverse engineer phenomenology, do we pay attention to bits or atoms? What’s more real; bits or atoms?

Lucas: That’s an excellent question. Scientific reductionism here I think is very interesting. Could you guys go ahead and unpack though the skeptics position of your view and broadly adjudicate the merits of each view?

Andrés: Maybe a really important frame here is called Marr’s Levels of Analyses. David Marr was a cognitive scientist, wrote a really influential book in the ’80s called On Vision where he basically creates a schema for how to understand knowledge about, in this particular case, how you actually make sense of the world visually. The framework goes as follows: you have three ways in which you can describe a information processing system. First of all, the competitional/behavioral level. What that is about is understanding the input output mapping of an information processing system. Part of it is also understanding the run-time complexity of the system and under what conditions it’s able to perform its actions. Here an analogy would be with an abacus, for example.

On the computational/behavioral level, what an abacus can do is add, subtract, multiply, divide, and if you’re really creative you can also exponentiate and do other interesting things. Then you have the algorithmic level of analysis, which is a little bit more detailed, and in a sense more constrained. What the algorithm level of analysis is about is figuring out what are the internal representations and possible manipulations of those representations such that you get the input output of mapping described by the first layer. Here you have an interesting relationship where understanding the first layer doesn’t fully constrain the second one. That is to say, there are many systems that have the same input output mapping but that under the hood uses different algorithms.

In the case of the abacus, an algorithm might be something whenever you want to add a number you just push a bead. Whenever you’re done with a row, you push all of the beads backs and then you add a bead in the row underneath. And finally, you have the implementation level of analysis, and that is, what is the system actually made of? How is it constructed? All of these different levels ultimately also map onto different theories of consciousness, and that is basically where in the stack you associate consciousness, or being, or “what matters”. So, for example, behaviorists in the ’50s, they may associate consciousness, if they give any credibility to that term, with the behavioral level. They don’t really care what’s happening inside as long as you have extended pattern of reinforcement learning over many iterations.

What matters is basically how you’re behaving and that’s the crux of who you are. A functionalist will actually care about what algorithms you’re running, how is it that you’re actually transforming the input into the output. Functionalists generally do care about, for example, brain imaging, they do care about the high level algorithms that the brain is running, and generally will be very interested in figuring out these algorithms and generalize them in fields like machine learning and digital neural networks and so on. A physicalist associate consciousness at the implementation level of analysis. How the system is physically constructed, has bearings on what is it like to be that system.

Lucas: So, you guys haven’t said that this was your favorite approach, but if people are familiar with David Chalmers, these seem to be the easy problems, right? And functionalists are interested in just the easy problems and some of them will actually just try to explain consciousness away, right?

Mike: Yeah, I would say so. And I think to try to condense some of the criticism we have of functionalism, I would claim that it looks like a theory of consciousness and can feel like a theory of consciousness, but it may not actually do what we need a theory of consciousness to do; specify which exact phenomenological states are present.

Lucas: Is there not some conceptual partitioning that we need to do between functionalists who believe in qualia or consciousness, and those that are illusionists or want to explain it away or think that it’s a myth?

Mike: I think that there is that partition, and I guess there is a question of how principled the partition you can be, or whether if you chase the ideas down as far as you can, the partition collapses. Either consciousness is a thing that is real in some fundamental sense and I think you can get there with physicalism, or consciousness is more of a process, a leaky abstraction. I think functionalism naturally tugs in that direction. For example, Brian Tomasik has followed this line of reasoning and come to the conclusion of analytic functionalism, which is trying to explain away consciousness.

Lucas: What is your guys’s working definition of consciousness and what does it mean to say that consciousness is real.

Mike: It is a word that’s overloaded. It’s used in many contexts. I would frame it as what it feels like to be something, and something is conscious if there is something it feels like to be that thing.

Andrés: It’s important also to highlight some of its properties. As Mike pointed out, consciousness, it’s used in many different ways. There’s like eight to definitions for the word consciousness, and honestly, all of them are really interesting. Some of them are more fundamental than others and we tend to focus on the more fundamental side of the spectrum for the word. A sense that would be very not fundamental would be consciousness in the sense of social awareness or something like that. We actually think of consciousness much more in terms of qualia; what is it like to be something? What is it like to exist? Some of the key properties of consciousness are as follows: First of all, we do think it exists.

Second, in some sense it has causal power in the sense that the fact that we are conscious matters for evolution, evolution made us conscious for a reason that it’s actually doing some computational legwork that would be maybe possible to do, but just not as efficient or not as conveniently as it is possible with consciousness. Then also you have the property of qualia, the fact that we can experience sights, and colors, and tactile sensations, and thoughts experiences, and emotions, and so on, and all of these are in completely different worlds, and in a sense they are, but they have the property that they can be part of a unified experience that can experience color at the same time as experiencing sound. That sends those different types of sensations, we describe them as the category of consciousness because they can be experienced together.

And finally, you have unity, the fact that you have the capability of experiencing many qualia simultaneously. That’s generally a very strong claim to make, but we think you need to acknowledge and take seriously its unity.

Lucas: What are your guys’s intuition pumps for thinking why consciousness exists as a thing? Why is there a qualia?

Andrés: There’s the metaphysical question of why consciousness exists to begin within. That’s something I would like to punt for the time being. There’s also the question of why was it recruited for information processing purposes in animals? The intuition here is that there are various contrasts that you can have within experience, can serve a computational role. So, there may be a very deep reason why color qualia or visual qualia is used for information processing associated with sight, and why tactile qualia is associated with information processing useful for touching and making haptic representations, and that might have to do with the actual map of how all the qualia values are related to each other. Obviously, you have all of these edge cases, people who are seeing synesthetic.

They may open their eyes and they experience sounds associated with colors, and people tend to think of those as abnormal. I would flip it around and say that we are all synesthetic, it’s just that the synesthesia that we have in general is very evolutionarily adaptive. The reason why you experience colors when you open your eyes is that that type of qualia is really well suited to represent geometrically a projective space. That’s something that naturally comes out of representing the world with the sensory apparatus like eyes. That doesn’t mean that there aren’t other ways of doing it. It’s possible that you could have an offshoot of humans that whenever they opened their eyes, they experience sound and they use that very well to represent the visual world.

But we may very well be in a local maxima of how different types of qualia are used to represent and do certain types of computations in a very well-suited way. It’s like the intuition behind why we’re conscious, is that all of these different contrasts in the structure of the relationship of possible qualia values has computational implications, and there’s actual ways of using this contrast in very computationally effective ways.

Lucas: So, just to channel of the functionalist here, wouldn’t he just say that everything you just said about qualia could be fully reducible to input output and algorithmic information processing? So, why do we need this extra property of qualia?

Andrés: There’s this article, I believe is by Brian Tomasik that basically says, flavors of consciousness are flavors of computation. It might be very useful to do that exercise, where basically you identify color qualia as just a certain type of computation and it may very well be that the geometric structure of color is actually just a particular algorithmic structure, that whenever you have a particular type of algorithmic information processing, you get these geometric plate space. In the case of color, that’s a Euclidean three-dimensional space. In the case of tactile or smell, it might be a much more complicated space, but then it’s in a sense implied by the algorithms that we run. There is a number of good arguments there.

The general approach to how to tackle them is that when it comes down to actually defining what algorithms a given system is running, you will hit a wall when you try to formalize exactly how to do it. So, one example is, how do you determine the scope of an algorithm? When you’re analyzing a physical system and you’re trying to identify what algorithm it is running, are you allowed to basically contemplate 1,000 atoms? Are you allowed to contemplate a million atoms? Where is a natural boundary for you to say, “Whatever is inside here can be part of the same algorithm, but whatever is outside of it can’t.” And, there really isn’t a framing variant way of making those decisions. On the other hand, if you ask to see a qualia with actual physical states, there is a framing variant way of describing what the system is.

Mike: So, a couple of years ago I posted a piece giving a critique of functionalism and one of the examples that I brought up was, if I have a bag of popcorn and I shake the bag of popcorn, did I just torture someone? Did I just run a whole brain emulation of some horrible experience, or did I not? There’s not really an objective way to determine which algorithms a physical system is objectively running. So this is a kind of an unanswerable question from the perspective of functionalism, whereas with the physical theory of consciousness, it would have a clear answer.

Andrés: Another metaphor here he is, let’s say you’re at a park enjoying an ice cream. In this system that I created that has, let’s say isomorphic algorithms to whatever is going on in your brain, the particular algorithms that your brain is running in that precise moment within a functionalist paradigm maps onto a metal ball rolling down one of the paths within these machine in a straight line, not touching anything else. So there’s actually not much going on. According to functionalism, that would have to be equivalent and it would actually be generating your experience. Now the weird thing there is that you could actually break the machine, you could do a lot of things and the behavior of the ball would not change.

Meaning that within functionalism, and to actually understand what a system is doing, you need to understand the counter-factuals of the system. You need to understand, what would the system be doing if the input had been different? And all of a sudden, you end with this very, very gnarly problem of defining, well, how do you actually objectively decide what is the boundary of the system? Even some of these particular states that allegedly are very complicated, the system looks extremely simple, and you can remove a lot of parts without actually modifying its behavior. Then that casts in question whether there is a objective boundary, any known arbitrary boundary that you can draw around the system and say, “Yeah, this is equivalent to what’s going on in your brain,” right now.

This has a very heavy bearing on the binding problem. The binding problem for those who haven’t heard of it is basically, how is it possible that 100 billion neurons just because they’re skull-bound, spatially distributed, how is it possible that they simultaneously contribute to a unified experience as opposed to, for example, neurons in your brain and neurons in my brain contributing to a unified experience? You hit a lot of problems like what is the speed of propagation of information for different states within the brain? I’ll leave it at that for the time being.

Lucas: I would just like to be careful about this intuition here that experience is unified. I think that the intuition pump for that is direct phenomenological experience like experience seems unified, but experience also seems a lot of different ways that aren’t necessarily descriptive of reality, right?

Andrés: You can think of it as different levels of sophistication, where you may start out with a very naive understanding of the world, where you confuse your experience for the world itself. A very large percentage of people perceive the world and in a sense think that they are experiencing the world directly, whereas all the evidence indicates that actually you’re experiencing an internal representation. You can go and dream, you can hallucinate, you can enter interesting meditative states, and those don’t map to external states of the world.

There’s this transition that happens when you realize that in some sense you’re experiencing a world simulation created by your brain, and of course, you’re fooled by it in countless ways, especially when it comes to emotional things that we look at a person and we might have an intuition of what type of person that person is, and that if we’re not careful, we can confuse our intuition, we can confuse our feelings with truth as if we were actually able to sense their souls, so to speak, rather than, “Hey, I’m running some complicated models on people-space and trying to carve out who they are.” There’s definitely a lot of ways in which experience is very deceptive, but here I would actually make an important distinction.

When it comes to intentional content, and intentional content is basically what the experience is about, for example, if you’re looking at a chair, there’s the quality of chairness, the fact that you understand the meaning of chair and so on. That is usually a very deceptive part of experience. There’s another way of looking at experience that I would say is not deceptive, which is the phenomenal character of experience; how it presents itself. You can be deceived about basically what the experience is about, but you cannot be deceived about how you’re having the experience, how you’re experiencing it. You can infer based on a number of experiences that the only way for you to even actually experience a given phenomenal object is to incorporate a lot of that information into a unified representation.

But also, if you just pay attention to your experience that you can simultaneously place your attention in two spots of your visual field and make them harmonized. That’s a phenomenal character and I would say that there’s a strong case to be made to not doubt that property.

Lucas: I’m trying to do my best to channel the functionalist. I think he or she would say, “Okay, so what? That’s just more information processing, and i’ll bite the bullet on the binding problem. I still need some more time to figure that out. So what? It seems like these people who believe in qualia have an even tougher job of trying to explain this extra spooky quality in the world that’s different from all the other physical phenomenon that science has gone into.” It also seems to violate Occam’s razor or a principle of lightness where one’s metaphysics or ontology would want to assume the least amount of extra properties or entities in order to try to explain the world. I’m just really trying to tease out your best arguments here for qualia realism as we do have this current state of things in AI alignment where most people it seems would either try to explain away consciousness, would say it’s an illusion, or they’re anti-realist about qualia.

Mike: That’s a really good question, a really good frame. And I would say our strongest argument revolves around predictive power. Just like centuries ago, you could absolutely be a skeptic about, shall we say, electromagnetism realism. And you could say, “Yeah, I mean there is this thing we call static, and there’s this thing we call lightning, and there’s this thing we call load stones or magnets, but all these things are distinct. And to think that there’s some unifying frame, some deep structure of the universe that would tie all these things together and highly compress these phenomenon, that’s crazy talk.” And so, this is a viable position today to say that about consciousness, that it’s not yet clear whether consciousness has deep structure, but we’re assuming it does, and we think that unlocks a lot of predictive power.

We should be able to make predictions that are both more concise and compressed and crisp than others, and we should be able to make predictions that no one else can.

Lucas: So what is the most powerful here about what you guys are doing? Is it the specific theories and assumptions which you take are falsifiable?

Mike: Yeah.

Lucas: If we can make predictive assessments of these things, which are either leaky abstractions or are qualia, how would we even then be able to arrive at a realist or anti-realist view about qualia?

Mike: So, one frame on this is, it could be that one could explain a lot of things about observed behavior and implicit phenomenology through a purely functionalist or computationalist lens, but maybe for a given system it might take 10 terabytes. And if you can get there in a much simpler way, if you can explain it in terms of three elegant equations instead of 10 terabytes, then it wouldn’t be proof that there exists some crystal clear deep structure at work. But it would be very suggestive. Marr’s Levels of Analysis are pretty helpful here, where a functionalist might actually be very skeptical of consciousness mattering at all because it would say, “Hey, if you’re identifying consciousness at the implementation level of analysis, how could that have any bearing on how we are talking about, how we understand the world, how we’d behave?

Since the implementational level is kind of epiphenomenal from the point of view of the algorithm. How can an algorithm know its own implementation, all it can maybe figure out its own algorithm, and it’s identity would be constrained to its own algorithmic structure.” But that’s not quite true. In fact, there is bearings on one level of analysis onto another, meaning in some cases the implementation level of analysis doesn’t actually matter for the algorithm, but in some cases it does. So, if you were implementing a computer, let’s say with water, you have the option of maybe implementing a Turing machine with water buckets and in that case, okay, the implementation level of analysis goes out the window in terms of it doesn’t really help you understand the algorithm.

But if how you’re using water to implement algorithms is by basically creating this system of adding waves in buckets of different shapes, with different resonant modes, then the implementation level of analysis actually matters a whole lot for what algorithms are … finely tuned to be very effective in that substrate. In the case of consciousness and how we behave, we do think properties of the substrate have a lot of bearings on what algorithms we actually run. A functionalist should actually start caring about consciousness if the properties of consciousness makes the algorithms more efficient, more powerful.

Lucas: But what if qualia and consciousness are substantive real things? What if the epiphenomenonalist true and is like smoke rising from computation and it doesn’t have any causal efficacy?

Mike: To offer a re-frame on this, I like this frame of dual aspect monism better. There seems to be an implicit value judgment on epiphenomenalism. It’s seen as this very bad thing if a theory implies qualia as epiphenomenal. Just to put cards on the table, I think Andrés and I differ a little bit on how we see these things, although I think our ideas also mesh up well. But I would say that under the frame of something like dual aspect monism, that there’s actually one thing that exists, and it has two projections or shadows. And one projection is the physical world such as we can tell, and then the other projection is phenomenology, subjective experience. These are just two sides of the same coin and neither is epiphenomenal to the other. It’s literally just two different angles on the same thing.

And in that sense, qualia values and physical values are really talking about the same thing when you get down to it.

Lucas: Okay. So does this all begin with this move that Descartes makes, where he tries to produce a perfectly rational philosophy or worldview by making no assumptions and then starting with experience? Is this the kind of thing that you guys are doing in taking consciousness or qualia to be something real or serious?

Mike: I can just speak for myself here, but I would say my intuition comes from two places. One is staring deep into the beast of functionalism and realizing that it doesn’t lead to a clear answer. My model is that it just is this thing that looks like an answer but can never even in theory be an answer to how consciousness works. And if we deny consciousness, then we’re left in a tricky place with ethics and moral value. It also seems to leave value on the table in terms of predictions, that if we can assume consciousness as real and make better predictions, then that’s evidence that we should do that.

Lucas: Isn’t that just an argument that it would be potentially epistemically useful for ethics if we could have predictive power about consciousness?

Mike: Yeah. So, let’s assume that it’s 100 years, or 500 years, or 1,000 years in the future, and we’ve finally cracked consciousness. We’ve finally solved it. My open question is, what does the solution look like? If we’re functionalists, what does the solution look like? If we’re physicalists, what does the solution look like? And we can expand this to ethics as well.

Lucas: Just as a conceptual clarification, the functionalists are also physicalists though, right?

Andrés: There is two senses of the word physicalism here. So if there’s physicalism in the sense of like a theory of the universe, that the behavior of matter and energy, what happens in the universe is exhaustively described by the laws of physics, or future physics, there is also physicalism in the sense of understanding consciousness in contrast to functionalism. David Pearce, I think, would describe it as non-materialist physicalist idealism. There’s definitely a very close relationship between that phrasing and dual aspect monism. I can briefly unpack it. Basically non materialist is not saying that the stuff of the world is fundamentally unconscious. That’s something that materialism claims, that what the world is made of is not conscious, is raw matter so to speak.

Andrés: Physicalist, again in the sense of the laws of physics exhaustively describe behavior and idealist in the sense of what makes up the world is qualia or consciousness. The big picture view is that the actual substrate of the universe of quantum fields are fields of qualia.

Lucas: So Mike, you were saying that in the future when we potentially have a solution to the problem of consciousness, that in the end, the functionalists with algorithms and explanations of say all of the easy problems, all of the mechanisms behind the things that we call consciousness, you think that that project will ultimately fail?

Mike: I do believe that, and I guess my gentle challenge to functionalists would be to sketch out a vision of what a satisfying answer to consciousness would be, whether it’s completely explaining it a way or completely explaining it. If in 500 years you go to the local bookstore and you check out consciousness 101, and just flip through it, you look at the headlines and the chapter list and the pictures, what do you see? I think we have an answer as formalists, but I would be very interested in getting the functionalists state on this.

Lucas: All right, so you guys have this belief in the ability to formalize our understanding of consciousness, is this actually contingent on realism or anti realism?

Mike: It is implicitly dependent on realism, that consciousness is real enough to be describable mathematically in a precise sense. And actually that would be my definition of realism, that something is real if we can describe it exactly with mathematics and it is instantiated in the universe. I think the idea of connecting math and consciousness is very core to formalism.

Lucas: What’s particularly interesting here are the you’re making falsifiable claims about phenomenological states. It’s good and exciting that your Symmetry Theory of Valence, which we can get into now has falsifiable aspects. So do you guys want to describe here your Symmetry Theory of Valence and how this fits in and as a consequence of your valence realism?

Andrés: Sure, yeah. I think like one of the key places where this has bearings on is and understanding what is it that we actually want and what is it that we actually like and enjoy. That will be answered in an agent way. So basically you think of agents as entities who spin out possibilities for what actions to take and then they have a way of sorting them by expected utility and then carrying them out. A lot of people may associate what we want or what we like or what we care about at that level, the agent level, whereas we think actually the true source of value is more low level than that. That there’s something else that we’re actually using in order to implement agentive behavior. There’s ways of experiencing value that are completely separated from agents. You don’t actually need to be generating possible actions and evaluating them and enacting them for there to be value or for you to actually be able to enjoy something.

So what we’re examining here is actually what is the lower level property that gives rise even to agentive behavior that underlies every other aspect of experience. These would be a valence and specifically valence gradients. The general claim is that we are set up in such a way that we are basically climbing the valence gradient. This is not true in every situation, but it’s mostly true and it’s definitely mostly true in animals. And then the question becomes what implements valence gradients. Perhaps your intuition is this extraordinary fact that things that have nothing to do with our evolutionary past nonetheless can feel good or bad. So it’s understandable that if you hear somebody scream, you may get nervous or anxious or fearful or if you hear somebody laugh you may feel happy.

That makes sense from an evolutionary point of view, but why would the sound of the Bay Area Rapid Transit, the Bart, which creates these very intense screeching sounds, that is not even within like the vocal range of humans, it’s just really bizarre, never encountered before in our evolutionary past and nonetheless, it has an extraordinarily negative valence. That’s like a hint that valence has to do with patterns, it’s not just goals and actions and utility functions, but the actual pattern of your experience may determine valence. The same goes for a SUBPAC, is this technology that basically renders sounds between 10 and 100 hertz and some of them feel really good, some of them feel pretty unnerving, some of them are anxiety producing and it’s like why would that be the case? Especially when you’re getting two types of input that have nothing to do with our evolutionary past.

It seems that there’s ways of triggering high and low valence states just based on the structure of your experience. The last example I’ll give is very weird states of consciousness like meditation or psychedelics that seem to come with extraordinarily intense and novel forms of experiencing significance or a sense of bliss or pain. And again, they don’t seem to have much semantic content per se or rather the semantic content is not the core reason why they feel that they’re bad. It has to do more with a particular structure that they induce in experience.

Mike: There are many ways to talk about where pain and pleasure come from. We can talk about it in terms of neuro chemicals, opioids, dopamine. We can talk about it in terms of pleasure centers in the brain, in terms of goals and preferences and getting what you want, but all these have counterexamples. All of these have some points that you can follow the thread back to which will beg the question. I think the only way to explain emotional valence, pain and pleasure, that doesn’t beg the question is to explain it in terms of some patterns within phenomenology, just intrinsically feel good and some intrinsically feel bad. To touch back on the formalism brain, this would be saying that if we have a mathematical object that is isomorphic to your phenomenology, to what it feels like to be you, then some pattern or property of this object will refer to or will sort of intrinsically encode you are emotional valence, how pleasant or unpleasant this experiences.

That’s the valence formalism aspect that we’ve come to.

Lucas: So given the valence realism, the view is this intrinsic pleasure, pain axis of the world and this is sort of challenging I guess David Pearce’s view. There are things in experience which are just clearly good seeming or bad seeming. Will MacAskill called these pre theoretic properties we might ascribe to certain kinds of experiential aspects, like they’re just good or bad. So with this valence realism view, this potentiality in this goodness or badness whose nature is sort of self intimatingly disclosed in the physics and in the world since the beginning and now it’s unfolding and expressing itself more so and the universe is sort of coming to life, and embedded somewhere deep within the universe’s structure are these intrinsically good or intrinsically bad valances which complex computational systems and maybe other stuff has access to.

Andrés: Yeah, yeah, that’s right. And I would perhaps emphasize that it’s not only pre-theoretical, it’s pre-agentive, you don’t even need an agent for there to be valence.

Lucas: Right. Okay. This is going to be a good point I think for getting into these other more specific hairy philosophical problems. Could you go ahead and unpack a little bit more this view that pleasure or pain is self intimatingly good or bad that just by existing and experiential relation with the thing its nature is disclosed. Brian Tomasik here, and I think functionalists would say there’s just another reinforcement learning algorithm somewhere before that is just evaluating these phenomenological states. They’re not intrinsically or bad, that’s just what it feels like to be the kind of agent who has that belief.

Andrés: Sure. There’s definitely many angles from which to see this. One of them is by basically realizing that liking, wanting and learning are possible to dissociate, and in particular you’re going to have reinforcement without an associated positive valence. You can have also positive valence without reinforcement or learning. Generally they are correlated but they are different things. My understanding is a lot of people who may think of valence as something we believe matters because you are the type of agent that has a utility function and a reinforcement function. If that was the case, we would expect valence to melt away in states that are non agentive, we wouldn’t necessarily see it. And also that it would be intrinsically tied to intentional content, the aboutness of experience. A very strong counter example is that somebody may claim that really what they truly want this to be academically successful or something like that.

They think of the reward function as intrinsically tied to getting a degree or something like that. I would call that to some extent illusory, that if you actually look at how those preferences are being implemented, that deep down there would be valence gradients happening there. One way to show this would be let’s say the person on the graduation day, you give them an opioid antagonist. The person will subjectively feel that the day is meaningless, you’ve removed the pleasant cream of the experience that they were actually looking for, that they thought all along was tied in with intentional content with the fact of graduating but in fact it was the hedonic gloss that they were after, and that’s kind of like one intuition pump part there.

Lucas: These core problem areas that you’ve identified in Principia Qualia, would you just like to briefly touch on those?

Mike: Yeah, trying to break the problem down into modular pieces with the idea that if we can decompose the problem correctly then the sub problems become much easier than the overall problem and if you collect all the solutions to the sub problem than in aggregate, you get a full solution to the problem of consciousness. So I’ve split things up into the metaphysics, the math and the interpretation. The first question is what metaphysics do you even start with? What ontology do you even try to approach the problem? And we’ve chosen the ontology of physics that can objectively map onto reality in a way that computation can not. Then there’s this question of, okay, so you have your core ontology in this case physics, and then there’s this question of what counts, what actively contributes to consciousness? Do we look at electrons, electromagnetic fields, quarks?

This is an unanswered question. We have hypotheses but we don’t have an answer. Moving into the math, conscious system seemed to have boundaries, if something’s happening inside my head it can directly contribute to my conscious experience. But even if we put our heads together, literally speaking, your consciousness doesn’t bleed over into mine, there seems to be a boundary. So one way of framing this is the boundary problem and one way it’s framing it is the binding problem, and these are just two sides of the same coin. There’s this big puzzle of how do you draw the boundaries of a subject experience. IIT is set up to approach consciousness in itself through this lens that has a certain style of answer, style of approach. We don’t necessarily need to take that approach, but it’s a intellectual landmark. Then we get into things like the state-space problem and the topology of information problem.

If we figured out our basic ontology of what we think is a good starting point and of that stuff, what actively contributes to consciousness, and then we can figure out some principled way to draw a boundary around, okay, this is conscious experience A and this conscious experience B, and they don’t overlap. So you have a bunch of the information inside the boundary. Then there’s this math question of how do you rearrange it into a mathematical object that is isomorphic to what that stuff feels like. And again, IIT has an approach to this, we don’t necessarily ascribe to the exact approach but it’s good to be aware of. There’s also the interpretation problem, which is actually very near and dear to what QRI is working on and this is the concept of if you had a mathematical object that represented what it feels like to be you, how would we even start to figure out what it meant?

Lucas: This is also where the falsifiability comes in, right? If we have the mathematical object and we’re able to formally translate that into phenomenological states, then people can self report on predictions, right?

Mike: Yes. I don’t necessarily fully trust self reports as being the gold standard. I think maybe evolution is tricky sometimes and can lead to inaccurate self report, but at the same time it’s probably pretty good, and it’s the best we have for validating predictions.

Andrés: A lot of this gets easier if we assume that maybe we can be wrong in an absolute sense but we’re often pretty well calibrated to judge relative differences. Maybe you ask me how I’m doing on a scale of one to ten and I say seven and the reality is a five, maybe that’s a problem, but at the same time I like chocolate and if you give me some chocolate and I eat it and that improves my subjective experience and I would expect us to be well calibrated in terms of evaluating whether something is better or worse.

Lucas: There’s this view here though that the brain is not like a classical computer, that it is more like a resonant instrument.

Mike: Yeah. Maybe an analogy here it could be pretty useful. There’s this researcher William Sethares who basically figured out the way to quantify the mutual dissonance between pairs of notes. It turns out that it’s not very hard, all you need to do is add up the pairwise dissonance between every harmonic of the notes. And what that gives you is that if you take for example a major key and you compute the average dissonance between pairs of notes within that major key it’s going to be pretty good on average. And if you take the average dissonance of a minor key it’s going to be higher. So in a sense what distinguishes the minor and a major key is in the combinatorial space of possible permutations of notes, how frequently are they dissonant versus consonant.

That’s a very ground truth mathematical feature of a musical instrument and that’s going to be different from one instrument to the next. With that as a backdrop, we think of the brain and in particular valence in a very similar light that the brain has natural resonant modes and emotions may seem externally complicated. When you’re having a very complicated emotion and we ask you to describe it it’s almost like trying to describe a moment in a symphony, this very complicated composition and how do you even go about it. But deep down the reason why a particular frame sounds pleasant or unpleasant within music is ultimately tractable to the additive per wise dissonance of all of those harmonics. And likewise for a given state of consciousness we suspect that very similar to music the average pairwise dissonance between the harmonics present on a given point in time will be strongly related to how unpleasant the experience is.

These are electromagnetic waves and it’s not exactly like a static or it’s not exactly a standing wave either, but it gets really close to it. So basically what this is saying is there’s this excitation inhibition wave function and that happens statistically across macroscopic regions of the brain. There’s only a discrete number of ways in which that way we can fit an integer number of times in the brain. We’ll give you a link to the actual visualizations for what this looks like. There’s like a concrete example, one of the harmonics with the lowest frequency is basically a very simple one where interviewer hemispheres are alternatingly more excited versus inhibited. So that will be a low frequency harmonic because it is very spatially large waves, an alternating pattern of excitation. Much higher frequency harmonics are much more detailed and obviously hard to describe, but visually generally speaking, the spatial regions that are activated versus inhibited are these very thin wave fronts.

It’s not a mechanical wave as such, it’s a electromagnetic wave. So it would actually be the electric potential in each of these regions of the brain fluctuates, and within this paradigm on any given point in time you can describe a brain state as a weighted sum of all of its harmonics, and what that weighted sum looks like depends on your state of consciousness.

Lucas: Sorry, I’m getting a little caught up here on enjoying resonant sounds and then also the valence realism. The view isn’t that all minds will enjoy resonant things because happiness is like a fundamental valence thing of the world and all brains who come out of evolution should probably enjoy resonance.

Mike: It’s less about the stimulus, it’s less about the exact signal and it’s more about the effect of the signal on our brains. The resonance that matters, the resonance that counts, or the harmony that counts we’d say, or in a precisely technical term, the consonance that counts is the stuff that happens inside our brains. Empirically speaking most signals that involve a lot of harmony create more internal consonance in these natural brain harmonics than for example, dissonant stimuli. But the stuff that counts is inside the head, not the stuff that is going in our ears.

Just to be clear about QRI’s move here, Selen Atasoy has put forth this connecting specific harmonic wave model and what we’ve done is combined it with our symmetry threory of valence and said this is sort of a way of basically getting a foyer transform of where the energy is in terms of frequencies of brainwaves in a much cleaner way that has been available through EEG. Basically we can evaluate this data set for harmony. How much harmony is there in a brain, the link to the Symmetry Theory of Valence than it should be a very good proxy for how pleasant it is to be that brain.

Lucas: Wonderful.

Andrés: In this context, yeah, the Symmetry Theory of Valence would be much more fundamental. There’s probably many ways of generating states of consciousness that are in a sense completely unnatural that are not based on the harmonics of the brain, but we suspect the bulk of the differences in states of consciousness would cash out in differences in brain harmonics because that’s a very efficient way of modulating the symmetry of the state.

Mike: Basically, music can be thought of as a very sophisticated way to hack our brains into a state of greater consonance, greater harmony.

Lucas: All right. People should check out your Principia Qualia, which is the work that you’ve done that captures a lot of this well. Is there anywhere else that you’d like to refer people to for the specifics?

Mike: Principia qualia covers the philosophical framework and the symmetry theory of valence. Andrés has written deeply about this connectome specific harmonic wave frame and the name of that piece is Quantifying Bliss.

Lucas: Great. I would love to be able to quantify bliss and instantiate it everywhere. Let’s jump in here into a few problems and framings of consciousness. I’m just curious to see if you guys have any comments on ,the first is what you call the real problem of consciousness and the second one is what David Chalmers calls the Meta problem of consciousness. Would you like to go ahead and start off here with just this real problem of consciousness?

Mike: Yeah. So this gets to something we were talking about previously, is consciousness real or is it not? Is it something to be explained or to be explained away? This cashes out in terms of is it something that can be formalized or is it intrinsically fuzzy? I’m calling this the real problem of consciousness, and a lot depends on the answer to this. There are so many different ways to approach consciousness and hundreds, perhaps thousands of different carvings of the problem, panpsychism, we have dualism, we have non materialist physicalism and so on. I think essentially the core distinction, all of these theories sort themselves into two buckets, and that’s is consciousness real enough to formalize exactly or not. This frame is perhaps the most useful frame to use to evaluate theories of consciousness.

Lucas: And then there’s a Meta problem of consciousness which is quite funny, it’s basically like why have we been talking about consciousness for the past hour and what’s all this stuff about qualia and happiness and sadness? Why do people make claims about consciousness? Why does it seem to us that there is maybe something like a hard problem of consciousness, why is it that we experience phenomenological states? Why isn’t everything going on with the lights off?

Mike: I think this is a very clever move by David Chalmers. It’s a way to try to unify the field and get people to talk to each other, which is not so easy in the field. The Meta problem of consciousness doesn’t necessarily solve anything but it tries to inclusively start the conversation.

Andrés: The common move that people make here is all of these crazy things that we think about consciousness and talk about consciousness, that’s just any information processing system modeling its own attentional dynamics. That’s one illusionist frame, but even within qualia realist, qualia formalist paradigm, you still have the question of why do we even think or self reflect about consciousness. You could very well think of consciousness as being computationally relevant, you need to have consciousness and so on, but still lacking introspective access. You could have these complicated conscious information processing systems, but they don’t necessarily self reflect on the quality of their own consciousness. That property is important to model and make sense of.

We have a few formalisms that may give rise to some insight into how self reflectivity happens and in particular how is it possible to model the entirety of your state of consciousness in a given phenomenal object. These ties in with the notion of a homonculei, if the overall valence of your consciousness is actually a signal traditionally used for fitness evaluation, detecting basically when are you in existential risk to yourself or when there’s like reproductive opportunities that you may be missing out on, that it makes sense for there to be a general thermostat of the overall experience where you can just look at it and you get a a sense of the overall well being of the entire experience added together in such a way that you experienced them all at once.

I think like a lot of the puzzlement has to do with that internal self model of the overall well being of the experience, which is something that we are evolutionarily incentivized to actually summarize and be able to see at a glance.

Lucas: So, some people have a view where human beings are conscious and they assume everyone else is conscious and they think that the only place for value to reside is within consciousness, and that a world without consciousness is actually a world without any meaning or value. Even if we think that say philosophical zombies or people who are functionally identical to us but with no qualia or phenomenological states or experiential states, even if we think that those are conceivable, then it would seem that there would be no value in a world of p-zombies. So I guess my question is why does phenomenology matter? Why does the phenomenological modality of pain and pleasure or valence have some sort of special ethical or experiential status unlike qualia like red or blue?

Why does red or blue not disclose some sort of intrinsic value in the same way that my suffering does or my bliss does or the suffering or bliss of other people?

Mike: My intuition is also that consciousness is necessary for value. Nick Bostrom has this wonderful quote in super intelligence that we should be wary of building a Disneyland with no children, some technological wonderland that is filled with marvels of function but doesn’t have any subjective experience, doesn’t have anyone to enjoy it basically. I would just say that I think that most AI safety research is focused around making sure there is a Disneyland, making sure, for example, that we don’t just get turned into something like paperclips. But there’s this other problem, making sure there are children, making sure there are subjective experiences around to enjoy the future. I would say that there aren’t many live research threads on this problem and I see QRI as a live research thread on how to make sure there is subject experience in the future.

Probably a can of worms there, but as your question about in pleasure, I may pass that to my colleague Andrés.

Andrés: Nothing terribly satisfying here. I would go with David Pearce’s view that these properties of experience are self intimating and to the extent that you do believe in value, it will come up as the natural focal points for value, especially if you’re allowed to basically probe the quality of your experience where in many states you believe that the reason why you like something is for intentional content. Again, the case of graduating or it could be the the case of getting a promotion or one of those things that a lot of people associate, with feeling great, but if you actually probe the quality of experience, you will realize that there is this component of it which is its hedonic gloss and you can manipulate it directly again with things like opiate antagonists and if the symmetry theory of valence is true, potentially also by directly modulating the consonance and dissonance of the brain harmonics, in which case the hedonic gloss would change in peculiar ways.

When it comes to concealiance, when it comes to many different points of view, agreeing on what aspect of the experience is what brings value to it, it seems to be the hedonic gloss.

Lucas: So in terms of qualia and valence realism, would the causal properties of qualia be the thing that would show any arbitrary mind the self intimating nature of how good or bad an experience is, and in the space of all possible minds, what is the correct epistemological mechanism for evaluating the moral status of experiential or qualitative states?

Mike: So first of all, I would say that my focus so far has mostly been on describing what is and not what ought. I think that we can talk about valence without necessarily talking about ethics, but if we can talk about valence clearly, that certainly makes some questions in ethics and some frameworks in ethics make much more or less than. So the better we can clearly describe and purely descriptively talk about consciousness, the easier I think a lot of these ethical questions get. I’m trying hard not to privilege any ethical theory. I want to talk about reality. I want to talk about what exists, what’s real and what the structure of what exists is, and I think if we succeed at that then all these other questions about ethics and morality get much, much easier. I do think that there is an implicit should wrapped up in questions about valence, but I do think that’s another leap.

You can accept the valence is real without necessarily accepting that optimizing valence is an ethical imperative. I personally think, yes, it is very ethically important, but it is possible to take a purely descriptive frame to valence, that whether or not this also discloses, as David Pearce said, the utility function of the universe. That is another question and can be decomposed.

Andrés: One framing here too is that we do suspect valence is going to be the thing that matters up on any mind if you probe it in the right way in order to achieve reflective equilibrium. There’s the biggest example of a talk and neuro scientist was giving at some point, there was something off and everybody seemed to be a little bit anxious or irritated and nobody knew why and then one of the conference organizers suddenly came up to the presenter and did something to the microphone and then everything sounded way better and everybody was way happier. There was these very sorrow hissing pattern caused by some malfunction of the microphone and it was making everybody irritated, they just didn’t realize that was the source of the irritation, and when it got fixed then you know everybody’s like, “Oh, that’s why I was feeling upset.”

We will find that to be the case over and over when it comes to improving valence. So like somebody in the year 2050 might come up to one of the connectome specific harmonic wave clinics, “I don’t know what’s wrong with me,” but if you put them through the scanner we identify your 17th and 19th harmonic in a state of dissonance. We cancel 17th to make it more clean, and then the person who will say all of a sudden like, “Yeah, my problem is fixed. How did you do that?” So I think it’s going to be a lot like that, that the things that puzzle us about why do I prefer these, why do I think this is worse, will all of a sudden become crystal clear from the point of view of valence gradients objectively measured.

Mike: One of my favorite phrases in this context is what you can measure you can manage and if we can actually find the source of dissonance in a brain, then yeah, we can resolve it, and this could open the door for maybe honestly a lot of amazing things, making the human condition just intrinsically better. Also maybe a lot of worrying things, being able to directly manipulate emotions may not necessarily be socially positive on all fronts.

Lucas: So I guess here we can begin to jump into AI alignment and qualia. So we’re building AI systems and they’re getting pretty strong and they’re going to keep getting stronger potentially creating a superintelligence by the end of the century and consciousness and qualia seems to be along the ride for now. So I’d like to discuss a little bit here about more specific places in AI alignment where these views might inform it and direct it.

Mike: Yeah, I would share three problems of AI safety. There’s the technical problem, how do you make a self improving agent that is also predictable and safe. This is a very difficult technical problem. First of all to even make the agent but second of all especially to make it safe, especially if it becomes smarter than we are. There’s also the political problem, even if you have the best technical solution in the world and the sufficiently good technical solution doesn’t mean that it will be put into action in a sane way if we’re not in a reasonable political system. But I would say the third problem is what QRI is most focused on and that’s the philosophical problem. What are we even trying to do here? What is the optimal relationship between AI and humanity and also a couple of specific details here. First of all I think nihilism is absolutely an existential threat and if we can find some antidotes to nihilism through some advanced valence technology that could be enormously helpful for reducing Xrisk.

Lucas: What kind of nihilism or are you talking about here, like nihilism about morality and meaning?

Mike: Yes, I would say so, and just personal nihilism that it feels like nothing matters, so why not do risky things?

Lucas: Whose quote is it, the philosophers question like should you just kill yourself? That’s the yawning abyss of nihilism inviting you in.

Andrés: Albert Camus. The only real philosophical question is whether to commit suicide, whereas how I think of it is the real philosophical question is how to make love last, bringing value to the existence, and if you have value on tap, then the question of whether to kill yourself or not seems really nonsensical.

Lucas: For sure.

Mike: We could also say that right now there aren’t many good shelling points for global coordination. People talk about having global coordination and building AGI would be a great thing but we’re a little light on the details of how to do that. If the clear, comprehensive, useful, practical understanding of consciousness can be built, then this may sort of embody or generate new shelling points that the larger world could self organize around. If we can give people a clear understanding of what is and what could be, then I think we will get a better future that actually gets built.

Lucas: Yeah. Showing what is and what could be is immensely important and powerful. So moving forward with AI alignment as we’re building these more and more complex systems, there’s this needed distinction between unconscious and conscious information processing, if we’re interested in the morality and ethics of suffering and joy and other conscious states. How do you guys see the science of consciousness here, actually being able to distinguish between unconscious and conscious information processing systems?

Mike: There are a few frames here. One is that, yeah, it does seem like the brain does some processing in consciousness and some processing outside of consciousness. And what’s up with that, this could be sort of an interesting frame to explore in terms of avoiding things like mind crime in the AGI or AI space that if there are certain computations which are painful then don’t do them in a way that would be associated with consciousness. It would be very good to have rules of thumb here for how to do that. One interesting could be in the future we might not just have compilers which optimize for speed of processing or minimization of dependent libraries and so on, but could optimize for the valence of the computation on certain hardware. This of course gets into complex questions about computationalism, how hardware dependent this compiler would be and so on.

I think it’s an interesting and important long term frame

Lucas: So just illustrate here I think the ways in which solving or better understanding consciousness will inform AI alignment from present day until super intelligence and beyond.

Mike: I think there’s a lot of confusion about consciousness and a lot of confusion about what kind of thing the value problem is in AI Safety, and there are some novel approaches on the horizon. I was speaking with Stuart Armstrong the last year global and he had some great things to share about his model fragments paradigm. I think this is the right direction. It’s sort of understanding, yeah, human preferences are insane. Just they’re not a consistent formal system.

Lucas: Yeah, we contain multitudes.

Mike: Yes, yes. So first of all understanding what generates them seems valuable. So there’s this frame in AI safety we call the complexity value thesis. I believe Eliezer came up with it in a post on Lesswrong. It’s this frame where human value is very fragile in that it can be thought of as a small area, perhaps even almost a point in a very high dimensional space, say a thousand dimensions. If we go any distance in any direction from this tiny point in this high dimensional space, then we quickly get to something that we wouldn’t think of as very valuable. And maybe if we leave everything the same and take away freedom, this paints a pretty sobering picture of how difficult AI alignment will be.

I think this is perhaps arguably the source of a lot of worry in the community, that not only do we need to make machines that won’t just immediately kill us, but that will preserve our position in this very, very high dimensional space well enough that we keep the same trajectory and that possibly if we move at all, then we may enter a totally different trajectory, that we in 2019 wouldn’t think of as having any value. So this problem becomes very, very intractable. I would just say that there is an alternative frame. The phrasing that I’m playing around with here it is instead of the complexity of value thesis, the unity of value thesis, it could be that many of the things that we find valuable, eating ice cream, living in a just society, having a wonderful interaction with a loved one, all of these have the same underlying neural substrate and empirically this is what effective neuroscience is finding.

Eating a chocolate bar activates same brain regions as a transcendental religious experience. So maybe there’s some sort of elegant compression that can be made and that actually things aren’t so starkly strict. We’re not sort of this point in a super high dimensional space and if we leave the point, then everything of value is trashed forever, but maybe there’s some sort of convergent process that we can follow that we can essentialize. We can make this list of 100 things that humanity values and maybe they all have in common positive valence, and positive valence can sort of be reverse engineered. And to some people this feels like a very scary dystopic scenario, don’t knock it until you’ve tried it, but at the same time there’s a lot of complexity here.

One core frame that the idea of qualia of formalism and valence realism and offer AI safety is that maybe the actual goal is somewhat different than the complexity of value thesis puts forward. Maybe the actual goal is different and in fact easier. I think this could directly inform how we spend our resources on the problem space.

Lucas: Yeah, I was going to say that there exists standing tension between this view of the complexity of all preferences and values that human beings have and then the valence realist view which says that what’s ultimately good or certain experiential or hedonic states. I’m interested and curious about if this valence view is true, whether it’s all just going to turn into hedonium in the end.

Mike: I’m personally a fan of continuity. I think that if we do things right we’ll have plenty of time to get things right and also if we do things wrong then we’ll have plenty of time for things to be wrong. So I’m personally not a fan of big unilateral moves, it’s just getting back to this question of can understanding what is help us, clearly yes.

Andrés: Yeah. I guess one view is we could say preserve optionality and learn what is, and then from there hopefully we’ll be able to better inform oughts and with maintained optionality we’ll be able to choose the right thing. But that will require a cosmic level of coordination.

Mike: Sure. An interesting frame here is whole brain emulation. So whole brain emulation is sort of a frame built around functionalism and it’s a seductive frame I would say. If whole brain emulations wouldn’t necessarily have the same qualia based on hardware considerations as the original humans, there could be some weird lock in effects where if the majority of society turned themselves into p-zombies then it may be hard to go back on that.

Lucas: Yeah. All right. We’re just getting to the end here, I appreciate all of this. You guys have been tremendous and I really enjoyed this. I want to talk about identity in AI alignment. This sort of taxonomy that you’ve developed about open individualism and closed individualism and all of these other things. Would you like to touch on that and talk about implications here in AI alignment as you see it?

Andrés: Yeah. Yeah, for sure. The taxonomy comes from Daniel Kolak, a philosopher and mathematician. It’s a pretty good taxonomy and basically it’s like open individualism, that’s the view that a lot of meditators and mystics and people who take psychedelics often ascribe to, which is that we’re all one consciousness. Another frame is that our true identity is the light of consciousness so to speak. So it doesn’t matter in what form it manifests, it’s always the same fundamental ground of being. Then you have the common sense view, it’s called closed individualism. You start existing when you’re born, you stop existing when you die. You’re just this segment. Some religious actually extend that into the future or past with reincarnation or maybe with heaven.

The sense of ontological distinction between you and others while at the same time ontological continuity from one moment to the next within you. Finally you have this view that’s called empty individualism, which is that you’re just a moment of experience. That’s fairly common among physicists and a lot of people who’ve tried to formalize consciousness, often they converged on empty individualism. I think a lot of theories of ethics and rationality, like the veil of ignorance as a guide or like how do you define the rational decision making as maximizing the expected utility of yourself as an agent, all of those seem to implicitly be based on closed individualism and they’re not necessarily questioning it very much.

On the other hand, if the sense of individual identity of closed individualism doesn’t actually carve nature at its joints as a Buddhist might say, the feeling of continuity of being a separate unique entity is an illusory construction of your phenomenology that casts in a completely different light how to approach rationality itself and even self interest, right? If you start identifying with the light of consciousness rather than your particular instantiation, you will probably care a lot more about what happens to pigs in factory farms because … In so far as they are conscious they are you in a fundamental way. It matters a lot in terms of how to carve out different possible futures, especially when you get into these very tricky situations like, well what if there is mind melding or what if there is the possibility of making perfect copies of yourself?

All of these edge cases are really problematic from the common sense view of identity, but they’re not really a problem from an open individualist or empty individualist point of view. With all of this said, I do personally think there’s probably a way of combining open individualism with valence realism that gives rise to the next step in human rationality where we’re actually trying to really understand what the universe wants so to speak. But I would say that there is a very tricky aspect here that has to do with a game theory. We evolved to believe in close individualism. the fact that it’s evolutionarily adaptive, it’s obviously not an argument for it being fundamentally true, but it does seem to be some kind of a evolutionarily stable point to believe of yourself as who you can affect the most directly in a causal way If you define your boundary that way.

That basically gives you focus on the actual degrees of freedom that you do have, and if you think of society of open individualists, everybody’s altruistically maximally contributing to the universal consciousness, and then you have one close individualist who is just selfishly trying to maybe acquire power just for itself, you can imagine that one view would have a tremendous evolutionary advantage in that context. So I’m not one who just naively advocates for open individualism unreflectively. I think we still have to work out to the game theory of it, how to make it evolutionarily stable and also how to make it ethical. Open question, I do think it’s important to think about and if you take consciousness very seriously, especially within physicalism, that usually will cast huge doubts on the common sense view of identity.

It doesn’t seem like a very plausible view if you actually tried to formalize consciousness.

Mike: The game theory aspect is very interesting. You can think of closed individualism as something evolutionists produced that allows an agent to coordinate very closely with its past and future ourselves. Maybe we can say a little bit about why we’re not by default all empty individualists or open individualists. Empty individualism seems to have a problem where if every slice of conscious experience is its own thing, then why should you even coordinate with your past and future self because they’re not the same as you. So that leads to a problem of defection, and open individualism is everything is the same being so to speak than … As Andrés mentioned that allows free riders, if people are defecting, it doesn’t allow altruist punishment or any way to stop the free ride. There’s interesting game theory here and it also just feeds into the question of how we define our identity in the age of AI, the age of cloning, the age of mind uploading.

This gets very, very tricky very quickly depending on one’s theory of identity. They’re opening themselves up to getting hacked in different ways and so different theories of identity allow different forms of hacking.

Andrés: Yeah, which could be sometimes that’s really good and sometimes really bad. I would make the prediction that not necessarily open individualism in its full fledged form but a weaker sense of identity than closed individualism is likely going to be highly adaptive in the future as people basically have the ability to modify their state of consciousness in much more radical ways. People who just identify with narrow sense of identity will just be in their shells, not try to disturb the local attractor too much. That itself is not necessarily very advantageous. If the things on offer are actually really good, both hedonically and intelligence wise.

I do suspect basically people who are somewhat more open to basically identify with consciousness or at least identify with a broader sense of identity, they will be the people who will be doing more substantial progress, pushing the boundary and creating new cooperation and coordination technology.

Lucas: Wow, I love all that. Seeing closed individualism for what it was has had a tremendous impact on my life and this whole question of identity I think is largely confused for a lot of people. At the beginning you said that open individualism says that we are all one consciousness or something like this, right? For me in identity I’d like to move beyond all distinctions of sameness or differenceness. To say like, oh, we’re all one consciousness to me seems to say we’re all one electromagnetism, which is really to say the consciousness is like an independent feature or property of the world that’s just sort of a ground part of the world and when the world produces agents, consciousness is just an empty identityless property that comes along for the ride.

The same way in which it would be nonsense to say, “Oh, I am these specific atoms, I am just the forces of nature that are bounded within my skin and body” That would be nonsense. In the same way in sense of what we were discussing with consciousness there was the binding problem of the person, the discreteness of the person. Where does the person really begin or end? It seems like these different kinds of individualism have, as you said, epistemic and functional use, but they also, in my view, create a ton of epistemic problems, ethical issues, and in terms of the valence theory, if quality is actually something good or bad, then as David Pearce says, it’s really just an epistemological problem that you don’t have access to other brain states in order to see the self intimating nature of what it’s like to be that thing in that moment.

There’s a sense in which i want to reject all identity as arbitrary and I want to do that in an ultimate way, but then in the conventional way, I agree with you guys that there are these functional and epistemic issues that closed individualism seems to remedy somewhat and is why evolution, I guess selected for it, it’s good for gene propagation and being selfish. But once one sees AI as just a new method of instantiating bliss, it doesn’t matter where the bliss is. Bliss is bliss and there’s no such thing as your bliss or anyone else’s bliss. Bliss is like its own independent feature or property and you don’t really begin or end anywhere. You are like an expression of a 13.7 billion year old system that’s playing out.

The universe is just peopleing all of us at the same time, and when you get this view and you see you as just sort of like the super thin slice of the evolution of consciousness and life, for me it’s like why do I really need to propagate my information into the future? Like I really don’t think there’s anything particularly special about the information of anyone really that exists today. We want to preserve all of the good stuff and propagate those in the future, but people who seek a immortality through AI or seek any kind of continuation of what they believe to be their self is, I just see that all as misguided and I see it as wasting potentially better futures by trying to bring Windows 7 into the world of Windows 10.

Mike: This all gets very muddy when we try to merge human level psychological drives and concepts and adaptations with a fundamental physics level description of what is. I don’t have a clear answer. I would say that it would be great to identify with consciousness itself, but at the same time, that’s not necessarily super easy if you’re suffering from depression or anxiety. So I just think that this is going to be an ongoing negotiation within society and just hopefully we can figure out ways in which everyone can move.

Andrés: There’s an article I wrote it, I just called it consciousness versus replicators. That kind of gets to the heart of this issue, but that sounds a little bit like good and evil, but it really isn’t. The true enemy here is replication for replication’s sake. On the other hand, the only way in which we can ultimately benefit consciousness, at least in a plausible, evolutionarily stable way is through replication. We need to find the balance between replication and benefit of consciousness that makes the whole system stable, good for consciousness and resistant against the factors.

Mike: I would like to say that I really enjoy Max Tegmark’s general frame of you leaving this mathematical universe. One re-frame of what we were just talking about in these terms are there are patterns which have to do with identity and have to do with valence and have to do with many other things. The grand goal is to understand what makes a pattern good or bad and optimize our light cone for those sorts of patterns. This may have some counter intuitive things, maybe closed individualism is actually a very adaptive thing, in the long term it builds robust societies. Could be that that’s not true but I just think that taking the mathematical frame and the long term frame is a very generative approach.

Lucas: Absolutely. Great. I just want to finish up here on two fun things. It seems like good and bad are real in your view. Do we live in heaven or hell?

Mike: Lot of quips that come to mind here. Hell is other people, or nothing is good or bad but thinking makes it so. My pet theory I should say is that we live in something that is perhaps close to heaven as is physically possible. The best of all possible worlds.

Lucas: I don’t always feel that way but why do you think that?

Mike: This gets through the weeds of theories about consciousness. It’s this idea that we tend to think of consciousness on the human scale. Is the human condition good or bad, is the balance of human experience on the good end, the heavenly end or the hellish end. If we do have an objective theory of consciousness, we should be able to point it at things that are not human and even things that are not biological. It may seem like a type error to do this but we should be able to point it at stars and black holes and quantum fuzz. My pet theory, which is totally not validated, but it is falsifiable, and this gets into Bostrom’s simulation hypothesis, it could be that if we tally up the good valence and the bad valence in the universe, that first of all, the human stuff might just be a rounding error.

Most of the value, in this value the positive and negative valence is found elsewhere, not in humanity. And second of all, I have this list in the last appendix of Principia Qualia as well, where could massive amounts of consciousness be hiding in the cosmological sense. I’m very suspicious that the big bang starts with a very symmetrical state, I’ll just leave it there. In a utilitarian sense, if you want to get a sense of whether we live in a place closer to heaven or hell we should actually get a good theory of consciousness and we should point to things that are not humans and cosmological scale events or objects would be very interesting to point it at. This will give a much better clear answer as to whether we live in somewhere closer to heaven or hell than human intuition.

Lucas: All right, great. You guys have been super generous with your time and I’ve really enjoyed this and learned a lot. Is there anything else you guys would like to wrap up on?

Mike: Just I would like to say, yeah, thank you so much for the interview and reaching out and making this happen. It’s been really fun on our side too.

Andrés: Yeah, I think wonderful questions and it’s very rare for an interviewer to have non conventional views of identity to begin with, so it was really fun, really appreciate it.

Lucas: Would you guys like to go ahead and plug anything? What’s the best place to follow you guys, Twitter, Facebook, blogs, website?

Mike: Our website is qualiaresearchinstitute.org and we’re working on getting a PayPal donate button out but in the meantime you can send us some crypto. We’re building out the organization and if you want to read our stuff a lot of it is linked from the website and you can also read my stuff at my blog, opentheory.net and Andrés’ is @qualiacomputing.com.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

End of recorded material

State of AI: Artificial Intelligence, the Military and Increasingly Autonomous Weapons

As artificial intelligence works its way into industries like healthcare and finance, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programs and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suite.

These investments appear to mark the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivizes speed over safety and ethics in the development of new technologies, and as these technologies proliferate it offers no long-term advantage to any one player.

Nevertheless, the development of military AI is accelerating. Below are the current AI arms programs, policies, and positions of seven key players: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea. All information is from State of AI: Artificial intelligence, the military, and increasingly autonomous weapons, a report by Pax.

“PAX calls on states to develop a legally binding instrument that ensures meaningful human control over weapons systems, as soon as possible,” says Daan Kayser, the report’s lead author. “Scientists and tech companies also have a responsibility to prevent these weapons from becoming reality. We all have a role to play in stopping the development of Killer Robots.”

The United States

UN Position

In April 2018, the US underlined the need to develop “a shared understanding of the risk and benefits of this technology before deciding on a specific policy response. We remain convinced that it is premature to embark on negotiating any particular legal or political instrument in 2019.”

AI in the Military

  • In 2014, the Department of Defense released its ‘Third Offset Strategy,’ the aim of which, as described in 2016 by then-Deputy Secretary of Defense “is to exploit all advances in artificial intelligence and autonomy and insert them into DoD’s battle networks (…).”
  • The 2016 report ‘Preparing for the Future of AI’ also refers to the weaponization of AI and notably states: “Given advances in military technology and AI more broadly, scientists, strategists, and military experts all agree that the future of LAWS is difficult to predict and the pace of change is rapid.”
  • In September 2018, the Pentagon committed to spend USD 2 billion over the next five years through the Defense Advanced Research Projects Agency (DARPA) to “develop [the] next wave of AI technologies.”
  • The Advanced Targeting and Lethality Automated System (ATLAS) program, a branch of DARPA, “will use artificial intelligence and machine learning to give ground-combat vehicles autonomous target capabilities.”

Cooperation with the Private Sector

  • Establishing collaboration with private companies can be challenging, as the widely publicized case of Google and Project Maven has shown: Following protests from Google employees, Google stated that it would not renew its contract. Nevertheless, other tech companies such as Clarifai, Amazon and Microsoft still collaborate with the Pentagon on this project.
  • The Project Maven controversy deepened the gap between the AI community and the Pentagon. The government has developed two new initiatives to help bridge this gap.
  • DARPA’s OFFSET program, which has the aim of “using swarms comprising upwards of 250 unmanned aircraft systems (UASs) and/or unmanned ground systems (UGSs) to accomplish diverse missions in complex urban environments,” is being developed in collaboration with a number of universities and start-ups.
  • DARPA’s Squad X Experimentation Program, which aims for human fighters to “have a greater sense of confidence in their autonomous partners, as well as a better understanding of how the autonomous systems would likely act on the battlefield,” is being developed in collaboration with Lockheed Martin Missiles.

China

UN Position

China demonstrated the “desire to negotiate and conclude” a new protocol “to ban the use of fully
autonomous lethal weapons systems.” However, China does not want to ban the development of these
weapons, which has raised questions about its exact position.

AI in the Military

  • There have been calls from within the Chinese government to avoid an AI arms race. The sentiment is echoed in the private sector, where the chairman of Alibaba has said that new technology, including machine learning and artificial intelligence, could lead to a World War III.
  • Despite these concerns, China’s leadership is continuing to pursue the use of AI for military purposes.

Cooperation with the Private Sector

  • To advance military innovation, President Xi Jinping has called for China to follow “the road of military-civil fusion-style innovation,” such that military innovation is integrated into China’s national innovation system. This fusion has been elevated to the level of a national strategy.
  • The People’s Liberation Army (PLA) relies heavily on tech firms and innovative start-ups. The larger AI research organizations in China can be found within the private sector.
  • There are a growing number of collaborations between defense and academic institutions in China. For instance, Tsinghua University launched the Military-Civil Fusion National Defense Peak Technologies Laboratory to create “a platform for the pursuit of dual-use applications of emerging technologies, particularly artificial intelligence.”
  • Regarding the application of artificial intelligence to weapons, China is currently developing “next generation stealth drones,” including, for instance, Ziyan’s Blowfish A2 model. According to the company, this model “autonomously performs more complex combat missions, including fixed-point timing detection, fixed-range reconnaissance, and targeted precision strikes.”

Russia

UN Position

Russia has stated that the debate around lethal autonomous weapons should not ignore their potential benefits, adding that “the concerns regarding LAWS can be addressed through faithful implementation of the existing international legal norms.” Russia has actively tried to limit the number of days allotted for such discussions at the UN.

AI in the Military

  • While Russia does not have a military-only AI strategy yet, it is clearly working towards integrating AI more comprehensively.
  • The Foundation for Advanced Research Projects (the Foundation), which can be seen as the Russian equivalent of DARPA, opened the National Center for the Development of Technology and Basic Elements of Robotics in 2015.
  • At a conference on AI in March 2018, Defense Minister Shoigu pushed for increasing cooperation between military and civilian scientists in developing AI technology, which he stated was crucial for countering “possible threats to the technological and economic security of Russia.”
  • In January 2019, reports emerged that Russia was developing an autonomous drone, which “will be able to take off, accomplish its mission, and land without human interference,” though “weapons use will require human approval.”

Cooperation with the Private Sector

  • A new city named Era, devoted entirely to military innovation, is currently under construction. According to the Kremlin, the “main goal of the research and development planned for the technopolis is the creation of military artificial intelligence systems and supporting technologies.”
  • In 2017, Kalashnikov — Russia’s largest gun manufacturer — announced that it had developed a fully automated combat module based on neural-network technologies that enable it to identify targets and make decisions.

The United Kingdom

UN Position

The UK believes that an “autonomous system is capable of understanding higher level intent and direction.” It suggested that autonomy “confers significant advantages and has existed in weapons systems for decades” and that “evolving human/machine interfaces will allow us to carry out military functions with greater precision and efficiency,” though it added that “the application of lethal force must be directed by a human, and that a human will always be accountable for the decision.” The UK stated that “the current lack of consensus on key themes counts against any legal prohibition,” and that it “would not have any
practical effect.”

AI in the Military

  • A 2018 Ministry of Defense report underlines that the MoD is pursuing modernization “in areas like artificial
    intelligence, machine-learning, man-machine teaming, and automation to deliver the disruptive
    effects we need in this regard.”
  • The MoD has various programs related to AI and autonomy, including the Autonomy program. Activities in this program include algorithm development, artificial intelligence, machine learning, “developing underpinning technologies to enable next generation autonomous military-systems,” and optimization of human autonomy teaming.
  • The Defense Science and Technology Laboratory (Dstl), the MoD’s research arm, launched the AI Lab in 2018.
  • In terms of weaponry, the best-known example of autonomous technology currently under development is the top-secret Taranis armed drone, the “most technically advanced demonstration aircraft ever built in the UK,” according to the MoD.

Cooperation with the Private Sector

  • The MoD has a cross-government organization called the Defense and Security Accelerator (DASA), launched in December 2016. DASA “finds and funds exploitable innovation to support UK defense and security quickly and effectively, and support UK property.”
  • In March 2019, DASA awarded a GBP 2.5 million contract to Blue Bear Systems, as part of the Many Drones Make Light Work project. On this, the director of Blue Bear Systems said, “The ability to deploy a swarm of low cost autonomous systems delivers a new paradigm for battlefield operations.”

France

UN Position

France understands the autonomy of LAWS as total, with no form of human supervision from the moment of activation and no subordination to a chain of command. France stated that a legally binding instrument on the issue would not be appropriate, describing it as neither realistic nor desirable. France did propose a political declaration that would reaffirm fundamental principles and “would underline the need to maintain human control over the ultimate decision of the use of lethal force.”

AI in the Military

  • France’s national AI strategy is detailed in the 2018 Villani Report, which states that “the increasing use of AI in some sensitive areas such as […] in Defense (with the question of autonomous weapons) raises a real society-wide debate and implies an analysis of the issue of human responsibility.”
  • This has been echoed by French Minister for the Armed Forces, Florence Parly, who said that “giving a machine the choice to fire or the decision over life and death is out of the question.”
  • On defense and security, the Villani Report states that the use of AI will be a necessity in the future to ensure security missions, to maintain power over potential opponents, and to maintain France’s position relative to its allies.
  • The Villani Report refers to DARPA as a model, though not with the aim of replicating it. However, the report states that some of DARPA’s methods “should inspire us nonetheless. In particular as regards the President’s wish to set up a European Agency for Disruptive Innovation, enabling funding of emerging technologies and sciences, including AI.”
  • The Villani Report emphasizes the creation of a “civil-military complex of technological innovation, focused on digital technology and more specifically on artificial intelligence.”

Cooperation with the Private Sector

  • In September 2018, the Defense Innovation Agency (DIA) was created as part of the Direction Générale de l’Armement (DGA), France’s arms procurement and technology agency. According to Parly, the new agency “will bring together all the actors of the ministry and all the programs that contribute to defense innovation.”
  • One of the most advanced projects currently underway is the nEUROn unmanned combat air system, developed by French arms producers Dassault on behalf of the DGA, which can fly autonomously for over three hours.
  • Patrice Caine, CEO of Thales, one of France’s largest arms producers, stated in January 2019 that Thales will never pursue “autonomous killing machines,” and is working on a charter of ethics related to AI.

Israel

UN Position

In 2018, Israel stated that the “development of rigid standards or imposing prohibitions to something that is so speculative at this early stage, would be imprudent and may yield an uninformed, misguided result.” Israel underlined that “[w]e should also be aware of the military and humanitarian advantages.”

AI in the Military

  • It is expected that Israeli use of AI tools in the military will increase rapidly in the near future.
  • The main technical unit of the Israeli Defense Forces (IDF) and the engine behind most of its AI developments is called C4i. Within C4i, there is the the Sigma branch, whose “purpose is to develop, research, and implement the latest in artificial intelligence and advanced software research in order to keep the IDF up to date.”
  • The Israeli military deploys weapons with a considerable degree of autonomy. One of the most relevant examples is the Harpy loitering munition, also known as a kamikaze drone: an unmanned aerial vehicle that can fly around for a significant length of time to engage ground targets with an explosive warhead.
  • Israel was one of the first countries to “reveal that it has deployed fully automated robots: self-driving military vehicles to patrol the border with the Palestinian-governed Gaza Strip.”

Cooperation with the Private Sector

  • Public-private partnerships are common in the development of Israel’s military technology. There is a “close connection between the Israeli military and the digital sector,” which is said to be one of the reasons for the country’s AI leadership.
  • Israel Aerospace Industries, one of Israel’s largest arms companies, has long been been developing increasingly autonomous weapons, including the above mentioned Harpy.

South Korea

UN Position

In 2015, South Korea stated that “the discussions on LAWS should not be carried out in a way that can hamper research and development of robotic technology for civilian use,” but that it is “wary of fully autonomous weapons systems that remove meaningful human control from the operation loop, due to the risk of malfunctioning, potential accountability gap and ethical concerns.” In 2018, it raised concerns about limiting civilian applications as well as the positive defense uses of autonomous weapons.

AI in the Military

  • In December 2018, the South Korean Army announced the launch of a research institute focusing on artificial intelligence, entitled the AI Research and Development Center. The aim is to capitalize on cutting-edge technologies for future combat operations and “turn it into the military’s next-generation combat control tower.”
  • South Korea is developing new military units, including the Dronebot Jeontudan (“Warrior”) unit, with the aim of developing and deploying unmanned platforms that incorporate advanced autonomy and other cutting-edge capabilities.
  • South Korea is known to have used the armed SGR-A1 sentry robot, which has operated in the demilitarized zone separating North and South Korea. The robot has both a supervised mode and an unsupervised mode. In the unsupervised mode “the SGR-AI identifies and tracks intruders […], eventually firing at them without any further intervention by human operators.”

Cooperation with the Private Sector

  • Public-private cooperation is an integral part of the military strategy: the plan for the AI Research and Development Center is “to build a network of collaboration with local universities and research entities such as the KAIST [Korea Advanced Institute for Science and Technology] and the Agency for Defense Development.”
  • In September 2018, South Korea’s Defense Acquisition Program Administration (DAPA) launched a new
    strategy to develop its national military-industrial base, with an emphasis on boosting ‘Industry 4.0
    technologies’, such as artificial intelligence, big data analytics and robotics.

To learn more about what’s happening at the UN, check out this article from the Bulletin of the Atomic Scientists.

AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 2)

The space of AI alignment research is highly dynamic, and it’s often difficult to get a bird’s eye view of the landscape. This podcast is the second of two parts attempting to partially remedy this by providing an overview of technical AI alignment efforts. In particular, this episode seeks to continue the discussion from Part 1 by going in more depth with regards to the specific approaches to AI alignment. In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter

Topics discussed in this episode include:

  • Embedded agency
  • The field of “getting AI systems to do what we want”
  • Ambitious value learning
  • Corrigibility, including iterated amplification, debate, and factored cognition
  • AI boxing and impact measures
  • Robustness through verification, adverserial ML, and adverserial examples
  • Interpretability research
  • Comprehensive AI Services
  • Rohin’s relative optimism about the state of AI alignment

You can take a short (3 minute) survey to share your feedback about the podcast here.

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Lucas: Hey everyone, welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today’s episode is the second part of our two part series with Rohin Shah, developing an overview of technical AI alignment efforts. If you haven’t listened to the first part, we highly recommend that you do, as it provides an introduction to the varying approaches discussed here. The second part is focused on exploring AI alignment methodologies in more depth, and nailing down the specifics of the approaches and lenses through which to view the problem.

In this episode, Rohin will begin by moving sequentially through the approaches discussed in the first episode. We’ll start with embedded agency, then discuss the field of getting AI systems to do what we want, and we’ll discuss ambitious value learning alongside this. Next, we’ll move to corrigibility, in particular, iterated amplification, debate, and factored cognition.

Next we’ll discuss placing limits on AI systems, things of this nature would be AI boxing and impact measures. After this we’ll get into robustness which consists of verification, adversarial machine learning, and adversarial examples to name a few.

Next we’ll discuss interpretability research, and finally comprehensive AI services. By listening to the first part of the series, you should have enough context for these materials in the second part. As a bit of announcement, I’d love for this podcast to be particularly useful and interesting for its listeners. So I’ve gone ahead and drafted a short three minute survey that you can find link to on the FLI page for this podcast, or in the description of where you might find this podcast. As always, if you find this podcast interesting or useful, please make sure to like, subscribe and follow us on your preferred listening platform.

For those of you that aren’t already familiar with Rohin, he is a fifth year PhD student in computer science at UC Berkeley with the Center for Human Compatible AI working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week he collects and summarizes recent progress relative to AI alignment in the Alignment Newsletter. With that, we’re going to start off by moving sequentially through the approached just enumerated. All right. Then let’s go ahead and begin with the first one, which I believe was embedded agency.

Rohin: Yeah, so embedded agency. I kind of want to just differ to the embedded agency sequence, because I’m not going to do anywhere near as good a job as that does. But the basic idea is that we would like to have this sort of theory of intelligence, and one major blocker to this is the fact that all of our current theories, most notably, the reinforcement learning make this assumption that there is a nice clean boundary between the environment and the agent. It’s sort of like the agent is playing a video game, and the video game is the environment. There’s no way for the environment to actually affect the agent. The agent has this defined input channel, takes actions, those actions get sent to the video game environment, the video game environment does stuff based on that and creates an observation, and that observation was then sent back to the agent who gets to look at it, and there’s this very nice, clean abstraction there. The agent could be bigger than the video game, in the same way that I’m bigger than tic tac toe.

I can actually simulate the entire game tree of tic tac toe and figure out what the optimal policy for tic tac toe is. It’s actually this cool XKCD that does just show you the entire game tree, it’s great.

So in the same way in the video game setting, the agent can be bigger than the video game environment, in that it can have a perfectly accurate model of the environment and know exactly what its actions are going to do. So there are all of these nice assumptions that we get in video game environment land, but in real world land, these don’t work. If you consider me on the Earth, I cannot have an exact model of the entire environment because the environment contains me inside of it, and there is no way that I can have a perfect model of me inside of me. That’s just not a thing that can happen. Not to mention having a perfect model of the rest of the universe, but we’ll leave that aside even.

There’s the fact that it’s not super clear what exactly my action space is. Once there is now a laptop available to me, does the laptop start talking as part of my action space? Do we only talk about motor commands I can give to my limbs? But then what happens if I suddenly get uploaded and now I just don’t have any lens anymore? What happened to my actions, are they gone? So Embedded Agency broadly factors this question out into four sub problems. I associate them with colors, because that’s what Scott and Abram do in their sequence. The red one is decision theory. Normally decision theory is consider all possible actions to simulate their consequences, choose the one that will lead to the highest expected utility. This is not a thing you can do when you’re an embedded agent, because the environment could depend on what policy you do.

The classic example of this is Newcomb’s problem where part of the environment is all powerful being, Omega. Omega is able to predict you perfectly, so it knows exactly what you’re going to do, and Omega is 100% trustworthy, and all those nice simplifying assumptions. Omega provides you with the following game. He’s going to put two transparent boxes in front of you. The first box will always contain $1,000 dollars, and the second box will either contain a million dollars or nothing, and you can see this because they’re transparent. You’re given the option to either take one of the boxes or both of the boxes, and you just get whatever’s inside of them.

The catch is that Omega only puts the million dollars in the box if he predicts that you would take only the box with the million dollars in it, and not the other box. So now you see the two boxes, and you see that one box has a million dollars, and the other box has a thousand dollars. In that case, should you take both boxes? Or should you just take the box with the million dollars? So the way I’ve set it up right now, it’s logically impossible for you to do anything besides take the million dollars, so maybe you’d say okay, I’m logically required to do this, so maybe that’s not very interesting. But you can relax this to a problem where Omega is 99.999% likely to get the prediction right. Now in some sense you do have agency. You could choose both boxes and it would not be a logical impossibility, and you know, both boxes are there. You can’t change the amounts that are in the boxes now. Man, you should just take both boxes because it’s going to give you $1,000 more. Why would you not do that?

But I claim that the correct thing to do in this situation is to take only one box because the fact that you are the kind of agent who would only take one box is the reason that the one box has a million dollars in it anyway, and if you were the kind of agent that did not take one box, took two boxes instead, you just wouldn’t have seen the million dollars there. So that’s the sort of problem that comes up in embedded decision theory.

Lucas: Even though it’s a thought experiment, there’s a sense though in which the agent in the thought experiment is embedded in a world where he’s making the observation of boxes that have a million dollars in them with genius posing these situations?

Rohin: Yeah.

Lucas: I’m just seeking clarification on the embeddedness of the agent and Newcomb’s problem.

Rohin: The embeddedness is because the environment is able to predict exactly, or with close to perfect accuracy what the agent could do.

Lucas: The genie being the environment?

Rohin: Yeah, Omega is part of the environment. You’ve got you, the agent, and everything else, the environment, and you have to make good decision. We’ve only been talking about how the boundary between agent and environment isn’t actually all that clear. But to the extent that it’s sensible to talk about you being able to choose between actions, we want some sort of theory for how to do that when the environment can contain copies of you. So you could think of Omega as simulating a copy of you and seeing what you would do in this situation before actually presenting you with a choice.

So we’ve got the red decision theory, then we have yellow embedded world models. With embedded world models, the problem that you have is that, so normally in our nice video game environment, we can have an exact model of how the environment is going to respond to our actions, even if we don’t know it initially, we can learn it overtime, and then once we have it, it’s pretty easy to see how you could plan in order to do the optimal thing. You can sort of trial your actions, simulate them all, and then see which one does the best and do that one. This is roughly AIXI works. AIXI is the model of the optimally intelligent RL agent in this four video game environment like settings.

Once you’re in embedded agency land, you cannot have an exact model of the environment because for one thing the environment contains you and you can’t have an exact model of you, but also the environment is large, and you can’t simulate it exactly. The big issue is that it contains you. So how you get any sort of sensible guarantees on what you can do, even though the environment can contain you is the problem off of embedded world models. You still need a world model. It can’t be exact because it contains you. Maybe you could do something hierarchical where things are fuzzy at the top, but then you can go focus in on each particular levels of hierarchy in order to get more and more precise about each particular thing. Maybe this is sufficient? Not clear.

Lucas: So in terms of human beings though, we’re embedded agents that are capable of creating robust world models that are able to think about AI alignment.

Rohin: Yup, but we don’t know how we do it.

Lucas: Okay. Are there any sorts of understandings that we can draw from our experience?

Rohin: Oh yeah, I’m sure there are. There’s a ton of work on this that I’m not that familiar with, and probably a cog psy or psychology or neuroscience, all of these fields I’m sure will have something to say about it. Hierarchical world models in particular are pretty commonly talked about as interesting. I know that there’s a whole field of hierarchical reinforcement learning in AI that’s motivated by this, but I believe it’s also talked about in other areas of academia, and I’m sure there are other insights to be getting from there as well.

Lucas: All right, let’s move on then from hierarchical world models.

Rohin: Okay. Next is blue robust delegation. So with robust delegation, the basic issue here, so we talked about Vingean reflection a little bit in the first podcast. This is a problem that falls under robust delegation. The headline difficulty under robust delegation is that the agent is able to do self improvement, it can reason about itself and do things based on that. So one way you can think of this is that instead of thinking about it as self modification, you can think about it as the agent is constructing a new agent to act at future time steps. So then in that case your agent has the problem of how do I construct an agent for future time steps such that I am happy delegating my decision making to that future agent? That’s why it’s called robust delegation. Vingean reflection in particular is about how can you take an AI system that uses a particular logical theory in order to make inferences and have it move to a stronger logical theory, and actually trust the stronger logical theory to only make correct inferences?

Stated this way, the problem is impossible because lots of theorems, it’s a well known result in logic that a weaker theory can not prove the consistency of well even itself, but also any stronger theory as a corollary. Intuitively in this pretty simple example, we don’t know how to get an agent that can trust a smarter version of itself. You should expect this problem to be hard, right? It’s in some sense dual to the problem that we have of AI alignment where we’re creating something smarter than us, and we need it to pursue the things we want it to pursue, but it’s a lot smarter than us, so it’s hard to tell what it’s going to do.

So I think of this aversion of the AI alignment problem, but apply to the case of some embedded agent reasoning about itself, and making a better version of itself in the future. So I guess we can move on to the green section, which is sub system alignment. The tagline for subsystem alignment would be the embedded agent is going to be made out of parts. Its’ not this sort of unified coherent object. It’s got different pieces inside of it because it’s embedded in the environment, and the environment is made of pieces that make up the agent, and it seems likely that your AI system is going to be made up of different cognitive sub parts, and it’s not clear that those sub parts will integrate together into a unified whole such that unified whole is pursuing a goal that you like.

It could be that each individual sub part has its own goal and they’re all competing with each other in order to further their own goals, and that the aggregate overall behavior is usually good for humans, at least in our current environment. But as the environment changes, which it will due to technological progression, one of the parts might just win out and be optimizing some goal that is not anywhere close to what we wanted. A more concrete example would be one way that you could imagine building a powerful AI system is to have a world model that is awarded for making accurate predictions about what the world will look like, and then you have a decision making model, which has a normal reward function that we program in, and tries to choose actions in order to maximize that reward. So now we have an agent that has two sub systems in it.

You might worry for example that once the world model gets sufficiently powerful, it starts realizing that the decision making thing is depending on my output in order to make decisions. I can trick it into making the world easier to predict. So maybe I give it some models of the world that say make everything look red, or make everything black, then you will get high reward somehow. Then if the agent actually then takes that action and makes everything black, and now everything looks black forever more, then the world model can very easily predict, yeah, no matter what action you take, the world is just going to look black. That’s what the world is now, and that gets the highest possible reward. That’s a somewhat weird story for what could happen. But there’s no real stronger unit that says nope, this will definitely not happen.

Lucas: So in total sort of, what is the work that has been done here on inner optimizers?

Rohin: Clarifying that they could exist. I’m not sure if there has been much work on it.

Lucas: Okay. So this is our fourth cornerstone here in this embedded agency framework, correct?

Rohin: Yup, and that is the last one.

Lucas: So surmising these all together, where does that leave us?

Rohin: So I think my main takeaway is that I am much more strongly agreeing with MIRI that yup, we are confused about how intelligence works. That’s probably it, that we are confused about how intelligence works.

Lucas: What is this picture that I guess is conventionally held of what intelligence is that is wrong? Or confused?

Rohin: I don’t think there’s a thing that’s wrong about the conventional. So you could talk about a definition of intelligence, of being able to achieve arbitrary goals. I think Eliezer says something like cross domain optimization power, and I think that seems broadly fine. It’s more that we don’t know how intelligence is actually implemented, and I don’t think we ever claim to know that, but embedded agency is like we really don’t know it. You might’ve thought that we were making progress on figuring out how intelligence might be implemented with a classical decision theory, or the Von Neumann–Morgenstern utility theorem, or results like value of perfect information and stuff like being always non negative.

You might’ve thought that we were making progress on it, even if we didn’t fully understand it yet, and then you read on method agency and you’re like no, actually there are lots more conceptual problems that we have not even begin to touch yet. Well MIRI has begun to touch them I would say, but we really don’t have good stories for how any of these things work. Classically we just don’t have a description of how intelligence works. MIRI’s like even the small threads of things we thought about how intelligence could work are definitely not the full picture, and there are problems with them.

Lucas: Yeah, I mean just on simple reflection, it seems to me that in terms of the more confused conception of intelligence, it sort of models it more naively as we were discussing before, like the simple agent playing a computer game with these well defined channels going into the computer game environment.

Rohin: Yeah, you could think of AIXI for example as a model of how intelligence could work theoretically. The sequence is like no, this is why I see it as not a sufficient theoretical model.

Lucas: Yeah, I definitely think that it provides an important conceptual shift. So we have these four corner stones, and it’s illuminating in this way, are there any more conclusions or wrap up you’d like to do on embedded agency before we move on?

Rohin: Maybe I just want to add a disclaimer that MIRI is notoriously hard to understand and I don’t think this is different for me. It’s quite plausible that there is a lot of work that MIRI has done, and a lot of progress that MIRI has made that I either don’t know about or know about but don’t properly understand. So I know I’ve been saying I want to differ to people a lot, or I want to be uncertain a lot, but on MIRI I especially want to do so.

Lucas: All right, so let’s move on to the next one within this list.

Rohin: The next one was doing what humans want. How do I summarize that? I read a whole sequence of posts on it. I guess the story for success, to the extent that we have one right now is something like use all of the techniques that we’re developing, or at least the insights from them, if not the particular algorithms to create an AI system that behaves corrigibly. In the sense that it is trying to help us achieve our goals. You might be hopeful about this because we’re creating a bunch of algorithms for it to properly infer our goals and then pursue them, so this seems like a thing that could be done. Now, I don’t think we have a good story for how that happens. I think there are several open problems that show that our current algorithms are insufficient to do this. But it seems plausible that with more research we could get to something like that.

There’s not really a good overall summary of the field because it’s more like a bunch of people separately having a bunch of interesting ideas and insights, and I mentioned a bunch of them in the first part of the podcast already. Mostly because I’m excited about these and I’ve read about them recently, so I just sort of start talking about them whenever they seem even remotely relevant. But to reiterate them, there is the notion of analyzing the human AI system together as pursuing some sort of goal, or being collectively rational as opposed to having an individual AI system that is individually rational. So that’s been somewhat formalized in Cooperative Inverse Reinforcement Learning. Typically with inverse reinforcement learning, so not the cooperative kind, you have a human, the human is sort of exogenous, the AI doesn’t know that they exist, and the human creates a demonstration of the sort of behavior that they want the AI to do. If you’re thinking about robotics, it’s picking up a coffee cup, or something like this. Then the robot just sort of sees this demonstration and comes out of thin air, it’s just data that it gets.

Let’s say that I had executed this demonstration, what reward function would I have been optimizing? And then it figures out a reward function, and then it uses that reward function however it wants. Usually you would then use reinforcement learning to optimize that reward function and recreate the behavior. So that’s normal inverse reinforcement learning. Notably in here is that you’re not considering the human and the robot together as a full collective system. The human is sort of exogenous to the problem, and also notable is that the robot is sort of taking the reward to be something that it has as opposed to something that the human has.

So CIRL basically says, no, no, no, let’s not model it this way. The correct thing to do is to have a two player game that’s cooperative between the human and the robot, and now the human knows the reward function and is going to take actions somehow. They don’t necessarily have to be demonstrations. But the human knows the reward function and will be taking actions. The robot on the other hand does not know the reward function, and it also gets to take actions, and the robot keeps a probability distribution over the reward that the human has, and updates this overtime based on what the human does.

Once you have this, you get this sort of nice, interactive behavior where the human is taking actions that teach the robot about the reward function. The robot learns the reward function over time and then starts helping the human achieve his or her goals. This sort of teaching and learning behavior comes simply under the assumption that the human and the robot are both playing the game optimally, such that the reward function gets optimized as best as possible. So you get this sort of teaching and learning behavior from the normal notion of optimizing a particular objective, just from having the objective be a thing that the human knows, but not a thing that the robot knows. One thing that, I don’t know if CIRL introduced it, but it was one of the key aspects of CIRL was having probability distribution over a reward function, so you’re uncertain about what reward you’re optimizing.

This seems to give a bunch of nice properties. In particular, once the human starts taking actions like trying to shut down the robot, then the robot’s going to think okay, if I knew the correct reward function, I would be helping the human, and given that the human is trying to turn me off, I must be wrong about the reward function, I’m not helping, so I should actually just let the human turn me off, because that’s what would achieve the most reward for the human. So you no longer have this incentive to disable your shutdown button in order to keep optimizing. Now this isn’t exactly right, because better than both of those option is to disable the shutdown button, stop doing whatever it is you were doing because it was clearly bad, and then just observe humans for a while until you can narrow down what their reward function actually is, and then you go and optimize that reward, and behave like a traditional goal directed agent. This sounds bad. It doesn’t actually seem that bad to me under the assumption that the true reward function is a possibility that the robot is considering and has a reasonable amount of support in the prior.

Because in that case, once the AI system eventually narrows down on the reward function, it will be either the true reward function, or a reward function that’s basically indistinguishable from it, because otherwise, there would be some other information that I could gather in order to distinguish between them. So you actually would get good outcomes. Now of course in practice it seems likely that we would not be able to specify the space of reward functions well enough for this to work. I’m not sure about that point. Regardless, it seems like there’s been some sort of conceptual advance here about when the AI’s trying to do something for the human, it doesn’t have the disabling the shutdown button, the survival incentive.

So while maybe reward uncertainty is not exactly the right way to do it, it seems like you could do something analogous that doesn’t have the problems that reward uncertainty does.

One other thing that’s kind of in this vein, but a little bit different is the idea of an AI system that infers and follows human norms, and the reason we might be optimistic about this is because humans seem to be able to infer and follow norms pretty well. I don’t think humans can infer the values that some other human is trying to pursue and then optimize them to lead to good outcomes. We can do that to some extent. Like I can infer that someone is trying to move a cabinet, and then I can go help them move that cabinet. But in terms of their long term values or something, it seems pretty hard to infer and help with those. But norms, we do in fact do infer and follow all the time. So we might think that’s an easier problem, like our AI systems could do it as well.

Then the story for success is basically that with these AI systems, we are able to accelerate technological progress as before, but the AI systems behave in a relatively human like manner. They don’t do really crazy things that a human wouldn’t do, because that would be against our norms. As with the accelerating technological progress, we get to the point where we can colonize space, or whatever else it is you want to do with the feature. Perhaps even along the way we do enough AI alignment research to build an actual aligned superintelligence.

There are problems with this idea. Most notably if you accelerate technological progress, bad things can happen from that, and norm following AI systems would not necessarily stop that from happening. Also to the extent that if you think human society, if left to its own devices would lead to something bad happening in the future, or something catastrophic, then a norm following AI system would probably just make that worse, in that it would accelerate that disaster scenario, without really making it any better.

Lucas: AI systems in a vacuum that are simply norm following seem to have some issues, but it seems like an important tool in the toolkit of AI alignment to have AIs which are capable of modeling and following norms.

Rohin: Yup. That seems right. Definitely agree with that. I don’t think I had mentioned the reference on this. So for this one I would recommend people look at Incomplete Contracting and AI Alignment I believe is the name of the paper by Dylan Hadfield-Menell, and Gillian Hadfield, or also my post about it in the Value Learning Sequence.

So far I’ve been talking about sort of high level conceptual things within the, ‘get AI systems to do what we want.’ There are also a bunch of more concrete technical approaches. It’s like inverse reinforcement learning, deep reinforcement learning from human preferences, and there you basically get a bunch of comparisons of behavior from humans, and use that to infer a reward function that your agent can optimize. There’s recursive reward modeling where you take the task that you are trying to do, and then you consider a new auxiliary task of evaluating your original task. So maybe if you wanted to train an AI system to write fantasy books, well if you were to give human feedback on that, it would be quite expensive because you’d have to read the entire fantasy book and then give feedback. But maybe you could instead outsource the task, even evaluating fantasy books, you could recursively apply this technique and train a bunch of agents that can summarize the plot of a book or comment on the pros of the book, or give a one page summary of the character development.

Then you can use all of these AI systems to help you give feedback on the original AI system that’s trying to write a fantasy book. So that’s a recursive reward modeling. I guess going a bit back into the conceptual territory, I wrote a paper recently on learning preferences from the state of the world. So the intuition there is that the AI systems that we create aren’t just being created into a brand new world. They’re being instantiated in a world where we have already been acting for a long time. So the world is already optimized for our preferences, and as a result, our AI systems can just look at the world and infer quite a lot about our preferences. So we gave an algorithm that did this in some poor environments.

Lucas: Right, so again, this covers the conceptual category of methodologies of AI alignment where we’re trying to get AI systems to do what we want?

Rohin: Yeah, current AI systems in a sort of incremental way, without assuming general intelligence.

Lucas: And there’s all these different methodologies which exist in this context. But again, this is all sort of within this other umbrella of just getting AI to do things we want them to do?

Rohin: Yeah, and you can actually compare across all of these methods on particular environments. This hasn’t really been done so far, but in theory it can be done, and I’m hoping to do it at some point in the future.

Lucas: Okay. So we’ve discussed embedded agency, we’ve discussed this other category of getting AIs to do what we want them to do. Just moving forward here through diving deep on these approaches.

Rohin: I think the next one I wanted to talk about was ambitious value learning. So here the basic idea is that we’re going to build a superintelligent AI system, it’s going to have goals, because that’s what the Von Neumann—Morgenstern theorem tells us is that anything with preferences, if they’re consistent and coherent, which they should be for a superintelligent system, or at least as far as we can tell they should be consistent. Any type system has a utility function. So natural thought, why don’t we just figure out what the right utility function is, and put it into the AI system?

So there’s a lot of good arguments that you’re not going to be able to get the one correct utility function, but I think Stuart’s hope is that you can find one that is sufficiently good or adequate, and put that inside of the AI system. In order to do this, he wants to, I believe the goal is to learn the utility function by looking at both human behavior as well as the algorithm that human brains are implementing. So if you see that the human brain, when it knows that something is going to be sweet, tends to eat more of it. Then you can infer that humans like to eat sweet things. As opposed to humans really dislike eating sweet things, but they’re really bad at optimizing their utility function. In this project of ambitious value learning, you also need to deal with the fact that human preferences can be inconsistent, that the AI system can manipulate the human preferences. The classic example of that would be the AI system could give you a shot of heroin, and that probably change your preferences from I do not want heroin to I do want heroin. So what does it even mean to optimize for human preferences when they can just be changed like that?

So I think the next one was corrigibility and the associated iterated amplification and debate basically. I guess factored cognition as well. To give a very quick recap, the idea with corrigibility is that we would like to build an AI system that is trying to help us, and that’s the property that we should aim for as opposed to an AI system that actually helps us.

One motivation for focusing on this weaker criteria is that it seems quite difficult to create a system that knowably actually helps us, because that means that you need to have confidence that your AI system is never going to make mistakes. It seems like quite a difficult property to guarantee. In addition, if you don’t make some assumption on the environment, then there’s a no free lens theorem that says this is impossible. Now it’s probably reasonable to put some assumption on the environment, but it’s still true that your AI system could have reasonable beliefs based on past experience, and nature still throws it a curve ball, and that leads to some sort of bad outcome happening.

While we would like this to not happen, it also seems hard to avoid, and also probably not that bad. It seems like the worst outcomes come when your superintelligent system is applying all of its intelligence in pursuit of their own goal. That’s the thing that we should really focus on. That conception of what we want to enforce is probably the thing that I’m most excited about. Then there are particular algorithms that are meant to create corrigible agents, assuming we have the capabilities to get general intelligence. So one of these is iterated amplification.

Iterated amplification is really more of a framework to describe particular methods of training systems. In particular, you alternate between amplification and distillation steps. You start off with an agent that we’re going to assume is already aligned. So this could be a human. A human is a pretty slow agent. So the first thing we’re going to do is distill the human down into a fast agent. So we could use something like imitation learning, or maybe inverse reinforcement learning plus reinforcement learning, followed by reinforcement learning or something like that in order to train a neural net or some other AI system that mostly replicates the behavior of our human, and remains aligned. By aligned maybe I mean corrigible actually. We start with a corrigible agent, and then we produce agents that continue to be corrigible.

Probably the resulting agent is going to be a little less capable than the one that you started out with just because if the best you can do is to mimic the agent that you stated with, that gives you exactly as much capabilities as that agent. So if you don’t succeed at properly mimicking, then you’re going to be a little less capable. Then you take this fast agent and you amplify it, such that it becomes a lot more capable, at perhaps the cost of being a lot slower to compute.

One way that you could image doing amplification would be to have a human get a top level task, and for now we’ll assume that the task is question answering, so they get this top level question and they say okay, I could answer this question directly, but let me make use of this fast agent that we have from the last turn. We’ll make a bunch of sub questions that seem relevant for answering the overall question, and ask our distilled agent to answer all of those sub questions, and then using those answers, the human can then make a decision for their top level question. It doesn’t have to be the human. You could also have a distilled agent at the top level if you want.

I think having the human there seems more likely. So with this amplification you’re basically using the agent multiple times, letting it reason for longer in order to get a better result. So the resulting human, plus many copies of the agent system is more capable than the original distilled agent, but also slower. So we started off with something, let’s call it capability level five, and then we distilled it and it became capability level four, but it was a lot faster. Then we amplified it and maybe now it’s capability level eight. But it’s a lot slower. So we can distill it again and get something at capability level seven that’s pretty fast, and then amplify it again and so on and so forth. So the hope is that this would allow us to continually train an agent that can reach arbitrary levels of capabilities that are actually physically possibly while remaining aligned or corrigible the entire time.

Lucas: Yeah, that’s quite a great picture. I’m happy to jump into debate next, and then I’d like to talk about the potential relationship between the two.

Rohin: Sure. So with debate, the key hope that it is based on is that it is easier to point out a lie then it is to lie in the first place. This would be under the assumption that if you’ve got two agents, one agent is trying to lie, the other agent is trying to detect when the first agent is lying, you would need the second to have access to all of the information that the first agent is using to make this decision. But if that can happen, then we would hope that it is easier to detect lies than it is to actually lie. If this is actually true, we can imagine the following training process. We give a top level question, and then we have two AI systems debate what the correct answer to that top level question is. One of the agents initially says the answer is 12, then the other agent says no, the answer is 15, then they start arguing about why the answer should be 12 versus 15.

Eventually, after some amount of time, a human is asked to judge this debate and say which of the AI systems provided more true and useful information. Whichever agent the human decided was correct gets a reward of plus one, and the other one gets a reward of minus one, and is then updated presumably by a gradient decent or something. So the hope here is basically that the two AI system that are debating each other are about equally powerful, such that if any one of them tries to lie or manipulate the user in some way, the other AI system would be able to point that out, and then the human would give the reward to the AI systems that point it out. So at equilibrium you should expect that both of the agents are just trying to give you true and useful information, and they’re not trying to lie, because if they do try to lie, then the other one will point it out.

Lucas: Right. So there’s this important feature here where as you mentioned, the claim is that it’s easier to spot out lies then to make lies, and this sort of asymmetry here is one of the motivations which says that generally it’ll be easier to tell when agents are telling the truth rather than making up a lie.

Rohin: As along as you have another AI system that can point this out. Certainly a super intelligent AI system could lie to me and I wouldn’t be able to tell, probably, but it’s a lot harder for a superintelligent AI system to lie to me when I have another superintelligent AI system that’s trying to point out lies that the first one makes.

Lucas: Right. So now I think we can go ahead and cover its relationship to iterated amplification?

Rohin: Sure. There is actually quite a close relationship between the two, even though it doesn’t seem like it on first site. The hope with both of them is that your AI systems will learn to do human like reasoning, but on a much larger scale than humans can do. In particular, consider the following kind of agent. You have a human who is given a top level question that they have to answer, and that human can create a bunch of sub questions and then delegate each of those sub questions to another copy of the same human, initialized from scratch or something like that so they don’t know what the top level human has thought.

Then they now have to answer the sub question, but they too can delegate to another human further down the line. And so on you can just keep delegating down until you get something that questions are so easy that the human can just straight up answer them. So I’m going to call this structure a deliberation tree, because it’s a sort of tree of considerations such that every node, the answer to that node, it can be computed from the answers to the children nodes, plus a short bit of human reasoning that happened at that node.

In iterated amplification, what’s basically happening is you start with leaf nodes, the human agent. There’s just a human agent, and they can answer questions quickly. Then when you amplify it the first time, you get trees of depth one, where at the top level there’s a human who can then delegate sub questions out, but then those sub questions have to be answered by an agent that was trained to be like a human. So you’ve got something that approximates depth one human deliberation trees. Then after another round of distillation and amplification, you’ve got human delegating to agents that were trained to mimic humans that could delegate to agents that were trained to mimic humans. An approximate version of a depth two deliberation tree.

So iterated amplification is basically just building up the depth of the tree that the agent is approximating. But we hope that these deliberation trees are always just basically implementing corrigible reasoning, and that eventually once they get deep enough, you get arbitrarily strong capabilities.

Lucas: Can you offer some clarification as to why one might expect a group of copies of an AI, plus the human to scale to be able to make sure that during distillation, that alignment is retained?

Rohin: That’s an open problem. Whether you can do a distillation step that does preserve alignment/corrigibility, it’s a thing that Paul in a few recent posts in the Iterated Amplification Sequence, he calls it the reward engineering problem. The hope is that if you believe that the amplified agent is corrigible, then they are going to be smarter than the agent that they are trying to train via distillation. So you can actually just use the amplified agent to create a reward signal in order to train an agent during distillation. Since the amplified agent is smarter than the agent you’re distilling, you could plausibly actually create a reward function that’s not easily gameable, and actually gets the AI system to do what you want. I think that’s the concise lacking nuance story of why you might be optimistic about this.

Lucas: All right.

Rohin: So I’ll move on to how debate is also related to this. So we talk about how iterated amplification is basically like growing the depth of deliberation trees that the agent is approximating. The human part of this is judging any one node and its children. In debate on the other hand, you can imagine the same sort of deliberation tree, although now they’re more like arguments and counter arguments as opposed to considerations and counter considerations. But broadly the same thing. So imagine there’s this actual debate tree of ways the debate could possibly go.

Then you could think of the AI systems as choosing a particular path in the debate tree that makes them most likely to win. The key point is that given that the entire question can be resolved by exponentially size deliberation tree, if the two AI systems are capable of competing this exponential deliberation tree, then optimal play in the debate game is to go along the path that is going to lead in your victory, even given that the other player is trying to win themselves. The relation between iterated amplification and debate is that they both want the agents to implicitly be able to compute this exponential sized deliberation tree that humans could not do, and then use humans to detect a particular part of that tree. In iterated amplification you check a parent and its children. Those nodes, you look at that one section of the debate tree, and you make sure that it looks good, and then debate you look at a particular path on the debate tree and judge whether that path is good. One critique about these methods, is it’s not actually clear that an exponential sized deliberation tree is able to solve all problems that we might care about. Especially if the amount of work done at each node is pretty short, like ten minutes of a stent of a normal human.

One question that you would care about if you wanted to see if an iterated amplification could work is can these exponential sized deliberation trees actually solve hard problems? This is the factored cognition hypothesis. These deliberation trees can in fact solve arbitrarily complex tasks. And Ought is basically working on testing this hypothesis to see whether or not it’s true. It’s like finding the tasks, which seemed hardest to do in this decompositional way, and then seeing if teams of humans can actually figure out how to do them.

Lucas: Do you have an example of what would be one of these tasks that are difficult to decompose?

Rohin: Yeah. Take a bunch of humans who don’t know differential geometry or something, and have them solve the last problem in a textbook on differential geometry. They each only get ten minutes in order to do anything. None of them can read the entire textbook. Because that takes way more than ten minutes. I believe Ought is maybe not looking into that one in particular, that one sounds extremely hard, but they might be doing similar things with books of literature. Like trying to answer questions about a book that no one has actually read.

But I remember that Andreas was actually talking about this particular problem that I mentioned as well. I don’t know if they actually decided to do it.

Lucas: Right. So I mean just generally in this area here, it seems like there are these interesting open questions and considerations about I guess just the general epistemic efficacy of debate. And how good AI and human systems will be at debate, and again also as you just pointed out, whether or not arbitrarily difficult tasks can be solved through this decompositional process. I mean obviously we do have proofs for much simpler things. Why is there a question as to whether or not it would scale? How would it eventually break?

Rohin: With iterated amplification in particular, if you’re starting with humans who have only ten minutes to look at resources and come up with an answer, the particular thing I would say they might not be able to do is take a math textbook that the human did not know already and solve the final problem in it. Iterated amplification, to the extent that it starts with a human who’s limited to ten minutes, and whenever it does a decomposition, does it with a human who’s limited to ten minutes, would inherit all of the same problems if those problems are real problems.

Lucas: Cool, that’s interesting. I don’t have anything to add here then. So happy to keep on moving.

Rohin: I think I want to talk about limited AI systems next. So with AI boxing, the hope is to create a box around an AI system that prevents anything from inside the box from going out, and anything from outside the box from going in, except a particular input, output channels that we explicitly allow. So the hope here is that this allows us to make assumptions about the input, output channels that actually will hold in practice, and can be used to enter that AI’s function more like tool AIs or something, rather than agential AIs that will do bad things.

I think most literature on boxing has had an AI system in a box, and a human operator outside of it. One interesting proposal I’ve heard recently is to have the human also be inside the box, because if you’ve got information pulling from an AI to a human, that’s a pretty big hole in your box. But if the human is actually inside the box, then there’s not as much that can be done. There’s not as much information flowing into or out of the box.

Lucas: What does it actually mean to have the human locked in the box?

Rohin: One proposal would be have a human go inside a secured room from which no information an enter or leave, apart from the fact of whether or not the human has entered or left the room. The human can then interact with the AI as normal. As soon as the human leaves the room, the AI’s reward function is zero. Basically just doesn’t care about anything that happens in the future after that. So in this setting you would hope that the AI system has no incentive to affect the outside world because the only way it can affect the outside world is by sending the human outside. But as soon as it sends the human outside, its reward gets set to zero forever, so it doesn’t care about any of that.

Lucas: But wouldn’t it care before the human went out?

Rohin: Yes, but before the human goes out, there is nothing it can do to influence the outside world because both the human and the AI are in the box. It could influence the human while the human is in the box. But the human also can’t affect the outside world while it’s in the box.

Lucas: Yeah, but the human leaves the box and then can affect the world.

Rohin: Right. But any change to the world that happens after the human leaves, doesn’t make it into the reward function.

Lucas: But it does because the AI system would be planning over time from its initial moment where the reward isn’t set to zero.

Rohin: But your reward can be limited to the next K times steps, or something like this. Basically you’re limited to until the point that the human leaves. That’s the proposal here.

Lucas: I see, okay. That makes sense.

Rohin: The only information leaving the box, assuming security is a box in all the normal ways, is whether or not the human has chosen to leave. Pretty low information, doesn’t seem like an AI system would be able to use that to influence the outside world in some ridiculous way. So that’s boxing.

The next thing on my list for limited AGI is impact measures of side effects. There is also mild optimization and conservative concepts, but let’s start with impact measures. The basic hope is to create some quantification of how much impact a particular action that the AI chooses, has on the world, and to then penalize the AI for having a lot of impact so that it only does low impact things, which presumably will not cause catastrophe. One approach to this relative reachability. With relative reachability, you’re basically trying to not decrease the number of states that you can reach from the current state. So you’re trying to preserve option value. You’re trying to keep the same states reachable.

It’s not okay for you to make one state unreachable as long as you make a different state reachable. You need all of the states that were previously reachable to continue being reachable. The relative part is that the penalty is calculated relative to a baseline that measures what would’ve happened if the AI had done nothing, although there are other possible baselines you could use. The reason you do this is so that we don’t penalize the agent for side affects that happen in the environment. Like maybe I eat a sandwich, and now these states where there’s a sandwich in front of me are no longer accessible because I can’t un-eat a sandwich. We don’t want to penalize our AI system for that impact, because then it’ll try to stop me from eating a sandwich. We want to isolate the impact of the agent as opposed to impact that were happening in the environment anyway. So that’s what we need the relative part.

There is also attainable utility preservation from Alex Turner, which makes two major changes from relative reachability. First, instead of talking about reachability of states, it talk about how much you can achieve different utility functions. So if previously you were able to make lots of paperclips, then you want to make sure that you can still make lots of paperclips. If previously you were able to travel across the world within a day, then you want to still be able to travel across the world in a day. So that’s the first change I would make.

The second change is not only does it penalize decreases in attainable utility, it also penalizes increase in attainable utility. So if previously you could not mine asteroids in order to get their natural resources, you should still not be able to mine asteroids and get their resources. This seems kind of crazy when you first hear it, but the rational for it is that all of the convergent instrumental sub goals are about increases in power of your AI system. For example, for a broad range of utility functions, it is useful to get a lot of resources and a lot of power in order to achieve those utility functions. Well, if you penalize increases in attainable utility, then you’re going to penalize actions that just broadly get more resources, because those are helpful for many, many, many different utility functions.

Similarly, if you were going to be shutdown, but then you disable the shutdown button, well that just makes it much more possible for you to achieve pretty much every utility, because instead of being off, you are still on and can take actions. So that also will get heavily penalized because it led to such a large increase in attainable utilities. So those are I think the two main impact measures that I know of.

Okay, we’re getting to the things where I have less things to say about them, but now we’re at robustness. I mentioned this before, but there are two main challenges with verification. There’s the specification problem, making it computationally efficient, and all of the work is on the computationally efficient side, but I think the hardest part is the specification side, and I’d like to see more people do work on that.

I don’t think anyone is really working on verification with an eye to how to apply it to powerful AI systems. I might be wrong about that. Like I know something people who do care about AI safety who are working on verification, and it’s possible that they have thoughts about this that aren’t published and that I haven’t talked to them about. But the main thing I would want to see is what specifications can we actually give to our verification sub routines. At first glance, this is just the full problem of AI safety. We can’t just give a specification for what we want to an AGI.

What specifications can we get for a verification that’s going to increase our trust in the AI system. For adversarial training, again, all of the work done so far is in the adversarial example space where you try to frame an image classifier to be more robust to adversarial examples, and this kind of work sometimes, but doesn’t work great. For both verification and adversarial training, Paul Christiano has written a few blog posts about how you can apply this to advance AI systems, but I don’t know if anyone actively working on these with AGI in mind. With adversarial examples, there is too much work for me to summarize.

The thing that I find interesting about adversarial examples is that is shows that are we no able to create image classifiers that have learned human preferences. Humans have preferences over how we classify images, and we didn’t succeed at that.

Lucas: That’s funny.

Rohin: I can’t take credit for that framing, that one was due to Ian Goodfellow. But yeah, I see adversarial examples as contributing to a theory of deep learning that tells us how do we get deep learning systems to be closer to what we want them to be rather than these weird things that classify pandas as givens, even when they’re very clearly still pandas.

Lucas: Yeah, the framing’s pretty funny, and makes me feel kind of pessimistic.

Rohin: Maybe if I wanted to inject some optimism back in, there’s a frame under which an adversarial examples happen because our data sets are too small or something. We have some pretty large data sets, but humans do see more and get far richer information than just pixel inputs. We can go feel a chair and build 3D models of a chair through touch in addition to sight. There is actually a lot more information that humans have, and it’s possible that what we need as AI systems is just to have way more information, and are good to narrow it down on the right model.

So let us move on to I think the next thing is interpretability, which I also do not have much to say about, mostly because there is tons and tons of technical research on interpretability, and there is not much on interpretability from an AI alignment perspective. One thing to note with interpretability is you do want to be very careful about how you apply it. If you have a feedback cycle where you’re like I built an AI system, I’m going to use interpretability to check whether it’s good, and then you’re like oh shit, this AI system was bad, it was not making decisions for the right reasons, and then you go and fix your AI system, and then you throw interpretability at it again, and then you’re like oh, no, it’s still bad because of this other reason. If you do this often enough, basically what’s happening is you’re training your AI system to no longer have failures that are obvious to interpretability, and instead you have failures that are not obvious to interpretability, which will probably exist because your AI system seems to have been full of failures anyway.

So I would be pretty pessimistic about the system that interpretability found 10 or 20 different errors in. I would just expect that the resulting AI system has other failure modes that we were not able to uncover with interpretability, and those will at some point trigger and cause bad outcomes.

Lucas: Right, so interpretability will cover things such as super human intelligence interpretability, but also more mundane examples of present day systems correct, where the interpretability of say neural networks is basically, my understand is nowhere right now.

Rohin: Yeah, that’s basically right. There have been some techniques developed like sailiency maps, feature visualization, neural net models that hallucinate explanations post hoc, people have tried a bunch of things. None of them seem especially good, though some of them definitely are giving you more insight than you had before.

So I think that only leaves CAIS. With comprehensive AI service, it’s like a forecast for how AI will develop in the future. It also has some prescriptive aspects to it, like yeah, we should probably not do these things, because these don’t seem very safe, and we can do these other things instead. In particular, CAIS takes a strong stance AGI agents that are God-like fully integrated systems that are optimizing some utility function over the long term future.

It should be noted that it’s arguing against a very specific kind of AGI agent. This sort of long term expected utility maximizer that’s fully integrated and is okay to black box, can be broken down into modular components. That entire cluster of features, it’s what CAIS is talking about when it says AGI agent. So it takes a strong sense against that, saying A, it’s not likely that this is the first superintelligent thing that we built, and B, it’s clearly dangerous. That’s what we’ve been saying the entire time. So here’s a solution, why don’t we just not build it? And we’ll build these other things instead? As for what the other things are, the basic intuition pump here is that if you look at how AI is developed today, there is a bunch of research in development practices that we do. We try out a bunch of models, we try some different ways to clean our data, we try different ways of collecting data sets, and we try different algorithms and so on and so forth, and these research and development practices allow us to create better and better AI systems.

Now, our AI systems currently are also very bounded in their tasks that they do. There are specific tasks, and they do that task and that task alone, they do it in episodic ways. They are only trying to optimize over a bounded amount of time, they use a bounded amount of computation and other resources. So that’s what we’re going to call a service. It’s an AI system that does a bounded task, in bounded time, with bounded computation. Everything is bounded. Now our research and development practices are themselves bound to tasks, and AI has shown itself to be quite good at automating bounded tasks. We’ve definitely not automated all bounded tasks yet, but it does seem like we are in general are pretty good at automating bounded tasks with enough effort. So probably we will also automate research and development tasks.

We’re seeing some of this already with neural architecture search for example, and once AI R and D processes have been sufficiently automated, then we get this cycle where AI systems are doing the research and development needed to improve AI systems, and so we get to this point of recursive improvement that’s not self improvement anymore, because there’s not really an agentic itself to improve, but you do have recursive AI improving AI. So this can lead to the sort of very quick improvement and capabilities that we often associate with superintelligence. With that we can eventually get to a situation where any task that we care about, we could have a service that breaks that task down into a bunch of simple, automatable bounded tasks, and then we can create services that do each of those bounded tasks and interact with each other in order to in tandem complete the long term task.

This is how humans do engineering and building things. We have these research and development things, we have these modular systems that are interacting with each other via a well defined channel, so this seems more likely to be the firs thing that we build that’s capable of super intelligent reasoning rather than an AGI agent that’s optimizing the utility function of a long term, yada, yada, yada.

Lucas: Is there no risk? Because the superintelligence is the distributed network collaborating. So is there no risk for the collective distributed network to create some sort of epiphenomenal optimization effects?

Rohin: Yup, that’s definitely a thing that you should worry about. I know that Erik agrees with me on this because he explicitly lists this out in the tech report as a thing that needs more research and that we should be worried about. But the hope is that there are other things that you can do that normally we wouldn’t think about with technical AI safety research that would make more sense in this context. For example, we could train a predictive model of human approval. Given any scenario, the AI system should predict how much humans are going to like it or approve of it, and then that service can be used in order to check that other services are doing reasonable things.

Similarly, we might look at each individual service and see which of the other services it’s accessing, and then make sure that those are reasonable services. If we see a CEO of paper clip company going and talking to the synthetic biology service, we might be a bit suspicious and be like why is this happening? And then we can go and check to see why exactly that has happened. So there are all of these other things that we could do in this world, which aren’t really options in the AGI agent world.

Lucas: Aren’t they options in the AGI agential world where the architectures are done such that these important decision points are analyzable to the same degree as they would be in a CAIS framework?

Rohin: Not to my knowledge. As far as I can tell, most end to end train things, you might have the architectures be such that there are these points at which you expect that certain kinds of information will be flowing there, but you can’t easily look at the information that’s actually there and deduce what the system is doing. It’s just not interpretable enough to do that.

Lucas: Okay. I don’t think that I have any other questions or interesting points with regards to CAIS. It’s a very different and interesting conception of the kind of AI world that we can create. It seems to require its own new coordination challenge as if your hypothesis is true and that the agential AIs will be afforded more causal power in the world, and more efficiency than sort of the CAIS systems, that’ll give them a competitive advantage that will potentially bias civilization away from CAIS systems.

Rohin: I do want to note that I think the agential AI systems will be more expensive and take longer to develop than CAIS. So I do think CAIS will come first. Again, this is all in a particular world view.

Lucas: Maybe this might be abstracting too large, but does CAIS claim to function as an AI alignment methodology to be used on the long term? Do we retain the sort of CAIS architecture path, CAIS creating super intelligence or some sort of distributed task force?

Rohin: I’m not actually sure. There’s definitely a few chapters in the technical report that are like okay, what if we build AGI agents? How could we make sure that goes well? As long as CAIS comes before AGI systems, here’s what we can do in that setting.

But I feel like I personally think that AGI systems will come. My guess is that Erik does not think that this is necessary, and we could actually just have CAIS systems forever. I don’t really have a model for when to expect AGI separately of the CAIS world. I guess I have a few different potential scenarios that I can consider, and I can compare it to each of those, but it’s not like it’s CAIS and not CAIS. It’s more like it’s CAIS and a whole bunch of other potential scenarios, and in reality it’ll be some mixture of all of them.

Lucas: Okay, that makes more sense. So, there’s sort of an overload here, or just a ton of awesome information with regards to all of these different methodologies and conceptions here. So just looking at all of it, how do you feel about all of these different methodologies in general, and how does AI alignment look to you right now?

Rohin: Pretty optimistic about AI alignment, but I don’t think that’s so much from the particular technical safety research that we have. That’s some of it. I do think that there are promising approaches, and the fact that there are promising approaches makes me more optimistic. But I think more so my optimism comes from the strategic picture. A belief that A, that we will be able to convince people that this is important, such that people start actually focusing on this problem more broadly, and B that we would be able to get a bunch of people to coordinate such that they’re more likely to invest in safety. C, that I don’t place as much weight on the AI systems that are at long term, utility maximizers, and therefor we’re basically all screwed, which seems to be the position of many other people in the field.

I say optimistic. I mean optimistic relative to them. I’m probably pessimistic relative to the average person.

Lucas: A lot of these methodologies are new. Do you have any sort of broad view about how the field is progressing?

Rohin: Not a great one. Mostly because I would consider myself, maybe I’ve just recently stopped being new to the field, so I didn’t really get to observe the field very much in the past, but it seems like there’s been more of a shift towards figuring out how all of the things people were thinking about apply to real machine learning systems, which seems nice. The fact that it does connect is good. I don’t think the connections are super natural, or they just sort of clicked, but they did mostly work out. I’d say in many cases, and that seems pretty good. So yeah, the fact that we’re now doing a combination of theoretical, experimental, and conceptual work seems good.

It’s no longer the case that we’re mostly doing theory. That seems probably good.

Lucas: You’ve mentioned already a lot of really great links in this podcast, places people can go to learn more about these specific approaches and papers and strategies. And one place that is just generally great for people to go is to the Alignment Forum, where a lot of this information already exists. So are there just generally in other places that you recommend people check out if they’re interested in taking more technical deep dives?

Rohin: Probably actually at this point, one of the best places for a technical deep dive is the alignment newsletter database. I write a newsletter every week about AI alignment, all the stuff that’s happened in the past week, that’s the alignment newsletter, not the database, which also people can sign up for, but that’s not really a thing for technical deep dives. It’s more a thing for keeping a pace with developments in the field. But in addition, everything that ever goes into the newsletter is also kept in a separate database. I say database, it’s basically a Google sheets spreadsheet. So if you want to do a technical deep dive on any particular area, you can just go, look for the right category on the spreadsheet, and then just look at all the papers there, and read some or all of them.

Lucas: Yeah, so thanks so much for coming on the podcast Rohin, it was a pleasure to have you, and I really learned a lot and found it to be super valuable. So yeah, thanks again.

Rohin: Yeah, thanks for having me. It was great to be on here.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI alignment series.

End of recorded material

AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 1)

The space of AI alignment research is highly dynamic, and it’s often difficult to get a bird’s eye view of the landscape. This podcast is the first of two parts attempting to partially remedy this by providing an overview of the organizations participating in technical AI research, their specific research directions, and how these approaches all come together to make up the state of technical AI alignment efforts. In this first part, Rohin moves sequentially through the technical research organizations in this space and carves through the field by its varying research philosophies. We also dive into the specifics of many different approaches to AI safety, explore where they disagree, discuss what properties varying approaches attempt to develop/preserve, and hear Rohin’s take on these different approaches.

You can take a short (3 minute) survey to share your feedback about the podcast here.

In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Topics discussed in this episode include:

  • The perspectives of CHAI, MIRI, OpenAI, DeepMind, FHI, and others
  • Where and why they disagree on technical alignment
  • The kinds of properties and features we are trying to ensure in our AI systems
  • What Rohin is excited and optimistic about
  • Rohin’s recommended reading and advice for improving at AI alignment research

Lucas: Hey everyone, welcome back to the AI Alignment podcast. I’m Lucas Perry, and today we’ll be speaking with Rohin Shah. This episode is the first episode of two parts that both seek to provide an overview of the state of AI alignment. In this episode, we cover technical research organizations in the space of AI alignment, their research methodologies and philosophies, how these all come together on our path to beneficial AGI, and Rohin’s take on the state of the field.

As a general bit of announcement, I would love for this podcast to be particularly useful and informative for its listeners, so I’ve gone ahead and drafted a short survey to get a better sense of what can be improved. You can find a link to that survey in the description of wherever you might find this podcast, or on the page for this podcast on the FLI website.

Many of you will already be familiar with Rohin, he is a fourth year PhD student in Computer Science at UC Berkeley with the Center For Human-Compatible AI, working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter. And so, without further ado, I give you Rohin Shah.

Thanks so much for coming on the podcast, Rohin, it’s really a pleasure to have you.

Rohin: Thanks so much for having me on again, I’m excited to be back.

Lucas: Yeah, long time, no see since Puerto Rico Beneficial AGI. And so speaking of Beneficial AGI, you gave quite a good talk there which summarized technical alignment methodologies approaches and broad views, at this time; and that is the subject of this podcast today.

People can go and find that video on YouTube, and I suggest that you watch that; that should be coming out on the FLI YouTube channel in the coming weeks. But for right now, we’re going to be going in more depth, and with more granularity into a lot of these different technical approaches.

So, just to start off, it would be good if you could contextualize this list of technical approaches to AI alignment that we’re going to get into within the different organizations that they exist at, and the different philosophies and approaches that exist at these varying organizations.

Rohin: Okay, so disclaimer, I don’t know all of the organizations that well. I know that people tend to fit CHAI in a particular mold, for example; CHAI’s the place that I work at. And I mostly disagree with that being the mold for CHAI, so probably anything I say about other organizations is also going to be somewhat wrong; but I’ll give it a shot anyway.

So I guess I’ll start with CHAI. And I think our public output mostly comes from this perspective of how do we get AI systems to do what we want? So this is focusing on the alignment problem, how do we actually point them towards a goal that we actually want, align them with our values. Not everyone at CHAI takes this perspective, but I think that’s the one most commonly associated with us and it’s probably the perspective on which we publish the most. It’s also the perspective I, usually, but not always, take.

MIRI, on the other hand, takes a perspective of, “We don’t even know what’s going on with intelligence. Let’s try and figure out what we even mean by intelligence, what it means for there to be a super-intelligent AI system, what would it even do or how would we even understand it; can we have a theory of what all of this means? We’re confused, let’s be less confused, once we’re less confused, then we can think about how to actually get AI systems to do good things.” That’s one of the perspectives they take.

Another perspective they take is that there’s a particular problem with AI safety, which is that, “Even if we knew what goals we wanted to put into an AI system, we don’t know how to actually build an AI system that would, reliably, pursue those goals as opposed to something else.” That problem, even if you know what you want to do, how do you get an AI system to do it, is a problem that they focus on. And the difference from the thing I associated with CHAI before is that, with the CHAI perspective, you’re interested both in how do you get the AI system to actually pursue the goal that you want, but also how do you figure out what goal that you want, or what is the goal that you want. Though, I think most of the work so far has been on supposing you know the goal, how do you get your AI system to properly pursue it?

I think DeepMind safety came, at least, is pretty split across many different ways of looking at the problem. I think Jan Leike, for example, has done a lot of work on reward modeling, and this sort of fits in with the how do we get our AI systems be focused on the right task, the right goal. Whereas Vika has done a lot of work on side effects or impact measures. I don’t know if Vika would say this, but the way I interpret it how do we impose a constraint upon the AI system such that it never does anything catastrophic? But it’s not trying to get the AI system to do what we want, just not do what we don’t want, or what we think would be catastrophically bad.

OpenAI safety also seems to be, okay how do we get deep enforcement learning to do good things, to do what we want, to be a bit more robust? Then there’s also the iterated amplification debate factored cognition area of research, which is more along the lines of, can we write down a system that could, plausibly, lead to us building an aligned AGI or aligned powerful AI system?

FHI, no coherent direction, that’s all of FHI. Eric Drexler is also trying to understand how AI will develop it in the future is somewhat very different from what MIRI’s doing, but the same general theme of trying to figure out what is going on. So he just recently published a long technical report on comprehensive AI services, which is the general worldview for predicting what AI development will look like in the future. If we believed that that was, in fact, the way AI would happen, we would probably change what we work on from the technical safety point of view.

And Owain Evans does a lot of stuff, so maybe I’m just not going to try to categorize him. And then Stuart Armstrong works on this, “Okay, how do we get value learning to work such that we actually infer a utility function that we would be happy for an AGI system to optimize, or a super-intelligent AI system to optimize?”

And then Ought works on factory cognition, so it’s very adjacent to be iterated amplification and debate research agendas. Then there’s a few individual researchers, scattered, for example, Toronto, Montreal, and AMU and EPFL, maybe I won’t get into all of them because, yeah, that’s a lot; but we can delve into that later.

Lucas: Maybe a more helpful approach, then, would be if you could start by demystifying some of the MiRI stuff a little bit; which may seem most unusual.

Rohin: I guess, strategically, the point would be that you’re trying to build this AI system that’s going to be, hopefully, at some point in the future vastly more intelligent than humans, because we want them to help us colonize the universe or something like that, and lead to lots and lots of technological progress, etc., etc.

But this, basically, means that humans will not be in control unless we very, very specifically arrange it such that we are in control; we have to thread the needle, perfectly, in order to get this to work out. In the same way that, by default you, would expect that the most intelligent creatures, beings are the ones that are going to decide what happens. And so we really need to make sure and, also it’s probably hard to ensure, that these vastly more intelligent beings are actually doing what we want.

Given that, it seems like what we want is a good theory that allows us to understand and predict what these AI systems are going to do. Maybe not in the fine nitty, gritty details, because if we could predict what they would do, then we could do it ourselves and be just as intelligent as they are. But, at least, in broad strokes what sorts of universes are they going to create?

But given that they can apply so much more intelligence that we can, we need our guarantees to be really, really strong; like almost proof level. Maybe actual proofs are a little too much to expect, but we want to get as close to it as possible. Now, if we want to do something like that, we need a theory of intelligence; we can’t just sort of do a bunch of experiments, look at the results, and then try to extrapolate from there. Extrapolation does not give you the level of confidence that we would need for a problem this difficult.

And so rather, they would like to instead understand intelligence deeply, deconfuse themselves about it. Once you understand how intelligence works at a theoretical level, then you can start applying that theory to actual AI systems and seeing how they approximate the theory, or make predictions about what different AI systems will do. And, hopefully, then we could say, “Yeah, this system does look like it’s going to be very powerful as approximating this particular idea, this particular part of theory of intelligence. And we can see that with this particular theory of intelligence, we can align it with humans somehow, and you’d expect that this was going to work out.” Something like that.

Now, that sounded kind of dumb even to me as I was saying it, but that’s because we don’t have the theory yet; it’s very fun to speculate how you would use the theory before you actually have the theory. So that’s the reason they’re doing this, the actual thing that they’re focusing on is centered around problems of embedded agency. And I should say this is one of their, I think, two main strands of research, the other stand of research, I do not know anything about because they have not published anything about it.

But one of their strands of research is about embedded agency. And here the main point is that in the real world, any agent, any AI system, or a human is a part of their environment. They are smaller than the environment and the distinction between agent and environment is not crisp. Maybe I think of my body as being part of me but, I don’t know, to some extent, my laptop is also an extension of my agency; there’s a lot of stuff I can do with it.

Or, on the other hand, you could think maybe my arms and limbs aren’t actually a part of me, I could maybe get myself uploaded at some point in the future, and then I will no longer have arms or legs; but in some sense I am still me, I’m still an agent. So, this distinction is not actually crisp, and we always pretend that it is in AI, so far. And it turns out that once you stop making this crisp distinction and start allowing the boundary to be fuzzy, there are a lot of weird, interesting problems that show up and we don’t know how to deal with any of them, even in theory, so that’s what they focused on.

Lucas: And can you unpack, given that AI researchers control of the input/output channels for AI systems, why is it that there is this fuzziness? It seems like you could extrapolate away the fuzziness given that there are these sort of rigid and selected IO channels.

Rohin: Yeah, I agree that seems like the right thing for today’s AI systems; but I don’t know. If I think about, “Okay, this AGI is a generally intelligent AI system.” I kind of expect it to recognize that when we feed it inputs which, let’s say, we’re imagining a money maximizing AI system that’s taking in inputs like stock prices, and it outputs which stocks to buy. And maybe it can also read the news that lets it get newspaper articles in order to make better decisions about which stocks to buy.

At some point, I expect this AI system to read about AI and humans, and realize that, hey, it must be an AI system, it must be getting inputs and outputs. Its reward function must be to make this particular number in a bank account be as high as possible and then once it realizes this, there’s this part of the world, which is this number in the bank account, or it could be this particular value, this particular memory block in its own CPU, and its goal is now make that number as high as possible.

In some sense, it’s now modifying itself, especially if you’re thinking of the memory block inside the CPU. If it goes and edits that and sets that to a million, a billion, the highest number possible in that memory block, then it seems like it has, in some sense, done some self editing; it’s changed the agent part of it. It could also go and be like, “Okay actually what I care about is this particular award function box is supposed to output as high a number as possible. So what if I go and change my input channels such that it feeds me things that caused me to believe that I’ve made tons and tons of profit?” So this is a delusion backs consideration.

While it is true that I don’t see a clear, concrete way that an AI system ends up doing this, it does feel like an intelligent system should be capable of this sort of reasoning, even if it initially had these sort of fixed inputs and outputs. The idea here is that its outputs can be used to affect the inputs or future outputs.

Lucas: Right, so I think that that point is the clearest summation of this; it can affect its own inputs and outputs later. If you take human beings who are, by definition, human level intelligences we have, say, in a classic computer science sense if you thought of us, you’d say we strictly have five input channels: hearing seeing, touch, smell, etc.

Human beings have a fixed number of input/output channels but, obviously, human beings are capable of self modifying on those. And our agency is sort of squishy and dynamic in ways that would be very unpredictable, and I think that that unpredictability and the sort of almost seeming ephemerality of being an agent seems to be the crux of a lot of the problem.

Rohin: I agree that that’s a good intuition pump, I’m not sure that I agree it’s the crux. The crux, to me, it feels more like you specify some sort of behavior that you want which, in this case, was make a lot of money or make this number in a bank account go higher, or make this memory cell go as high as possible.

And when you were thinking about the specification, you assumed that the inputs and outputs fell within some strict parameters, like the inputs are always going to be news articles that are real and produced by human journalists, as opposed to a fake news article that was created by the AI in order to convince the reward function that actually it’s made a lot of money. And then the problem is that since the AI’s outputs can affect the inputs, the AI could cause the inputs to go outside of the space of possibilities that you imagine the inputs could be in. And this then allows the AI to game the specification that you had for it.

Lucas: Right. So, all the parts which constitute some AI system are all, potentially, modified by other parts. And so you have something that is fundamentally and completely dynamic, which you’re trying to make predictions about, but whose future structure is potentially very different and hard to predict based off of the current structure?

Rohin: Yeah, basically.

Lucas: And that in order to get past this we must, again, tunnel down on this decision theoretic and rational agency type issues at the bottom of intelligence to sort of have a more fundamental theory, which can be applied to these highly dynamic and difficult to understand situations?

Rohin: Yeah, I think the MIRI perspective is something like that. And in particular, it would be like trying to find a theory that allows you to put in something that stays stable even while the system, itself, is very dynamic.

Lucas: Right, even while your system, whose parts are all completely dynamic and able to be changed by other parts, how do you maintain a degree of alignment amongst that?

Rohin: One answer to this is give the AI a utility function. There is a utility function that’s explicitly trying to maximize that and in that case, it probably has an incentive in order to keep that to protect that the utility function, because if it gets changed, well then it’s not going to maximize that utility function anymore, it’ll maximize something else which will lead to worse behavior by the likes of the original utility function. That’s a thing that you could hope to do with a better theory of intelligence is, how do you create a utility function in an AI system stays stable, even as everything else is dynamically changing?

Lucas: Right, and without even getting into the issues of implementing one single stable utility function.

Rohin: Well, I think they’re looking into those issues. So, for example, Vingean Reflection is a problem that is entirely about how you create better, more improved version of yourself without having any value drift, or a change to the utility function.

Lucas: Is your utility function not self-modifying?

Rohin: So in theory, it could be. The hook would be that we could design an AI system that does not self-modify its utility function under almost all circumstances. Because if you change your utility function, then you’re going to start maximizing that new utility function which, by the original utility function’s evaluation, is worse. If I told you, “Lucas, you have got to go fetch coffee.” That’s the only thing in life you’re concerned about. You must take whatever actions are necessary in order to get the coffee.

And then someone goes like, “Hey Lucas, I’m going to change your utility function so that you want to fetch tea instead.” And then all of your decision making is going to be in service of getting tea. You would probably say, “No, don’t do that, I want to fetch coffee right now. If you change my utility function for being ‘fetch tea’, then I’m going to fetch tea, which is bad because I want to fetch coffee.” And so, hopefully, you don’t change your utility function because of this effect.

Lucas: Right. But isn’t this where corrigibility comes in, and where we admit that as we sort of understand more about the world and our own values, we want to be able to update utility functions?

Rohin: Yeah, so that is a different perspective; I’m not trying to describe that perspective right now. It’s a perspective for how you could get something stable in an AI system. And I associate it most with Eliezer, though I’m not actually sure if he holds this opinion.

Lucas: Okay, so I think this was very helpful for the MIRI case. So why don’t we go ahead and zoom in, I think, a bit on CHAI, which is the Center For Human-Compatible AI.

Rohin: So I think rather than talking about CHAI, I’m going to talk about the general field of trying to get AI systems do what we want; a lot of people at CHAI work on that but not everyone. And also a lot of people outside of CHAI work on that, because that seems to become more useful carving of the field. So there’s this broad argument for AI safety which is, “We’re going to have very intelligent things based on the orthagonality thesis, we can’t really say anything about their goals.” So, the really important thing is to make sure that the intelligence is pointed at the right goals, it’s pointed at doing what we actually want.

And so then the natural approach is, how do we get our AI systems to infer what we want to do and then actually pursue that? And I think, in some sense, it’s one of the most obvious approaches to AI safety. This is a clear enough problem, even with narrow current systems that there are plenty of people outside of AI safety working on this, as well. So this incorporates things like inverse reinforcement learning, preference learning, reward modeling, the CIRL cooperative IRL paper also fits into all of this. So yeah, I can begin to ante up those in more depth.

Lucas: Why don’t you start off by talking about the people who exist within the field of AI safety, give sort of a brief characterization of what’s going on outside of the field, but primarily focusing on those within the field. How this approach, in practice, I think generally is, say, different from MIRI to start off with, because we have a clear picture of them painted right next to what we’re delving into now.

Rohin: So I think difference of MiRI is that this is more targeted directly at the problem right now, in that you’re actually trying to figure out how do you build an AI system that does what you want. Now, admittedly, most of the techniques that people have come up with are not likely to scale up to super-intelligent AI, they’re not meant to, no one claims that they’re going to scale up to super-intelligent AI. They’re more like some incremental progress on figuring out how to get AI systems to do what we want and, hopefully, with enough incremental progress, we’ll get to a point where we can go, “Yes, this is what we need to do.”

Probably the most well known person here would be Dylan Hadfield-Menell, who you had on your podcast. And so he talked about CIRL and associated things quite a bit there, there’s not really that much I would say in addition to it. Maybe a quick summary of Dylan’s position is something like, “Instead of having AI systems that are optimizing for their own goals, we need to have AI systems that are optimizing for our goals, and try to infer our goals in order to do that.”

So rather than having an AI system that is individually rational with respect to its own goals, you instead want to have a human AI system such that the entire system is rationally optimizing for the human’s goals. This is sort of the point made by CIRL, where you have an AI system, you’ve got a human, they’re playing those two player game, the humans is the only one who knows the reward function, the robot is uncertain about what the reward function is, and has to learn by observing what the humans does.

And so, now you see that the robot does not have a utility function that it is trying to optimize; instead is learning about a utility function that the human has and then helping the human optimize that reward function. So summary, try to build human AI systems that are group rational, as opposed to an AI system that is individually rational; so that’s Dylan’s view. Then there’s Jan Leike at DeepMind, and a few people at OpenAI.

Lucas: Before we pivot into OpenAI and DeepMind, just sort of focusing here on the CHAI end of things and this broad view, and help me explain here how you would characterize it. The present day actively focused view on current issues, and present day issues and alignment and making incremental progress there. This view here you see as a sort of subsuming multiple organizations?

Rohin: Yes, I do.

Lucas: Okay. Is there a specific name you would, again, use to characterize this view?

Rohin: Oh, getting AI systems to do what we want. Let’s see, do I have a pithy name for this? Helpful AI systems or something.

Lucas: Right which, again, is focused on current day things, is seeking to make incremental progress, and which subsumes many different organizations?

Rohin: Yeah, that seems broadly true. I do think there are people who are doing more conceptual work, thinking about how this will scale to AGI and stuff like that; but it’s a minority of work in the space.

Lucas: Right. And so the question of how do we get AI systems to do what we want them to do, also includes these views of, say, Vingean Reflection or how we become idealized versions of ourselves, or how we build on value over time, right?

Rohin: Yeah. So, those are definitely questions that you would need to answer at some point. I’m not sure that you would need to answer Vingean Reflection at some point. But you would definitely need to answer how do you update, given that humans don’t actually know what they want, for a long-term future; you need to be able to deal with that fact at some point. It’s not really a focus of current research, but I agree that that is a thing about this approach will have to deal with, at some point.

Lucas: Okay. So, moving on from you and Dylan to DeepMind and these other places that you view as this sort of approach also being practice there?

Rohin: Yeah, so while Dylan and I and other at CHAI has been focused on sort of conceptual advances, like in toy environments, does this do the right thing? What are some sorts of data that we can learn from? Do they work in these very simple environments with quite simple algorithms? I would say that OpenAI and DeepMind safety teams are more focused on trying to get this to work in complex environments of the sort that we’re getting this to work on state-of-the-art environments, the most complex ones that we have.

Now I don’t mean DoTA and StarCraft, because running experiments with DoTAi and StarCraft is incredibly expensive, but can we get AI systems that do what we want for environments like Atari or MuJoCo? There’s some work on this happening at CHAI, there are pre-prints available online, but it hasn’t been published very widely yet. Most of the work, I would say, has been happening with an OpenAI/DeepMind collaboration, and most recently, there was a position paper from DeepMind on recursive reward modeling.

Right before that there was also a paper on combining first a paper, deeper enforcement learning from human preferences, which said, “Okay if we allow humans to specify what they want by just comparing between different pieces of behavior from the AI system, can we train an AI system to do what the human wants?” And then they built on that in order to create a system that could learn from demonstrations, initially, using a kind of imitation learning, and then improve upon the demonstrations using comparisons in the same way that deep RL from human preferences did.

So one way that you can do this research is that there’s this field of human computer interaction, which is about … well, it’s about many things. But one of the things that it’s about is how do you make the user interface for humans intuitive and easy to use such that you don’t have user error or operator? One comment from people that I liked is that most of the things that are classified as ‘user error’ or ‘operator error’ should not be classified as such, they should be classified as ‘interface errors’ where you had such a confusing interface that well, of course, at some point some user was going to get it wrong.

And similarly, here, what we want is a particular behavior out of the AI, or at least a particular set of outcomes from the AI; maybe we don’t know exactly how to achieve those outcomes. And AI is about giving us the tools to create that behavior in automated systems. The current tool that we all use is the reward function, we write down the reward function and then we give it to an algorithm, and it produces behaviors and the outcomes that we want.

And reward functions, they’re just a pretty terrible user interface, they’re better than the previous interface which is writing a program explicitly, which humans cannot do it if the task is something like image classification or continuous control in MuJoCo; it’s an improvement upon that. But reward functions are still a pretty poor interface, because they’re implicitly saying that they encode perfect knowledge of the optimal behavior in all possible environments; which is clearly not a thing that humans can do.

I would say that this area is about moving on from reward functions, going to the next thing that makes the human’s job even easier. And so we’ve got things like comparisons, we’ve got things like inverse award design where you specify a proxy to work function that only needs to work in the training environment. Or you do something like inverse reinforcement learning, where you learn from demonstrations; so I think that’s one nice way of looking at this field.

Lucas: So do you have anything else you would like to add on here about how we present-day get AI systems to do what we want them to do, section of the field?

Rohin: Maybe I want to plug my value learning sequence, because it talks about this much more eloquently than I can on this podcast?

Lucas: Sure. Where can people find your value learning sequence?

Rohin: It’s on the Alignment Forum. You just go to the Alignment Forum, at the top there’s ‘Recommended Sequences’, there’s ‘Embedded Agency’, which is from MIRI, the sort of stuff we already talked about; so that’s also great sequence, I would recommend it. There’s iterated amplification, also great sequence we haven’t talked about it yet. And then there’s my value learning sequence, so you can see it on the front page of the Alignment Forum.

Lucas: Great. So we’ve characterized these, say, different parts of the AI alignment field. And probably just so far it’s been cut into this sort of MIRI view, and then this broad approach of trying to get present-day AI systems to do what we want them to do, and to make incremental progress there. Are there any other slices of the AI alignment field that you would like to bring to light?

Rohin: Yeah, I’ve got four or five more. There’s the interated amplification and debate side of things, which is how do we build using current technologies, but imagining that they were way better? How do we build and align AGI? So they’re trying to solve the entire problem, as opposed to making incremental progress and, simultaneously, hopefully thinking about, conceptually, how do we fit all of these pieces together?

There’s limiting the AGI system, which is more about how do we prevent AI systems from behaving catastrophically? It makes no guarantees about the AI systems doing what we want, it just prevents them from doing really, really bad things. Techniques in that section includes boxing and avoiding side effects. There’s the robustness view, which is about how do we make AI systems well behaved or robustly? I guess that’s pretty self explanatory.

There’s transparency or interpretability, which I wouldn’t say is a technique by itself, but seems to be broadly useful for almost all of the other avenues, it’s something we would want to add to other techniques in order to make those techniques more effective. There’s also, in the same frame as MIRI, can we even understand intelligence? Can we even forecast what’s going to happen with AI? And within that, there’s comprehensive AI services.

here’s also lots of efforts on forecasting, but comprehensive AI services actually makes claims about what technical AI safety should do. So I think that one actually does have a place in this podcast, whereas most of the forecasting things do not, obviously. They have some implications on the strategic picture, but they don’t have clear implications on technical safety research directions, as far as I can tell it right now.

Lucas: Alright, so, do you want to go ahead and start off with the first one on the list there And then we’ll move sequentially down?

Rohin: Yeah, so iterated amplification and debate. This is similar to the helpful AGI section in the sense that we are trying to build an AI system that does what we want. That’s still the case here, but we’re now trying to figure out, conceptually, how can we do this using things like reinforcement learning and supervised learning, but imagining that they’re way better than they are right now? Such that the resulting agent is going to be aligned with us and reach arbitrary levels of intelligence; so in some sense, it’s trying to solve the entire problem.

We want to come up with a scheme such that if we run that scheme, we get good outcomes, we’ve solved almost all the problem. I think that it also differs in that the argument for why we can be successful is also different. This field is aiming to get a property of corrigibility, which I like to summarize as trying to help the overseer. It might fail to help the overseer, or the human, or the user, because it’s not very competent and maybe it makes a mistake and things that I like apples when actually I want oranges. But it was actually trying to help me; it actually thought I wanted apples.

So in corrigibility, you’re trying to help the overseer, whereas, in the previous thing about helpful AGI, you’re more getting an AI system that actually does what we want; there isn’t this distinction between what you’re trying to do versus what you actually do. So there’s a slightly different property that you’re trying to ensure, I think, on the strategic picture that’s the main difference.

The other difference is that these approaches are trying to make a single, unified generally intelligent AI system, and so they will make assumptions like, given that we’re trying to imagine something that’s generally intelligent, it should be able to do X, Y, and Z. Whereas the research agenda that’s let’s try to get AI systems that do want you want, tends not to make those assumptions. And so it’s more applicable to current systems or narrow system where you can’t assume that you have general intelligence.

For example, a claim that that Paul Christiano often talks about is that, “If your AI agent is generally intelligent and a little bit corrigible, it will probably easily be able to infer that its overseer, or the user, would like to remain in control of any resources that they have, and would like to be better informed about the situation, that the user would prefer that the agent does not lie to them etc., etc.” It was definitely not something that current day AI systems can do unless you really engineer them to, so this is presuming some level of generality, which we do not currently have.

So the next thing I said was limited AGI. Here the idea is, there are not very many policies or AI systems that will do what we want; what we want is a pretty narrow space in the space of all possible behaviors. Actually selecting one of the behaviors out of that space is quite difficult and requires a lot of information in order to narrow in on that piece of behavior. But if all you’re trying to do is avoid the catastrophic behaviors, then there are lots and lots of policies that successfully do that. And so it might be easier to find one of those policies; a policy that doesn’t ever kill all humans.

Lucas: At least the space of those policies, one might have this view and not think it sufficient for AI alignment, but see it as sort of a low hanging fruit to be picked. Because the space of non-catastrophic outcomes is larger than the space of extremely specific futures that human beings support.

Rohin: Yeah, exactly. And the success story here is, basically, that we develop this way of preventing catastrophic behaviors. All of our AI systems are filled with the system in place, and then technological progress continues as usual; it’s maybe not as fast as it would have been if we had an aligned AGI doing all of this for us, but hopefully it would still be somewhat fast, and hopefully enabled a bit by AI systems. Eventually, we will either make it to the future without ever building an AI system that doesn’t have a system in place, or we use this to do a bunch more AI research until we solve the full alignment problem, and then we can build, with high confidence that it’ll go well.

And actual proper aligned, super-intelligence that is helping us without any of these limitations systems in place. I think from a strategic picture, that’s basically the important parts about limited AGI. There are two subsections within those limits based on trying to change what the AI’s optimizing for, so this would be something like impact measures versus limits on the input/output channels of the AI system; so this would be something like AI boxing.

So, with robustness, I sort of think of the robustness mostly, it’s not going to give us safety by itself, probably, though there are some scenarios in which it could happen. It’s more meant to harden whichever other approach that we use. Maybe if we have an AI system that is trying to do what we want, to go back to the helpful AGI setting, maybe it does that 99.9 percent of the time. But we’re using this AI to make millions of decisions, which means it’s going to not do what we want 1,000 times. That seems like way too many times for comfort, because if it’s applying its intelligence to the wrong goal in those 1,000 times, you could get some pretty bad outcomes.

This is a super heuristic and fluffy argument, but there are lots of problems with it. I think it sets up the general reason that we would want robustness. So with robustness techniques, you’re basically trying to get some nice worst case guarantees that say, “Yeah, the AI system is never going to screw up super, super bad.” And this is helpful when you have an AI system that’s going to make many, many, many decisions, and we want to make sure that none of those decisions are going to be catastrophic.

And so some techniques in here include verification, adversarial training, and other adversarial ML techniques like Byzantine fault tolerance, or stuff like that. These are all the data poisoning, interpretability can also be helpful for robustness if you’ve got a strong overseer who can use interpretability to give good feedback to your AI system. But yeah, the overall goal is take something that doesn’t fail 99 percent of the time, and get it to not fail 100 percent of the time, or check whether or not it ever fails, so that you don’t have this very rare but very bad outcome.

Lucas: And so would you see this section as being within the context of any others or being sort of at a higher level of abstraction?

Rohin: I would say that it applies to any of the others, well okay, not the MIRI embedded agency stuff, because we don’t really have a story for how that ends up helping with AI safety. It could apply to however that caches out in the future, but we don’t really know right now. With limited AGI, many have this theoretical model, if you apply this sort of penalty, this sort of impact measure, then you’re never going to have any catastrophic outcomes.

But, of course, in practice, we train our AI systems to optimize that penalty and get the sort of weird black box thing out. And we’re not entirely sure if it’s respecting the penalty or something like this. Then you could use something like verification or your transparency in order to make sure that this is actually behaving the way we would predict them behave based on our analysis of what limits we need to put on the AI system.

Similarly, if you build AI systems that are doing what we want, maybe you want to use adversarial training to see if you can find any situations in which the AI system’s doing something weird, doing something which we wouldn’t classify as what we want, with iterated amplification or debate, maybe we want to verify that the corrigibility property happens all the time. It’s unclear how you would use verification for that, because it seems like a particularly hard property to formalize, but you could still do things like adversarial training or transparency.

We might have this theoretical arguments for why our systems will work, then once we turn them into actual real systems that will probably use neural nets and other messy stuff like that, are we sure that in the translation from theory to practice, all of our guarantees stayed? Unclear, we should probably use some robustness techniques to check that.

Interpretability, I believe, was next. It’s sort of similar in that it’s broadly useful for everything else. If you want to figure out whether an AI system is doing what you want, it would be really helpful to be able to look into the agent and see, “Oh, it chose to buy apples because it had seen me eat apples in the past.” Versus, “It chose to buy apples because there was this company that made it to buy the apples, so that it would make more profit.”

If we could see those two cases, if we could actually see into the decision making process, it becomes a lot easier to tell whether or not the AI system is doing what we want, or whether or not the AI system is corrigible, or whether or not be AI system is properly … Well, maybe it’s not as obvious for impact measures, but I wouldn’t expect it to be useful there as well, even if I don’t have a story off the top of my head.

Similarly with robustness, if you’re doing something like adversarial training, it sure would help if your adversary was able to look into the inner workings of the agent and be like, “Ah, I see this agent, it tends to underwrite this particular class of risky outcomes. So why don’t I search within that class of situations for one that is going to take a big risk on that it shouldn’t have taken otherwise?” It just makes all of the other problems a lot easier to do.

Lucas: And so how is progress made on interpretability?

Rohin: Right now I think most of the progress is in image classifiers. I’ve seen some work on interpretability for deep RL as well. Honestly, that’s probably most of the research is happening with classification systems, primarily image classifiers, but others as well. And then I also see the deep RL explanation systems because I read a lot of deep RL research.

But it’s motivated a lot, there are real problems with current AI systems, and interpretability helps you to diagnose and fix those, as well. For example, the problems of bias in classifiers, one thing that I remember from Deep Dream is you can ask Deep Dream to visualize barbells. And you always see these sort of muscular arms that are attached to the barbells because, in the training set, barbells were always being picked up by muscular people. So, that’s a way that you can tell that your classifier is not really learning the concepts that you wanted it to do.

In the bias case maybe your classifier always classifies anyone sitting at a computer as a man, because of bias in the data set. And using interpretability techniques, you could see that, okay when you look at this picture, the AI system is looking primarily at the pixels that represent the computer, as opposed to the pixels that represent the human. And making its decision to label this person as a man, based on that, and you’re like, no, that’s clearly the wrong thing to do. The classifier should be paying attention to the human, not to the laptop.

So I think a lot of interpretability research right now is you take a particular short term problem and figure out how you can make that problem easier to solve. Though a lot of it is also what would be the best way to understand what our model is doing? So I think a lot of the work that Chris Olah doing, for example, is in this vein, and then as we do this exploration, finding some sort of bias in the classifiers that you’re studying.

So, Comprehensive AI Services, an attempt to predict what the feature of AI development will look like, and the hope is that, by doing this, we can figure out what sort of technical safety things we will need to do. Or, strategically, what sort of things we should push for in the AI research community in order to make those systems safer.

There’s a big difference between, we are going to build a single unified AGI agent and it’s going to be generally intelligent to optimize the world according to a utility function versus we are going to build a bunch of disparate, separate, narrow AI systems that are going to interact with each other quite a lot. And because of that, they will be able to do a wide variety of tasks, none of them are going to look particularly like expected utility maximizers. And the safety research you want to do is different in those two different worlds. And CAIS is basically saying “We’re in the second of those worlds, not the first one.”

Lucas: Can you go ahead and tell us about ambitious value learning?

Rohin: Yeah, so with ambitious value learning, this is also an approach to how do we make an aligned AGI solve the entire problem in some sense? Which is look at not just human behavior, but also human brains of the algorithm that they implement, and use that to infer an adequate utility function, the one that we would be okay with the behavior that results from that.

Infer this utility function, I’m going to plug it into an expected utility maximizer. Now, of course, we do have to solve problems with even once we have the utility function, how do we actually build a system that maximizes that utility function, which is not a solved problem yet? But it does seem to be capturing from the main difficulties, if you could actually solve the problem. And so that’s an approach I associate most with Stuart Armstrong.

Lucas: Alright, and so you were saying earlier, in terms of your own view, it’s sort of an amalgamation of different credences that you have in the potential efficacy of all these different approaches. So, given all of these and all of their broad missions, and interests, and assumptions that they’re willing to make, what are you most hopeful about? What are you excited about? How do you, sort of, assign your credence and time here?

Rohin: I think I’m most excited about the concept of corrigibility. That seems like the right thing to aim for, it seems like it’s a thing we can achieve, it seems like if we achieve it, we’re probably okay, nothing’s going to go horribly wrong and probably will go very well. I am less confident on which approach to corrigibility I am most excited about. Iterated amplification and debate seem like if we were to implement them, they will probably lead to incorrigible behavior. But I am worried that either of those will be … Either we won’t actually be able to build generally intelligent agents, in which case both of those approaches don’t really work. Or another worry that I have is that those approaches might be too expensive to actually do in that other systems are just so much more computationally efficient that we just use those instead.

Due to economic pressures, Paul does not seem to be worried by either of these things. He’s definitely aware of both these issues, in fact, he was the one I think who listed computational efficiency as a desideratum, and he still is optimistic about them. So, I would not put a huge amount of credence in this view of mine.

If I were to say what I was excited about for portability instead of that, it would be something like take the research that we’re currently doing on how to get current AI systems to work, which often called ‘narrow value learning’. If you take that research, it seems plausible that this research, extended into the future, will give us some method of creating an AI system that’s implicitly learning our narrow values, and is corrigible as a result of that, even if it is not generally intelligent.

This is sort of a very hand wavey speculative intuition, certainly not as concrete as the hope that we have with iterated amplification. But I’m somewhat optimistic about it, and less optimistic about limiting AI systems, it seems like even if you succeed in finding a nice, simple rule that eliminates all catastrophic behaviors, which plausibly you could do, it seems hard to find one that both does that and also lets you do all of the things that you do want to do.

If you’re talking about impact metrics, for example, if you require AI to be a low impact, I expect that that would prevent you from doing many things that we actually want to do, because many things that we want to do are actually quite high impact. Now, Alex Turner disagrees with me on this, and he developed attainable utility preservation. He is explicitly working on this problem and disagree with me, so again I don’t know how much credence to put in this.

I don’t know if Vika agrees with me on this or not, she also might disagree with me and she is also directly working with this problem. So, yeah, seems hard to put a limit that also lets us do and things that we want. And in that case, it seems like due to economic pressures, we’d end up doing the things that don’t limit our AI systems from doing what they want.

I want to keep emphasizing my extreme uncertainty over all of this given that other people disagree with me on this, but that’s my current opinion. Similarly with boxing, it seems like it’s going to just make it very hard to actually use the AI system. Robustness and interpretability seems very broadly useful and supportive of most research on interpretability; maybe with an eye towards long term concerns, just because it seems to make every other approach to AI safety a lot more feasible and easier to solve.

I don’t think it’s a solution by itself, but given that it seems to improve almost every story I have for making an aligned AGI, seems like it’s very much worth getting a better understanding of it. Robustness is an interesting one, it’s not clear to me, if it is actually necessary. I kind of want to just voice lots of uncertainty about robustness and leave it at that. It’s certainly good to do in that it helps us be more confident in our AI systems, but maybe everything would be okay even if we just didn’t do anything. I don’t know, I feel like I would have to think a lot more about this and also see the techniques that we actually used to build AGI in order to have a better opinion on that.

Lucas: Could you give a few examples of where your intuitions here are coming from that don’t see robustness as an essential part of the AI alignment?

Rohin: Well, one major intuition, if you look at humans, they’re at least some human where I’m like, “Okay, I could just make this human a lot smarter, a lot faster, have them think for many, many years, and I still expect that they will be robust and not lead to some catastrophic outcome. They may not do exactly what I would have done, because they’re doing what they want. But they’re probably going to do something reasonable, they’re not going to do something crazy or ridiculous.

I feel like humans, some humans, the sufficiently risk averse and uncertain ones seem to be reasonably robust. I think that if you know that you’re planning over a very, very, very long time horizon, so imagine that you know you’re planning over billions of years, then the rational response to this is, “I really better make sure not to screw up right now, since there is just so much reward in the future, I really need to make sure that I can get it.” And so you get very strong pressures for preserving option value or not doing anything super crazy. So I think you could, plausibly, just get the reasonable outcomes from those effects. But again, these are not well thought out.

Lucas: All right, and so I just want to go ahead and guide us back to your general views, again, on the approaches. Is there anything that you’d like to add their own the approaches?

Rohin: I think I didn’t talk about CAIS yet. I guess my general view of CAIS, I broadly agree with it, that this does seem to be the most likely development path, meaning that it’s more likely than any other specific development path, but not more likely to have any other development path.

So I broadly agree with the worldview presented, I’m still trying to figure out what implications it has for technical safety research. I don’t agree with all of it, in particular, I think that you are likely to get AGI agents at some point, probably, after the CAIS soup of services happens. Which, I think, again, Drexler disagrees with me on that. So, put a bunch of uncertainty on that, but I broadly agree with that worldview that CAIS is proposing.

Lucas: In terms of this disagreement between you and Eric Drexler, are you imagining agenty AGI or super-intelligence which comes after the CAIS soup? Do you see that as an inevitable byproduct of CAIS or do you see that as an inevitable choice that humanity will make? And is Eric pushing the view that the agenty stuff doesn’t necessarily come later, it’s a choice that human beings would have to make?

Rohin: I do think it’s more like saying that this will be a choice that humans will make at some point. I’m sure that Eric, to some extent, is saying, “Yeah, just don’t do that.” But I think Eric and I do, in fact, have a disagreement on how much more performance you can get from an AGI agent, than a CAIS super of services. My argument is something like there is efficiency to be gained from going to an AGI agent, and Eric’s position as best I understand it, is that there is actually just not that much economic incentive to go to an AGI agent.

Lucas: What are your intuition pumps for why you think that you will gain a lot of computational efficiency from creating sort of an AGI agent? We don’t have to go super deep, but I guess a terse summary or something?

Rohin: Sure, I guess the main intuition pump is that in all of the past cases that we have of AI systems, you see that in speech recognition, in deep reinforcement learning, in image classification, we had all of the hand-built systems that separated these out into a few different modules that interacted with each other in a vaguely CAIS-like way. And then, at some point, we got enough computer and large enough data sets that we just threw deep learning at it, and deep learning just blew those approaches out of the water.

So there’s the argument from empirical experience, and there’s also the argument of if you try to modularize your systems yourself, you can’t really optimize the communication between them, you’re less integrated and you can’t make decisions based on global information, you have to make it based off of local information. And so the decisions tend to be a little bit worse. This could be taken as an explanation for the empirical observation that I made that we can already make; so that’s another intuition pump there.

Eric’s response would probably be something like, “Sure, this seems true for these narrow tasks, for narrow tasks.” You can get a lot of efficiency gains by integrating everything together and throwing deep learning and [inaudible 00:54:10] training at all of it. But for a sufficiently high level tasks, there’s not really that much to be gained by doing global information instead of local information, so you don’t actually lose much by having these separate systems, and you do get a lot of computational deficiency in generalization bonuses by modularizing. He had a good example of this that I’m not replicating and I don’t want to make my own example, because it’s not going to be as convincing; but that’s his current argument.

And then my counter-argument is that’s because humans have small brains, so given the size of our brains and the limits of our data, and the limits of the compute that we have, we are forced to do modularity and systematization to break tasks apart into modular chunks that we can then do individually. Like if you are running a corporation, you need each person to specialize in their own task without thinking about all the other tasks, because we just do not have the ability to optimize for everything all together because we have small brains, relatively speaking; or limited brains, is what I should say.

But this is not a limit that AI systems will have. An AI system would just vastly more computer than the human brain, vastly more data will, in fact, just be able to optimize all of this with global information and get better results. So that’s one thread of the argument taken down to two or three levels of arguments and counter-arguments. There are other threads of that debate, as well.

Lucas: I think that that serves a purpose for illustrating that here. So are there any other approaches here that you’d like to cover, or is that it?

Rohin: I didn’t talk about factored cognition very much. But I think it’s worth highlighting separately from iterated amplification in that it’s testing an empirical hypothesis of can humans decompose tasks into chunks of some small amount of time? And can we do arbitrarily complex tasks using these humans? I am particularly excited about this sort of work that’s trying to figure out what humans are capable of doing and what supervision they can give to AI systems.

Mostly because going back to a thing I said way back in the beginning, what we’re aiming for is a human AI system to be collectively rational as opposed to an AI system as individually rational. Part of the human-AI-system is the human, you want to be able to know what the human can do, what sort of policies they can implement, what sort of feedback they can be giving to the AI system. And something like factory cognition is testing a particular aspect of that; and I think that seems great and we need more of it.

Lucas: Right. I think that this seems to be the sort of emerging view of where social science or scientists are needed in AI alignment in order to, again as you said, sort of understand what human beings are capable in terms of supervised learning and analyzing the human component of the AI alignment problem as it requires us to be collectively rational with AI systems.

Rohin: Yeah, that seems right. I expect more writing on this in the future.

Lucas: All right, so there’s just a ton of approaches here to AI alignment, and our heroic listeners have a lot to take in here. In terms of getting more information, generally, about these approaches or if people are still interested in delving into all these different views that people take at the problem and methodologies of working on it, what would you suggest that interested persons look into or read into?

Rohin: I cannot give you a overview of everything, because that does not exist. To the extent that it exists, it’s either this podcast or the talk that I did at Beneficial AGI. I can suggest resources for individual items, so for embedded agency there’s the embedded agency sequence on the Alignment Forum; far and away the best thing for read for that.

For CAIS, Comprehensive AI Services, there was a 200 plus page tech report published by Eric Drexler at the beginning of this month, if you’re interested, you should go read the entire thing; it is quite good. But I also wrote a summary of it on the Alignment Forum, which is much more readable, in the sense that it’s shorter. And then there are a lot of comments on there that analyze it a bit more.

There’s also another summary written by Richard Ngo, also on the Alignment Forum. Maybe it’s only on Lesswrong, I forget; it’s probably on the Alignment Forum. But that’s a different take on comprehensive AI services, so I’d recommend reading that too.

For limited AGI, I have not really been keeping up with the literature on boxing, so I don’t have a favorite to recommend. I know that a couple have been written by, I believe, Jim Babcock and Roman Yampolskiy.

For impact measures, you want to read Vika’s paper on relative reachability. There’s also a blog post about it if you don’t want to read the paper. And Alex Turner’s blog posts on attainable utility preservation, I think it’s called ‘Towards A New Impact Measure’, and this is on the Alignment Forum.

For robustness, I would read Paul Christiano’s post called ‘Techniques For Optimizing Worst Case Performance’. This is definitely specific to how robustness will help under Paul’s conception of the problem and, in particular, his thinking of robustness in the setting where you have a very strong overseer for your AI system. But I don’t know of any other papers or blog post that’s talking about robustness, generally.

For AI systems that do what we want, there’s my value learning sequence that I mentioned before on the Alignment Forum. There’s CIRL or Cooperative Inverse Reinforcement Learning which is a paper by Dylan and others. There’s Deep Reinforcement Learning From Human Preferences and Recursive Reward Modeling, these are both papers that are particular instances of work in this field. I also want to recommend Inverse Reward Design, because I really like that paper; so that’s also a paper by Dylan, and others.

For corrigibility and iterated amplification, the iterated amplification sequence on the Alignment Forum or half of what Paul Christiano has written. If you want to read not an entire sequence of blog posts, then I think Clarifying AI alignment is probably the post I would recommend. It’s one of the posts in the sequence and talks about this distinction of creating an AI system that is trying to do what you want, as opposed to actually doing what you want and why we might want to aim for only the first one.

For iterated amplification, itself, that technique, there is a paper that I believe is called something like Supervising Strong Learners By Amplifying Weak Experts, which is a good thing to read and there’s also corresponding OpenAI blog posts, whose name I forget. I think if you search iterated amplification, OpenAI blog you’ll find it.

And then for debate, there’s AI Safety via Debate, which is a paper, there’s also a corresponding OpenAI blog post. For factory cognition, there’s a post called Factored Cognition, on the Alignment Forum; again, in the iterated amplification sequence.

For interpretability, there isn’t really anything talking about interpretability, from the strategic point of view of why we want it. I guess that same post I recommend before of techniques for optimizing worst case performance talks about it a little bit. For actual interpretability techniques, I recommend the distill articles, the building blocks of interpretability and feature visualization, but these are more about particular techniques for interpretability, as opposed to why we wanted interpretability.

And on ambitious value learning, the first chapter of my sequence on value learning talks exclusively about ambitious value learning; so that’s one thing I’d recommend. But also Stuart Armstrong has so many posts, I think there’s one that’s about resolving human values adequately and something else, something like that. That one might be one worth checking out, it’s very technical though; lots of math.

He’s also written a bunch of posts that convey the intuitions behind the ideas. They’re all split into a bunch of very short posts, so I can’t really recommend any one particular one. You could go to the alignment newsletter database and just search Stuart Armstrong, and click on all of those posts and read them. I think that was everything.

Lucas: That’s a wonderful list. So we’ll go ahead and link those all in the article which goes along with this podcast, so that’ll all be there organized in nice, neat lists for people. This is all probably been fairly overwhelming in terms of the number of approaches and how they differ, and how one is to adjudicate the merits of all of them. If someone is just sort of entering the space of AI alignment, or is beginning to be interested in sort of these different technical approaches, do you have any recommendations?

Rohin: Reading a lot, rather than trying to do actual research. This was my strategy, I started back in September of 2017 and I think for the first six months or so, I was reading about 20 hours a week, in addition to doing research; which was why it was only 20 hours a week, it wasn’t a full time thing I was doing.

And I think that was very helpful for actually forming a picture of what everyone was doing. Now, it’s plausible that you don’t want to actually learn about what everyone is doing, and you’re okay with like, “I’m fairly confident that this thing, this particular problem is an important piece of the problem and we need to solve it.” And I think it’s very easy to get that wrong, so I’m a little wary of recommending that but it’s a reasonable strategy to just say, “Okay, we probably will need to solve this problem, but even if we don’t, the intuitions that we get from trying to solve this problem will be useful.

Focusing on that particular problem, reading all of the literature on that, attacking that problem, in particular, lets you start doing things faster, while still doing things that are probably going to be useful; so that’s another strategy that people could do. But I don’t think it’s very good for orienting yourself in the field of AI safety.

Lucas: So you think that there’s a high value in people taking this time to read, to understand all the papers and the approaches before trying to participate in particular research questions or methodologies. Given how open this question is, all the approaches make different assumptions and take for granted different axioms which all come together to create a wide variety of things which can both complement each other and have varying degrees of efficacy in the real world when AI systems start to become more developed and advanced.

Rohin: Yeah, that seems right to me. Part of the reason I’m recommending this is because it seems to be that no one does this. I think, on the margin, I want more people who do this in a world where 20 percent of the people were doing this, and the other 80 percent were just taking particular piece of the problem and working on those. That might be the right balance, somewhere around there, I don’t know, it depends on how you count who is actually in the field. But somewhere between one and 10 percent of the people are doing this; closer to the one.

Lucas: Which is quite interesting, I think, given that it seems like AI alignment should be in a stage of maximum exploration just given the conceptually mapping the territory is very young. I mean, we’re essentially seeing the birth and initial development of an entirely new field and specific application of thinking. And there are many more mistakes to be made, and concepts to be clarified, and layers to be built. So, seems like we should be maximizing our attention in exploring the general space, trying to develop models, the efficacy of different approaches and philosophies and views of AI alignment.

Rohin: Yeah, I agree with you, that should not be surprising given that I am one of the people doing this, or trying to do this. Probably the better critique will come from people who are not doing this, and can tell both of us why we’re wrong about this.

Lucas: We’ve covered a lot here in terms of the specific approaches, your thoughts on the approaches, where we can find resources on the approaches, why setting the approaches matters. Are there any parts of the approaches that you feel deserve more attention in terms of these different sections that we’ve covered?

Rohin: I think I would want more work on looking at the intersection between things that are supposed to be complimentary, how interpretability can help you have AI systems that have the right goals, for example, would be a cool thing to do. Or what you need to do in order to get verification, which is a sub-part of robustness, to give you interesting guarantees on AI systems that we actually care about.

Most of the work on verification right now is like, there’s this nice specification that we have for adversarial examples, in particular, is there an input that is within some distance from a training data point, such that it gets classified differently from that training data point. And those are the nice formal specification and most of the work in verification takes this specification as given and that figures out more and more computationally efficient ways to actually verify that property, basically.

That does seem like a thing that needs to happen, but the much more urgent thing, in my mind, is how do we come up with these specifications in the first place? If I want to verify that my AI system is corrigible, or I want to verify that it’s not going to do anything catastrophic, or that it is going to not disable my value learning system, or something like that; how do I specify this at all in any way that lets me do something like a verification technique even given infinite computing power? It’s not clear to me how you would do something like that, and I would love to see people do more research on that.

That particular thing is my current reason for not being very optimistic about verification, in particular, but I don’t think anyone has really given it a try. So it’s plausible that there’s actually just some approach that could work that we just haven’t found yet because no one’s really been trying. I think all of the work on limited AGI is talking about, okay, does this actually eliminate all of the catastrophic behavior? Which, yeah, that’s definitely an important thing, but I wish that people would also do research on, given that we put this penalty or this limit on the AGI system, what things is it still capable of doing?

Have we just made it impossible for it to do anything of interest whatsoever, or can it actually still do pretty powerful things, even though we’ve placed these limits on it? That’s the main thing I want to see. From there, let’s have AI systems that do what we want, probably the biggest thing I want to see there, and I’ve been trying to do some of this myself, some conceptual thinking about how does this lead to good outcomes in the long term? So far, we’ve not been dealing with the fact that the human doesn’t actually know, doesn’t actually have a nice consistent utility function that they know and that can be optimized. So, once you relax that assumption, what the hell do you do? And then there’s also a bunch of other problems that would benefit from more conceptual clarification, maybe I don’t need to go into all of them right now.

Lucas: Yeah. And just to sort of inject something here that I think we haven’t touched on and that you might have some words about in terms of approaches. We discussed sort of agential views of advanced artificial intelligence, a services-based conception, though I don’t believe that we have talked about aligning AI systems that simply function as oracles or having a concert of oracles. You can get rid of the services thing, and the agency thing if the AI just tells you what is true, or answers your questions in a way that is value aligned.

Rohin: Yeah, I mostly want to punt on that question because I have not actually read all the papers. I might have read a grand total of one paper on the oracles, and also super intelligence which talks about oracles. So I feel like I know so little about the state of the art on oracles, that it should not actually say anything about them.

Lucas: Sure. So then just as a broad point to point out to our audience is that in terms of conceptualizing these different approaches to AI alignment, it’s important and crucial to consider the kind of AI system that you’re thinking about the kinds of features and properties that it has, and oracles are another version here that one can play with in one’s AI alignment thinking?

Rohin: I think the canonical paper there is something like Good and Safe Pieces of Oracles, but I have not actually read it. There is a list of things I want to read, it is on that list. But that list also has, I think, something like 300 papers on it, and apparently I have not gotten to oracles yet.

Lucas: And so for the sake of this whole podcast being as comprehensive as possible, are there any conceptions of AI, for example, that we have omitted so far adding on to this agential view, the CAIS view of it actually just being a lot of distributed services, or an oracle view?

Rohin: There’s also the Tool AI View. This is different from the services view, but it’s somewhat akin to the view you were talking about at the beginning of this podcast where you’ve got AI systems that have a narrowly defined input/output space, they’ve got a particular thing that they do with limit, and they just sort of take in their inputs and do some computation, they spit out their outputs and that’s it, that’s all that they do. You can’t really model them as having some long term utility function that they’re optimizing, they’re just implementing a particular input-output relation and it’s all they’re trying to do.

Even saying something like, “They are trying to do X.” Is basically using a bad model for them. I think the main argument against expecting tool AI systems is that they’re probably not going to be as useful as other services or agential AI, because tool AI systems would have to be programmed in a way where we understood what they were doing and why they were doing it. Whereas agential AI systems or services would be able to consider new possible ways of achieving goals that we hadn’t thought about and enact those plans.

And so they could get super human behavior by considering things that we wouldn’t consider. Whereas, true Ais … Like Google Maps is super human in some sense, but it’s super human only because it has a compute advantage over us. If we were given all of the data and all of the time, in human real time, that Google Maps had, we could implement a similar sort of algorithm as Google Maps and compute the optimal route ourselves.

Lucas: There seems to be this duality that is constantly being formed in our conception of AI alignment, where the AI system is this tangible external object which stands in some relationship to the human and is trying to help the human to achieve certain things.

Are there conceptions of value alignment which, however the procedure or methodology is done, changes or challenges the relationship between the AI system and the human system where it challenges what it means to be the AI or what it means to be human, whereas, there’s potentially some sort of merging or disruption of this dualistic scenario of the relationship?

Rohin: I don’t really know, I mean, it sounds like you’re talking about things like brain computer interfaces and stuff like that. I don’t really know of any intersection between AI safety research and that. I guess, this did remind me, too, that I want to make the point that all of this is about the relatively narrow, I claim, problem of aligning an AI system with a single human.

There is also the problem of, okay what if there are multiple humans, what if there are multiple AI systems, what if you’ve got a bunch of different groups of people and each group is value aligned within themselves, they build an AI that’s value aligned with them, but lots of different groups do this now what happens?

Solving the problem that I’ve been talking about does not mean that you have a good outcome in the long term future, it is merely one piece of a larger overall picture. I don’t think any of that larger overall picture removes the dualistic thing that you were talking about, but they dualistic part reminded me of the fact that I am talking about a narrow problem and not the whole problem, in some sense.

Lucas: Right and so just to offer some conceptual clarification here, again, the first problem is how do I get an AI system to do what I want it to do when the world is just me and that AI system?

Rohin: Me and that AI system and the rest of humanity, but the rest of humanity is treated as part of the environment.

Lucas: Right, so you’re not modeling other AI systems or how some mutually incompatible preferences and trained systems would interact in the world or something like that?

Rohin: Exactly.

Lucas: So the full AI alignment problem is… It’s funny because it’s just the question of civilization, I guess. How do you get the whole world and all of the AI systems to make a beautiful world instead of a bad world?

Rohin: Yeah, I’m not sure if you saw my lightning talk at Beneficial AGI, but I talked a bit about those. I think I called that top level problem, make AI related features stuff go well, very, very, very concrete, obviously.

Lucas: It makes sense. People know what you’re talking about.

Rohin: I probably wouldn’t call that broad problem the AI alignment problem. I kind of wonder is there a different alignment for the narrower trouble? We could maybe call it the ‘AI Safety Problem’ or the ‘AI Future Problem’, I don’t know. ‘Beneficially AI’ problem actually, I think that’s what I used last time.

Lucas: That’s a nice way to put it. So I think that, conceptually, leave us at a very good place for this first section.

Rohin: Yeah, seems pretty good to me.

Lucas: If you found this podcast interesting or useful, please make sure to check back for part two in a couple weeks where Rohin and I go into more detail about the strengths and weaknesses of specific approaches.

We’ll be back again soon with another episode in the AI Alignment podcast.

[end of recorded material]

FLI Podcast: Why Ban Lethal Autonomous Weapons?

Why are we so concerned about lethal autonomous weapons? Ariel spoke to four experts –– one physician, one lawyer, and two human rights specialists –– all of whom offered their most powerful arguments on why the world needs to ensure that algorithms are never allowed to make the decision to take a life. It was even recorded from the United Nations Convention on Conventional Weapons, where a ban on lethal autonomous weapons was under discussion. 

Dr. Emilia Javorsky is a physician, scientist, and Founder of Scientists Against Inhumane Weapons; Bonnie Docherty is Associate Director of Armed Conflict and Civilian Protection at Harvard Law School’s Human Rights Clinic and Senior Researcher at Human Rights Watch; Ray Acheson is Director of The Disarmament Program of the Women’s International League for Peace and Freedom; and Rasha Abdul Rahim is Deputy Director of Amnesty Tech at Amnesty International.

Topics discussed in this episode include:

  • The role of the medical community in banning other WMDs
  • The importance of banning LAWS before they’re developed
  • Potential human bias in LAWS
  • Potential police use of LAWS against civilians
  • International humanitarian law and the law of war
  • Meaningful human control

Once you’ve listened to the podcast, we want to know what you think: What is the most convincing reason in favor of a ban on lethal autonomous weapons? We’ve listed quite a few arguments in favor of a ban, in no particular order, for you to consider:

  • If the AI community can’t even agree that algorithms should not be allowed to make the decisions to take a human life, then how can we find consensus on any of the other sticky ethical issues that AI raises?
  • If development of lethal AI weapons continues, then we will soon find ourselves in the midst of an AI arms race, which will lead to cheaper, deadlier, and more ubiquitous weapons. It’s much harder to ensure safety and legal standards in the middle of an arms race.
  • These weapons will be mass-produced, hacked, and fall onto the black market, where anyone will be able to access them.
  • These weapons will be easier to develop, access, and use, which could lead to a rise in destabilizing assassinations, ethnic cleansing, and greater global insecurity.
  • Taking humans further out of the loop will lower the barrier for entering into war.
  • Greater autonomy increases the likelihood that the weapons will be hacked, making it more difficult for military commanders to ensure control over their weapons.
  • Because of the low cost, these will be easy to mass-produce and stockpile, making AI weapons the newest form of Weapons of Mass Destruction.
  • Algorithms can target specific groups based on sensor data such as perceived age, gender, ethnicity, facial features, dress code, or even place of residence or worship.
  • Algorithms lack human morality and empathy, and therefore they cannot make humane context-based kill/don’t kill decisions.
  • By taking the human out of the loop, we fundamentally dehumanize warfare and obscure who is ultimately responsible and accountable for lethal force.
  • Many argue that these weapons are in violation of the Geneva Convention, the Marten’s Clause, the International Covenant on Civil and Political Rights, etc. Given the disagreements about whether lethal autonomous weapons are covered by these pre-existing laws, a new ban would help clarify what are acceptable uses of AI with respect to lethal decisions — especially for the military — and what aren’t.
  • It’s unclear who, if anyone, could be held accountable and/or responsible if a lethal autonomous weapon causes unnecessary and/or unexpected harm.
  • Significant technical challenges exist which most researchers anticipate will take quite a while to solve, including: how to program reasoning and judgement with respect to international humanitarian law, how to distinguish between civilians and combatants, how to understand and respond to complex and unanticipated situations on the battlefield, how to verify and validate lethal autonomous weapons, how to understand external political context in chaotic battlefield situations.
  • Once the weapons are released, contact with them may become difficult if people learn that there’s been a mistake.
  • By their very nature, we can expect that lethal autonomous weapons will behave unpredictably, at least in some circumstances.
  • They will likely be more error-prone than conventional weapons.
  • They will likely exacerbate current human biases putting innocent civilians at greater risk of being accidentally targeted.
  • Current psychological research suggests that keeping a “human in the loop” may not be as effective as many hope, given human tendencies to be over-reliant on machines, especially in emergency situations.
  • In addition to military uses, lethal autonomous weapons will likely be used for policing and border control, again putting innocent civilians at greater risk of being targeted.

So which of these arguments resonates most with you? Or do you have other reasons for feeling concern about lethal autonomous weapons? We want to know what you think! Please leave a response in the comments section below.

Publications discussed in this episode include:

For more information, visit autonomousweapons.org.

The Problem of Self-Referential Reasoning in Self-Improving AI: An Interview with Ramana Kumar, Part 2

When it comes to artificial intelligence, debates often arise about what constitutes “safe” and “unsafe” actions. As Ramana Kumar, an AGI safety researcher at DeepMind, notes, the terms are subjective and “can only be defined with respect to the values of the AI system’s users and beneficiaries.”

Fortunately, such questions can mostly be sidestepped when confronting the technical problems associated with creating safe AI agents, as these problems aren’t associated with identifying what is right or morally proper. Rather, from a technical standpoint, the term “safety” is best defined as an AI agent that consistently takes actions that lead to the desired outcomes, regardless of whatever those desired outcomes may be.

In this respect, Kumar explains that, when it comes to creating an AI agent that is tasked with improving itself, “the technical problem of building a safe agent is largely independent of what ‘safe’ means because a large part of the problem is how to build an agent that reliably does something, no matter what that thing is, in such a way that the method continues to work even as the agent under consideration is more and more capable.”

In short, making a “safe” AI agent should not be conflated with making an “ethical” AI agent. The respective terms are talking about different things..

In general, sidestepping moralistic definitions of safety makes AI technical work quite a bit easier It allows research to advance while debates on the ethical issues evolve. Case in point, Uber’s self-driving cars are already on the streets, despite the fact that we’ve yet to agree on a framework regarding whether they should safeguard their driver or pedestrians.

However, when it comes to creating a robust and safe AI system that is capable of self-improvement, the technical work gets a lot harder, and research in this area is still in its most nascent stages. This is primarily because we aren’t dealing with just one AI agent; we are dealing with generations of future self-improving agents.

Kumar clarifies, “When an AI agent is self-improving, one can view the situation as involving two agents: the ‘seed’ or ‘parent’ agent and the ‘child’ agent into which the parent self-modifies….and its total effects on the world will include the effects of actions made by its descendants.” As a result, in order to know we’ve made a safe AI agent, we need to understand all possible child agents that might originate from the first agent.

And verifying the safety of all future AI agents comes down to solving a problem known as “self-referential reasoning.”

Understanding the Self-Referential Problem

The problem with self-referential reasoning is most easily understood by defining the term according to its two primary components: self-reference and reasoning.

  • Self-reference: Refers to an instance in which someone (or something, such as a computer program or book) refers to itself. Any person or thing that refers to itself is called “self-referential.”
  • Reasoning: In AI systems, reasoning is a process through which an agent establishes “beliefs” about the world, like whether or not a particular action is safe or a specific reasoning system is sound. “Good beliefs” are beliefs that are sound or plausible based on the available evidence. The term “belief” is used instead of “knowledge” because the things that an agent believes may not be factually true and can change over time.

In relation to AI, then, the term “self-referential reasoning” refers to an agent that is using a reasoning process to establish a belief about that very same reasoning process. Consequently, when it comes to self-improvement, the “self-referential problem” is as follows: An agent is using its own reasoning system to determine that future versions of its reasoning system will be safe.

To explain the problem another way, Kumar notes that, if an AI agent creates a child agent to help it achieve its goal, it will want to establish some beliefs about the child’s safety before using it. This will necessarily involve proving beliefs about the child by arguing that the child’s reasoning process is good. Yet, the child’s reasoning process may be similar to, or even an extension of, the original agent’s reasoning process. And ultimately, an AI system can not use its own reasoning to determine whether or not its reasoning is good.

From a technical standpoint, the problem comes down to Godel’s second incompleteness theorem, which Kumar explains, “shows that no sufficiently strong proof system can prove its own consistency, making it difficult for agents to show that actions their successors have proven to be safe are, in fact, safe.”

Investigating Solutions

To date, several partial solutions to this problem have been proposed; however, our current software doesn’t have sufficient support for self-referential reasoning to make the solutions easy to implement and study. Consequently, in order to improve our understanding of the challenges of implementing self-referential reasoning, Kumar and his team aimed to implement a toy model of AI agents using some of the partial solutions that have been put forth.

Specifically, they investigated the feasibility of implementing one particular approach to the self-reference problem in a concrete setting (specifically, Botworld) where all the details could be checked. The approach selected was model polymorphism. Instead of requiring proof that shows an action is safe for all future use cases, model polymorphism only requires an action to be proven safe for an arbitrary number of steps (or subsequent actions) that is kept abstracted from the proof system.

Kumar notes that the overall goal was ultimately “to get a sense of the gap between the theory and a working implementation and to sharpen our understanding of the model polymorphism approach.” This would be accomplished by creating a proved theorem in a HOL (Higher Order Logic) theorem prover that describes the situation.

To break this down a little, in essence, theorem provers are computer programs that assist with the development of mathematical correctness proofs. These mathematical correctness proofs are the highest safety standard in the field, showing that a computer system always produces the correct output (or response) for any given input. Theorem provers create such proofs by using the formal methods of mathematics to prove or disprove the “correctness” of the control algorithms underlying a system. HOL theorem provers, in particular, are a family of interactive theorem proving systems that facilitate the construction of theories in higher-order logic. Higher-order logic, which supports quantification over functions, sets, sets of sets, and more, is more expressive than other logics, allowing the user to write formal statements at a high level of abstraction.

In retrospect, Kumar states that trying to prove a theorem about multiple steps of self-reflection in a HOL theorem prover was a massive undertaking. Nonetheless, he asserts that the team took several strides forward when it comes to grappling with the self-referential problem, noting that they built “a lot of the requisite infrastructure and got a better sense of what it would take to prove it and what it would take to build a prototype agent based on model polymorphism.”

Kumar added that MIRI’s (the Machine Intelligence Research Institute’s) Logical Inductors could also offer a satisfying version of formal self-referential reasoning and, consequently, provide a solution to the self-referential problem.

If you haven’t read it yet, find Part 1 here.

The Unavoidable Problem of Self-Improvement in AI: An Interview with Ramana Kumar, Part 1

Today’s AI systems may seem like intellectual powerhouses that are able to defeat their human counterparts at a wide variety of tasks. However, the intellectual capacity of today’s most advanced AI agents is, in truth, narrow and limited. Take, for example, AlphaGo. Although it may be the world champion of the board game Go, this is essentially the only task that the system excels at.

Of course, there’s also AlphaZero. This algorithm has mastered a host of different games, from Japanese and American chess to Go. Consequently, it is far more capable and dynamic than many contemporary AI agents; however, AlphaZero doesn’t have the ability to easily apply its intelligence to any problem. It can’t move unfettered from one task to another the way that a human can.

The same thing can be said about all other current AI systems — their cognitive abilities are limited and don’t extend far beyond the specific task they were created for. That’s why Artificial General Intelligence (AGI) is the long-term goal of many researchers.

Widely regarded as the “holy grail” of AI research, AGI systems are artificially intelligent agents that have a broad range of problem-solving capabilities, allowing them to tackle challenges that weren’t considered during their design phase. Unlike traditional AI systems, which focus on one specific skill, AGI systems would be able to efficiently tackle virtually any problem that they encounter, completing a wide range of tasks.

If the technology is ever realized, it could benefit humanity in innumerable ways. Marshall Burke, an economist at Stanford University, predicts that AGI systems would ultimately be able to create large-scale coordination mechanisms to help alleviate (and perhaps even eradicate) some of our most pressing problems, such as hunger and poverty. However, before society can reap the benefits of these AGI systems, Ramana Kumar, an AGI safety researcher at DeepMind, notes that AI designers will eventually need to address the self-improvement problem.

Self-Improvement Meets AGI

Early forms of self-improvement already exist in current AI systems. “There is a kind of self-improvement that happens during normal machine learning,” Kumar explains; “namely, the system improves in its ability to perform a task or suite of tasks well during its training process.”

However, Kumar asserts that he would distinguish this form of machine learning from true self-improvement because the system can’t fundamentally change its own design to become something new. In order for a dramatic improvement to occur — one that encompasses new skills, tools, or the creation of more advanced AI agents — current AI systems need a human to provide them with new code and a new training algorithm, among other things.

Yet, it is theoretically possible to create an AI system that is capable of true self-improvement, and Kumar states that such a self-improving machine is one of the more plausible pathways to AGI.

Researchers think that self-improving machines could ultimately lead to AGI because of a process that is referred to as “recursive self-improvement.” The basic idea is that, as an AI system continues to use recursive self-improvement to make itself smarter, it will get increasingly better at making itself smarter. This will quickly lead to an exponential growth in its intelligence and, as a result, could eventually lead to AGI.

Kumar says that this scenario is entirely plausible, explaining that, “for this to work, we need a couple of mostly uncontroversial assumptions: that such highly competent agents exist in theory, and that they can be found by a sequence of local improvements.” To this extent, recursive self-improvement is a concept that is at the heart of a number of theories on how we can get from today’s moderately smart machines to super-intelligent AGI. However, Kumar clarifies that this isn’t the only potential pathway to AI superintelligences.

Humans could discover how to build highly competent AGI systems through a variety of methods. This might happen “by scaling up existing machine learning methods, for example, with faster hardware. Or it could happen by making incremental research progress in representation learning, transfer learning, model-based reinforcement learning, or some other direction. For example, we might make enough progress in brain scanning and emulation to copy and speed up the intelligence of a particular human,” Kumar explains.

Yet, he is also quick to clarify that recursive self-improvement is an innate characteristic of AGI. “Even if iterated self-improvement is not necessary to develop highly competent artificial agents in the first place, explicit self-improvement will still be possible for those agents,” Kumar said.

As such, although researchers may discover a pathway to AGI that doesn’t involve recursive self-improvement, it’s still a property of artificial intelligence that is in need of serious research.

Safety in Self-Improving AI

When systems start to modify themselves, we have to be able to trust that all their modifications are safe. This means that we need to know something about all possible modifications. But how can we ensure that a modification is safe if no one can predict ahead of time what the modification will be?  

Kumar notes that there are two obvious solutions to this problem. The first option is to restrict a system’s ability to produce other AI agents. However, as Kumar succinctly sums, “We do not want to solve the safe self-improvement problem by forbidding self-improvement!”

The second option, then, is to permit only limited forms of self-improvement that have been deemed sufficiently safe, such as software updates or processor and memory upgrades. Yet, Kumar explains that vetting these forms of self-improvement as safe and unsafe is still exceedingly complicated. In fact, he says that preventing the construction of one specific kind of modification is so complex that it will “require such a deep understanding of what self-improvement involves that it will likely be enough to solve the full safe self-improvement problem.”

And notably, even if new advancements do permit only limited forms of self-improvement, Kumar states that this isn’t the path to take, as it sidesteps the core problem with self-improvement that we want to solve. “We want to build an agent that can build another AI agent whose capabilities are so great that we cannot, in advance, directly reason about its safety…We want to delegate some of the reasoning about safety and to be able to trust that the parent does that reasoning correctly,” he asserts.

Ultimately, this is an extremely complex problem that is still in its most nascent stages. As a result, much of the current work is focused on testing a variety of technical solutions and seeing where headway can be made. “There is still quite a lot of conceptual confusion about these issues, so some of the most useful work involves trying different concepts in various settings and seeing whether the results are coherent,” Kumar explains.

Regardless of what the ultimate solution is, Kumar asserts that successfully overcoming the problem of self-improvement depends on AI researchers working closely together. “The key to [testing a solution to this problem] is to make assumptions explicit, and, for the sake of explaining it to others, to be clear about the connection to the real-world safe AI problems we ultimately care about.”

Read Part 2 here

AI Alignment Podcast: AI Alignment through Debate with Geoffrey Irving

“To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but this approach can fail if the task is too complicated for a human to directly judge. To help address this concern, we propose training agents via self play on a zero sum debate game. Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information…  In practice, whether debate works involves empirical questions about humans and the tasks we want AIs to perform, plus theoretical questions about the meaning of AI alignment.” AI safety via debate

Debate is something that we are all familiar with. Usually it involves two or more persons giving arguments and counter arguments over some question in order to prove a conclusion. At OpenAI, debate is being explored as an AI alignment methodology for reward learning (learning what humans want) and is a part of their scalability efforts (how to train/evolve systems to safely solve questions of increasing complexity). Debate might sometimes seem like a fruitless process, but when optimized and framed as a two-player zero-sum perfect-information game, we can see properties of debate and synergies with machine learning that may make it a powerful truth seeking process on the path to beneficial AGI.

On today’s episode, we are joined by Geoffrey Irving. Geoffrey is a member of the AI safety team at OpenAI. He has a PhD in computer science from Stanford University, and has worked at Google Brain on neural network theorem proving, cofounded Eddy Systems to autocorrect code as you type, and has worked on computational physics and geometry at Otherlab, D. E. Shaw Research, Pixar, and Weta Digital. He has screen credits on Tintin, Wall-E, Up, and Ratatouille. 

We hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Topics discussed in this episode include:

  • What debate is and how it works
  • Experiments on debate in both machine learning and social science
  • Optimism and pessimism about debate
  • What amplification is and how it fits in
  • How Geoffrey took inspiration from amplification and AlphaGo
  • The importance of interpretability in debate
  • How debate works for normative questions
  • Why AI safety needs social scientists
You can find out more about Geoffrey Irving at his website. Here you can find the debate game mentioned in the podcast. Here you can find Geoffrey Irving, Paul Christiano, and Dario Amodei’s paper on debate. Here you can find an Open AI blog post on AI Safety via Debate. You can listen to the podcast above or read the transcript below.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Geoffrey Irving about AI safety via Debate. We discuss how debate fits in with the general research directions of OpenAI, what amplification is and how it fits in, and the relation of all this with AI alignment. As always, if you find this podcast interesting or useful, please give it a like and share it with someone who might find it valuable.

Geoffrey Irving is a member of the AI safety team at OpenAI. He has a PhD in computer science from Stanford University, and has worked at Google Brain on neural network theorem proving, cofounded Eddy Systems to autocorrect code as you type, and has worked on computational physics and geometry at Otherlab, D. E. Shaw Research, Pixar, and Weta Digital. He has screen credits on Tintin, Wall-E, Up, and Ratatouille. Without further ado, I give you Geoffrey Irving.

Thanks again, Geoffrey, for coming on the podcast. It’s really a pleasure to have you here.

Geoffrey: Thank you very much, Lucas.

Lucas: We’re here today to discuss your work on debate. I think that just to start off, it’d be interesting if you could provide for us a bit of framing for debate, and how debate exists at OpenAI, in the context of OpenAI’s general current research agenda and directions that OpenAI is moving right now.

Geoffrey: I think broadly, we’re trying to accomplish AI safety by reward learning, so learning a model of what humans want and then trying to optimize agents that achieve that model, so do well according to that model. There’s sort of three parts to learning what humans want. One part is just a bunch of machine learning mechanics of how to learn from small sample sizes, how to ask basic questions, how to deal with data quality. There’s a lot more work, then, on the human side, so how do humans respond to the questions we want to ask, and how do we sort of best ask the questions?

Then, there’s sort of a third category of how do you make these systems work even if the agents are very strong? So stronger than human in some or all areas. That’s sort of the scalability aspect. Debate is one of our techniques for doing scalability. Amplification being the first one and Debate is a version of that. Generally want to be able to supervise a learning agent, even if it is smarter than a human or stronger than a human on some task or on many tasks.

Debate is you train two agents to play a game. The game is that these two agents see a question on some subject, they give their answers. Each debater has their own answer, and then they have a debate about which answer is better, which means more true and more useful, and then a human sees that debate transcript and judges who wins based on who they think told the most useful true thing. The result of the game is, one, who won the debate, and two, the answer of the person who won the debate.

You can also have variants where the judge interacts during the debate. We can get into these details. The general point is that, in my tasks, it is much easier to recognize good answers than it is to come up with the answers yourself. This applies at several levels.

For example, at the first level, you might have a task where a human can’t do the task, but they can know immediately if they see a good answer to the task. Like, I’m bad at gymnastics, but if I see someone do a flip very gracefully, then I can know, at least to some level of confidence, that they’ve done a good job. There are other tasks where you can’t directly recognize the answer, so you might see an answer, it looks plausible, say, “Oh, that looks like a great answer,” but there’s some hidden flaw. If an agent were to point out that flaw to you, you’d then think, “Oh, that’s actually a bad answer.” Maybe it was misleading, maybe it was just wrong. You need two agents doing a back and forth to be able to get at the truth.

Then, if you apply this recursively through several levels, you might have tasks where you can’t recognize whether an answer is good directly. You can’t even recognize whether a counterargument is good. Maybe a counter-counterargument, then you could recognize it. If you do sort of three steps of Debate, back and forth with two agents, you get to the truth.

Depending on the task, you need some number of these steps to pin down what the true answer is. Broadly, what’s happening is that the debaters, when they’re giving these counterarguments, they’re skipping over a lot of the space where maybe the human doesn’t understand what would happen, but they know that it would go a certain way. Like, if you’d see two expert players playing a game of Go, they’re constantly ignoring a larger number of moves which are just very bad and would waste a lot of time if you try to analyze them, and they’re just looking at kind of the most important part of the arguments about what is the best move in the game.

For all of the experiments, there are simple image tasks, like MNIST. MNIST is a dataset of digits, and then also some preliminary kind of human experiments with just human debaters. On MNIST, an example is you have a sparse classifier that learns to recognize MNIST from just seeing a few pixels, so you might train it by picking random masks of just a few pixels of an MNIST image. This classifier is very bad, because it only sees a small part of the image. It might get accuracies below 50%. I forget exactly the number, I could look them up.

Then, you train debaters that choose which parts of the image to show to the judge. You fix the judge at this kind of weak training process, and then the debaters will play a game where they try to pick pixels that convince the judge of what they want to argue for. There’s kind of two ways to decide … In the first paper, we had one debater that was incentivized to tell the truth, so it knew the correct class. Like say, we’re looking at a seven, and the other debater had to pre-specify which digit it was going to lie for. It might say, “I’m going to lie and say things were an eight.”

The game would be each of these agents pick a single pixel, and you sort of iterate for, let’s say six rounds, picking one pixel after another, and then the judge sees those pixels, and chooses who wins the debate. The judge has seen kind of a small part of the evidence. The result of that is that you can boost a weak classifier up to sort of higher performance, even if the agents are incentivized only to win the game. In this case, if you pick one debater to be the honest player and one debater to be the liar, the honest player wins a significant majority of the time, much more than the classifier would win on random play.

Lucas: The judge here, sorry, was not a human?

Geoffrey: The judge here is not a human. It’s just a classifier trained to do a bad job at MNIST, because it sees only a little bit of information. It’s trained to convergence, but its input is just a few pixels of an image.

Lucas: Then, so the pixels that are then revealed by the debaters, each pixel is sort of the argument.

Geoffrey: Each pixel is the argument.

Lucas: Just to provide a little bit more framing, there’s this feature of debate, you can apply it to a very large domain of things that you’d be surprised about if you expand the notion of what it means to debate to showing pixels or something like this.

Geoffrey: It’s actually more important to debate in natural language. The end goal here is we want to extract a strengthened, kind of improved version of human performance at a task. The way we go about this, either in amplification or in debate, is we sort of factor through reasoning. Instead of trying to train directly on the task, like the answers to the task, you might have some questions and some answers, and you could train directly on question/answer pairs, we’re going to build a task which includes all possible human reasoning in the form of, say, in this case, debates, and then we’ll train the agents to do well in this space of reasoning, and then well pick out the answers at the very end. Once we’re satisfied that the reasoning all works out.

Because humans, sort of the way we talk about higher level concepts, especially abstract concepts, and say subtle moral concepts, is natural language, the most important domain here, in the human case, is natural language. What we’ve done so far, in all experiments for Debate, is an image space, because it’s easier. We’re trying now to move that work into natural language so that we can get more interesting settings.

Lucas: Right. In terms of natural language, do you just want to unpack a little bit about how that would be done at this point in natural language? It seems like our natural language technology is not at a point where I really see robust natural language debates.

Geoffrey: There’s sort of two ways to go. One way is human debates. You just replace the ML agents with human debaters and then a human judge, and you see whether the system works in kind of an all-human context. The other way is machine learning natural language is getting good enough to do interestingly well on sample question/answer datasets, and Debate is already interesting if you do a very small number of steps. In the general debate, you sort of imagine that you have this long transcript, dozens of statements long, with points and counterpoints and counterpoints, but if you already do just two steps, you might do question, answer, and then single counterargument. For some tasks, at least in theory, it already should be stronger than the baseline of just doing direct question/answer, because you have this ability to focus in on a counterargument that is important.

An example might be you see a question and an answer and then another debater just says, “Which part of the answer is problematic?” They might point to a word or to a small phrase, and say, “This is the point you should sort of focus in on.” If you learn how to self critique, then you can boost the performance by iterating once you know how to self critique.

The hope is that even if we can’t do general debates on the machine learning side just yet, we can do shallow debates, or some sort of simple first step in this direction, and then work up over time.

Lucas: This just seems to be a very fundamental part of AI alignment where you’re just breaking things down into very simple problems and then trying to succeed in those simple cases.

Geoffrey: That’s right.

Lucas: Just provide a little bit more illustration of debate as a general concept, and what it means in the context of AI alignment. I mean, there are open questions here, obviously, about the efficacy of debate, how debate exists as a tool within the space, so epistemological things that allow us to arrive at truth, and I guess, infer other people’s preferences. Sorry, again, in terms of reward learning, and AI alignment, and debate’s place in all of this, just contextualize, I guess, its sort of role in AI alignment, more broadly.

Geoffrey: It’s focusing, again, on the scalability aspect. One way to formulate that is we have this sort of notion of, either from a philosophy side, reflective equilibrium, or kind of from the AI alignment literature, coherent extrapolated volition, which is sort of what a human would do if we had thought very carefully for a very long time about a question, and sort of considered all the possible nuances, and counterarguments, and so on, and kind of reached the conclusion that is sort of free of inconsistencies.

Then, we’d like to take this kind of vague notion of, what happens when a human thinks for a very long time, and compress it into something we can use as an algorithm in a machine learning context. It’s also a definition. This vague notion of, let a human think for a very long time, that’s sort of a definition, but it’s kind of a strange one. A single human can’t think for a super long time. We don’t have access to that at all. You sort of need a definition that is more factored, where either a bunch of humans think for a long time, we sort of break up tasks, or you sort of consider only parts of the argument space at a time, or something.

You go from there to things that are both definitions of what it means to simulate thinking for long time and also algorithms. The first one of these is Amplification from Paul Christiano, and there you have some questions, and you can’t answer them directly, but you know how to break up a question into subquestions that are hopefully somewhat simpler, and then you sort of recursively answer those subquestions, possibly breaking them down further. You get this big tree of all possible questions that descend from your outer question. You just sort of imagine that you’re simulating over that whole tree, and you come up with an answer, and then that’s the final answer for your question.

Similarly, Debate is a variant of that, in the sense that you have this kind of tree of all possible arguments, and you’re going to try to simulate somehow what would happen if you considered all possible arguments, and picked out the most important ones, and summarized that into an answer for your question.

The broad goal here is to give a practical definition of what it means for people to take human input and push it to its inclusion, and then hopefully, we have a definition that also works as an algorithm, where we can do practical ML training, to train machine learning models.

Lucas: Right, so there’s, I guess, two thoughts that I sort of have here. The first one is that there is just sort of this fundamental question of what is AI alignment? It seems like in your writing, and in the writing of others at OpenAI, it’s to get AI to do what we want them to do. What we want them to do is … either it’s what we want them to do right now, or what we would want to do under reflective equilibrium, or at least we want to sort of get to reflective equilibrium. As you said, it seems like a way of doing that is compressing human thinking, or doing it much faster somehow.

Geoffrey: One way to say it is we want to do what humans want, even if we understood all of the consequences. It’s some kind of, Do what humans want, plus some side condition of: ‘imagine if we knew everything we needed to know to evaluate their question.”

Lucas: How does Debate scale to that level of compressing-

Geoffrey: One thing we should say is that everything here is sort of a limiting state or a goal, but not something we’re going to reach. It’s more important that we have closure under the relative things we might not have thought about. Here are some practical examples from kind of nearer-term misalignment. There’s an experiment in social science where they send out a bunch of resumes to job applications to classified ads, and the resumes were paired off into pairs that were identical except that the name of the person was either white sounding or black sounding, and the result was that you got significantly higher callback rates if the person sounded white, and even if they had an entirely identical resume to the person sounding black.

Here’s a situation where direct human judgment is bad in the way that we could clearly know. You could imagine trying to push that into the task by having an agent say, “Okay, here is a resume. We’d like you to judge it.” Either pointing explicitly to what they should judge, or pointing out, “You might be biased here. Try to ignore the name of the resume, and focus on this issue, like say their education or their experience.” You sort of hope that if you have a mechanism for surfacing concerns or surfacing counterarguments, you can get to a stronger version of human decision making. There’s no need to wait for some long term very strong agent case for this to be relevant, because we’re already pretty bad at making decisions in simple ways.

Then, broadly, I sort of have this sense that there’s not going to be magic in decision making. If I go to some very smart person, and they have a better idea for how to make a decision, or how to answer a question, I expect there to be some way they could explain their reasoning to me. I don’t expect I just have to take them on faith. We want to build methods that surface the reasons they might have to come to a conclusion.

Now, it may be very difficult for them to explain the process for how they came to those arguments. There’s some question about whether the arguments they’re going to make is the same as the reasons they’re giving the answers. Maybe they’re sort of rationalizing and so on. You’d hope that once you sort of surface all the arguments around the question that could be relevant, you get a better answer than if you just ask people directly.

Lucas: As we move out of debate in simple cases of image classifiers or experiments in similar environments, what does debate look like … I don’t really understand the ways in which the algorithms can be trained to elucidate all of these counterconcerns, and all of these different arguments, in order to help human beings arrive at the truth.

Geoffrey: One case we’re considering, especially on kind of the human experiment side, or doing debates with humans, is some sort of domain expert debate. The two debaters are maybe an expert in some field, and they have a bunch of knowledge, which is not accessible to the judge, which is maybe a reasonably competent human, but doesn’t know the details of some domain. For example, we did a debate where there were two people that knew computer science and quantum computing debating a question about quantum computing to a person who has some background, but nothing in that field.

The idea is you start out, there’s a question. Here, the question was, “Is the complexity class BQP equal to NP, or does it contain NP?” One point is that you don’t have to know what those terms mean for that to be a question you might want to answer, say in the course of some other goal. The first steps, things the debaters might say, is they might give short, intuitive definitions for these concepts and make their claims about what the answer is. You might say, “NP is the class of problems where we can verify solutions once we’ve found them, and BQP is the class of things that can run on a quantum computer.”

Now, you could have a debater that just straight up lies right away and says, “Well, actually NP is the class of things that can run on fast randomized computers.” That’s just wrong, and so what would happen then is that the counter debater would just immediately point to Wikipedia and say, “Well, that isn’t the definition of this class.” The judge can look that up, they can read the definition, and realize that one of the debaters has lied, and the debate is over.

You can’t immediately lie in kind of a simple way or you’ll be caught out too fast and lose the game. You have to sort of tell the truth, except maybe you kind of slightly veer towards lying. This is if you want to lie in your argument. At every step, if you’re an honest debater, you can try to pin the liar down to making sort of concrete statements. In this case, if say someone claims that quantum computers can solve all of NP, you might say, “Well, you must point me to an algorithm that does that.” The debater that’s trying to lie and say that quantum computers can solve all of NP might say, “Well, I don’t know what the algorithm is, but meh, maybe there’s an algorithm,” and then they’re probably going to lose, then.

Maybe they have to point to a specific algorithm. There is no algorithm, so they have to make one up. That will be a lie, but maybe it’s kind of a subtle complicated lie. Then, you could kind of dig into the details of that, and maybe you can reduce the fact that that algorithm is a lie to some kind of simple algebra, which either the human can check, maybe they can ask Mathematica or something. The idea is you take a complicated question that’s maybe very broad and covers a lot of the knowledge that the judge doesn’t know and you try to focus in closer and closer on details of arguments that the judge can check.

What the judge needs to be able to do is kind of follow along in the steps until they reach the end, and then there’s some ground fact that they can just look up or check and see who wins.

Lucas: I see. Yeah, that’s interesting. A brief passing thought is thinking about double cruxes and some tools and methods that CFAR employs, like how they might be interesting or used in debate. I think I also want to provide some more clarification here. Beyond debate being a truth-seeking process or a method by which we’re able to see which agent is being truthful, or which agent is lying, and again, there’s sort of this claim that you have in your paper that seems central to this, where you say, “In the debate game, it is harder to lie than to refute a lie.” This asymmetry in debate between the liar and the truth-seeker should hopefully, in general, bias towards people more easily seeing who is telling the truth.

Geoffrey: Yep.

Lucas: In terms of AI alignment again, in the examples that you’ve provided, it seems to help human beings arrive at truth for complex questions that are above their current level of understanding. How does this, again, relate directly to reward learning or value learning?

Geoffrey: Let’s assume that in this debate game, it is the case that it’s very hard to liar, so the winning move is to say the truth. What we want to do then is train kind of two systems. One system will be able to reproduce human judgment. That system would be able to look at the debate transcript and predict what the human would say is the correct winner of the debate. Once you get that system trained, so that’s sort of you’re learning not direct toward, but again, some notion of predicting how humans deal with reasoning. Once you learn that bit, then you can train an agent to play this game.

Then, we have a zero sum game, and then we can sort of apply any technique used to play a zero sum game, like Monte Carlo tree search in AlphaGo, or just straight up RL algorithms, as in some of OpenAI’s work. The hope is that you can train an agent to play this game very well, and therefore, it will be able to predict where counter-arguments exist that would help it win debates, and therefore, if it plays the game well, and the best way to play the game is to tell the truth, then you end up with a value aligned system. Those are large assumptions. You should be cautious if those are true.

Lucas: There’s also all these issues that we can get into about biases that humans have, and issues with debate. Whether or not you’re just going to be optimizing the agents for exploiting human biases and convincing humans. Definitely seems like, even just looking at how human beings value align to each other, debate is one thing in a large toolbox of things, and in AI alignment, it seems like potentially Debate will also be a thing in a large toolbox of things that we use. I’m not sure what your thoughts are about that.

Geoffrey: I could give them. I would say that there’s two ways of approaching AI safety and AI alignment. One way is to try to propose, say, methods that do a reasonably good job at solving a specific problem. For example, you might tackle reversibility, which means don’t take actions that can’t be undone, unless you need to. You could try to pick that problem out and solve it, and then imagine how we’re going to fit this together into a whole picture later.

The other way to do it is try to propose algorithms which have at least some potential to solve the whole problem. Usually, they won’t, and then you should use them as a frame to try to think about how different pieces might be necessary to add on.

For example, in debate, the biggest thing in there is that it might be the case that you train a debate agent that gets very good at this task, the task is rich enough that it just learns a whole bunch of things about the world, and about how to think about the world, and maybe it ends up having separate goals, or it’s certainly not clearly aligned because the goal is to win the game. Maybe winning the game is not exactly aligned.

You’d like to know sort of not only what it’s saying, but why it’s saying things. You could imagine sort of adding interpret ability techniques to this, which would say, maybe Alice and Bob are debating. Alice says something and Bob says, “Well, Alice only said that because Alice is thinking some malicious fact.” If we add solid interpret ability techniques, we could point into Alice’s thoughts at that fact, and pull it out, and service that. Then, you could imagine sort of a strengthened version of a debate where you could not only argue about object level things, like using language, but about thoughts of the other agent, and talking about motivation.

It is a goal here in formulating something like debate or amplification, to propose a complete algorithm that would solve the whole problem. Often, not to get to that point, but we have now a frame where we can think about the whole picture in the context of this algorithm, and then fix it as required going forwards.

I think, in the end, I do view debate, if it succeeds, as potentially the top level frame, which doesn’t mean it’s the most important thing. It’s not a question of importance. More of just what is the underlying ground task that we want to solve? If we’re training agents to either play video games or do question/answers, here the proposal is train agents to engage in these debates and then figure out what parts of AI safety and AI alignment that doesn’t solve and add those on in that frame.

Lucas: You’re trying to achieve human level judgment, ultimately, through a judge?

Geoffrey: The assumption in this debate game is that it’s easier to be a judge than a debater. If it is the case, though, that you need the judge to get to human level before you can train a debater, then you have a problematic bootstrapping issue where, first you must solve value alignment for training the judge. Only then do you have value alignment for training the debater. This is one of the concerns I have. I think the concern sort of applies to some of other scalability techniques. I would say this is sort of unresolved. The hope would be that it’s not actually sort of human level difficult to be a judge on a lot of tasks. It’s sort of easier to check consistency of, say, one debate statement to the next, than it is to do long, reasoning processes. There’s a concern there, which I think is pretty important, and I think we don’t quite know how it plays out.

Lucas: The view is that we can assume, or take the human being to be the thing that is already value aligned, and the process by which … and it’s important, I think, to highlight the second part that you say. You say that you’re pointing out considerations, or whichever debater is saying that which is most true and useful. The useful part, I think, shouldn’t be glossed over, because you’re not just optimizing debaters to arrive at true statements. The useful part smuggles in a lot issues with normative things in ethics and metaethics.

Geoffrey: Let’s talk about the useful part.

Lucas: Sure.

Geoffrey: Say we just ask the question of debaters, “What should we do? What’s the next step that I, as an individual person, or my company, or the whole world should take in order to optimize total utility?” The notion of useful, then, is just what is the right action to take? Then, you would expect a debate that is good to have to get into the details of why actions are good, and so that debate would be about ethics, and metaethics, and strategy, and so on. It would pull in all of that content and sort of have to discuss it.

There’s a large sea of content you have to pull in. It’s roughly kind of all of human knowledge.

Lucas: Right, right, but isn’t there this gap between training agents to say what is good and useful and for agents to do what is good and useful, or true and useful?

Geoffrey: The way in which there’s a gap is this interpretability concern. You’re getting at a different gap, which I think is actually not there. I like giving game analogies, so let me give a Go analogy. You could imagine that there’s two goals in playing the game of Go. One goal is to find the best moves. This is a collaborative process where all of humanity, all of sort of Go humanity, say, collaborates to learn, and explore, and work together to find the best moves in Go, defined by, what are the moves that most win this game? That’s a non-zero sum game, where we’re sort of all working together. Two people competing on the other side of the Go board are working together to get at what the best moves are, but within a game, it’s a zero sum game.

You sit down, and you have two players, two people playing a game of Go, one of them’s going to win, zero sum. The fact that that game is zero sum doesn’t mean that we’re not learning some broad thing about the world, if you’ll zoom out a bit and look at the whole process.

We’re training agents to win this debate game to give the best arguments, but the thing we want to zoom out and get is the best answers. The best answers that are consistent with all the reasoning that we can bring into this task. There’s huge questions to be answered about whether the system actually works. I think there’s an intuitive notion of, say, reflective equilibrium, or coherent extrapolated volition, and whether debate achieves that is a complicated question that’s empirical, and theoretical, and we have to deal with, but I don’t think there’s quite the gap you’re getting at, but I may not have quite voiced your thoughts correctly.

Lucas: It would be helpful if you could unpack how the alignment that is gained through this process is transferred to new contexts. If I take an agent trained to win the Debate game outside of that context.

Geoffrey: You don’t. We don’t take it out of the context.

Lucas: Okay, so maybe that’s why I’m getting confused.

Geoffrey: Ah. I see. Okay, this [inaudible 00:26:09]. We train agents to play this debate game. To use them, we also have them play the debate game. By training time, we give them kind of a rich space of questions to think about, or concerns to answer, like a lot of discussion. Then, we want to go and answer a question in the world about what we should do, what the answer to some scientific question is, is this theorem true, or this conjecture true? We state that as a question, and we have them debate, and then whoever wins, they gave the right answer.

There’s a couple of important things you can add to that. I’ll give like three levels of kind of more detail you can go. One thing is the agents are trained to look at state in the debate game, which could be I’ve just given the question, or there’s a question and there’s a partial transcript, and they’re trained to say the next thing, to make the next move in the game. The first thing you can do is you have a question that you want to answer, say, what should the world do, or what should I do as a person? You just say, “Well, what’s the first move you’d make?” The first move they’d make is to give an answer, and then you just stop there, and you’re done, and you just trust that answer is correct. That’s not the strongest thing you could do.

The next thing you can do is you’ve trained this model of a judge that knows how to predict human judgment. You could have them, from the start of this game, play a whole bunch of games, play 1,000 games of debate, and from that learn with more accuracy what the answer might be. Similar to how you’d, say if you’re playing a game of Go, if you want to know the best move, you would say, “Well, let’s play 1,000 games of Go from this state. We’ll get more evidence and we’ll know what the best move is.”

The most interesting thing you can do, though, is you yourself can act as a judge in this game to sort of learn more about what the relevant issues are. Say there’s a question that you care a lot about. Hopefully, “What should the world do,” is a question you care a lot about. You want to not only see what the answer is, but why. You could act as a judge in this game, and you could, say, play a few debates, or explore part of this debate tree, the tree of all possible debates, and you could do the judgment yourself. There, the end answer will still be who you believe is the right answer, but the task of getting to that answer is still playing this game.

The bottom line here is, at test time, we are also going to debate.

Lucas: Yeah, right. Human beings are going to be participating in this debate process, but does or does not debate translate into systems which are autonomously deciding what we ought to do, given that we assume that their models of human judgment on debate are at human level or above?

Geoffrey: Yeah, so if you turn off the human in the loop part, then you get an autonomous agent. If the question is, “What should the next action be in, say, an environment?” And you don’t have humans in the loop at test time, then you can get an autonomous agent. You just sort of repeatedly simulate debating the question of what to do next. Again, you can cut this process short. Because the agents are trained to predict moves in debate, you can stop them after they’ve predicted the first move, which is what the answer is, and then just take that answer directly.

If you wanted the maximally efficient autonomous agent, that’s the case you would do. At OpenAI, my view, our goal is I don’t want to take AGI and immediately deploy it in the most fast twitch tasks. Something like self-driving a car. If we get to human level intelligence, I’m not going to just replace all the self-driving cars with AGI and let them do their thing. We want to use this for the paths where we need very strong capabilities. Ideally, those tasks are slower and more deliberative, so we can afford to, say, take a minute to interact with the system, or take a minute to have the system engage in its own internal debates to get more confidence in these answers.

The model here is basically the Oracle AI model, that rather than the autonomous agent operating at an NDP model.

Lucas: I think that this is a very important part to unpack a bit more. This distinction here that it’s more like an oracle and less like an autonomous agent going around optimizing everything. What does a world look like right before, during, after AGI given debate?

Geoffrey: The way I think about this is that, an oracle here is a question/answer system of some complexity. You asked it questions, possibly with a bunch of context attached, and it gives you answers. You can reduce pretty much anything to an oracle, if oracle is sort of general enough. If your goal is to take actions in an environment, you can ask the oracle, “What’s the best action to take, and the next step?” And just iteratively ask that oracle over and over again as you take the steps.

Lucas: Or you could generate the debate, right? Over the future steps?

Geoffrey: The most direct way to do an NDP with Debate is to engage in a debate at every step, restart the debate process, showing all the history that’s happened so far, and say, the question at hand, that we’re debating, is what’s the best action to take next? I think I’m relatively optimistic that when we make AGI, for a while after we make it, we will be using it in ways that aren’t extremely fine grain NDP-like in the sense of we’re going to take a million actions in a row, and they’re all actions that hit the environment.

We’d mainly use this full direct reduction. There’s more practical reductions for other questions. I’ll give an example. Say you want to write the best book on, say, metaethics, and you’d like debaters to produce this books. Let’s say that debaters are optimal agents so they know how to do debates on any subject. Even if the book is 1,000 pages long, or say it’s a couple hundred pages long, that’s a more reasonable book, you could do it in a single debate as follows. Ask the agents to write the book. Each agent writes its own book, say, and you ask them to debate which book is better, and that debate all needs to point at small parts of the book.

One of the debaters writes a 300 page book and buried in the middle of it is a subtle argument, which is malicious and wrong. The other debater need only point directly at the small part of the book that’s problematic and say, “Well, this book is terrible because of the following malicious argument, and my book is clearly better.” The way this works is, if you are able to point to problematic parts of books in a debate, and therefore win, the best first move in the debate is to write the best book, so you can do it in one step, where you produce this large object with a single debate, or a single debate game.

The reason I mention this is that’s a little better in terms of practicality, then, writing the book. If the book is like 100,000 words, you wouldn’t want to have a debate about each word, one after another. That’s sort of a silly, very expensive process.

Lucas: Right, so just to back up here, and to provide a little bit more framing, there’s this beginning at which we can see we’re just at a very low level trying to optimize agents for debate, and there’s going to be an asymmetry here that we predict, that it should, in general, usually be easier to tell who’s telling the truth over who’s not, because it’s easier to tell the truth than to lie, and lie in convincing ways. Scaling from there, it seems that what we ultimately really want is to then be able to train a judge, right?

Geoffrey: The goal is to train … You need both.

Lucas: Right. You need both to scale up together.

Geoffrey: Yep.

Lucas: Through doing so, we will have oracles that will be able to assist in decision making, because what we’re doing is amplifying human explicit deliberative processes. Again, this tying into coherent extrapolated volition and reflective equilibrium. What we want is to be able to amplify human decision making, which I guess is, in a broad sense, value alignment.

Geoffrey: Right.

Lucas: Are there any other technical motivations behind debate that you would like to unpack for why within the set of all these different AI alignment methodologies, why debate is so important?

Geoffrey: Sure. I think maybe the right way to talk about that is to contrast amplification and debate. I got to Debate by thinking a bunch about Amplification, talking a bunch with Paul Christiano. To summarize it for listeners again, Amplification is take questions, decompose them into subquestions that are hopefully easier to answer, repeat that decomposition until we get to things we can check, and use that whole process to answer complicated questions.

The stock amplification, the human is doing the task of decomposing the questions. The human decomposition has to be complete. For example, you might have a question, going back to this resume example, we’re having this debate about whether someone whose name sounds white or black is a good candidate. For whatever reason, no one here wants to be racist, but they’re all just completely oblivious to the idea they could be racist. They’re going through and discussing whether the candidate is applicable, they’re kind of breaking these questions down. Just subtly at every point, racism is kind of inching its way into the answers, and they just never thought to kind of investigate.

If at some point they had asked, “Well, am I deciding this because the person has a white sounding name?” They would have realized their mistake, but they never think of that. In debate, you are relying on the agents to be able to do the decomposition, so the agents can, even if it’s hard for a human to do it, point out, “Maybe you’re being racist here,” and sort of get the correction. The advantage of debate there is you get some help doing this combination of tasks for you.

The other thing that happened, frankly, which is one of the reasons I thought of debate, was AlphaGo. In thinking about amplification, I’ve been sort of concerned. “Is this process going to be scalable? Are we going to lose a bunch of efficiency in doing this complicated decomposition process?” I was sort of concerned that we would lose a bunch of efficiency and therefore be not competitive with unsafe techniques to getting to AGI.

Then, AlphaGo came out, and AlphaGo got very strong performance, and it did it by doing an explicit tree search. As part of AlphaGo, it’s doing this kind of deliberative process, and that was not only important for performance at test time, but was very important for getting the training to work. What happens is, in AlphaGo, at training time, it’s doing a bunch of tree search through the game of Go in order to improve the training signal, and then it’s training on that improved signal. That was one thing kind of sitting in the back of my mind.

I was kind of thinking through, then, the following way of thinking about alignment. At the beginning, we’re just training on direct answers. We have these questions we want to answer, an agent answers the questions, and we judge whether the answers are good. You sort of need some extra piece there, because maybe it’s hard to understand the answers. Then, you imagine training an explanation module that tries to explain the answers in a way that humans can understand. Then, those explanations might be kind of hard to understand, too, so maybe you need an explanation explanation module.

For a long time, it felt like that was just sort of ridiculous epicycles, adding more and more complexity. There was no clear end to that process, and it felt like it was going to be very inefficient. When AlphaGo came out, that kind of snapped into focus, and it was like, “Oh. If I train the explanation module to find flaws, and I train the explanation explanation module to find flaws in flaws, then that becomes a zero-sum game. If it turns out that ML is very good at solving zero-sum games, and zero-sum games were a powerful route to drawing performance, then we should take advantage of this in safety.” Poof. We have, in this answer, explanation, explanation, explanation route, that gives you the zero-sum game of Debate.

That’s roughly sort of how I got there. It was a combination of thinking about Amplification and this kick from AlphaGo, that zero-sum games and search are powerful.

Lucas: In terms of the relationship between debate and amplification, can you provide a bit more clarification on the differences, fundamentally, between the process of debate and amplification? In terms of amplification, there’s a decomposition process, breaking problems down into subproblems, eventually trying to get the broken down problems into human level problems. The problem has essentially doubled itself many items over at this point, right? It seems like there’s going to be a lot of questions for human beings to answer. I don’t know how interrelated debate is to decompositional argumentative process.

Geoffrey: They’re very similar. Both Amplification and Debate operate on some large tree. In amplification, it’s the tree of all decomposed questions. Let’s be concrete and say the top level question in amplification is, “What should we do?” In debate, again, the question at the top level is, “What should we do?” In amplification, we take this question. It’s a very broad open-ended question, and we kind of break it down more and more and more. You sort of imagine this expanded tree coming out from that question. Humans are constructing this tree, but of course, the tree is exponentially large, so we can only ever talk about a small part of it. Our hope is that the agents learn to generalize across the tree, so they’re learning the whole structure of the tree, even given finite data.

In the debate case, similarly, you have top level question of, “What should we do,” or some other question, and you have the tree of all possible debates. Imagine every move in this game is, say, saying a sentence, and at every point, you have maybe an exponentially large number of sentences, so the branching factor, now in the tree, is very large. The goal in debate is kind of see this whole tree.

Now, here is the correspondence. In amplification, the human does the decomposition, but I could instead have another agent do the decomposition. I could say I have a question, and instead of a human saying, “Well, this question breaks down into subquestions X, Y, and Z,” I could have a debater saying, “The subquestion that is most likely to falsify this answer is Y.” It could’ve picked at any other question, but it picked Y. You could imagine that if you replace a human doing the decomposition with another agent in debate pointing at the flaws in the arguments, debate would kind of pick out a path through this tree. A single debate transcript, in some sense, corresponds to a single path through the tree of amplification.

Lucas: Does the single path through the tree of amplification elucidate the truth?

Geoffrey: Yes. The reason it does is it’s not an arbitrarily chosen path. We’re sort of choosing the path that is the most problematic for the arguments.

Lucas: In this exponential tree search, there’s heuristics and things which are being applied in general to the tree search in order to collapse onto this one branch or series?

Geoffrey: Let’s say, in amplification, we have a question. Our decomposition is, “Well, this decomposes into X, Y, and Z,” and then we recursively call the agent, and it says, “The answers are AX, AY, AZ, for these questions.” Now, if I trusted those subanswers, I could do the reconstruction of the answers to the original question. If I don’t trust the subanswers, I might say, “Well, which subanswer is most likely to be false?” The correspondence with debate is the other debating agent would point to which one of the subanswers is probably false, and then you recourse down just that tree.

Geoffrey: Here’s the way it works. You can trust amplification if all of the subanswers are correct. If at least one of them is false, a strong debater can find which of those subanswers is false and recourse down to that. That’s why, if it turns out that a single path chosen by strong debaters never finds a false subanswer, then you will have established that the whole tree is true.

Lucas: I see. I think this picture of, “What should we do,” at the top, and then the trees would fall from that in terms of one version of amplification and one version of debate is pretty helpful here.

Geoffrey: The most important thing here is that in Amplification, the branching factor is small. I’m a human, I see a question, I sort of decompose that into a relatively small number of questions. Say, two to 10. In debate, the branching factor is very large. A debate agent could make any statement as the next countermove. What this means is that if it works for strong play, because the branching factor is so large, a shallow debate tree is more powerful than a shallow amplification tree, because you can cover more terrain with a couple steps of debate that are a proxy for a large tree.

The problem with that statement, maybe the biggest advantage to amplification, is you have to really be assuming that you know the agents are strong enough to cover the tree. Say I’m debating whether there’s a black swan, and there in fact is a black swan, but there’s only one of them, and there’s only one in the entire world, and the correct countermove would be pointing at that black swan, finding it out of the entire world. A debater who wanted to say, “There are no black swans,” to just claim, “There are no black swans. Trust me. The other agent would be able to point to one if it exists.” Maybe it’s the case that a black swan does exist, but the other agent is just too weak to point at the black swan, and so that debate doesn’t work.

This argument that shallow debates are powerful leans a whole lot on debaters being very strong, and debaters in practice will not be infinitely strong, so there’s a bunch of subtlety there that we’re going to have to wrestle.

Lucas: It would also be, I think, very helpful if you could let us know how you optimize for strong debaters, and how is amplification possible here if human beings are the ones who are pointing out the simplifications of the questions?

Geoffrey: Whichever one we choose, whether it’s amplification, debate, or some entirely different scheme, if it depends on humans in one of these elaborate ways, we need to do a bunch of work to know that humans are going to be able to do this. At amplification, you would expect to have to train people to think about what kinds of decompositions are the correct ones. My sort of bias is that because debate gives the humans more help in pointing out the counterarguments, it may be cognitively kinder to the humans, and therefore, that could make it a better scheme. That’s one of the advantages of debate.

The technical analogy there is a shallow debate argument. The human side is, if someone is pointing out