FLI Podcast: AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy

Every January, we like to look back over the past 12 months at the progress that’s been made in the world of artificial intelligence. Welcome to our annual “AI breakthroughs” podcast, 2018 edition.

Ariel was joined for this retrospective by researchers Roman Yampolskiy and David Krueger. Roman is an AI safety researcher and professor at the University of Louisville. He also recently published the book Artificial Intelligence Safety & Security. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He's also worked with safety teams at the Future of Humanity Institute and DeepMind and has volunteered with 80,000 Hours.

Roman and David shared their lists of 2018’s most promising AI advances, as well as their thoughts on some major ethical questions and safety concerns. They also discussed media coverage of AI research, why talking about “breakthroughs” can be misleading, and why there may have been more progress in the past year than it seems.

Topics discussed in this podcast include:

  • DeepMind progress, as seen with AlphaStar and AlphaFold
  • Manual dexterity in robots, especially QT-Opt and Dactyl
  • Advances in creativity, as with Generative Adversarial Networks (GANs)
  • Feature-wise transformations
  • Continuing concerns about DeepFakes
  • Scaling up AI systems
  • Neuroevolution
  • Google Duplex, the AI assistant that sounds human on the phone
  • The General Data Protection Regulation (GDPR) and AI policy more broadly

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi everyone, welcome to the FLI podcast. I’m your host, Ariel Conn. For those of you who are new to the podcast, at the end of each month, I bring together two experts for an in-depth discussion on some topic related to the fields that we at the Future of Life Institute are concerned about, namely artificial intelligence, biotechnology, climate change, and nuclear weapons.

The last couple of years for our January podcast, I’ve brought on two AI researchers to talk about what the biggest AI breakthroughs were in the previous year, and this January is no different. To discuss the major developments we saw in AI in 2018, I’m pleased to have Roman Yampolskiy and David Krueger joining us today.

Roman is an AI safety researcher and professor at the University of Louisville; his new book, Artificial Intelligence Safety and Security, is now available on Amazon, and we'll have links to it on the FLI page for this podcast. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He's also worked with teams at the Future of Humanity Institute and DeepMind, and he's volunteered with 80,000 Hours to help people find ways to contribute to the reduction of existential risks from AI. So Roman and David, thank you so much for joining us.

David: Yeah, thanks for having me.

Roman: Thanks very much.

Ariel: So I think that one thing that stood out to me in 2018 was that the AI breakthroughs seemed less about surprising breakthroughs that really shook the AI community, as we've seen in the last few years, and instead were more about continuing progress. And we also didn't see quite as many major breakthroughs hitting the mainstream press. There were a couple of things that made big news splashes, like Google Duplex, which is a new AI assistant program that sounded incredibly human on phone calls it made during the demos. And there was also an uptick in government policy and ethics efforts, especially with the General Data Protection Regulation, also known as the GDPR, which went into effect in Europe this past year.

Now I’m going to want to come back to Google and policy and ethics later in this podcast, but I want to start by looking at this from the research and development side of things. So my very first question for both of you is: do you agree that 2018 was more about impressive progress, and less about major breakthroughs? Or were there breakthroughs that really were important to the AI community that just didn’t make it into the mainstream press?

David: Broadly speaking I think I agree, although I have a few caveats for that. One is just that it’s a little bit hard to recognize always what is a breakthrough, and a lot of the things in the past that have had really big impacts didn’t really seem like some amazing new paradigm shift—it was sort of a small tweak that then made a lot of things work a lot better. And the other caveat is that there are a few works that I think are pretty interesting and worth mentioning, and the field is so large at this point that it’s a little bit hard to know if there aren’t things that are being overlooked.

Roman: So I’ll agree with you, but I think the pattern is more important than any specific breakthrough. We kind of got used to getting something really impressive every month, so relatively it doesn’t sound as good, all the AlphaStar, AlphaFold, AlphaZero happening almost every month. And it used to be it took 10 years to see something like that.

It’s likely it will happen even more frequently. We’ll conquer a new domain once a week or something. I think that’s the main pattern we have to recognize and discuss. There are significant accomplishments in terms of teaching AI to work in completely novel domains. I mean now we can predict protein folding, now we can have multi-player games conquered. That never happened before so frequently. Chess was impressive because it took like 30 years to get there.

David: Yeah, so I think a lot of people were kind of expecting or at least hoping for StarCraft or Dota to be solved—to see, like we did with AlphaGo, AI systems that are beating the top players. And I would say that it’s actually been a little bit of a let down for people who are optimistic about that, because so far the progress has been kind of unconvincing.

So AlphaStar, which was a really recent result from last week, for instance: I've seen criticism of it that I think is valid, that it was making more actions than a human could within a very short interval of time. So they carefully controlled the actions-per-minute that AlphaStar was allowed to take, but they didn't prevent it from doing really short bursts of actions that really helped its micro-game, and that means that it can win without really being strategically superior to its human opponents. And I think the Dota results that OpenAI has had were also criticized as being sort of not the hardest version of the problem, and the AI is still sort of relying on some crutches.

Ariel: So before we get too far into that debate, can we take a quick step back and explain what both of those are?

David: So these are both real-time strategy games that are, I think, actually the two most popular real-time strategy games in the world that people play professionally, and make money playing. I guess that's all there is to say about them.

Ariel: So a quick question that I had too about your description then, when you’re talking about AlphaStar and you were saying it was just making more moves than a person can realistically make. Is that it—it wasn’t doing anything else special?

David: I haven’t watched the games, and I don’t play StarCraft, so I can’t say that it wasn’t doing anything special. I’m basing this basically on reading articles and reading the opinions of people who are avid StarCraft players, and I think the general opinion seems to be that it is more sophisticated than what we’ve seen before, but the reason that it was able to win these games was not because it was out-thinking humans, it’s because it was out-clicking, basically, in a way that just isn’t humanly possible.

Roman: I would agree with this analysis, but I don’t see it as a bug, I see it as a feature. That just shows another way machines can be superior to people. Even if they are not necessarily smarter, they can still produce superior performance, and that’s what we really care about. Right? We found a different way, a non-human approach to solving this problem. That’s impressive.

David: Well, I mean, I think if you have an agent that can just click as fast as it wants, then you can already win at StarCraft, before this work. There needs to be something that makes it sort of a fair fight in some sense.

Roman: Right, but think what you're suggesting: We have to handicap machines to make them even remotely comparable to people. We're talking about getting to superintelligent performance. You can get there by many ways. You can think faster, you can have better memory, you can have better reaction time—as long as you're winning in whatever domain we're interested in, you have superhuman performance.

David: So maybe another way of putting this would be if they actually made a robot play StarCraft and made it use the same interface that humans do, such as a screen and mouse, there's no way that it could have beaten the human players. And so by giving it direct access to the game controls, it's sort of not solving the same problem that a human is when they play this game.

Roman: I feel what you’re saying, I just feel that it is solving it in a different way, and we have pro-human bias saying, well that’s not how you play this game, you have an advantage. Human players usually rely on superior strategy, not just faster movements that may take advantage of it for a few nanoseconds, a couple of seconds. But it’s not a long-term sustainable pattern.

One of the research projects I worked on was this idea of artificial stupidity, we called it—kind of limiting machines to human-level capacity. And I think that's what we're talking about here. Nobody would suggest limiting a chess program to just human-level memory, or human memorization of opening moves. But we don't see it as a limitation. Machines have an option of beating us in ways humans can't. That's the whole point, and that's why it's interesting, that's why we have to anticipate such problems. That's where most of the safety and security issues will show up.

Ariel: So I guess, I think, Roman, your point earlier was sort of interesting that we've gotten so used to breakthroughs that stuff that maybe a couple of years ago would have seemed like a huge breakthrough is just run-of-the-mill progress. I guess you're saying that that's what this is sort of falling into: relatively recently this would have been a huge deal, but because we've seen so much other progress and so many breakthroughs, this is now interesting and we're excited about it—but it's not reaching that level of, oh my god, this is amazing! Is that fair to say?

Roman: Exactly! We get disappointed if the system loses one game. It used to be we were excited if it could match amateur players. Now it's, oh, we played 100 games and you lost one? This is just not machine-level performance, you disappoint us.

Ariel: David, do you agree with that assessment?

David: I would say mostly no. I guess, I think what really impressed me with AlphaGo and AlphaZero was that it was solving something that had been established as a really grand challenge for AI. And then in the case of AlphaZero, I think the technique that they actually used to solve it was really novel and interesting from a research point of view, and they went on to show that this same technique can solve a bunch of other board games as well.

And my impression from what I've seen about how they did AlphaStar and AlphaFold is that there were some interesting improvements and the performance is impressive, but I think it's not quite at the point where you can say we've solved it, we're better than everybody—or, in the case of protein folding, at the point where there's not a bunch more room for improvement that has practical significance. And it's also—I don't see any really clear general algorithmic insights about AI coming out of these works yet. I think that's partially because they haven't been published yet, but from what I have heard about the details of how they work, I think it's less of a breakthrough on the algorithm side than AlphaZero was.

Ariel: So you’ve mentioned AlphaFold. Can you explain what that is real quick?

David: This is the protein folding project that DeepMind did, and I think there's a competition called C-A-S-P, or CASP, that happens every two years, and they sort of dominated that competition this last year, doing what was described as two CASPs in one—so basically doubling the expected rate of improvement that people have seen historically at these tasks, or at least at the one that is the most significant benchmark.

Ariel: I find the idea of the protein folding thing interesting because that’s something that’s actually relevant to scientific advancement and health as opposed to just being able to play a game. Are we seeing actual applications for this yet?

David: I don’t know about that, but I agree with you that that is a huge difference that makes it a lot more exciting than some of the previous examples. I guess one thing that I want to say about that, though, is that it does look a little bit more to me like continuation of progress that was already happening in the communities. It’s definitely a big step up, but I think a lot of the things that they did there could have really happened over the next few years anyways, even without DeepMind being there. So, one of the articles I read put it this way: If this wasn’t done by DeepMind, if this was just some academic group, would this have been reported in the media? I think the answer is sort of like a clear no, and that says something about the priorities of our reporting and media as well as the significance of the results, but I think that just gives some context.

Roman: I’ll agree with David—the media is terrible in terms of what they report on, we can all agree on that. I think it was quite a breakthrough, I mean, to say that they not just beat the competition, but to actually kind of doubled performance improvement. That’s incredible. And I think anyone who got to that point would not be denied publication in a top journal; It would be considered very important in that domain. I think it’s one of the most important problems in medical research. If you can accurately predict this, possibilities are really endless in terms of synthetic biology, in terms of curing diseases.

So this is huge in terms of the impact of being able to do it. As far as how applicable it is to other areas, is it a great game-changer for AI research? Look at how these things can be combined: the ability to perform in the real-life environments of those multiplayer games, and the ability to do this. You can do things in the real world you couldn't do before, both in terms of strategy games, which are basically simulations for economic competition, for wars, and for quite a few other applications where the impact would be huge.

So all of it is very interesting. It's easy to say, "Well, if they didn't do it, somebody else maybe would have done it in a couple of years." But that's almost always true for all inventions. If you look at the history of inventions, things like, I don't know, the telephone have been invented at the same time by two or three people; radio, two or three people. It's just that once science has enough of the ingredient technologies, yeah, somebody's going to do it. But still, we give credit to whoever got there first.

Ariel: So I think that’s actually a really interesting point, because I think for the last few years we have seen sort of these technological advances but I guess we also want to be considering the advances that are going to have a major impact on humanity even if it’s not quite as technologically new.

David: Yeah, absolutely. I think with the framing in terms of breakthroughs, it's a little bit unclear what we're talking about when we talk about AI breakthroughs, and I think a lot of people in the field of AI kind of don't like how much people talk about it in terms of breakthroughs, because a lot of the progress is gradual and builds on previous work, and it's not like there was some sudden insight that somebody had that just changed everything—although that does happen in some ways.

And I think you can think of the breakthroughs both in terms of like what is the impact—is this suddenly going to have a lot of potential to change the world? You can also think of it, though, from the perspective of researchers as like, is this really different from the kind of ideas and techniques we’ve seen or seen working before? I guess I’m more thinking about the second right now in terms of breakthroughs representing really radical new ideas in research.

Ariel: Okay, well I will take responsibility for being one of the media people who didn't do a good job with presenting AI breakthroughs. But both with this podcast and probably moving forward, I think that is actually a really important thing for us to be doing: looking both at the technological progress and newness of something and also at the impact it could have on either society or future research.

So with that in mind, you guys also have a good list of other things that did happen this year, so I want to start moving into some of that as well. So next on your list is manual dexterity in robots. What did you guys see happening there?

David: So this is something that’s definitely not my area of expertise, so I can’t really comment too much on it. But there are two papers that I think are significant and potentially representing something like a breakthrough in this application. In general robotics is really difficult, and machine learning for robotics is still, I think, sort of a niche thing, like most robotics is using more classical planning algorithms, and hasn’t really taken advantage of the new wave of deep learning and everything.

So there’s two works, one is QT-Opt, and the other one is Dactyl, and these are both by people from the Berkeley OpenAI crowd. And these both are showing kind of impressive results in terms of manual dexterity in robots. So there’s one that does a really good job at grasping, which is one of the basic aspects of being able to act in the real world. And then there’s another one that was sort of just manipulating something like a cube with different colored faces on it—that one’s Dactyl; the grasping one is QT-Opt.

And I think this is something that was paid less attention to in the media, because it’s been more of a story of kind of gradual progress I think. But my friend who follows this deep reinforcement learning stuff more told me that QT-Opt is the first convincing demonstration of deep reinforcement learning in the real world, as opposed to all these things we’ve seen in games. The real world is much more complicated and there’s all sorts of challenges with the noise of the environment dynamics and contact forces and stuff like this that have been really a challenge for doing things in the real world. And then there’s also the limited sample complexity where when you play a game you can sort of interact with the game as much as you want and play the game over and over again, whereas in the real world you can only move your robot so fast and you have to worry about breaking it, so that means in the end you can collect a lot less data, which makes it harder to learn things.

Roman: Just to kind of explain maybe what they did. Hardware is expensive and slow: it's very difficult to work with, and things don't go well in real life. It's a lot easier to create simulations in virtual worlds, train your robot in there, and then just transfer knowledge into a real robot in the physical world. And that's exactly what they did, training that virtual hand to manipulate objects, and they could run through thousands, millions of situations—something you cannot do with an actual, physical robot at that scale. So I think that's a very interesting approach, and it's why lots of people try doing things in virtual environments. Some of the early AGI projects concentrated on virtual worlds as the domain of learning. So that makes a lot of sense.

David: Yeah, so this was for the Dactyl project, which was OpenAI. And that was really impressive I think, because people have been doing this sim-to-real thing—where you train in simulation and then try and transfer it to the real world—with some success for like a year or two, but this one I think was really kind of impressive in that sense, because they didn’t actually train it in the real world at all, and what they had learned managed to transfer to the real world.
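To make the sim-to-real recipe concrete, here is a minimal toy sketch of the pattern described above: train a policy entirely in simulation, across many randomized simulated episodes, then deploy the frozen policy on dynamics it never saw during training. Everything here (the environment, the proportional-control policy, the parameter ranges) is a made-up illustration, not OpenAI's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sim(rng):
    """A toy 'simulator': reach a target despite randomized dynamics.

    Each episode samples a different actuator gain and sensor noise, so the
    learned policy cannot overfit any single simulator setting.
    """
    gain = rng.uniform(0.5, 1.5)
    noise = rng.uniform(0.0, 0.05)
    def step(state, action):
        return state + gain * action + rng.normal(0.0, noise)
    return step

def rollout(step, k, horizon=20):
    """Score the proportional policy u = k * (target - x); higher is better."""
    x = 0.0
    for _ in range(horizon):
        x = step(x, k * (1.0 - x))
    return -abs(1.0 - x)

# "Train" entirely in simulation: pick the policy gain that works best on average
# across many randomized simulated episodes.
candidates = np.linspace(0.1, 2.0, 40)
scores = [np.mean([rollout(make_sim(rng), k) for _ in range(200)]) for k in candidates]
best_k = candidates[int(np.argmax(scores))]

# "Deploy" on dynamics never seen in training (standing in for the real robot).
real_step = lambda x, u: x + 0.9 * u + rng.normal(0.0, 0.02)
print("chosen policy gain:", round(float(best_k), 2))
print("real-world error:  ", round(-rollout(real_step, best_k), 4))
```

The randomization over gain and noise is a stand-in for the broader idea: because the policy has to work across many slightly different simulators, it has a better chance of also working on the one physical robot it was never trained on.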

Ariel: Excellent. I’m going to keep going through your list. One thing that you both mentioned are GANs. So very quickly, if one of you, or both of you, could explain what a GAN is and what that stands for, and then we’ll get into what happened last year with those.

Roman: Sure, so this is a somewhat new way of generating creative visuals and audio. You have two neural networks competing: one is kind of creating fakes, and the other one is judging them, and you get to a point where they're kind of 50/50—you can't tell if it's fake or real anymore. And it's a great way to produce artificial faces, cars, whatever. Any type of input you can provide to the networks, they quickly learn to extract the essence of that image or audio and generate artificial data sets full of such images.

And there’s really exciting work on being able to extract properties from those, different styles. So if we talk about faces, for example: there could be a style for hair, a style for skin color, a style for age, and now it’s possible to manipulate them. So I can tell you things like, “Okay, Photoshop, I need a picture of a female, 20 years old, blonde, with glasses,” and it would generate a completely realistic face based on those properties. And we’re starting to see it show up not just in images but transferred to video, to generating whole virtual worlds. It’s probably the closest thing we ever had computers get to creativity: actually kind of daydreaming and coming up with novel outputs.

David: Yeah, I just want to say a little bit about the history of GAN research. So the first work on GANs was actually back four or five years ago, in 2014, and it didn't make a huge splash at the time, but maybe a year or two after that it really started to take off. And research in GANs over the last few years has just been incredibly fast-paced, and there have been hundreds of papers submitted and published at the big conferences every year.

If you look just in terms of the quality of what is generated, this is, I think, just an amazing demonstration of the rate of progress in some areas of machine learning. The first paper had these sort of black and white pictures of really blurry faces, and now you can get giant—I think 256 by 256, or 512 by 512, or even bigger—really high resolution and totally indistinguishable from real photos, to the human eye anyway—images of faces. So it’s really impressive, and we’ve seen really consistent progress on that, especially in the last couple years.

Ariel: And also, just real quick, what does it stand for?

David: Oh, generative adversarial network. So it’s generative, because it’s sort of generating things from scratch, or from its imagination or creativity. And it’s adversarial because there are two networks: the one that generates the things, and then the one that tries to tell those fake images apart from real images that we actually collect by taking photos in the world.
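As a rough illustration of the two-network game Roman and David describe, here is a minimal GAN training loop on a toy one-dimensional "dataset," written as a hedged sketch in PyTorch. The network sizes, learning rates, and the Gaussian "real" data are arbitrary choices for illustration, not anything from the papers discussed.

```python
import torch
import torch.nn as nn

# Toy data: "real" samples come from a Gaussian centered at 4.
real_data = lambda n: torch.randn(n, 1) + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator: noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator: sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator (the "judge") to tell real samples from generated ones.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator (the "forger") to produce samples the discriminator calls real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The generated distribution should move toward the real one (mean near 4).
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```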

Ariel: This is an interesting one because it can sort of transition into some ethics stuff that came up this past year, but I’m not sure if we want to get there yet, or if you guys want to talk a little bit more about some of the other things that happened on the research and development side.

David: I guess I want to talk about a few other things that have been making, I would say, sort of steady progress, like GANs—ideas that are coming to fruition with a lot of interest. Even though some of these are not exactly from the last year, they really started to improve and become widely used in the last year.

Ariel: Okay.

David: I think this is actually used in maybe the latest, greatest GAN paper—it's something that's called feature-wise transformations. So this is an idea that actually goes back up to 40 years, depending on how you measure it, but has sort of been catching on in specific applications in machine learning in the last couple of years—starting with, I would say, style transfer, which is sort of like what Roman mentioned earlier.

So the idea here is that in a neural network, you have what are called features, which basically correspond to the activations of different neurons in the network. Like how much that neuron likes what it’s seeing, let’s say. And those can also be interpreted as representing different kinds of visual patterns, like different kinds of textures, or colors. And these feature-wise transformations basically just take each of those different aspects of the image, like the color or texture in a certain location, and then allow you to manipulate that specific feature, as we call it, by making it stronger or amplifying whatever was already there.

And so you can sort of view this as a way of specifying what sort of things are important in the image, and that’s why it allows you to manipulate the style of images very easily, because you can sort of look at a certain painting style for instance, and say, oh this person uses a lot of wide brush strokes, or a lot of narrow brush strokes, and then you can say, I’m just going to modulate the neurons that correspond to wide or narrow brush strokes, and change the style of the painting that way. And of course you don’t do this by hand, by looking in and seeing what the different neurons represent. This all ends up being learned end-to-end. And so you sort of have an artificial intelligence model that predicts how to modulate the features within another network, and that allows you to change what that network does in a really powerful way.

So, I mentioned that it has been applied in the most recent GAN papers, and I think they’re just using those kinds of transformations to help them generate images. But other examples where you can explain what’s happening more intuitively, or why it makes sense to try and do this, would be something like visual question answering. So there you can have the modulation of the vision network being done by another network that looks at a question and is trying to help answer that question. And so it can sort of read the question and see what features of images might be relevant to answering that question. So for instance, if the question was, “Is it a sunny day outside?” then it could have the vision network try and pay more attention to things that correspond to signs of sun. Or if it was asked something like, “Is this person’s hair combed?” then you could look for the patterns of smooth, combed hair and look for the patterns of rough, tangled hair, and have those features be sort of emphasized in the vision network. That allows the vision network to pay attention to the parts of the image that are most relevant to answering the question.
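A feature-wise transformation can be written in a few lines: a conditioning input (a question embedding, a style code) is mapped to a per-feature scale and shift that modulate the features of the main network. Here is a minimal sketch, with all dimensions and names chosen arbitrarily for illustration rather than taken from any particular paper.

```python
import torch
import torch.nn as nn

class FeatureWiseModulation(nn.Module):
    """Scale and shift each feature based on a conditioning vector
    (e.g., an encoded question or a style code)."""
    def __init__(self, n_features, cond_dim):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * n_features)

    def forward(self, features, condition):
        gamma, beta = self.to_gamma_beta(condition).chunk(2, dim=-1)
        # Amplify or suppress each feature depending on what the condition says matters.
        return gamma * features + beta

# Hypothetical usage: 64 visual features modulated by a 32-dim question embedding.
modulate = FeatureWiseModulation(n_features=64, cond_dim=32)
visual_features = torch.randn(8, 64)       # a batch of image features
question_embedding = torch.randn(8, 32)    # a batch of encoded questions
modulated = modulate(visual_features, question_embedding)
print(modulated.shape)                     # torch.Size([8, 64])
```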

Ariel: Okay. So, Roman, I want to go back to something on your list quickly in a moment, but first I was wondering if you have anything that you wanted to add to the feature-wise transformations?

Roman: With all of it, you can ask, "Well, why is this interesting, what are the applications for it?" So you are able to generate inputs—inputs for computers, inputs for people: images, sounds, videos. A lot of times they can be adversarial in nature as well—what we call deep fakes, right? You can make, let's say, a video of a famous politician saying something, or doing something.

Ariel: Yeah.

Roman: And this has very interesting implications for elections, for forensic science, for evidence. As those systems get better and better, it becomes harder and harder to tell if something is real or not. And maybe it’s still possible to do some statistical analysis, but it takes time, and we talked about media being not exactly always on top of it. So it may take 24 hours before we realize if this video was real or not, but the election is tonight.

Ariel: So I am definitely coming back to that. I want to finish going through the list of the technology stuff, but yeah I want to talk about deep fakes and in general, a lot of the issues that we’ve seen cropping up more and more with this idea of using AI to fake images and audio and video, because I think that is something that’s really important.

David: Yeah, it’s hard for me to estimate these things, but I would say this is probably, in terms of the impact that this is going to have societally, this is sort of the biggest story maybe of the last year. And it’s not like something that happened all of the sudden. Again, it’s something that has been building on a lot of progress in generative models and GANs and things like this. And it’s just going to continue, we’re going to see more and more progress like that, and probably some sort of arms’ race here where—I shouldn’t use that word.

Ariel: A competition.

David: A competition between people who are trying to use that kind of technology to fake things and people who are sort of doing forensics to try and figure out what is real and what is fake. And that also means that people are going to have to trust the people who have the expertise to do that, and believe that they’re actually doing that and not part of some sort of conspiracy or something.

Ariel: Alright, well are you guys ready to jump into some of those ethical questions?

David: Well, there are like two other broad things I wanted to mention, which I think are sort of interesting trends in the research community. One is just the way that people have been continuing to scale up AI systems. So a lot of the progress I think has arguably just been coming from more and more computation and more and more data. And there was a pretty great blog post by OpenAI about this last year that argued that the amount of computation that's being used to train the most advanced AI systems has been increasing by a factor of 10 every year for the last several years, which is just astounding. But it also suggests that this might not be sustainable for a long time, so to the extent that you think that using more computation is a big driver of progress, we might start to see that slow down within a decade or so.
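Taking the blog post's figure at face value, a factor of 10 per year compounds very quickly; a two-line sketch of the arithmetic:

```python
# Compounding the claimed growth rate: 10x more compute per year for n years is 10**n overall.
for years in (1, 3, 6):
    print(f"{years} year(s) -> {10 ** years:,}x the compute")
```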

Roman: I’ll add another—what I think also is kind of building-on technology, not so much a breakthrough, we had it for a long time—but neural evolution is something I’m starting to pay a lot more attention to and that’s kind of borrowing from biology, trying to evolve ways for neural networks, optimized neural networks. And it’s producing very impressive results. It’s possible to run it in parallel really well, and it’s competitive with some of the leading alternative approaches.

So, the idea basically is you have this very large neural network, a brain-like structure, but instead of trying to train it by backpropagating errors, teaching it in the standard neural network way, you just kind of have a population of those brains competing for who's doing best on a particular problem, and they share weights between good parents, and after a while you just evolve really well-performing solutions to some of the most interesting problems.

Additionally you can kind of go meta-level on it and evolve architectures for the neural network itself—how many layers, how many inputs. This is nice because it doesn't require much human intervention. You're essentially letting the system figure out what the solutions are. We had some very successful results with genetic algorithms for optimization. We didn't have much success with genetic programming, and now neuroevolution kind of brings it back, where you're optimizing intelligent systems, and that's very exciting.

Ariel: So you’re saying that you’ll have—to make sure I understand this correctly—there’s two or more neural nets trying to solve a problem, and they sort of play off of each other?

Roman: So you create a population of neural networks, and you give it a problem, and you see that this one is doing really well, and that one too—the others, maybe not so great. So you take weights from those two and combine them—like a mom and dad, a parent situation that produces offspring. And so you have this simulation of evolution where unsuccessful individuals are taken out of the population, and successful ones get to reproduce and procreate, and provide their high-fitness weights to the next generation.
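Here is a toy sketch of the loop Roman describes—score a population of weight vectors, keep the fittest as parents, and produce children by mixing parents' weights plus a little mutation—applied to a throwaway regression problem. All population sizes, rates, and the task itself are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights, x, y):
    """Score a tiny one-layer 'network' y_hat = x @ weights by negative squared error."""
    return -np.mean((x @ weights - y) ** 2)

# Toy task the population has to solve: recover the true weights from data.
x = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = x @ true_w

population = [rng.normal(size=5) for _ in range(50)]
for generation in range(100):
    scored = sorted(population, key=lambda w: fitness(w, x, y), reverse=True)
    parents = scored[:10]                                 # the fittest survive and reproduce
    children = []
    for _ in range(40):
        mom, dad = rng.choice(10, size=2, replace=False)
        mask = rng.random(5) < 0.5                        # crossover: mix weights from two parents
        child = np.where(mask, parents[mom], parents[dad])
        child = child + rng.normal(scale=0.1, size=5)     # mutation
        children.append(child)
    population = parents + children

best = max(population, key=lambda w: fitness(w, x, y))
print("recovered weights:", np.round(best, 2))
```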

Ariel: Okay. Was there anything else that you guys saw this year that you want to talk about, that you were excited about?

David: Well, I wanted to give a few examples of the kind of massive improvements in scale that we've seen. One of the most significant models and benchmarks in the community is ImageNet, and training image classifiers on this dataset that can tell you what a picture is a picture of. So the whole deep learning revolution was arguably started, or at least really came into the eyes of the rest of the machine learning community, because of huge success on this ImageNet competition. And training the model there took something like two weeks, and this last year there was a paper where you can train a more powerful model in less than four minutes, and they do this by using like 3,000 graphics cards in parallel.

And then DeepMind also had some progress on parallelism with this model called IMPALA, which basically was in the context of reinforcement learning as opposed to classification, and there they sort of came up with a way that allowed them to do updates in parallel—learn on different machines and combine everything that was learned in a way that's asynchronous. So in the past, with the sort of methods that they would use for these reinforcement learning problems, you'd have to wait for all of the different machines to finish their learning on the current problem or instance that they're learning about, and then combine all of that centrally—whereas with the new method, as soon as you're done computing or learning something, you can communicate it to the rest of the system, the other computers that are learning in parallel. And that was really important for allowing them to scale to hundreds of machines working on the problem at the same time.
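The communication pattern David describes can be sketched with ordinary Python threads and a queue: actors push experience as soon as they have it, and the learner consumes whatever has arrived instead of waiting for every actor to finish. This is only the asynchronous plumbing, not DeepMind's IMPALA algorithm itself or its off-policy corrections.

```python
import queue
import random
import threading
import time

experience_queue = queue.Queue()

def actor(actor_id, episodes=5):
    """Each actor generates experience at its own pace and ships it immediately."""
    for episode in range(episodes):
        time.sleep(random.uniform(0.01, 0.1))             # actors run at different speeds
        experience_queue.put((actor_id, episode, random.random()))

threads = [threading.Thread(target=actor, args=(i,)) for i in range(8)]
for t in threads:
    t.start()

# The learner updates as soon as any experience arrives; it never waits for all
# actors to finish the current episode (that waiting is what synchronous training does).
updates = 0
while updates < 8 * 5:
    actor_id, episode, reward = experience_queue.get()
    updates += 1                                          # a real learner would apply a gradient update here
print("applied", updates, "asynchronous updates")

for t in threads:
    t.join()
```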

Ariel: Okay, and so that, just to clarify as well, that goes back to this idea that right now we’re seeing a lot of success just scaling up the computing, but at some point that could slow things down essentially, if we had a limit for how much computing is possible.

David: Yeah, and I guess one of my points is also that doing these kinds of scaling of computation requires some amount of algorithmic insight—or breakthrough, if you want to be dramatic—as well. So in this DeepMind paper I talked about, they had to devise new reinforcement learning algorithms that would still be stable when they had this real-time asynchronous updating. And so, in a way, yeah, a lot of the research that's interesting right now is on finding ways to make the algorithms scale so that you can keep taking advantage of more and more hardware. And the evolution stuff also fits into that picture to some extent.

Ariel: Okay. I want to start making that transition into some of the concerns that we have for misuse around AI and how easy it is for people to be deceived by things that have been created by AI. But I want to start with something that’s hopefully a little bit more neutral, and talk about Google Duplex, which is the program that Google came out with, I think last May. I don’t know the extent to which it’s in use now, but they presented it, and it’s an AI assistant that can essentially make calls and set up appointments for you. So their examples were it could make a reservation at a restaurant for you, or it could make a reservation for you to get a haircut somewhere. And it got sort of mixed reviews, because on the one hand people were really excited about this, and on the other hand it was kind of creepy because it sounded human, and the people on the other end of the call did not know that they were talking to a machine.

So I was hoping you guys could talk a little bit I guess maybe about the extent to which that was an actual technological breakthrough versus just something—this one being more one of those breakthroughs that will impact society more directly. And then also I guess if you agree that this seems like a good place to transition into some of the safety issues.

David: Yeah, no, I would be surprised if they really told us about the details of how that worked. So it’s hard to know how much of an algorithmic breakthrough or algorithmic breakthroughs were involved. It’s very impressive, I think, just in terms of what it was able to do, and of course these demos that we saw were maybe selected for their impressiveness. But I was really, really impressed personally, just to see a system that’s able to do that.

Roman: It’s probably built on a lot of existing technology, but it is more about impact than what you can do with this. And my background is cybersecurity, so I see it as a great tool for like automating spear-phishing attacks on a scale of millions. You’re getting a real human calling you, talking to you, with access to your online data; Pretty much everyone’s gonna agree and do whatever the system is asking of you, if it’s credit card numbers, or social security numbers. So, in many ways it’s going to be a game changer.

Ariel: So I’m going to take that as a definite transition into safety issues. So, yeah, let’s start talking about, I guess, sort of human manipulation that’s happening here. First, the phrase “deep fake” shows up a lot. Can you explain what those are?

David: So “deep fakes” is basically just: you can make a fake video of somebody doing something or saying something that they did not actually do or say. People have used this to create fake videos of politicians, they’ve used it to create porn using celebrities. That was one of the things that got it on the front page of the internet, basically. And Reddit actually shut down the subreddit where people were doing that. But, I mean, there’s all sorts of possibilities.

Ariel: Okay, so I think the Reddit example was technically the very end of 2017. But all of this sort of became more of an issue in 2018. So we're seeing this increase in capability to both create images that seem real, create audio that seems real, create video that seems real, and to modify existing images and video and audio in ways that aren't immediately obvious to a human. What did we see in terms of research to try to protect us from that, or catch that, or defend against that?

Roman: So here’s an interesting observation, I guess. You can develop some sort of a forensic tool to analyze it, and give you a percentage likelihood that it’s real or that it’s fake. But does it really impact people? If you see it with your own eyes, are you going to believe your lying eyes, or some expert statistician on CNN?

So the problem is it will still have tremendous impact on most people. We're not very successful at convincing people about many scientific facts. They simply go outside and say, it's cold right now, so global warming is false. I suspect we'll see exactly that with, let's say, fake videos of politicians, where a majority of people easily believe anything they hear once or see once versus any number of peer-reviewed publications disproving it.

David: I kind of agree. I mean, I think, when I try to think about how we would actually solve this kind of problem, I don't think a technical solution that just allows somebody who has technical expertise to distinguish real from fake is going to be enough. We really need to figure out how to build a better trust infrastructure in our whole society, which is kind of a massive project. I'm not even sure exactly where to begin with that.

Roman: I guess the good news is it gives you plausible deniability. If a video of me comes out doing horrible things I can play it straight.

Ariel: That’s good for someone. Alright, so, I mean, you guys are two researchers, I don’t know how into policy you are, but I don’t know if we saw as many strong policies being developed. We did see the implementation of the GDPR, and for people who aren’t familiar with the GDPR, it’s essentially European rules about what data companies can collect from your interactions online, and the ways in which you need to give approval for companies to collect your data, and there’s a lot more to it than that. One of the things that I found most interesting about the GDPR is that it’s entirely European based, but it had a very global impact because it’s so difficult for companies to apply something only in Europe and not in other countries. And so earlier this year when you were getting all of those emails about privacy policies, that was all triggered by the GDPR. That was something very specific that happened and it did make a lot of news, but in general I felt that we saw a lot of countries and a lot of national and international efforts for governments to start trying to understand how AI is going to be impacting their citizens, and then also trying to apply ethics and things like that.

I’m sort of curious, before we get too far into anything: just as researchers, what is your reaction to that?

Roman: So I never got as much spam as I did that week when they released this new policy, so that kind of gives you a pretty good summary of what to expect. If you look at history, we have regulations against spam, for example. Computer viruses are illegal. So that’s a very expected result. It’s not gonna solve technical problems. Right?

David: I guess I like that they’re paying attention and they’re trying to tackle these issues. I think the way GDPR was actually worded, it has been criticized a lot for being either much too broad or demanding, or vague. I’m not sure—there are some aspects of the details of that regulation that I’m not convinced about, or not super happy about. I guess overall it seems like people who are making these kinds of decisions, especially when we’re talking about cutting edge machine learning, it’s just really hard. I mean, even people in the fields don’t really know how you would begin to effectively regulate machine learning systems, and I think there’s a lot of disagreement about what a reasonable level of regulation would be or how regulations should work.

People are starting to have that sort of conversation in the research community a little bit more, and maybe we’ll have some better ideas about that in a few years. But I think right now it seems premature to me to even start trying to regulate machine learning in particular, because we just don’t really know where to begin. I think it’s obvious that we do need to think about how we control the use of the technology, because it’s just so powerful and has so much potential for harm and misuse and accidents and so on. But I think how you actually go about doing that is a really unclear and difficult problem.

Ariel: So for me it’s sort of interesting, we’ve been debating a bit today about technological breakthroughs versus societal impacts, and whether 2018 actually had as many breakthroughs and all of that. But I would guess that all of us agree that AI is progressing a lot faster than government does.

David: Yeah.

Roman: That’s almost a tautology.

Ariel: So I guess as researchers, what concerns do you have regarding that? Like do you worry about the speed at which AI is advancing?

David: Yeah, I would say I definitely do. I mean, we were just talking about this issue with fakes and how that’s going to contribute to things like fake news and erosion of trust in media and authority and polarization of society. I mean, if AI wasn’t going so fast in that direction, then we wouldn’t have that problem. And I think the rate that it’s going, I don’t see us catching up—or I should say, I don’t see the government catching up on its own anytime soon—to actually control the use of AI technology, and do our best anyways to make sure that it’s used in a safe way, and a fair way, and so on.

I think in and of itself it's maybe not bad that the technology is progressing fast. I mean, it's really amazing; scientifically there are gonna be all sorts of amazing applications for it. But there are going to be more and more problems as well, and I don't think we're really well equipped to solve them right now.

Roman: I’ll agree with David, I’m very concerned at its relative rate of progress. AI development progresses a lot faster than anything we see in AI safety. AI safety is just trying to identify problem areas, propose some general directions, but we have very little to show in terms of solved problems.

If you look at work in adversarial fields, maybe a little bit in cryptography, the good guys have always been a step ahead of the bad guys, whereas here you barely have any good guys, as a percentage. You have like less than 1% of researchers working directly on safety full-time. Same situation with funding. So it's not a very optimistic picture at this point.

David: I think it’s worth definitely distinguishing the kind of security risks that we’re talking about, in terms of fake news and stuff like that, from long-term AI safety, which is what I’m most interested in, and think is actually even more important, even though I think there’s going to be tons of important impacts we have to worry about already, and in the coming years.

And the long-term safety stuff is really more about artificial intelligence that becomes broadly capable and as smart or smarter than humans across the board. And there, there are maybe a few more signs of hope if I look at how the field might progress in the future, and that's because there are a lot of problems that are going to be relevant for controlling or aligning or understanding these kinds of generally intelligent systems that are probably going to be necessary to solve anyways in order to make systems that are more capable in the near future.

So I think we’re starting to see issues with trying to get AIs to do what we want, and failing to, because we just don’t know how to specify what we want. And that’s, I think, basically the core of the AI safety problem—is that we don’t have a good way of specifying what we want. An example of that is what are called adversarial examples, which sort of demonstrate that computer vision systems that are able to do a really amazing job at classifying images and seeing what’s in an image and labeling images still make mistakes that humans just would never make. Images that look indistinguishable to humans can look completely different to the AI system, and that means that we haven’t really successfully communicated to the AI system what our visual concepts are. And so even though we think we have done a good job of telling it what to do, it’s like, “tell us what this picture is of”—the way that it found to do that really isn’t the way that we would do it and actually there’s some very problematic and unsettling differences there. And that’s another field that, along with the ones that I mentioned, like generative models and GANs, has been receiving a lot more attention in the last couple of years, which is really exciting from the point of view of safety and specification.

Ariel: So, would it be fair to say that you think we’ve had progress or at least seen progress in addressing long-term safety issues, but some of the near-term safety issues, maybe we need faster work?

David: I mean I think to be clear, we have such a long way to go to address the kind of issues we’re going to see with generally intelligent and super intelligent AIs, that I still think that’s an even more pressing problem, and that’s what I’m personally focused on. I just think that you can see that there are going to be a lot of really big problems in the near term as well. And we’re not even well equipped to deal with those problems right now.

Roman: I’ll generally agree with David. I’m more concerned about long-term impacts. There are both more challenging and more impactful. It seems like short-term things may be problematic right now, but the main difficulty is that we didn’t start working on them in time. So problems like algorithmic fairness, bias, technological unemployment, are social issues which are quite solvable; They are not really that difficult from engineering or technical points of view. Whereas long-term control of systems which are more intelligent than you are—very much unsolved at this point in any even toy model. So I would agree with the part about bigger concerns but I think current problems we have today, they are already impacting people, but the good news is we know how to do better.

David: I’m not sure that we know how to do better exactly. Like I think a lot of these problems, it’s more of a problem of willpower and developing political solutions, so the ones that you mentioned. But with the deep fakes, this is something that I think requires a little bit more of a technical solution in the sense of how we organize our society so that people are either educated enough to understand this stuff, or so that people actually have someone they trust and have a reason to trust, who they can take their word for it on that.

Roman: That sounds like a great job, I’ll take it.

Ariel: It almost sounds like something we need to have someone doing in person, though.

So going back to this past year: were there, say, groups that formed, or research teams that came together, or just general efforts that, while maybe they didn’t produce something yet, you think could produce something good, either in safety or AI in general?

David: I think something interesting is happening in terms of the way AI safety is perceived and talked about in the broader AI and machine learning community. It’s a little bit like this phenomenon where once we solve something people don’t consider it AI anymore. So I think machine learning researchers, once they actually recognize the problem that the safety community has been sort of harping on and talking about and saying like, “Oh, this is a big problem”—once they say, “Oh yeah, I’m working on this kind of problem, and that seems relevant to me,” then they don’t really think that it’s AI safety, and they’re like, “This is just part of what I’m doing, making something that actually generalizes well and learns the right concept, or making something that is actually robust, or being able to interpret the model that I’m building, and actually know how it works.”

These are all things that people are doing a lot of work on these days in machine learning that I consider really relevant for AI safety. So I think that’s like a really encouraging sign, in a way, that the community is sort of starting to recognize a lot of the problems, or at least instances of a lot of the problems that are going to be really critical for aligning generally intelligent AIs.

Ariel: And Roman, what about you? Did you see anything sort of forming in the last year that maybe doesn’t have some specific result, but that seemed hopeful to you?

Roman: Absolutely. So I’ve mentioned that there is very few actual AI safety researchers as compared to the number of AI developers, researchers directly creating more capable machines. But the growth rate is much better I think. The number of organizations, the number of people who show interest in it, the number of papers I think is growing at a much faster rate, and it’s encouraging because as David said, it’s kind of like this convergence if you will, where more and more people realize, “I cannot say I built an intelligent system if it kills everyone.” That’s just not what an intelligent system is.

So safety and security become integral parts of it. I think Stuart Russell has a great example where he talks about bridge engineering. We don’t talk about safe bridges and secure bridges—there’s just bridges. If it falls down, it’s not a bridge. Exactly the same is starting to happen here: People realize, “My system cannot fail and embarrass the company, I have to make sure it will not cause an accident.”

David: I think that a lot of people are thinking about that way more and more, which is great, but there is a sort of research mindset, where people just want to understand intelligence, and solve intelligence. And I think that’s kind of a different pursuit. Solving intelligence doesn’t mean that you make something that is safe and secure, it just means you make something that’s really intelligent, and I would like it if people who had that mindset were still, I guess, interested in or respectful of or recognized that this research is potentially dangerous. I mean, not right now necessarily, but going forward I think we’re going to need to have people sort of agree on having that attitude to some extent of being careful.

Ariel: Would you agree though that you’re seeing more of that happening?

David: Yeah, absolutely, yeah. But I mean it might just happen naturally on its own, which would be great.

Ariel: Alright, so before I get to my very last question, is there anything else you guys wanted to bring up about 2018 that we didn’t get to yet?

David: So we were talking about AI safety and there’s kind of a few big developments in the last year. I mean, there’s actually too many I think for me to go over all of them, but I wanted to talk about something which I think is relevant to the specification problem that I was talking about earlier.

Ariel: Okay.

David: So, there are three papers in the last year, actually, on what I call superhuman feedback. The idea motivating these works is that even specifying what we want on a particular instance in some particular scenario can be difficult. So typically the way that we would think about training an AI that understands our intentions is to give it a bunch of examples, and say, “In this situation, I prefer if you do this. This is the kind of behavior I want,” and then the AI is supposed to pick up on the patterns there and sort of infer what our intentions are more generally.

But there can be some things that we would like AI systems to be competent at doing, ideally, that are really difficult to even assess individual instances of. Two examples that I like to use are designing a transit system for a large city, or maybe for a whole country, or the world or something. That’s something that right now is done by a massive team of people. Using that whole team to sort of assess a proposed design that the AI might make would be one example of superhuman feedback, because it’s not just a single human. But you might want to be able to do this with just a single human and a team of AIs helping them, instead of a team of humans. And there’s a few proposals for how you could do that that have come out of the safety community recently, which I think are pretty interesting.

Ariel: Why is it called superhuman feedback?

David: Actually, this is just my term for it. I don’t think anyone else is using this term.

Ariel: Okay.

David: Sorry if that wasn’t clear. The reason I use it is because there are three different, like, lines of work here. So there’s these two papers from OpenAI on what’s called amplification and debate, and then another paper from DeepMind on reward learning and recursive reward learning. And I like to view these as all kind of trying to solve the same problem. How can we assist humans and enable them to make good judgements and informed judgements that actually reflect what their preferences are when they’re not capable of doing that by themselves unaided. So it’s superhuman in the sense that it’s better than a single human can do. And these proposals are also aspiring to do things I think that even teams of humans couldn’t do by having AI helpers that sort of help you do the evaluation.

An example that Jan—who's the lead author on the DeepMind paper, which I also worked on—gives is assessing an academic paper. So if you yourself aren't familiar with the field and don't have the expertise to assess this paper, you might not be able to say whether or not it should be published. But if you can decompose that task into things like: is the paper valid? Are the proofs valid? Are the experiments following a reasonable protocol? Is it novel? Is it formatted correctly for the venue where it's submitted? And if you got answers to all of those from helpers, then you could make the judgment. You'd just be like, okay, it meets all of the criteria, so it should be published. The idea would be to get AI helpers to do those sorts of evaluations for you across a broad range of tasks, and allow us to explain to AIs, or teach AIs, what we want across a broad range of tasks in that way.
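Here is a schematic of that decomposition idea, with every helper an obviously hypothetical stub rather than a real AI assistant: break the one hard judgement into sub-questions, collect answers, and aggregate them into something a single person can sign off on. It illustrates the structure David describes, not the algorithm of any of the three papers.

```python
# Hypothetical stubs standing in for AI assistants that each evaluate one sub-question.
def check_proofs(paper):      return 0.9   # "are the proofs valid?"
def check_experiments(paper): return 0.8   # "do the experiments follow a reasonable protocol?"
def check_novelty(paper):     return 0.7   # "is it novel?"
def check_formatting(paper):  return 1.0   # "is it formatted correctly for the venue?"

SUB_EVALUATIONS = [check_proofs, check_experiments, check_novelty, check_formatting]

def assisted_review(paper, threshold=0.75):
    """Decompose 'should this paper be published?' into sub-questions answered by helpers,
    then aggregate so a human who can't judge the whole paper can still judge the summary."""
    scores = {fn.__name__: fn(paper) for fn in SUB_EVALUATIONS}
    verdict = all(score >= threshold for score in scores.values())
    return verdict, scores

print(assisted_review(paper="some_submission.pdf"))
```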

Ariel: So, okay, and so then were there other things that you wanted to mention as well?

David: I do feel like I should talk about another thing that was, again, not developed last year, but really took off last year: this new kind of neural network architecture called the transformer, which is basically being used in a lot of places where convolutional neural networks and recurrent neural networks were being used before. And those were kind of the two main driving factors behind the deep learning revolution—vision, where you use convolutional networks, and things that have a sequential structure, like speech or text, where people were using recurrent neural networks. This architecture was actually motivated originally by the same sort of scaling consideration, because it allowed them to remove some of the most computationally heavy parts of running these kinds of models in the context of translation, and basically make it a hundred times cheaper to train a translation model. But since then it's also been used in a lot of other contexts and has proven to be a really good replacement for these other kinds of models for a lot of applications.

And I guess the way to describe what it's doing is that it's based on what's called an attention mechanism, which is basically a way of giving a neural network the ability to pay more attention to some parts of an input than others—to look, say, at the one word that is most relevant to the current translation step. If you imagine outputting words one at a time, then because different languages put words in different orders, it doesn't make sense to just translate the next word in sequence. You want to look through the whole input sentence, like a sentence in English, and find the word that corresponds to whatever word should come next in your output sentence.

And that was sort of the original inspiration for this attention mechanism, but since then it's been applied in a bunch of different ways, including paying attention to different parts of the model's own computation, or to different parts of images. And basically, using this attention mechanism in place of the other neural architectures—the ones people thought were really important for capturing temporal dependencies across something sequential, like a sentence you're trying to translate—turned out to work really well.
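A minimal NumPy sketch of the scaled dot-product attention mechanism David describes: each query (say, the output word being produced) computes a weighted average over the input positions, weighting them by relevance. The shapes and random inputs below are purely illustrative, not taken from any particular transformer implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """For each query, compute a weighted average of the values,
    weighting input positions by how relevant their keys are."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # (n_queries, n_keys) relevance scores
    weights = softmax(scores, axis=-1)    # attention weights sum to 1 per query
    return weights @ V                    # (n_queries, d_v)

# Toy example: 2 output positions attending over a 5-word input sentence,
# with 8-dimensional representations (random here, just to show shapes).
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 8)
```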

Ariel: So I want to actually pass this to Roman real quick. Did you have any comments that you wanted to add to either the superhuman feedback or the transformer architecture?

Roman: Sure, so superhuman feedback: I like the idea and I think people should be exploring that, but we can look at similar examples from before. For a while we had a situation where teams of human chess players and machines did better than unaided machines or unaided humans. That lasted about ten years. And then machines became so much better that humans didn't really contribute anything; consulting them was just an additional bottleneck. I wonder if, long term, this solution will face similar problems. It's very useful right now, but I don't know if it will scale.

David: Well I want to respond to that, because I think the idea here is not, in my mind, to have something that actually scales in the way that you're describing, where it can sort of out-compete pure AI systems. Although I guess some people might be hoping that that's the case, because that would make the strategic picture better in terms of people's willingness to use safer systems. But this is more about just how can we even train systems—if we have the willpower, if people want to build a system that has the human in charge, and ends up doing what the human wants—how can we actually do that for something that's really complicated?

Roman: Right. And as I said, I think it’s a great way to get there. So this part I’m not concerned about. It’s a long-term game with that.

David: Yeah, no, I mean I agree that that is something to be worried about as well.

Roman: There is a possibility of manipulation if you have a human in the loop, and that itself makes it not safer but more dangerous in certain ways.

David: Yeah, one of the biggest concerns I have for this whole line of work is that the human needs to really trust the AI systems that are assisting it, and I just don’t see that we have good enough mechanisms for establishing trust and building trustworthy systems right now, to really make this scale well without introducing a lot of risk for things like manipulation, or even just compounding of errors.

Roman: But those approaches, like the debate approach—it just feels like they're setting up humans for manipulation from both sides, and it becomes a question of who's better at breaking the human's psychological model.

David: Yep, I think it’s interesting, and I think it’s a good line of work. But I think we haven’t seen anything that looks like a convincing solution to me yet.

Roman: Agreed.

Ariel: So, Roman, was there anything else that you wanted to add about things that happened in the last year that we didn’t get to?

Roman: Well, as a professor, I can tell you that students stop learning after about 40 minutes. So I think at this point we’re just being counterproductive.

Ariel: So for what it’s worth, our most popular podcasts have all exceeded two hours. So, what are you looking forward to in 2019?

Roman: Are you asking about safety or development?

Ariel: Whatever you want to answer. Just sort of in general, as you look toward 2019, what relative to AI are you most excited and hopeful to see, or what do you predict we’ll see?

David: So I’m super excited for people to hopefully pick up on this reward learning agenda that I mentioned that Jan and me and people at DeepMind worked on. I was actually pretty surprised how little work has been done on this. So the idea of this agenda at a high level is just: we want to learn a reward function—which is like a score, that tells an agent how well it’s doing—learn reward functions that encode what we want the AI to do, and that’s the way that we’re going to specify tasks to an AI. And I think from a machine learning researcher point of view this is kind of the most obvious solution to specification problems and to safety—is just learner reward function. But very few people are really trying to do that, and I’m hoping that we’ll see more people trying to do that, and encountering and addressing some of the challenges that come up.

Roman: So I think by definition we cannot predict short-term breakthroughs. So what we’ll see is a lot of continuation of 2018 work, and previous work scaling up. So, if you have, let’s say, Texas hold ’em poker: so for two players, we’ll take it to six players, ten players, something like that. And you can make similar projections for other fields, so the strategy games will be taken to new maps, involve more players, maybe additional handicaps will be introduced for the bots. But that’s all we can really predict, kind of gradual improvement.

Protein folding will be even more efficient in terms of predicting actual structures: any accuracy rates that were climbing from 80% to 90% will hit 95, 96. And this is a very useful way of predicting what we can anticipate, and I'm trying to do something similar with accidents. So if we can see historically what was going wrong with systems, we can project those trends forward. And I'm happy to say that there are now at least two or three different teams working on collecting those examples and trying to analyze them and create taxonomies for them. So that's very encouraging.

David: Another thing that comes to mind is—I mentioned adversarial examples earlier, which are these differences, imperceptible to a human, that change how the AI system perceives something like an image. And so far, for the most part, the field has been focused on really imperceptible changes. But I think now people are starting to move towards a broader idea of what counts as an adversarial example: basically anything that a human thinks clearly should belong to one class, and the AI system thinks clearly should belong to some other class, that has been constructed deliberately to create that kind of a difference.

And I think it's going to be really interesting and exciting to see how the field tries to move in that direction, because as I mentioned, I think it's hard to define how humans decide whether or not something is a picture of a cat. And the way that we've done it so far is just by giving lots of examples of things that we say are cats. But it turns out that that isn't sufficient, and so I think this is really going to push a lot of people closer towards thinking about some of the really core safety challenges within the mainstream machine learning community. So I think that's super exciting.
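A minimal sketch of the mechanics behind an adversarial example, using a fast-gradient-sign-style perturbation against a toy linear classifier. Real adversarial examples target deep image models, but the idea of a tiny, targeted input change producing a large output change is the same; every number and name below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
w, b = rng.normal(size=100), 0.0            # a fixed, already-"trained" classifier

def predict(x):
    return 1 / (1 + np.exp(-(x @ w + b)))   # probability of class "cat"

x = rng.normal(size=100)                    # a benign input
eps = 0.05                                  # small, "imperceptible" budget

# For this model the gradient of the score w.r.t. the input is proportional
# to w, so step each coordinate slightly against the current prediction.
direction = -np.sign(w) if predict(x) > 0.5 else np.sign(w)
x_adv = x + eps * direction

print(predict(x), predict(x_adv))           # small input change, large score change
```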

Roman: It is a very interesting topic, and I am in particular looking at a side subject in that, which is adversarial inputs for humans that machines are developing—which I guess is kind of like optical illusions and audio illusions, where a human mislabels inputs in a predictable way, which allows for manipulation.

Ariel: Along very similar lines, I think I want to modify my questions slightly, and also ask: coming up in 2019, what are you both working on that you’re excited about, if you can tell us?

Roman: Sure, so there have been a number of publications looking at particular limitations, either through mathematical proofs or through well-known economic models, and at what is possible in fact, from a computational complexity point of view. And I'm trying to integrate those into a single model showing—in principle, not in practice, but even in principle—what can we do with the AI control problem? How solvable is it? Is it solvable? Is it not solvable? Because I don't think there is a mathematically rigorous proof, or even a rigorous argument, either way. So I think that will be helpful, especially for arguing about the importance of the problem and about resource allocation.

David: I’m trying to think what I can talk about. I guess right now I have some ideas for projects that are not super well thought out, so I won’t talk about those. And I have a project that I’m trying to finish off which is a little bit hard to describe in detail, but I’ll give the really high level motivation for it. And it’s about something that people in the safety community like to call capability control. I think Nick Bostrom has these terms, capability control and motivation control. And so what I’ve been talking about most of the time in terms of safety during this podcast was more like motivation control, like getting the AI to want to do the right thing, and to understand what we want. But that might end up being too hard, or sort of limited in some respect. And the alternative is just to make AIs that aren’t capable of doing things that are dangerous or catastrophic.

A lot of people in the safety community worry about capability control approaches failing, because if you have a very intelligent agent, it will view these attempts to control it as undesirable, and try to free itself from any constraints that we give it. And I think a way of trying to get around that problem is to look at capability control through the lens of motivation control: to basically make an AI that doesn't want to influence certain things, and maybe doesn't have some of these drives to influence the world, or to influence the future. So in particular I'm trying to see how we can design agents that really don't try to influence the future, and really only care about doing the right thing, right now. And if we try to do that in a naïve way, are there ways it can fail, where we get some sort of emergent drive to still try to optimize over the long term, or to have some influence on the future? To the extent we see things like that, that's problematic from this perspective of let's just make AIs that aren't capable of, or motivated toward, influencing the future.

Ariel: Alright! I think I’ve kept you both on for quite a while now. So, David and Roman, thank you so much for joining us today.

David: Yeah, thank you both as well.

Roman: Thank you so much.

AI Alignment Podcast: The Byzantine Generals’ Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi (Beneficial AGI 2019)

Three generals are voting on whether to attack or retreat from their siege of a castle. One of the generals is corrupt and two of them are not. What happens when the corrupted general sends different answers to the other two generals?

Byzantine fault is "a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed." The term takes its name from an allegory, the "Byzantine Generals' Problem," developed to describe this condition, where actors must agree on a concerted strategy to avoid catastrophic system failure, but some of the actors are unreliable.

The Byzantine Generals' Problem and the associated issues in maintaining reliable distributed computing networks are illuminating both for AI alignment and for modern networks we interact with like Youtube, Facebook, or Google. By exploring this space, we are shown the limits of reliable distributed computing, the safety concerns and threats in this space, and the tradeoffs we will have to make for varying degrees of efficiency or safety.

The Byzantine Generals' Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi is the ninth podcast in the AI Alignment Podcast series, hosted by Lucas Perry. El Mahdi pioneered Byzantine-resilient machine learning, devising a series of provably safe algorithms he recently presented at NeurIPS and ICML. Interested in theoretical biology, his work also includes the analysis of error propagation in networks, applied to both neural and biomolecular networks. This particular episode was recorded at the Beneficial AGI 2019 conference in Puerto Rico. We hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

Topics discussed in this episode include:

  • The Byzantine Generals’ Problem
  • What this has to do with artificial intelligence and machine learning
  • Everyday situations where this is important
  • How systems and models update in the context of asynchrony
  • Why it's hard to do Byzantine-resilient distributed ML
  • Why this is important for long-term AI alignment

An overview of Adversarial Machine Learning and where Byzantine-resilient Machine Learning stands on the map is available in this (9min) video. A specific focus on Byzantine Fault Tolerant Machine Learning is available here (~7min).

In particular, El Mahdi argues in the first interview (and in the podcast) that technical AI safety is not only relevant for long term concerns, but is crucial in current pressing issues such as social media poisoning of public debates and misinformation propagation, both of which fall into Poisoning-resilience. Another example he likes to use is social media addiction, that could be seen as a case of (non) Safely Interruptible learning. This value misalignment is already an issue with the primitive forms of AIs that optimize our world today as they maximize our watch-time all over the internet.

The latter (Safe Interruptibility) is another technical AI safety question El Mahdi works on, in the context of Reinforcement Learning. This line of research was initially dismissed as "science fiction"; in this interview (5min), El Mahdi explains why it is a realistic question that arises naturally in reinforcement learning.

El Mahdi's work on Byzantine-resilient Machine Learning and other relevant topics is available on his Google scholar profile. A modification of the popular machine learning library TensorFlow, to make it Byzantine-resilient (and also support communication over UDP channels among other things), has been recently open-sourced on Github by El Mahdi's colleagues, based on his algorithmic work we mention in the podcast.


You can listen to the podcast above or read the transcript below.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast series. I'm Lucas Perry, and today we'll be speaking with El Mahdi El Mhamdi on the Byzantine generals' problem, Byzantine tolerance, and poisoning in distributed learning and computer networks. If you find this podcast interesting or useful, please give it a like and follow us on your preferred listening platform. El Mahdi El Mhamdi pioneered Byzantine-resilient machine learning, devising a series of provably safe algorithms he recently presented at NeurIPS and ICML. Interested in theoretical biology, his work also includes the analysis of error propagation in networks, applied to both neural and biomolecular networks. With that, El Mahdi's going to start us off with a thought experiment.

El Mahdi: Imagine you are part of a group of three generals, say, from the Byzantine army, surrounding a city you want to invade, but you also want to retreat if retreat is the safest choice for your army. You don't want to attack when you will lose. Those three generals that you're part of are on three sides of the city. They sent some intelligence inside the walls of the city, and depending on this intelligence information, they think they will have a good chance of winning and they would like to attack, or they think they will be defeated by the city, so it's better for them to retreat. Your final decision would be a majority vote, so you communicate through some horsemen that, let's say, are reliable for the sake of this discussion. But one of you might have been corrupted by the city.

The situation would be problematic if, say, there are General A, General B, and General C. General A decided to attack. General B decided to retreat, based on their intelligence, for some legitimate reason. A and B are not corrupt, and say that C is corrupt. Of course, A and B can't figure out who is corrupt. What this corrupt general would do is this: A wanted to attack, so C will tell A, "I also want to attack. I will attack." Then C will tell General B, "I also want to retreat. I will retreat." A receives two attack votes and one retreat vote. General B receives two retreat votes and only one attack vote. If they trust everyone and don't do any double checking, this would be a disaster.

A will attack alone; B would retreat; C, of course, doesn't care, because he was corrupted by the city. You can tell me they can circumvent that by double checking. For example, A and B can communicate on what C told them. Let's say that every general communicates with every general on what he decides, and also on what the remaining part of the group told them. A will report to B, "General C told me to attack." Then B would report to A, "General C told me to retreat." But then A and B wouldn't have any way of concluding whether the inconsistency comes from the fact that C is corrupt, or from the fact that the general reporting on what C told them is corrupt.

I am General A. I have all the valid reasons to think, with the same likelihood, that C is maybe lying to me, or that B might be lying to me; I can't know if you are misreporting what C told you. So it's enough for the city to corrupt one general out of three: it's impossible to come up with an agreement in this situation. You can easily see that this generalizes to more than three generals, say 100, as soon as the non-corrupt ones are less than two-thirds, because what we saw with three generals happens again among the fractions that are not corrupt. Say that you have strictly more than 33 generals out of 100 who are corrupt: what they can do is switch the majority votes on each side.

But worse than that, say that you have 34 corrupt generals and the remaining 66 not corrupt. Say that those 66 non-corrupt generals were 33 on the attack side and 33 on the retreat side. The problem is that when you are on one side—say you are on the retreat side—you have in front of you a group of 34 plus 33 in which there's a majority of malicious ones. This majority can collude; that's part of the Byzantine hypothesis. The malicious ones can collude, and they will report a majority of inconsistent messages about the minority of 33 correct ones. You can't provably tell that the inconsistency is coming from the group of 34, because they are a majority.
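A small Python sketch of the three-general scenario just described, with the corrupt general C telling each loyal general what it wants to hear. The message passing is deliberately simplified; the point is only that the two loyal generals end up with different majorities.

```python
# Corrupt general C echoes whatever the recipient already wants,
# splitting the loyal generals A (attack) and B (retreat).
def corrupt_vote_for(recipient):
    return {"A": "attack", "B": "retreat"}[recipient]

votes_seen_by_A = ["attack",  "retreat", corrupt_vote_for("A")]  # own, B's, C's
votes_seen_by_B = ["retreat", "attack",  corrupt_vote_for("B")]  # own, A's, C's

def majority(votes):
    return max(set(votes), key=votes.count)

print("A decides:", majority(votes_seen_by_A))  # attack
print("B decides:", majority(votes_seen_by_B))  # retreat -> no agreement
```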

Lucas: When we’re thinking about, say, 100 persons or 100 generals, why is it that they’re going to be partitioned automatically into these three groups? What if there’s more than three groups?

El Mahdi: Here we're doing the easiest form of Byzantine agreement. We want to agree on attack versus retreat. When it becomes multi-dimensional, it gets even messier; there are even more impossibility results. Just like with the binary decision, there is an impossibility theorem on reaching agreement if you have unsigned messages carried by the horsemen: whenever the corrupt group exceeds 33%, you provably cannot come up with an agreement. There are many variants of this problem, of course, depending on what hypotheses you can assume. Here, without even mentioning it, we were assuming bounded delays: the horsemen always arrive eventually. But what if the horsemen could die on the way and you don't have any way to check whether they arrived or not? You could be waiting forever, because you don't have any proof that the horseman died on the way.

You don't have any mechanism to tell you, "Stop waiting for the horseman. Stop waiting for the message from General B, because the horseman died." You can be waiting forever, and there are theorems that show that when you have unbounded delays—and by the way, in distributed computing, whenever you have unbounded delays, we speak about asynchrony—so if you have asynchronous communication, there is a very famous theorem that tells you consensus is impossible, not even in the malicious case, but just like in …

Lucas: In the mundane normal case.

El Mahdi: Yes. It's called the Fischer-Lynch-Paterson (FLP) theorem.

Lucas: Right, so just to dive down into the crux of the problem: the issue here fundamentally is that when groups of computers, or groups of generals, or whatever, are trying to check who is lying—comparing the discrepancies and similarities in what everyone is claiming—once there appears to be a simple majority at that level of corrupted submissions, then, yeah, you're screwed.

El Mahdi: Yes. It's impossible to achieve agreement. There is always a fraction of malicious agents above which it is provably impossible to agree. Depending on the situation, it will be a third, or sometimes a half or a quarter, depending on your specifications.

Lucas: If you start tweaking the assumptions behind the thought experiment, then it changes what number of corrupted machines or agents that are required in order to flip the majority and to poison the communication.

El Mahdi: Exactly. But for example, you mentioned something very relevant to today’s discussion, which is what if we were not agreeing on two decisions, retreat, attack. What if we were agreeing on some multi-dimensional decision? Attack or retreat on one dimension and then …

Lucas: Maybe hold, keep the siege going.

El Mahdi: Yeah, just add possibilities or dimensions and you get multi-dimensional agreement. There are even more hopeless results in that direction.

Lucas: There are more like impossibility theorems and issues where these distributed systems are vulnerable to small amounts of systems being corrupt and screwing over the entire distributed network.

El Mahdi: Yes. Maybe now we can slightly move to machine learning.

Lucas: I’m happy to move into machine learning now. We’ve talked about this, and I think our audience can probably tell how this has to do with computers. Yeah, just dive in what this has to do with machine learning and AI and current systems today, and why it even matters for AI alignment.

El Mahdi: As a brief transition, solving the agreement problem, besides this very nice historic thought experiment, is behind the consistency of safety-critical systems like banking systems. Imagine we have a shared account. Maybe you remove 10% of the amount and she or he adds $10 to the account. You remove the 10% in New York and she or he puts in the $10 in Los Angeles. The banking system has to agree on the ordering, because minus 10% then plus $10 is not the same result as plus $10 then minus 10%. The final balance of the account would not be the same.
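A worked version of this ordering example, with a made-up starting balance of $100:

```python
# Why the order of operations must be agreed on: apply "-10%" and "+$10"
# to the same starting balance in both orders and compare.
balance = 100.0
order_1 = (balance * 0.9) + 10     # first -10% in New York, then +$10 in LA  -> 100.0
order_2 = (balance + 10) * 0.9     # first +$10 in LA, then -10% in New York  -> 99.0
print(order_1, order_2)            # different final balances, so ordering matters
```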

Lucas: Right.

El Mahdi: The banking systems routinely are solving decisions that fall into agreement. Or if you work on some document-sharing platform, like Dropbox or Google Docs, whatever, and we are collaboratively writing a document, me and you, the document-sharing platform has to, in real time, solve agreement about the ordering of operations, so that you and I always keep seeing the same thing. And this has to happen while some of the machines interconnecting us are failing—whether because of a power failure, a data center losing some machines, a restart, a bug, or an attack. What we want in distributed computing is communication schemes between machines that guarantee this consistency, which comes from agreement, as long as some fraction of the machines are reliable. What this has to do with artificial intelligence and machine learning reliability is that, with some colleagues, we are trying to bring one of the major issues in machine learning reliability inside the Byzantine fault tolerance umbrella. Take, for instance, poisoning attacks.

Lucas: Unpack what poisoning attacks are.

El Mahdi: For example, imagine you are training a model on what are good videos to recommend given some keyword search. If you search for "medical advice for young parents on vaccines," this is a label. Let's assume, for the sake of simplicity, that a video that tells you not to take your kid for vaccines is not what we mean by medical advice for young parents on vaccines, because that's what medical experts agree on. We want our system to learn that anti-vaxxer content—anti-vaccine propaganda—is not what people are searching for when they type those keywords. So suppose a world where we care about accuracy, okay? Imagine you want to train a machine learning model that gives you accurate results for your search. Let's also, for the sake of simplicity, assume that a majority of people on the internet are honest.

Let's assume that more than 50% of people are not actively trying to poison the internet. Yeah, this is very optimistic, but let's assume that. What we can show, and what my colleagues and I started this line of research with, is that you can easily prove that one single malicious agent can provably poison a distributed machine learning scheme. Imagine you are this video-sharing platform. Whenever people behave on your platform, this generates what we call gradients, which update your model. It only takes a few hyperactive accounts generating behavior powerful enough to pull what we call the average gradient, because what distributed machine learning is using, at least up to today—if you read the source code of most distributed machine learning frameworks—is averaging: distributed machine learning is always averaging gradients.

Imagine you, Lucas Perry, just googled a video on the Parkland shootings. Then the video-sharing platform shows you a video telling you that David Hogg and Emma Gonzalez and those kids behind the March for Our Lives movement are crisis actors. The video labels these kids as crisis actors; it obviously has a wrong label, so it is what I will call a poisoned data point. If you are a non-malicious agent on the video-sharing platform, you will dislike the video. You will not approve it. You're likely to flag it. This should generate a gradient that pushes the model in that direction, so the gradient will update the model in a direction where it stops thinking that this video is relevant for someone searching "Parkland shooting survivors." What can happen, if your machine learning framework is just averaging gradients, is that a bunch of hyperactive people on some topic could poison the average and pull it towards the direction where the model is reinforced into thinking, "Yeah, those kids are crisis actors."

Lucas: This is the case because the hyperactive accounts are seen to be given more weight than accounts which are less active in the same space. But this extra weighting that these accounts will get from their hyperactivity in one certain category or space over another, how is the weighting done? Is it just time spent per category or does it have to do with submissions that agree with the majority?

El Mahdi: We don't even need to go into the details, because we don't know. I'm talking about a general setting where you have a video-sharing platform aggregating gradients from behavior. Now, maybe let's raise the abstraction level. You are doing gradient descent, so you have a loss function that you want to minimize. You have an error function; the error function is the mismatch between what you predict and what the user tells you. The user tells you this is a wrong prediction, and then you move in the direction where the users stop telling you it's a wrong prediction. You are doing gradient descent, in this sense minimizing the loss function. Users behave, and from their behavior you generate gradients.

What you do now, in the state-of-the-art way of doing distributed machine learning, is average all those gradients. Averaging is well known not to be resilient. If you have a room of poor academics earning a few thousand dollars and then a billionaire jumps into the room, and your algorithm reasons with averaging, it will think that this is a room of millionaires, because the average salary would be a couple of hundred million. But then the median is very obvious to compute when it comes to salaries and numbers—scalars—because you can rank them.
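A minimal sketch of the contrast: one extreme value drags the mean far away while the median barely moves, and the same thing happens coordinate by coordinate when aggregating gradients. All numbers are illustrative.

```python
import numpy as np

# The salary analogy: a single "billionaire" ruins the mean, not the median.
salaries = np.array([40_000, 45_000, 50_000, 55_000, 60_000])
with_outlier = np.append(salaries, 200_000_000)
print(np.mean(salaries), np.median(salaries))          # 50000.0  50000.0
print(np.mean(with_outlier), np.median(with_outlier))  # ~33 million vs 52500.0

# Same idea for gradient aggregation: one malicious gradient with a huge
# amplitude pulls the average update anywhere it wants.
honest_grads = np.array([[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]])
malicious = np.array([[-100.0, -100.0]])
all_grads = np.vstack([honest_grads, malicious])
print(np.mean(all_grads, axis=0))     # dominated by the attacker
print(np.median(all_grads, axis=0))   # stays close to the honest gradients
```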

Lucas: Right.

El Mahdi: You rank numbers and then decide, "Okay, this is the ordering. This is the number that falls in the middle. This is the upper half. This is the lower half, and this is the median." When it becomes high dimensional, the median is a bit tricky. It has some computational issues. And even if you compute what we call the geometric median, an attacker can still know how to leverage the fact that you're only approximating it, because there's no closed formula—no closed form to compute the median in that dimension. But worse than that, what we showed in one of our follow-up works is that because machine learning is done in very, very, very high dimensions, you have a curse-of-dimensionality issue that makes it possible for attackers to sneak in without being spotted as far away from the median.

It can still look like the median vector. I take advantage of the fact that those vectors, those gradients, are extremely high dimensional. I would look for all the disagreements. Let's say you have a group of a couple hundred gradients, and I'm the only malicious one. I would look at the group of correct vectors, all updating you somehow in the same direction within some variance. On average, they're what we call unbiased estimators of the gradient: when you take out the randomness, the expected value they give you is the real gradient of the loss function. What I will do as a malicious worker is look at the way they disagree slightly on each direction.

I will sum that. I will see that they disagree by this much on direction one. They disagree by this much on direction two. They disagree by this much, epsilon one, epsilon two, epsilon three. I would look for all these small disagreements they have on all the components.

Lucas: Across all dimensions and high dimensional space. [crosstalk 00:16:35]

El Mahdi: Then add that up. It will be my budget, my leeway, my margin to attack you on another direction.

Lucas: I see.

El Mahdi: What we proved is that you have to mix ideas from the geometric median with ideas from the traditional component-wise median, and those are completely different things. The geometric median is a way to find a median by minimizing the sum of distances between what you are looking for and all the vectors that were proposed, while the component-wise median does the traditional job of ranking coordinates: it looks at each coordinate, ranks all the propositions, and picks the proposition that lies in the middle. What we proved in the follow-up work is that, yeah, the geometric median idea is elegant—it can make you converge, but it can make you converge to something arbitrarily bad, decided by the attacker. When you train complex models like neural nets, the landscape you optimize inside is not convex. It's not like a bowl or a cup where, if you just follow the descending slope, you end up at the lowest point.

Lucas: Right.

El Mahdi: It’s like a multitude of bowls with different heights.

Lucas: Right, so there’s tons of different local minima across the space.

El Mahdi: Exactly. So in the first paper what we showed is that ideas that look like the geometric median are enough to just converge. You converge; you provably converge. But in the follow-up, what we realized—something we were already aware of, but not enough in my opinion—is that there is this square root of D, this curse of dimensionality, that arises when you compute high dimensional distances, and that the attacker can leverage.

So in what we call the hidden vulnerability of distributed learning, you can have correct vectors, agreeing on one component. Imagine in your head some three axis system.

Let’s say that they are completely in agreement on axis three. But then in axis one, two, so in the plane formed by the axis one and axis two, they have a small disagreement.

What I will do as the malicious agent is leverage this small disagreement, and inject it in axis three. This will make you go in a slightly modified direction, and instead of going to this very deep, very good minimum, you will go into a local trap that is just close ahead.

And that comes from the fact that the loss functions of interesting models are clearly far from being convex. The models are highly dimensional, and the loss function is highly non-convex, and that creates a lot of leeway.

Lucas: It creates a lot of local minima spread throughout the space for you to attack the person into.

El Mahdi: Yeah. So convergence is not enough. So we started this research direction by formulating the following question, what does it take to guarantee convergence?

And any scheme that aggregates gradients and guarantees convergence is called Byzantine resilient. But then you realize that in very high dimensions, with highly non-convex loss functions, is convergence enough? Do you just want to converge?

There are of course people arguing about the loss landscape of deep learning models—there's this famous paper by Anna Choromanska, Yann LeCun, and Gérard Ben Arous about the landscape of neural nets, that basically says, "Yeah, the very deep local minima of neural nets are somehow just as good."

From an overly simplified point of view, it’s an optimistic paper, that tells you that you shouldn’t worry too much when you optimize neural nets about the fact that gradient descent would not necessarily go to a global like-

Lucas: To a global minima.

El Mahdi: Yeah. Just like, “Stop caring about that.”

Lucas: Because the local minima are good enough for some reason.

El Mahdi: Yeah. I think that’s a not too unfair way to summarize the paper for the sake of this talk, for the sake of this discussion. What we empirically illustrate here, and theoretically support is that that’s not necessarily true.

Because we show that with very low dimensional, not extremely complex models, trained on CIFAR-10 and MNIST—which are toy problems, very easy toy problems, low dimensional models, etc.—it's already enough to have that many parameters, let's say 100,000 parameters or less, for an attacker to always find a direction to take you away, away, away, and eventually end up at an arbitrarily bad local minimum. And then you just converge to that.

So convergence is not enough. Not only do you have to seek an aggregation rule that guarantees convergence, you have to seek aggregation rules that guarantee you will not converge to something arbitrarily bad—that you keep converging to the same high quality local minimum, whatever that means.

The hidden vulnerability is this high dimensional idea. It’s the fact that because the loss function is highly non-convex, because there’s the high dimensionality, as an attacker I would always find some direction, so the attack goes this way.

Here the threat model is that an attacker can spy on the gradients generated by the correct workers, but cannot talk on their behalf. So I cannot corrupt the messages—since you asked earlier about whether the horsemen are reliable or not.

So horsemen are reliable. I can’t talk on your behalf, but I can spy on you. I can see what are you sending to the others, and anticipate.

So I would as an attacker wait for correct workers to generate their gradients, I will gather those vectors, and then I will just do a linear regression on those vectors to find the best direction to leverage the disagreement on the D minus one remaining directions.

So because there would be this natural disagreement, this variance, in many directions, I will just do some linear regression and find what is the best direction to keep, and use the budget I gathered—those epsilons I mentioned earlier, this D times epsilon across all the directions—to inject it in the direction that maximizes my chances of taking you away from good local minima.

So you will converge, as proven in the early papers, but not necessarily to something good. But what we showed here is that if you combine ideas from multidimensional geometric medians, with ideas from single dimensional component-wise median, you improve your robustness.

Of course it comes with a price. You require three quarters of the workers to be reliable.
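A simplified, illustrative sketch of combining the two ideas just mentioned: use distances to an approximate geometric median (a Weiszfeld-style iteration) to pre-select the gradients that look globally consistent, then aggregate the survivors with a coordinate-wise median. This is only a sketch under made-up assumptions, not the exact algorithms from the papers discussed in this episode.

```python
import numpy as np

def approx_geometric_median(X, n_iter=50, eps=1e-8):
    """Weiszfeld-style iteration: minimizes the sum of distances to the rows of X."""
    y = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(X - y, axis=1) + eps
        y = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()
    return y

def robust_aggregate(gradients, n_byzantine):
    G = np.asarray(gradients)
    gm = approx_geometric_median(G)
    dist = np.linalg.norm(G - gm, axis=1)
    keep = np.argsort(dist)[: len(G) - n_byzantine]   # drop the farthest gradients
    return np.median(G[keep], axis=0)                  # coordinate-wise median

# Toy usage: 7 honest workers around the true gradient, 2 attackers far away.
rng = np.random.default_rng(3)
honest = rng.normal(loc=1.0, scale=0.1, size=(7, 5))
attack = np.full((2, 5), -50.0)
print(robust_aggregate(np.vstack([honest, attack]), n_byzantine=2))
```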

There is another direction where we expanded this problem, which is asynchrony. And asynchrony arises when as I said in the Byzantine generals setting, you don’t have a bounded delay. In the bounded delay setting, you know that horses arrive at most after one hour.

Lucas: But I have no idea if the computer on the other side of the planet is ever gonna send me that next update.

El Mahdi: Exactly. So imagine you are doing machine learning on smartphones. You are leveraging a set of smartphones all around the globe, and in different bandwidths, and different communication issues etc.

And you don’t want each time to be bottlenecked by the slowest one. So you want to be asynchronous, you don’t want to wait. You’re just like whenever some update is coming, take it into account.

Imagine some very advanced AI scenario, where you send a lot of learners all across the universe, and then they communicate with the speed of light, but some of them are five light minutes away, but some others are two hours and a half. And you want to learn from all of them, but not necessarily handicap the closest one, because there are some other learners far away.

Lucas: You want to run updates in the context of asynchrony.

El Mahdi: Yes. So you want to update whenever a gradient is popping up.

Lucas: Right. Before we move on to illustrate the problem again here is that the order matters, right? Like in the banking example. Because the 10% plus 10 is different from-

El Mahdi: Yeah. Here the order matters for different reasons. You update me so you are updating me on the model you got three hours ago. But in the meanwhile, three different agents updated me on the models, while getting it three minutes ago.

All the agents are communicating through some abstraction they call the server maybe. Like this server receives updates from fast workers.

Lucas: It receives gradients.

El Mahdi: Yeah, gradients. I also call them updates.

Lucas: Okay.

El Mahdi: Because some workers are close to me and very fast, I’ve done maybe 1000 updates, while you were still working and sending me the message.

So when your update arrives, I can tell whether it is very stale, very late, or malicious. So what we do here—and I think it's very important now to connect back a bit with classic distributed computing—is the following.

Byzantine resilience in machine learning is easier than Byzantine resilience in classical distributed computing for one reason, but it is extremely harder for another reason.

The reason it's easier is that we know what we want to agree on. We want to agree on a gradient. We have a toolbox of calculus that tells us what this looks like. We know that it's the slope of some loss function that is, for most of today's models, relatively smooth, differentiable, maybe Lipschitz, with bounded curvature, whatever.

So we know that we are agreeing on vectors that are gradients of some loss function, and we know that there is a majority of workers producing vectors that tell us what a legit vector looks like.

You can find some median behavior, and then come up with filtering criteria that get rid of the bad gradients. That's the good news. That's why it's easier to do Byzantine resilience in machine learning than to do Byzantine agreement—because agreement is a way harder problem.

The reason why Byzantine resilience is harder in machine learning than in the typical settings you have in distributed computing is that we are dealing with extremely high dimensional data, extremely high dimensional decisions.

So a decision here is to update the model. It is triggered by a gradient. So whenever I accept a gradient, I make a decision. I make a decision to change the model, to take it away from this state, to this new state, by this much.

But this is a multidimensional update. And Byzantine agreement, or Byzantine approximate agreement, in higher dimensions has been proven hopeless by Hammurabi Mendes and Maurice Herlihy in an excellent paper in 2013, where they show that you can't do Byzantine agreement in D dimensions with N agents in less than N to the power D computations per agent, locally.

Of course in their paper they meant Byzantine agreement on positions, so they were framing it with a motivation saying, "This is N to the power D, but the typical cases we care about in distributed computing are robots agreeing on a position on a plane, or on a position in three-dimensional space." So D is two or three.

So N to the power two or N to the power three is fine. But in machine learning D is not two and three, D is a billion or a couple of millions. So N to the power a million is just like, just forget.

And not only that, but they also require … Remember when I told you that Byzantine-resilient computing would always have some upper bound on the number of malicious agents?

Lucas: Mm-hmm (affirmative).

El Mahdi: So the number of total agents should exceed D times the number of malicious agents.

Lucas: What is D again sorry?

El Mahdi: Dimension.

Lucas: The dimension. Okay.

El Mahdi: So if you have to agree on D dimension, like on a billion dimensional decision, you need at least a billion times the number of malicious agents.

So if you have say 100 malicious agents, you need at least 100 billion total number of agents to be resistant. No one is doing distributed machine learning on 100 billion-

Lucas: And this is because the dimensionality is really screwing with the-

El Mahdi: Yes. Byzantine approximate agreement has been proven hopeless. That's the bad news, and that's why the dimensionality of machine learning makes it really important to completely go away from traditional distributed computing solutions.

Lucas: Okay.

El Mahdi: So we are not doing agreement. We’re not doing agreement, we’re not even doing approximate agreement. We’re doing something-

Lucas: Totally new.

El Mahdi: Not new, totally different.

Lucas: Okay.

El Mahdi: Called gradient decent. It’s not new. It’s as old as Newton. And it comes with good news. It comes with the fact that there are some properties, like some regularity of the loss function, some properties we can exploit.

And so in the asynchronous setting, it becomes even more critical to leverage those differentiability properties. So because we know that we are optimizing a loss functions that has some regularities, we can have some good news.

And the good news has to do with curvature. What we do here in asynchronous setting, is not only we ask workers for their gradients, we ask them for their empirical estimate of the curvature.

Lucas: Sorry. They’re estimating the curvature of the loss function, that they’re adding the gradient to?

El Mahdi: They add the gradient to the parameter, not the loss function. So we have a loss function, parameter is the abscissa, you add the gradient to the abscissa to update the model, and then you end up in a different place of the loss function.

So you have to imagine the loss function as like a surface, and then the parameter space as the plane, the horizontal plane below the surface. And depending on where you are in the space parameter, you would be on different heights of the loss function.

Lucas: Wait sorry, so does the gradient depend where you are on this, the bottom plane?

El Mahdi: Yeah [crosstalk 00:29:51]-

Lucas: So then you send an estimate for what you think the slope of the intersection will be?

El Mahdi: Yeah. But for asynchrony, not only that. I will ask you to send me the slope, and your observed empirical growth of the slope.

Lucas: The second derivative?

El Mahdi: Yeah.

Lucas: Okay.

El Mahdi: But the second derivative, again, in high dimensions is very hard to compute. You have to compute the Hessian matrix.

Lucas: Okay.

El Mahdi: That’s something like completely ugly to compute in high dimensional situations because it takes D square computations.

As an alternative, we would like you to send us a computation that is linear in D, not quadratic in D.

So we would ask you to compute your actual gradient, your previous gradient, the difference between them, and normalize it by the difference between models.

So, “Tell us your current gradient, by how much it changed from the last gradient, and divide that by how much you changed the parameter.”

So you would tell us, “Okay, this is my current slope, and okay this is the gradient.” And you will also tell us, “By the way, my slope change relative to my parameter change is this much.”

And this would be some empirical estimation of the curvature. So if you are in a very curved area-

Lucas: Then the estimation isn’t gonna be accurate because the linearity is gonna cut through some of the curvature.

El Mahdi: Yeah but if you are in a very curved area of the loss function, your slope will change a lot.

Lucas: Okay. Exponentially changing the slope.

El Mahdi: Yeah. Because you made a very tiny change in the parameter and it changed the slope a lot.

Lucas: Yeah. Will change the … Yeah.

El Mahdi: When you are in a non-curved area of the loss function, it’s less harmful for us that you are stale, because you will just technically have the same updates.

If you are in a very curved area of the loss function, your updates being stale is now a big problem. So we want to discard your updates proportionally to your curvature.

So this is the main idea of this scheme in asynchrony, where we would ask workers about their gradient, and their empirical growth rates.

And then of course I don’t want to trust you on what you declare, because you can plan to screw me with some gradients, and then declare a legitimate value of the curvature.

I will take those empirical growth rates—what we call in the paper empirical Lipschitz-ness. So we ask you for this empirical growth rate, and it's a scalar, remember? This is very important. It's a single-dimensional number.

And so we ask you about this growth rate, and we ask all of you about growth rates, again assuming the majority is correct. So the majority of growth rates will help us set the median growth rate in a robust manner, because as long as a simple majority is not lying, the median growth rates will always be bounded between two legitimate values of the growth rate.

Lucas: Right because, are you having multiple workers inform you of the same part of your loss function?

El Mahdi: Yes. Even though they do it in an asynchronous manner.

Lucas: Yeah. Then you take the median of all of them.

El Mahdi: Yes. And then we reason by quantiles of the growth rates.

Lucas: Reason by quantiles? What are quantiles?

El Mahdi: The first third, the second third, the third third. Like the first 30%, the second 30%, the third 30%. We will discard the first 30%, discard the last 30%. Anything in the second 30% is safe.

Of course this has some level of pessimism, which is good for safety, but not very good for being fast. Because maybe people are not lying, so maybe the first 30%, and the last 30% are also values we could consider. But for safety reasons we want to be sure.
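A minimal sketch of the two ingredients described here: the worker-side empirical growth rate and the server-side quantile filter. The function names and the one-third thresholds below are illustrative, not the exact published algorithm.

```python
import numpy as np

def empirical_growth_rate(grad_now, grad_prev, params_now, params_prev, eps=1e-12):
    """Worker-side scalar: how much my slope changed per unit of parameter change."""
    return np.linalg.norm(grad_now - grad_prev) / (np.linalg.norm(params_now - params_prev) + eps)

def quantile_filter(reported_rates, low=1/3, high=2/3):
    """Server-side: keep only updates whose declared growth rate falls in the
    middle third of all reported rates (discard the lowest and highest thirds)."""
    rates = np.asarray(reported_rates)
    lo, hi = np.quantile(rates, [low, high])
    return np.where((rates >= lo) & (rates <= hi))[0]   # indices of accepted workers

# Toy usage: most workers report similar curvature, one reports something wild.
reported = [0.9, 1.1, 1.0, 0.95, 1.05, 50.0, 1.02]
print(quantile_filter(reported))   # the 50.0 report falls outside the middle third
```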

Lucas: You want to try to get rid of the outliers.

El Mahdi: Possible.

Lucas: Possible outliers.

El Mahdi: Yeah. So we get rid of the first 30%, the last 30%.

Lucas: So this ends up being a more conservative estimate of the loss function?

El Mahdi: Yes. That’s completely right. We explain that in the paper.

Lucas: So there’s a trade off that you can decide-

El Mahdi: Yeah.

Lucas: By choosing what percentiles to throw away.

El Mahdi: Yeah. Safety never comes for free. So here, depending on how good your estimate of the number of potential Byzantine actors is, your level of pessimism will translate into slowdown.

Lucas: Right. And so you can update the amount that you’re cutting off-

El Mahdi: Yeah.

Lucas: Based off of the amount of expected corrupted signals you think you’re getting.

El Mahdi: Yeah. So now imagine a situation where the number of workers is known. You know that you are leveraging 100,000 smartphones doing gradient descent for you. Let's call that N.

You know that F of them might be malicious. We argue that if F exceeds a third of N, you can't do anything. So we are in a situation where F is less than a third—less than 33,000 workers are malicious—and then the slowdown would be F over N, so a third.

What if you are in a situation where you know that your malicious agents are way less than a third? For example you know that you have at most 20 rogue accounts in your video sharing platform.

And your video sharing platform has two billion accounts. So you have two billion accounts.

Lucas: 20 of them are malevolent.

El Mahdi: What we show is that the slowdown would be N minus F divided by N. N is the two billion accounts, F is the 20, and the denominator is again two billion.

So it would be two billion minus 20, divided by two billion—something like 0.99999999. So you would go almost as fast as the non-Byzantine-resilient scheme.
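The arithmetic, spelled out:

```python
# Slowdown of the Byzantine-resilient scheme, as described: (N - F) / N.
N, F = 2_000_000_000, 20
print((N - F) / N)   # 0.99999999, i.e. almost no slowdown when F << N
```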

So our Byzantine resilient scheme has a slowdown that is very reasonable in situations where F, the number of malicious agents is way less than N, the total number of agents, which is typical in modern…

Today, if you ask social media platforms, they have a lot of toolkits to prevent people from creating a billion fake accounts. You can't create an army of several million accounts in 20 hours.

None of the mainstream social media platforms today are susceptible to this-

Lucas: Are susceptible to massive corruption.

El Mahdi: Yeah. To this massive account creation. So you know that the number of corrupted accounts are negligible to the number of total accounts.

So that’s the good news. The good news is that you know that F is negligible to N. But then the slowdown of our Byzantine resilient methods is also close to one.

But it has the advantage, compared to today's state of the art for training in distributed settings, of not taking the average gradient. And we argued in the very beginning that for those 20 accounts that you could create, it doesn't take a bot army or whatever; you don't need to hack into the machines of the social network. You can have a dozen humans sitting somewhere in a house, manually creating 20 accounts, training the accounts over time, behaving in ways that make them look legitimate on some topics—and then, because your distributed machine learning scheme averages the gradients generated by people's behavior, making it recommend anti-vaccine content, controversies, or anti-Semitic conspiracy theories.

Lucas: So if I have 20 bad gradients and like, 10,000 good gradients for a video, why is it that with averaging 20 bad gradients are messing up the-

El Mahdi: The amplitude. It's like the billionaire in the room of poor academics.

Lucas: Okay, because the amplitude of each of their accounts is greater than the average of the other accounts?

El Mahdi: Yes.

Lucas: The average of other accounts that are going to engage with this thing don’t have as large of an amplitude because they haven’t engaged with this topic as much?

El Mahdi: Yeah, because they’re not super credible on gun control, for example.

Lucas: Yeah, but aren’t there a ton of other accounts with large amplitudes that are going to be looking at the same video and correcting over the-

El Mahdi: Yeah, let’s define large amplitudes. If you come to the video and just like it, that’s a small update. What about you like it, post very engaging comments-

Lucas: So you write a comment that gets a lot of engagement, gets a lot of likes and replies.

El Mahdi: Yeah, that's how you increase your amplitude. And because you are already doing a good job of becoming the reference on that video-sharing platform when it comes to discussing gun control, the amplitude of your comments is by definition high—plus the fact that your comment was posted very early on, and that not only did you comment on the video but you also produced a follow-up video.

Lucas: I see, so the gradient is really determined by a multitude of things that the video-sharing platform is measuring for, and the metrics are like, how quickly you commented, how many people commented and replied to you. Does it also include language that you used?

El Mahdi: Probably. It depends on the social media platform, it depends on the video-sharing platform, and what is clear is that there are many schemes those 20 accounts created by this dozen people in a house can use to maximize the amplitude of their generated gradients. But this is a way easier problem than the typical problems we have in technical AI safety. This is not value alignment or value loading or coherent extrapolated volition. This is a very easy, tractable problem on which we now have good news, provable results. What's interesting is the follow-up questions we are trying to investigate with my colleagues, the first of which is: you don't necessarily have a majority of people on the internet promoting vaccines.

Lucas: People that are against things are often louder than people that are not.

El Mahdi: Yeah, makes sense, and sometimes maybe more numerous in terms of content, because they generate content and the people who think vaccines are safe are not creating content. On some topics it might be safe to say that we have a majority of reasonable, decent people on the internet. But there are some topics where that no longer holds—like the vaccine situation: there's a surge now of anti-vaccine resentment in western Europe and the US. Ironically this is happening in developed countries now, because people are too young to remember the non-vaccinated era. I come from Morocco; my aunt is handicapped by polio, so I grew up seeing what a non-vaccinated person looks like. Young people in the more developed countries never had a living example of the non-vaccinated past.

Lucas: But they do have examples of people that end up with autism and it seems correlated with vaccines.

El Mahdi: Yeah, the anti-vaccine content may just end up being so clickbaity and so provocative that it gets popular. So this is a topic where the majority hypothesis, which is crucial to poisoning resilience, does not hold. An open follow-up we’re onto now is how to combine ideas from reputation metrics, PageRank, et cetera, with poisoning resilience. So for example you have the National Health Institute, Johns Hopkins Hospital, Harvard Medical School, and, I don’t know, Massachusetts General Hospital having official accounts on some video-sharing platform, and then you can spot what they say on some topic, because now we are very good at doing semantic analysis of content.

And you know that, okay, on the tag vaccines, there’s this bunch of experts, and then what you want to make emerge on your platform is some sort of epistocracy. The power is given to the knowledgeable, like we have in some fields, like in medical regulation. The FDA doesn’t do a majority vote. We don’t have a popular majority vote across the country to tell the FDA whether it should approve this new drug or not. The FDA does some sort of epistocracy where the knowledgeable experts on the topic vote. So how about mixing in ideas from social choice?
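[Editor’s illustration: a purely hypothetical sketch of this “epistocracy” direction. This is an open research question the speaker is describing, not a published system; the account names, scores, and weighting rule below are all invented. The idea is simply that each account’s contribution on a given tag could be weighted by a topic-specific reputation score, so a handful of recognized expert accounts outweigh a burst of low-reputation ones.]

```python
import numpy as np

# Hypothetical per-account "votes" on how strongly to promote a piece of vaccine content,
# in [-1, 1], together with made-up reputation scores for the tag "vaccines".
accounts = {
    "nih_official":     (+0.9, 100.0),  # recognized expert accounts, high topic reputation
    "johns_hopkins":    (+0.8, 100.0),
    "harvard_medical":  (+0.8,  90.0),
    "random_account_1": (-1.0,   1.0),  # low-reputation accounts pushing the other way
    "random_account_2": (-1.0,   1.0),
    "random_account_3": (-1.0,   1.0),
}

values  = np.array([v for v, _ in accounts.values()])
weights = np.array([w for _, w in accounts.values()])

# Unweighted, majority-style average: three noisy accounts cancel out the experts.
print("plain average:      ", round(float(values.mean()), 3))

# Reputation-weighted average: the expert accounts dominate on their topic of expertise.
print("reputation-weighted:", round(float(np.average(values, weights=weights)), 3))
```

[How to compute such reputation scores robustly, and how to combine them with poisoning-resilient aggregation without creating a new attack surface, is exactly the open follow-up being described.]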

Lucas: And topics in which there are experts who can inform.

El Mahdi: Yeah. There’s also a general follow-up of just straight-out trying to connect Byzantine-resilient learning with social choice, but then there’s another set of follow-ups that motivates me even more. We were mentioning workers: workers are people generating accounts on social media, and accounts generate gradients. In all of that I implicitly assumed that the server, the abstraction that’s gathering those gradients, is reliable. What about the aggregating platform itself being deployed on rogue machines? So imagine you are whatever platform doing learning. By the way, whatever we have said from the beginning until now applies as long as you do gradient-based learning. So it can be recommender systems. It can be training some deep reinforcement learning agent on some super complicated task, to beat, I don’t know, the world champion in poker.

We do not care, as long as there’s some gradient generation from observing some state, some environmental state, and some reward or some label. It can be supervised or reinforcement learning; as long as it’s gradient-based, what we said applies. Imagine now you have this platform leveraging distributed gradient creators, but then the platform itself, for security reasons, is deployed on several machines for fault tolerance. But then those machines themselves can fail. You have to make the servers agree on the model, despite the fact that a fraction of the workers are not reliable, and now a fraction of the servers themselves as well. This is the most important follow-up I’m into now, and I think there will be something on arXiv maybe in February or March on that.

And then a third follow-up is practical instances of that. So far I’ve been describing speculative thought experiments on how to poison systems, but there are actually brilliant master’s students working with me who are doing exactly that, on typical recommender system datasets, where you can see that it’s very easy: it really takes only a bunch of active agents to poison a hundred thousand honest ones or more. People working on big social media platforms would probably have ways to assess what I’ve said, and as researchers in academia we can only speculate on what can go wrong on those platforms, so what we did is take state-of-the-art recommender system datasets and models that are publicly available, and you can show that despite having a large number of reliable recommendation proposers, a small, tiny fraction of proposers can make, I don’t know, a movie recommendation system recommend the most suicide-triggering film to the most depressed person watching through your platform. So I’m saying, that’s something you don’t want to have.

Lucas: Right. Just wrapping this all up, how do you see this in the context of AI alignment and the future of machine learning and artificial intelligence?

El Mahdi: So I’ve been discussing this here with people in the Beneficial AI conference and it seems that there are two schools of thought. I am still hesitating between the two because I switched within the past three months from the two sides like three times. So one of them thinks that an AGI is by definition resilient to poisoning.

Lucas: Aligned AGI might be by definition.

El Mahdi: Not even aligned. The second school of thought is that aligned AGI is Byzantine-resilient.

Lucas: Okay, I see.

El Mahdi: Obviously aligned AGI would be poisoning-resilient, but let’s just talk about superintelligent AI, not necessarily aligned. So you have a superintelligence: would you include poisoning resilience in the superintelligence definition or not? And one would say that yeah, if you are better than humans at whatever task, it means you are also better than humans at spotting poisoned data.

Lucas: Right, I mean the poison data is just messing with your epistemics, and so if you’re super intelligent your epistemics would be less subject to interference.

El Mahdi: But then there is that second school of thought, which I switched back to again because I find that most people are in the first school of thought now. So I believe that superintelligence doesn’t necessarily include poisoning resilience, because of what I call practically time-constrained superintelligence. If you have a deadline, because of computational complexity, you have to learn something, which can sometimes-

Lucas: Yeah, you want to get things done.

El Mahdi: Yeah, so you want to get it done in a finite amount of time. And because of that you will end up leveraging other agents’ observations to speed up your learning. So if a malicious agent just puts up bad observations of the environment, or bad labeling of whatever is around you, then it can make you learn something other than what you would like as an aligned outcome. I’m strongly on the second side, despite many disagreeing with me here. I don’t think superintelligence includes poisoning resilience, because superintelligence would still be built with time constraints.

Lucas: Right. You’re making a tradeoff between safety and computational efficiency.

El Mahdi: Right.

Lucas: It also would obviously seem to matter the kind of world that the ASI finds itself in. If it knows that it’s in a world with no, or very, very, very few malevolent agents that are wanting to poison it, then it can just throw all of this out of the window, but the problem is that we live on a planet with a bunch of other primates that are trying to mess up our machine learning. So I guess just as a kind of fun example in taking it to an extreme, imagine it’s the year 300,000 AD and you have a super intelligence which has sort of spread across space-time and it’s beginning to optimize its cosmic endowment, but it gives some sort of uncertainty over space-time to whether or not there are other super intelligences there who might want to poison its interstellar communication in order to start taking over some of its cosmic endowment. Do you want to just sort of explore?

El Mahdi: Yeah, that was like a thought experiment I proposed earlier to Carl Shulman from the FHI. Imagine some superintelligence reaching a planet where there is a smart form of life emerging from electric communication between plasma clouds. So completely non-carbon, non-silicon based.

Lucas: So if Jupiter made brains on it.

El Mahdi: Yeah, like Jupiter made brains on it just out of electric communication through gas clouds.

Lucas: Yeah, okay.

El Mahdi: And then this form of life is smart enough to know that this is a superintelligence reaching the planet to learn about this form of life, and then it would just start trolling it.

Lucas: It’ll start trolling the super intelligence?

El Mahdi: Yeah. So they would come up with an agreement ahead of time, saying, “Yeah, there’s this superintelligence coming from Earth to discover how we do things here. Let’s just behave dumbly, or let’s just misbehave.” And then the superintelligence will start collecting data on this life form and then come back to Earth saying, “Yeah, they’re just a dumb, passive plasma form, nothing interesting.”

Lucas: I mean, you don’t think that within the super intelligence’s model, I mean, we’re talking about it right now so obviously a super intelligence will know this when it leaves that there will be agents that are going to try and trick it.

El Mahdi: That’s the rebuttal, yes. That’s the rebuttal again. Again, how much time does super intelligence have to do inference and draw conclusions? You will always have some time constraints.

Lucas: And you don’t always have enough computational power to model other agents efficiently to know whether or not they’re lying, or …

El Mahdi: You could always come up with thought experiment with some sort of other form of intelligence, like another super intelligence is trying to-

Lucas: There’s never, ever a perfect computer science, never.

El Mahdi: Yeah, you can say that.

Lucas: Security is never perfect. Information exchange is never perfect. But you can improve it.

El Mahdi: Yeah.

Lucas: Wouldn’t you assume that the complexity of the attacks would also scale? We just have a ton of people working on defense, but if we have an equal amount of people working on attack, wouldn’t we have an equally complex method of poisoning that our current methods would just be overcome by?

El Mahdi: That’s part of the empirical follow-up I mentioned, the one Isabella and I were working on, which is trying to set up some sort of min-max game of poisoner versus poisoning-resilient learner: an adversarial poisoning setting where there is a poisoner and there is a resilient learner, and the poisoner tries to maximize the damage. And what we have so far is very depressing. It turns out that it’s very easy to be a poisoner. Computationally it’s way easier to be the poisoner than to be-

Lucas: Yeah, I mean, in general in the world it’s easier to destroy things than to create order.

El Mahdi: As I said in the beginning, this is a sub-topic of technical AI safety where I believe it’s easier to have tractable, formalizable problems for which you can provably have a safe solution.

Lucas: Solution.

El Mahdi: But in very concrete, very short-term aspects of that: in March we are going to announce a major update in TensorFlow, which is the standout framework today for doing distributed machine learning, open-sourced by Google. We will announce it, hopefully if everything goes right, at SysML, the Systems for Machine Learning conference, with more empirically focused colleagues. Based on the algorithms I mentioned earlier, which were presented at NeurIPS and ICML over the past two years, they will announce a major update where they basically replace every averaging step inside TensorFlow with those three algorithms I mentioned, Krum and Bulyan and soon Kardam, which constitute our portfolio of Byzantine resilience algorithms.

Another consequence that comes for free with that is this: distributed machine learning frameworks like TensorFlow use TCP/IP as a communication protocol. TCP/IP has a problem: it’s reliable but it’s very slow. You have to repeat some messages, et cetera, to guarantee reliability, and we would like to have a faster communication protocol, like UDP. We don’t need to go through the details, but it has some packet drop, so so far there was no version of TensorFlow or any distributed machine learning framework, to my knowledge, using UDP. They all used TCP/IP because they needed reliable communication, but now, because we are Byzantine-resilient, we can afford having fast but not completely reliable communication protocols like UDP. So one of the things that comes for free with Byzantine resilience is that you can move from heavy-

Lucas: A little bit more computation.

El Mahdi: -yeah, heavy communication protocols like TCP/IP to lighter, faster communication protocols like UDP.

Lucas: Keeping in mind you’re trading off.

El Mahdi: Exactly. Now we have this portfolio of algorithms which can serve many other applications besides just making distributed machine learning faster, like making recommender systems for social media poisoning-resilient, and hopefully making AGI learning poisoning-resilient, for that matter.
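[Editor’s illustration: for readers who want a concrete picture of what one of these aggregation rules looks like, here is a small sketch in the spirit of Krum, the selection rule named above: instead of averaging, each proposed gradient is scored by its distance to its closest neighbors, and the single best-scoring gradient is selected. This is a simplified toy written for this transcript, not the production implementation being described.]

```python
import numpy as np

def krum(gradients: np.ndarray, f: int) -> np.ndarray:
    """Select one gradient in the spirit of Krum.

    gradients: array of shape (n, d), one proposed gradient per worker.
    f: assumed upper bound on the number of Byzantine (arbitrarily bad) workers.
    Each gradient is scored by the sum of squared distances to its n - f - 2
    closest neighbors; the gradient with the smallest score is returned.
    """
    n = len(gradients)
    dists = np.sum((gradients[:, None, :] - gradients[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        neighbors = np.sort(np.delete(dists[i], i))   # distances to the other gradients
        scores.append(neighbors[: n - f - 2].sum())   # only the closest n - f - 2 count
    return gradients[int(np.argmin(scores))]

# Toy check: 8 honest workers proposing small gradients, 2 Byzantine workers at -50.
rng = np.random.default_rng(0)
grads = np.vstack([rng.normal(0.1, 0.01, size=(8, 3)), np.full((2, 3), -50.0)])
print("mean:", grads.mean(axis=0))   # dragged far off course by the two bad workers
print("krum:", krum(grads, f=2))     # selects one of the honest gradients
```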

Lucas: Wonderful. So if people want to check out some of your work or follow you on social media, what is the best place to keep up with you?

El Mahdi: Twitter. My handle is El Badhio, so maybe you would have it written down on the description.

Lucas: Yeah, cool.

El Mahdi: Yeah, Twitter is the best way to get in touch.

Lucas: All right. Well, wonderful. Thank you so much for speaking with me today and I’m excited to see what comes out of all this next.

El Mahdi: Thank you. Thank you for hosting this.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

[end of recorded material]

AI Alignment Podcast: Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell (Beneficial AGI 2019)

What motivates cooperative inverse reinforcement learning? What can we gain from recontextualizing our safety efforts from the CIRL point of view? What possible role can pre-AGI systems play in amplifying normative processes?

Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell is the eighth podcast in the AI Alignment Podcast series, hosted by Lucas Perry, and was recorded at the Beneficial AGI 2019 conference in Puerto Rico. For those of you that are new, this series covers and explores the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, Lucas will speak with technical and non-technical researchers across areas such as machine learning, governance, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

In this podcast, Lucas spoke with Dylan Hadfield-Menell. Dylan is a 5th year PhD student at UC Berkeley advised by Anca Dragan, Pieter Abbeel and Stuart Russell, where he focuses on technical AI alignment research.

Topics discussed in this episode include:

  • How CIRL helps to clarify AI alignment and adjacent concepts
  • The philosophy of science behind safety theorizing
  • CIRL in the context of varying alignment methodologies and its role
  • If short-term AI can be used to amplify normative processes
You can follow Dylan here and find the Cooperative Inverse Reinforcement Learning paper here. You can listen to the podcast above or read the transcript below.

Lucas: Hey everyone, welcome back to the AI Alignment Podcast series. I’m Lucas Perry and today we will be speaking for a second time with Dylan Hadfield-Menell on cooperative inverse reinforcement learning, the philosophy of science behind safety theorizing, CIRL in the context of varying alignment methodologies, and if short-term AI can be used to amplify normative processes. This time it just so happened to be an in-person discussion at Beneficial AGI 2019, FLI’s sequel to the Beneficial AI 2017 conference at Asilomar.

I have a bunch more conversations that resulted from this conference to post soon, and you can find more details about the conference in the coming weeks. As always, if you enjoy this podcast, please subscribe or follow us on your preferred listening platform. As many of you will already know, Dylan is a fifth year Ph.D. student at UC Berkeley, advised by Anca Dragan, Pieter Abbeel, and Stuart Russell, where he focuses on technical AI Alignment research. And so without further ado, I’ll give you Dylan.

Thanks so much for coming on the podcast again, Dylan, that’s been like a year or something. Good to see you again.

Dylan: Thanks. It’s a pleasure to be here.

Lucas: So just to start off, we can go ahead and begin speaking a little bit about your work on cooperative inverse reinforcement learning and whatever sorts of interesting updates or explanation you have there.

Dylan: Thanks. For me, working on cooperative IRL has been a pretty long process; it really dates back to the start of the second year of my PhD, when my advisor came back from a yearlong sabbatical and suggested that we entirely change the research direction we were thinking about.

That was to think about AI Alignment and AI Safety and the associated concerns that that might bring. And our first attempt at really doing research in that area was to try to formalize: what’s the problem that we’re looking at, and what are the space of parameters and the space of solutions that we should be thinking about in studying that problem?

And so it led us to write Cooperative Inverse Reinforcement Learning. Since then I’ve had a large amount of conversations where I’ve had incredible difficulty trying to convey what it is that we’re actually trying to do here and what exactly that paper and idea represents with respect to AI Safety.

One of the big updates for me and one of the big changes since we’ve spoken last, is getting a little bit of a handle on really what’s the value of that as the system. So for me, I’ve come around to the point of view that really what we were trying to do with cooperative IRL was to propose an alternative definition of what it means for an AI system to be effective or rational in some sense.

And so there’s a story you can tell about artificial intelligence, which is that we started off and we observed that people were smart and they were intelligent in some way, and then we observed that we could get computers to do interesting things. And this posed the question of can we get computers to be intelligent? We had no idea what that meant, no idea how to actually nail it down and we discovered that in actually trying to program solutions that looked intelligent, we had a lot of challenges.

So one of the big things that we did as a field was to look over next door into the economics department in some sense, to look at the sorts of models that they have of decision-theoretic rationality, really looking at homo economicus as an ideal to shoot for. From that perspective, actually, a lot of the field of AI has shifted to be about effective implementations of homo economicus.

In my terminology, this is about systems that are effectively individually rational. These are systems that are good at optimizing for their goals, and a lot of the concerns that we have about AI Safety is that systems optimizing for their own goals could actually lead to very bad outcomes for the rest of us. And so what cooperative IRL attempts to do is to understand what it would mean for a human robot system to behave as a rational agent.

In that sense, we’re moving away from having a box drawn around the AI system or the artificial component of the system to having that agent box drawn around the person and the system together, and we’re trying to model the important parts of the value alignment problem in our formulation here. And in this case, we went with the simplest possible set of assumptions, which are basically that we have a static set of preferences, the human’s preferences, that they’re trying to optimize. This is effectively the human’s welfare.

The world is fully observable, and the robot and the person are both working to maximize the human’s welfare, but there is this information bottleneck, this information asymmetry, that we think is a fundamental component of the value alignment problem. And so really what cooperative IRL is, is a definition of how a human and a robot system together can be rational in the context of fixed preferences in a fully observable world state.

Lucas: There’s a point here about metatheory, about coming up with models and theory. It seems like the fundamental issue is that, given how insanely complex AI Alignment is, trying to converge on whatever the most efficacious model is, is very, very difficult. People keep flipping back and forth about theoretically how we’re actually going to do this, even in very grid-world or toy environments. So it seems very, very hard to isolate the best variables, or which variables can be modeled and tracked in ways that are going to help us most.

Dylan: So, I definitely think that this is not an accurate model of the world and I think that there are assumptions here which, if not appropriately reexamined, would lead to a mismatch between the real world and things that work in theory.

Lucas: Like human beings having static preferences.

Dylan: So for example, yes, I don’t claim to know what human preferences really are and this theory is not an attempt to say that they are static. It is an attempt to identify a related problem to the one that we’re really faced with, that we can actually make technical and theoretical progress on. That will hopefully lead to insights that may transfer out towards other situations.

I certainly recognize that what I’m calling a theta in that paper is not really the same thing that everyone talks about when we talk about preferences. In talking with philosophers, I’ve discovered I think it’s a little bit closer to things like welfare in a moral philosophy context, which maybe you could think about as being a more static object that you would want to optimize.

In some sense theta really is an encoding of what you would like the system to do, in general is what we’re assuming there.

Lucas: Because it’s static.

Dylan: Yes, and to the extent that you want to have that be changing over time, I think that there’s an interesting theoretical question as to how that actually is different, and what types of changes that leads to and whether or not you can always reduce something with non-static preferences to something with static preferences from a mathematical point of view.

Lucas: I can see how moving from static to changing over time just makes it so much more insanely complex.

Dylan: Yeah, and it’s also really complex at the level of: it’s philosophically unclear what the right thing to do is.

Lucas: Yeah, that’s what I mean. Yeah, you don’t even know what it even means to be aligning as the values are changing, like whether or not the agent even thinks that they just moved in the right direction or not.

Dylan: Right, and I also even think I want to point out how uncertain all of these things are. We as people are hierarchical organizations of different behaviors and observation systems and perception systems. And we believe we have preferences, we have a name for that, but there is a sense in which that is ultimately a fiction of some kind.

It’s a useful tool that we have to talk about ourselves to talk about others that facilitates interaction and cooperation. And so given that I do not know the answer to these philosophical questions, what can I try to do as a technical researcher to push the problem forward and to make actual progress?

Lucas: Right, and so it’s sort of again, like a metatheoretical point and what people are trying to do right now in the context of AI Alignment, it seems that the best thing for people to be doing is sort of to be coming up with these theoretical models and frameworks, which have a minimum set of assumptions which may be almost like the real world but are not, and then making theoretical progress there that will hopefully in the future transfer, as you said to other problems as ML and deep learning gets better and the other tools are getting better so that it’ll actually have the other tools to make it work with more complicated assumptions.

Dylan: Yes, I think that’s right. The way that I view this is: we had AI, this broad, vague thing. Through the course of AI research, we kind of got to Markov decision processes as a sort of coordinating theory around what it means for us to design good agents, and cooperative IRL is an attempt to take a step from Markov decision processes more closely towards the set of problems that we want to study.

Lucas: Right, and so I think this is like a really interesting point that I actually haven’t talked to anyone else about, and if you have a few more words about it, I think it would be really interesting. So just in terms of being a computer scientist and being someone who is working on the emerging theory of a field, I think it’s often unclear what the actual theorizing process is behind how people get to CIRL. How did someone get to debate? How did someone get to iterated amplification?

It seems like you first identify problems which you see to be crucial and then there are some sorts of epistemic and pragmatic heuristics that you apply to try and begin to sculpt a model that might lead to useful insight. Would you have anything to correct or unpack here?

Dylan: I mean, I think that is a pretty good description of a pretty fuzzy process.

Lucas: But like being a scientist or whatever?

Dylan: Yeah. I don’t feel comfortable speaking for scientists in general here, but I could maybe say a little bit more about my particular process, which is that I try to think about how I’m looking at the problem differently from other people based on different motivations and different goals that I have. And I try to lean into how that can push us in different directions. There’s a lot of other really, really smart people who have tried to do lots of things.

You have to maintain an amount of intellectual humility about your ability to out think the historical components of the field. And for me, I think that in particular for AI Safety, it’s thinking about reframing what is the goal that we’re shooting towards as a field.

Lucas: Which we don’t know.

Dylan: We don’t know what those goals are, absolutely. And I think that there is a sense in which the field has not re-examined those goals incredibly deeply. For a little bit, I think that it’s so hard to do anything that looks intelligent in the real world that we’ve been trying to focus on that individually rational Markov decision process model. And I think that a lot of the concerns about AI Safety are really a call for AI as a field to step back and think about what we’re trying to accomplish in the world and how we can actually try to achieve beneficial outcomes for society.

Lucas: Yeah, and I guess there’s like a sociological phenomenon within scientists, or people who are committed to empirical things. In terms of reanalyzing what the goal of AI Alignment is, there’s the sort of area of moral philosophy and ethics and other things, which for empirically leaning, rational people can be distasteful, because you can’t just take a telescope to the universe and see like a list of what you ought to do.

And so it seems like people like to defer on these questions. I don’t know. Do you have anything else to add here?

Dylan: Yeah. I think computer scientists in particular are selected to be people who like having boxed off problems that they know how to solve and feel comfortable with, and that leaning into getting more people with a humanities bent into computer science and broadly AI in particular, AI Safety especially is really important and I think that’s a broad call that we’re seeing come from society generally.

Lucas: Yeah, and I think it also might be wrong though to model the humanities questions as those which are not in boxes and cannot be solved. That’s sort of like a logical positivist thing to say, that on one end we have the hard things where you just have to look at the world enough and you’ll figure it out, and then there’s the soft squishy things which deal with abstractions that don’t have real answers, but people with fluffy degrees need to come up with things that seem right but aren’t really right.

Dylan: I think it would be wrong to take what I just said in that direction, and if that’s what it sounds like I definitely want to correct that. I don’t think there is a sense in which computer science is a place where there are easy right answers, and that the people in humanities are sort of waving their hands and sort of fluffing around.

This is sort of leaning into a more AI-value-alignment kind of framing, or way of thinking about it. But when I think about bringing AI systems into the world, I think about which things you can afford to get wrong in your specification and which things you cannot afford to get wrong in your specification.

In this sense, specifying physics incorrectly is much, much better than specifying the objective incorrectly, at least by default. And the reason for that is that what happens to the world when you push it is a question that you can answer from your observations. And so if you start off in the wrong place, as long as you’re learning and adapting, I can reasonably expect my systems to correct for that. Or at least the goal of successful AI research is that your systems will effectively adapt to that.

However, the task that your system is supposed to do is sort of arbitrary in a very fundamental sense. And from that standpoint, it is on you as the system designer to make sure that objective is specified correctly. When I think about what we want to do as a field, I end up taking a similar lens, in that there’s a sense in which we as researchers and people and society and philosophers and all of it are trying to figure out what we’re trying to do and what we want to task the technology with, and the directions that we want to push it in. And then there are questions of what will the technology be like and how should it function, which will be informed by that and shaped by that.

And I think that there is a sense in which that is arbitrary. Now, what is right? That I don’t really know the answer to and I’m interested in having those conversations, but they make me feel uneasy. I don’t trust myself on those questions, and that could mean that I should learn how to feel more uneasy and think about it more and in doing this research I have been kind of forced into some of those conversations.

But I also do think that for me at least I see a difference between what can we do and what should we do. And thinking about what should we do as a really, really hard question that’s different than what can we do.

Lucas: Right. And so I wanna move back towards CIRL, but just to sort of wrap up here on our philosophy of science musings, a thought I had while you were going through that was: at least for now, what I think is fundamentally shared between fields that deal with things that matter is whether their concepts deal with meaningfully relevant referents in the world. Like, do your concepts refer to meaningful things?

Putting ontology aside, whatever love means or whatever value alignment mean. These are meaningful referents for people and I guess for now if our concepts are actually referring to meaningful things in the world, then it seems important.

Dylan: Yes, I think so. Although, I’m not totally sure I understood that.

Lucas: Sure, that’s fine. People will say that humanities or philosophy doesn’t have these boxes with like well-defined problems and solutions because they either don’t deal with real things in the world or the concepts are so fuzzy that the problems are sort of invented and illusory. Like how many angels can stand on the head of a pin? Like the concepts don’t work, aren’t real and don’t have real referents, but whatever.

And I’m saying the place where philosophy and ethics and computer science and AI Alignment should at least come together, for now, is where the concepts have meaningful referents in the world.

Dylan: Yes, that is something that I absolutely buy. Yes, I think there’s a very real sense in which those questions are harder, but that doesn’t mean they’re less real or less important.

Lucas: Yes, and that’s because the only point I wanted to make was to push against logical positivism.

Dylan: No, I don’t mean to say that the answers are wrong, it’s just that they are harder to prove in a real sense.

Lucas: Yeah. I mean, I don’t even know if they have answers or if they do or if they’re just all wrong, but I’m just open to it and like more excited about everyone coming together thing.

Dylan: Yes, I absolutely agree with that.

Lucas: Cool. So now let’s turn it back into the CIRL. So you began by talking about how you and your advisers were having this conceptual shift and framing, then we got into the sort of philosophy of science behind how different models and theories of alignment go. So from here, whatever else you have to say about CIRL.

Dylan: So I think for me the upshot of concerns about advanced AI systems and the negative consequences therein really is a call to recognize that the goal of our field is AI Alignment. Almost any AI work that’s not AI Alignment is solving a sub-problem, and viewing it only as solving that sub-problem is a mistake.

Ultimately, we are in the business of building AI systems that integrate well with humans and human society. And if we don’t take that as a fundamental tenet of the field, I think that we are potentially in trouble, and I think that that is a perspective that I wish was more pervasive throughout artificial intelligence generally.

Lucas: Right, so I think I do want to move into this view where safety is a normal thing, and like Stuart Russell will say, “People who build bridges all care about safety and there aren’t a subsection of bridge builders who work in bridge safety, everyone is part of the bridge safety.” And I definitely want to get into that, but I also sort of want to get a little bit more into CIRL and why you think it’s so motivating and why this theoretical framing and shift is important or illuminating, and what the specific content of it is.

Dylan: The key thing is that what it does is point out that it doesn’t make sense to talk about how well your system is doing without talking about the way in which it was instructed and the type of information that it got. No AI system exists on its own, every AI system has a designer, and it doesn’t make sense to talk about the functioning of that system without also talking about how that designer built it, evaluated it and how well it is actually serving those ends.

And I don’t think this is some brand new idea that no one’s ever known about; I think this is something that is incredibly obvious to practitioners in the field once you point it out. The process whereby a robot learns to navigate a maze or vacuum a room is not: there is an objective, it optimizes it, and then it does it.

What it is, is that there is a system designer who writes down an objective, selects an optimization algorithm, observes the final behavior of that optimization algorithm, goes back, modifies the objective, modifies the algorithm, changes hyperparameters, and then runs it again. And there’s this iterative process whereby your system eventually ends up getting to the behavior that you wanted it to have. And AI researchers have tended to draw a box around the thing that we call AI, which is the sort of final component of that.
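[Editor’s illustration: the outer loop Dylan describes can be sketched in a few lines. Everything here, the toy one-dimensional world, the “looks acceptable” check, and the two candidate objectives, is invented for illustration; the point is only that the human designer sits inside the loop, while the thing we usually call “the AI” is just the inner optimization step.]

```python
# Toy illustration, invented for this transcript: the designer's true (unwritten) goal
# is for the agent to stop near position 5 in a one-dimensional world.

def train_policy(step_reward):
    """The inner optimizer, i.e. the part we usually draw the 'AI' box around:
    brute-force search over how many steps to walk to the right."""
    best_n, best_return = 0, float("-inf")
    for n_steps in range(0, 20):
        ret = sum(step_reward(pos) for pos in range(1, n_steps + 1))
        if ret > best_return:
            best_n, best_return = n_steps, ret
    return best_n  # the "policy" is just: walk this many steps

def looks_acceptable(final_position):
    """The designer's informal judgment call, which never appears in the formal objective."""
    return abs(final_position - 5) <= 1

# The designer's iterative process: write an objective, watch the behavior, tweak, rerun.
attempts = [
    ("reward any progress",   lambda pos: 1.0),
    ("penalize going past 5", lambda pos: 1.0 if pos <= 5 else -2.0),  # the tweak
]

for name, reward in attempts:
    final_position = train_policy(reward)
    print(f"{name}: agent walks to {final_position}, "
          f"acceptable={looks_acceptable(final_position)}")
```

[The first objective looks fine until the designer watches the behavior and the agent walks off to 19; the correction only exists because a human observed the rollout and changed the specification, which is exactly the structure cooperative IRL tries to bring inside the theory.]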

Lucas: Yeah, it’s because at least subjectively and I guess this is sort of illuminated by meditation and Buddhism, is that if you’re a computer scientist and you’re just completely identified with the process of doing computer science, you’re just identified with the problem. And if you just have a little bit of mindfulness and you’re like, “Okay, I’m in the context of a process where I’m an agent and trying to align another agent,” and if you’re not just completely identified with the process and you see the unfolding of the process, then you can do sort of like more of a meta-analysis which takes a broader view of the problem and can then, I guess hopefully work on improving it.

Dylan: Yeah, I think that’s exactly right, or at least as I understand it, that’s exactly right. And to be a little bit specific about this, we have had these engineering principles and skills that are not in the papers, but they are things that are passed down from grad student to grad student within a lab. They’re institutional knowledge that exists within a company for how you actually verify and validate your systems, and cooperative IRL is an attempt to take all of that structure that AI systems have existed within and try to bring it into the theoretical frameworks that we actually work with.

Lucas: So can you paint a little picture of what the CIRL model looks like?

Dylan: It exists in a sequential decision making context and we assume we have states of the world and a transition diagram that basically tells us how we get to another state given the previous state and actions from the human and the robot. But the important conceptual shift that it makes is the space of solutions that we’re dealing with are combinations of a teaching strategy and a learning strategy.

There is a commitment on the side of the human designers or users of the systems to provide data that is in some way connected to the objectives that they want to be fulfilled. That data can take many forms, it could be in the form of writing down a reward function that ranks a set of alternatives, it could be in the form of providing demonstrations that you expect your system to imitate. It could be in the form of providing binary comparisons between two clearly identified alternatives.

And the other side of the problem is: what is the learning strategy that we use? And this is the question of how the robot is actually committing to respond to the observations that we’re giving it about what we want it to do, in the case of a pre-specified proxy reward given a literal interpretation by a reinforcement learning system, let’s say. What the system is committing to doing is optimizing under that set of trajectory rankings and preferences, based off the simulation environment that it’s in, or the actual physical environment that it’s exploring.

When we shift to something like inverse reward design, which is a paper that we released last year, what that says is we’d like the system to look at this ranking of alternatives and actually try to blow that up into a larger uncertainty set over the possible rankings consistent with it, and then when you go into deployment, you may be able to leverage that uncertainty to avoid catastrophic failures or generally just unexpected behavior.
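[Editor’s illustration: a toy sketch of that “blow the proxy up into an uncertainty set” idea. This captures the intuition only, not the actual inverse reward design algorithm from the paper; the features, weights, and grid below are all invented. Keep every candidate reward that would rank the training alternatives the same way the proxy does, then act on the worst case over that set at deployment.]

```python
import itertools
import numpy as np

# Trajectories are summarized by feature counts; rewards are linear in the features.
# Invented features: [distance_travelled, grass_crossed, lava_crossed]
train_trajs = np.array([
    [1.0, 0.0, 0.0],   # short path on pavement
    [0.5, 1.0, 0.0],   # shortcut across grass
])
proxy_w = np.array([1.0, -0.5, 0.0])  # designer's proxy reward; lava never came up in training

# Candidate "true" reward weights on a small grid (a crude stand-in for a posterior).
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
candidates = np.array(list(itertools.product(grid, repeat=3)))

def ranking(w, trajs):
    return np.argsort(trajs @ w)

# Keep only candidates that rank the training trajectories the same way the proxy does.
consistent = np.array([w for w in candidates
                       if np.array_equal(ranking(w, train_trajs),
                                         ranking(proxy_w, train_trajs))])

# Deployment: a new option crosses lava, something the proxy is silent about.
deploy_trajs = np.array([
    [1.0, 0.0, 0.0],   # pavement again
    [2.0, 0.0, 1.0],   # faster, but through lava
])

proxy_choice = int(np.argmax(deploy_trajs @ proxy_w))
worst_case = (deploy_trajs @ consistent.T).min(axis=1)   # worst value over the uncertainty set
cautious_choice = int(np.argmax(worst_case))

print("literal proxy optimizer picks trajectory:", proxy_choice)     # walks through the lava
print("uncertainty-aware agent picks trajectory:", cautious_choice)  # avoids the unmodeled feature
```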

Lucas: So this other point I think that you and I discussed briefly, maybe it was actually with Rohin: it seems like often in terms of AI Alignment, it’s almost like we’re reasoning from nowhere about abstract agents, and that sort of makes the problem extremely difficult. Often, if you just look at human examples, it just becomes super mundane and easy. This sort of conceptual shift can almost, I think, be framed super simply as the difference between a teacher trying to teach someone, and a teacher realizing that the teacher is a person that is teaching another student, so the teacher can think better about how to teach, and also about the process between the teacher and the student and how to improve that at a higher level of abstraction.

Dylan: I think that’s the direction that we’re moving in. What I would say is it’s as AI practitioners, we are teaching our systems how to behave and we have developed our strategies for doing that.

And now that we’ve developed a bunch of strategies that sort of seem to work. I think it’s time for us to develop a more rigorous theory of actually how those teaching strategies interact with the final performance of the system.

Lucas: Cool. Is there anything else here that you would like to say about CIRL, or any really important points you would like to get across to people who are interested in technical AI Alignment, or to CS students?

Dylan: I think the main point that I would make is that research and thinking about powerful AI systems is valuable, even if you don’t think that that’s what’s going to happen. You don’t need to be motivated by those sets of problems in order to recognize that this is actually just basic research into the science of artificial intelligence.

It’s got an incredible amount of really interesting problems and the perspectives that you adopt from this framing can be incredibly useful as a comparative advantage over other researchers in the field. I think that’d be my final word here.

Lucas: If I might just ask you one last question. We’re at Beneficial AGI 2019 right now and we’ve heard a lot of overviews of different research agendas and methodologies and models and framings for how to best go forth with AI Alignment, which include a vast range of things: work on corrigibility and interpretability and robustness and other things, and the different research agendas and methodologies of places like MIRI, which has come out with this new framing on embedded agency, and also different views at OpenAI and DeepMind.

And Eric Drexler has also newly proposed this services-based conception of AI where we drop the understanding of powerful AI systems, or regular AI systems, as agents, which sort of gets us away from a lot of the x-risky problems and global catastrophic risk problems and value alignment problems.

From your point of view, as someone who’s worked a lot in CIRL and is the technical alignment researcher, how do you view CIRL in this context and how do you view all of these different emerging approaches right now in AI Alignment?

Dylan: For me, and you know, I should give a disclaimer: this is my research area and so I’m obviously pretty biased towards thinking it’s incredibly important and good. But for me at least, cooperative IRL is a uniting framework under which I can understand all of those different approaches. I believe that a services-type solution to AI Safety or AI Alignment is actually arguing for a particular type of learning strategy and implementation strategy within CIRL, and I think it can be framed within that system.

Similarly, I had some conversations with people about debate. I believe debate fits really nicely into the framework: we commit to a human strategy of judging debates between systems, and we commit to a robot strategy of splitting yourself into two systems and working in that direction. So for me, it’s a way in which I can sort of identify the commonalities between these different approaches and compare and contrast them, and then, under a set of assumptions about what the world is like, what the space of possible preferences is like, and what the space of strategies that people can implement is like, possibly get out some information about which one is better or worse, or which type of strategy is vulnerable to different types of mistakes or errors.

Lucas: Right, so I agree with all of that, the only place that I might want to push back is, it seems that maybe the MIRI embedded agency stuff subsumes everything else. What do you think about that?

Because the framing is like: whenever AI researchers draw these models, there are these conceptions of these information channels, right, which are selected by the researchers and which we control, but the universe is really just a big non-dual happening of stuff, and agents are embedded in the environment and are almost an identical process with the environment, and it’s much more fuzzy where the dense causal streams are and where the little causal streams are and stuff like that. It just seems like the MIRI stuff maybe subsumes CIRL and everything else a little bit more, but I don’t know.

Dylan: I certainly agree that that’s the one that’s hardest to fit into the framework, but I would also say that in my mind, I don’t know what an agent is. I don’t know how to operationalize an agent, I don’t actually know what that means in the physical world and I don’t know what it means to be an agent. What I do know is that there is a strategy of some sort that we can think of as governing the way that the system performs and behaves.

I want to be very careful about baking in assumptions in beforehand. And it feels to me like embedded agency is something that I don’t fully understand the set of assumptions being made in that framework. I don’t necessarily understand how they relate to the systems that we’re actually going to build.

Lucas: When people say that an agent is like a fuzzy concept, I think that might be surprising to a lot of people who have thought somewhat about the problem, because it’s like: obviously I know what an agent is, it’s different from all the other dead stuff in the world, it has goals and it’s physically confined and unitary.

If you just imagine abiogenesis, how life began: is the first relatively self-replicating chain of hydrocarbons an agent? And you can go from really small systems to really big systems which can exhibit certain properties or principles that feel a little bit agenty, but that may not be useful. And so I guess if we’re going to come up with a definition of it, it should just be something useful for us or something.

Dylan: I think “I’m not sure” is the most accurate thing we can say here. I wish I had a better answer for what this was; maybe I can share one of the thought experiments that convinced me I was pretty confused about what an agent is.

Lucas: Yeah, sure.

Dylan: It came from thinking about what value alignment is. So if we think about values alignment between two agents and those are both perfectly rational actors, making decisions in the world perfectly in accordance with their values, with full information. I can sort of write down a definition of value alignment, which is basically you’re using the same ranking over alternatives that I am.

But a question that we really wanted to try to answer that feels really important is what does it mean to be value aligned in a partial context? If you were a bounded agent, if you’re not a perfectly rational agent, what does it actually mean for you to be value aligned? That was the question that we also didn’t really know how to answer.

Lucas: My initial reaction is the kind of agent that tries its best with its limited rationality to be like the former thing that you talked about.

Dylan: Right, so that leads to a question that we thought about. So suppose I have a chess playing agent and it is my chess playing agent, so I want it to win the game for me. Suppose it’s using the correct goal test, so it is actually optimizing for my values. Let’s say it’s only searching out to depth three, so it’s pretty dumb as far as chess players go.

Do I think that that is an agent that is value aligned with me? Maybe. I mean, certainly I can tell the story in a way that it sounds like it is. It’s using the correct objective function, it’s doing some sort of optimization thing. If it ever identifies a checkmate in three moves, it will always find that and get that back to me. And so that’s a sense in which it feels like it is a value aligned agent.

On the other hand, what if it’s using a heuristic function which is chosen poorly, or in something closer to an adversarial manner? So now it’s a depth three agent that is still using the correct goal test, but it’s searching in a way that is adversarially selected. Is that a partially value aligned agent?

Lucas: Sorry, I don’t understand what it means to have the same objective function, but be searching in three depth in an adversarial way.

Dylan: In particular, when you’re doing a chess search engine, there is your sort of goal tests that you run on your leaves of your search to see if you’ve actually achieved winning the game. But because you’re only doing a partial search, you often have to rely on using a heuristic of some sort to like rank different positions.

Lucas: To cut off parts of the tree.

Dylan: Somewhat to cut off parts of the tree, but also just like you’ve got different positions, neither of which are winning and you need to choose between those.

Lucas: All right. So there’s a heuristic, like it’s usually good to take the center or like the queen is something that you should always probably keep.

Dylan: Or these things that are like values of pieces that you can add up was I think one of the problems …

Lucas: Yeah, and just as an important note now in terms of the state of machine learning, the heuristics are usually chosen by the programmer. Or are systems able to come up with heuristics themselves?

Dylan: Well, so I’d say one of the big things in like AlphaZero or AlphaGo as an approach is that they applied sort of learning on the heuristic itself and they figured out a way to use the search process to gradually improve the heuristic and have the heuristic actually improving the search process.

And so there’s sort of a feedback loop set up in those types of expert iteration systems. My point here is that when I described that search algorithm to you, I didn’t mention what heuristic it was using at all. And so you had no reason to tell me whether or not that system was partially value aligned, because actually the heuristic is 100 percent of what’s going to determine the final performance of the system and whether or not it’s actually helping you.
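[Editor’s illustration: a toy way to see the point that the heuristic, which never appears in the “objective”, ends up determining behavior. The game tree and both heuristics below are made up and have nothing to do with a real chess engine: two depth-limited searches share the exact same goal test, and swapping the heuristic flips the chosen move.]

```python
# A tiny hand-built game tree: each node maps to its children; leaves map to None.
TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"], "b": ["b1", "b2"],
    "a1": None, "a2": None, "b1": None, "b2": None,
}
WINS = {"a2"}  # the only line that actually wins for us

def search(node, depth, heuristic):
    """Depth-limited search with the correct goal test but a pluggable heuristic."""
    if node in WINS:
        return 1.0                      # the goal test: a win is a win, for both agents
    if TREE[node] is None or depth == 0:
        return heuristic(node)          # past the search horizon, the heuristic decides
    return max(search(child, depth - 1, heuristic) for child in TREE[node])

def best_move(heuristic, depth):
    return max(TREE["root"], key=lambda child: search(child, depth - 1, heuristic))

# Same goal test, same search procedure, two different heuristics.
sensible    = lambda node: 0.8 if node.startswith("a") else 0.1  # happens to favor the winning line
adversarial = lambda node: 0.0 if node.startswith("a") else 0.9  # steers the search away from it

print("with the sensible heuristic:   ", best_move(sensible, depth=1))     # -> 'a'
print("with the adversarial heuristic:", best_move(adversarial, depth=1))  # -> 'b'
```

[At a search depth too shallow to see the win, the choice of move is entirely a function of the heuristic, which is the sense in which describing the search algorithm and the goal test alone says nothing about whether the system is even partially helping you.]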

And then the sort of final point I have here that I might be able to confuse you with a little bit more is, what if we just sort of said, “Okay, forget this whole searching business. I’m just going to precompute all the solutions from my search algorithm and I’m going to give you a policy of when you’re in this position, do this move. When you’re in that position, do that move.” And what would it mean for that policy to be values aligned with me?

Lucas: If it did everything that you would have done if you were the one playing the chess game. Like is that value alignment?

Dylan: That’s certainly perfect imitation, and maybe we [crosstalk 00:33:04]

Lucas: Perfect imitation isn’t necessarily value alignment because you don’t want it to perfectly imitate you, you want it to win the game.

Dylan: Right.

Lucas: Isn’t the easiest way to just sort of understand this is that there are degrees of value alignment and value alignment is the extent to which the thing is able to achieve the goals that you want?

Dylan: Somewhat, but the important thing here is trying to understand what these intuitive notions that we’re talking about actually mean for the mathematics of sequential decision making. And so there’s a sense in which you and I can talk about partial value alignment and the agents that are trying to help you. But if we actually look at the math of the problem, it’s actually very hard to understand how that actually translates. Like mathematically I have lots of properties that I could write down and I don’t know which one of those I want to call partial value alignment.

Lucas: You know more about the math than I do, but isn’t the percentage chance of a thing achieving the goal the degree to which it’s value aligned? If you’re certain that the end towards which it’s striving is the end towards which you want it to strive?

Dylan: Right, but that striving term is a hard one, right? Because if your goals aren’t achievable then it’s impossible to be value aligned with you in that sense.

Lucas: Yeah, you have to measure the degree to which the end towards which it’s striving is the end towards what you want it to strive and then also measure the degree to which the way that it tries to get to what you want is efficacious or …

Dylan: Right. I think that intuitively I agree with you and I know what you mean, but it’s like: I can do things like write down a reward function and say, how well does this system optimize that reward function? And we could ask whether or not that means it’s value aligned with it or not. But to me, that just sounds like the question of whether your policy is optimal, in the sort of more standard context.

Lucas: All right, so have you written about how you think that CIRL subsumes all of these other methodologies? And if it does subsume these other AI Alignment methodologies. How do you think that will influence or affect the way we should think about the other ones?

Dylan: I haven’t written that explicitly, but what I’ve tried to convey is that it’s a formalization of the type of problem we’re trying to solve. I think describing it as subsuming them is not quite right.

Lucas: It contextualizes them and it brings light to them by providing framing.

Dylan: It gives me a way to compare those different approaches and understand what’s different and what’s the same between them, and in what ways are they … like, in what scenarios do we expect them to work out versus not? One thing that we’ve been thinking about recently is what happens when the person doesn’t immediately know what they’re trying to do.

So if we imagine that there is in fact a static set of preferences the person is trying to optimize, so we’re still making that assumption, but we assume that those preferences are revealed to the person over time through experience or interaction with the world, that is a richer class of value alignment problems than cooperative IRL deals with. It’s really closer to what we are attempting to do right now.

Lucas: Yeah, and I mean that doesn’t even include value degeneracy, like what if I get hooked on drugs in the next three years and all my values go and my IRL agent works on assumptions that I’m always updating towards what I want, but you know …

Dylan: Yes, and I think that’s where you get these questions of changing preferences that make it hard to really think through things. I think there’s a philosophical stance you’re taking there, which is that your values have changed rather than your beliefs have changed there.

In the sense that wire-heading is a phenomenon that we see in people and in learning agents generally, and if you are attempting to help a learning agent, you must be aware of the fact that wire-heading is a possibility and possibly bad. And then it’s incredibly hard to distinguish that from someone who’s just found something that they really like and want to do.

When you should make that distinction or how you should make that distinction is a really challenging question, that’s not a purely technical computer science question.

Lucas: Yeah, but even at the same time, I would like to demystify it a bit. If your friend got hooked on drugs, it’s pretty obvious for you why it’s bad, it’s bad because he’s losing control, it’s bad because he’s sacrificing all of his other values. It’s bad because he’s shortening his life span by a lot.

I just mean, again, that in this way it’s obvious in the ways in which humans do this. So I guess if we take biologically inspired approaches to understanding cognition and transfer how humans deal with these things into AI machines, at least at face value that seems like a good way of doing it, I guess.

Dylan: Yes, everything that you said I agree with. My point is that those are, in a very real sense, normative assumptions that you as that person’s friend are able to bring to the analysis of that problem, and in some ways there is an arbitrariness to labeling that as bad.

Lucas: Yeah, so the normative issue is obviously very contentious and needs to be addressed more, but at the same time society has come to very clear solutions to normative problems; like, murder is basically a solved normative problem. There’s a degree to which it’s super obvious that certain normative questions are just answered, and we should, I guess, practice epistemic humility and whatever here, obviously.

Dylan: Right, and I don’t disagree with you on that point, but I think what I’d say is, as a research problem there’s a real question to getting a better understanding of the normative processes whereby we got to solving that question. Like what is the human normative process? It’s a collective societal system. How does that system evolve and change? And then how should machines or other intelligent entities integrate into that system without either subsuming or destroying it in bad ways? I think that’s what I’m trying to get at when I make these points. There is something about what we’re doing here as a society that gets us to labeling these things in the ways that we do and calling them good or bad.

And on the one hand, as a person I believe that there are correct answers and I know what I think is right versus what I think is wrong. And then as a scientist I want to try to take a little bit more of an outside view and try to understand: what is the process whereby we as a society, or as genetic beings, started doing that? Understanding what that process is and how that process evolves, and what that actually looks like in people now, is a really critical research program.

Lucas: So one thing that I tried to cover in my panel yesterday on what civilization should strive for is the potential role, in the short, medium, and long term, that narrow to general AI systems might play in amplifying human moral decision making.

Solving, as you were discussing, this sort of deliberative, normative process that human beings undergo to converge on an idea. I’m just curious to know, with more narrow systems, if you’re optimistic about ways in which AI can help elucidate our moral decision making or work to amplify it.

And before I let you start, I guess there’s one other thing that I think Rohin Shah pointed out to me that was particularly helpful here. Beyond the moral decision making itself, narrow AI systems can help us by making the moral decisions that we implement faster than we could.

The way a self-driving car decides to crash is an expression of our moral decision making, just made in a fast, computerized way. So beyond ways in which AI systems make moral decisions for us faster than we can, maybe in courts or other settings which seem morally contentious, are there also ways in which they can actually help the deliberative process: examining massive amounts of moral or value information, or analyzing something like an aggregated well-being index where we try to understand how policies impact the wellbeing of people, or what sorts of moral decisions lead to good outcomes? So, whatever you have to say to that.

Dylan: Yes, I definitely want to echo that. We can get a lot of pre-deliberation into a fast-timescale reaction with AI systems, and I think that is a way for us to improve how we act and the quality of the things that we do from a moral perspective. I do see a real path to actually bringing that about in the world.

In terms of helping us actually deliberate better, I think that is a harder problem, one that is absolutely worth more people thinking about, but I don’t know the answers here. What I do think is that it would help to have a better understanding of what the deliberative process is. The right questions to look at to get there are not the moral questions about what’s right and what’s wrong, or what we think is right and wrong, but much more questions at the level of: what is it about our evolutionary pathway that led us to thinking that these things are right or wrong?

What is it about society and the pressures we’ve gone through and faced that led us to a place where murder is wrong in almost every society in the world? I will say the death penalty is a thing, and it’s just a type of sanctioned killing, so there is a sense in which it’s a bit more nuanced than that. But if I had to make a claim about what I think has sort of happened there …

There’s something about us as creatures that evolved to coordinate and perform well in groups, and the pressures that that placed on us, that caused us to develop these normative systems whereby we say different things are right and wrong.

Lucas: Iterated game theory over millions of years or something.

Dylan: Something like that. Yeah, but there’s a sense in which us labeling things as right and wrong and developing the processes whereby we label things as right and wrong is a thing that we’ve been pushed towards.

Lucas: From my perspective, it feels like this is more tractable than people let on, as if AI is only going to be able to help in moral deliberation once it’s general. It already helps us in regular deliberation, and moral deliberation isn’t a special kind of deliberation: it requires empirical facts about the world and about persons, just like any other kind of actionable deliberation does, including in domains that aren’t considered to have to do with moral philosophy or ethics.

So I’m not an AI researcher, but it seems to me like this is more tractable than people let on. The normative aspect of AI alignment seems to be under-researched.

Dylan: Can you say a little more about what you mean by that?

Lucas: What I meant was the normative deliberative process: the difficulty in coming to normative conclusions, what the appropriate epistemic and deliberative process is for arriving at normative solutions, and how narrow AI systems can take us toward a world where advanced AI systems actually lead us to post-human ethics.

If we ever want to get to a place where general systems take us to post-human ethics, why not start today with figuring out how narrow systems can work to amplify human moral decision making and deliberative processes?

Dylan: I think the hard part there is, I don’t exactly know what it means to amplify those processes. My perspective is that we as a species do not yet have a good understanding of what those deliberative processes actually represent and what form the result actually takes.

Lucas: It’s just giving more information: providing tons of data, analyzing the data, potentially pointing out biases. The part where they’re literally amplifying the implicit or explicit cognitive decision making process is more complicated and will require more advancement in cognition and deliberation. But yeah, I still think there are more mundane ways in which it can make us better moral reasoners and decision makers.

If I could give you like 10,000 more bits of information every day about moral decisions that you make, you would probably just be a better moral agent.

Dylan: Yes, one way to try to think about that is maybe things like VR approaches to increasing empathy. I think that that has a lot of power to make us better.

Lucas: Max always says that there’s a race between wisdom and the power of our technology, and it seems like people really aren’t taking seriously ways in which we can amplify wisdom, because wisdom is generally taken to be part of the humanities and the soft sciences. Maybe we should be taking more seriously ways in which narrow, current-day AI systems can be used to amplify the rate at which the human species gains wisdom. Because otherwise we’re just going to continue how we always continue, the wisdom is going to come really slowly, and we’re probably going to learn from a bunch of mistakes.

And it’s just not going to be as good until we develop a rigorous science of making moral progress, or of using technology to amplify the progress of wisdom and moral progress.

Dylan: So in principle I don’t really disagree with what you’re saying, but I also don’t know how it would change what I’m working on, in the sense that I’m not sure what it would mean. I do not know how I would do research on amplifying wisdom. I just don’t really know what that means. And that’s not to say it’s an impossible problem; we talked earlier about how I don’t know what partial value alignment means. That’s something that you and I can talk about and, I think, intuitively align on as a concept, but it’s not a concept I know how to translate into actionable, concrete research problems right now.

In the same way, with the idea of amplifying wisdom and making people more wise, I think I intuitively understand what you mean, but when I try to think about how an AI system would make someone wiser, that feels difficult.

Lucas: It can seem difficult, and obviously this is an open research question, but if you were able to identify a bunch of variables that are most important for moral decision making, and then use AI systems to gather, aggregate, compile, and analyze moral information in certain ways, again, it just seems more tractable than people seem to be letting on.

Dylan: Yeah, although now I wonder, is that different from value alignment as we’re thinking about it? A concrete research question I spent a while thinking about is, how do you identify the features that a person considers to be valuable? Say we don’t know the relative tradeoffs between them.

One way you might try to solve value alignment is to have a process that identifies the features that might matter in the world, and then a second process that identifies the appropriate tradeoffs between those features, maybe with something about diminishing returns. And that to me sounds like I’ve just replaced values with wisdom and got roughly what you’re thinking about. I think both of those terms are similarly diffuse. I wonder if what we’re talking about is semantics, and if it’s not, I’d like to know what the difference is.
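As a toy sketch of that two-stage picture (the feature names, weights, and square-root transform below are invented purely for illustration, not taken from any particular paper): one process picks out the features that matter, and a second process trades them off, with a concave transform standing in for diminishing returns.

```python
import numpy as np

def extract_features(state):
    """Stage 1 (assumed given here): map a state to the features that matter."""
    # e.g. state = {"income": ..., "leisure_hours": ..., "health": ...}
    return np.array([state["income"], state["leisure_hours"], state["health"]])

def reward(state, weights):
    """Stage 2: trade the features off against each other.

    The square root gives diminishing returns: doubling one feature does not
    double its contribution, so extreme values in one feature cannot dominate
    everything else.
    """
    features = extract_features(state)
    return float(np.dot(weights, np.sqrt(features)))

if __name__ == "__main__":
    weights = np.array([0.2, 0.5, 0.3])  # hypothetical learned tradeoffs
    state = {"income": 50_000, "leisure_hours": 20, "health": 80}
    print(reward(state, weights))
```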

Lucas: I guess the more mundane definition of wisdom, at least the way Max Tegmark would use it, would be something like the ways in which we use our technology. I might have specific preferences, but just because I have specific preferences that I may or may not be aligning an AI system to does not necessarily mean that that total process, this CIRL process, is actually an expression of wisdom.

Dylan: Okay, can you provide a positive description of what such a process would look like? Basically what I’m saying is, I hear the point that I have preferences and I align my system to them and that’s not necessarily a wise system, and …

Lucas: Yeah, like I build a fire because I want to be warm, but then the fire catches my village on fire and … That still might be value alignment.

Dylan: But isn’t [crosstalk 00:48:39] some values that you didn’t take into account when you were deciding to build the fire?

Lucas: Yeah, that’s right. So I don’t know. I’d probably have to think about this more, because this is something that I’m just sort of throwing out right now as a reaction to what we’ve been talking about, so I don’t have a very good theory of it.

Dylan: And I don’t want to say that you need to know the right answers to these things for that to be a useful direction to push people in.

Lucas: We don’t want to use different concepts to just reframe the same problem and just make a conceptual mess.

Dylan: That’s what I’m a little bit concerned about, and that’s the thing I’m concerned about broadly. We’ve got a lot of issues that we’re thinking about and dealing with where we’re not really sure what they are.

For me, one of the really helpful things has been to frame the issue that I’m thinking about as: a person has a behavior that they want to implement in the world, and it’s a complex behavior that they don’t know how to specify immediately. How do you actually go about building systems that allow you to implement that behavior effectively and evaluate that the behavior has actually been correctly implemented?

Lucas: Avoiding side effects, avoiding …

Dylan: All of these kinds of things that we’re concerned about in AI safety, in my mind, fall a bit more into place when we frame the problem as: I have a desired behavior that I want to exist, a response function, a policy that I want to implement in the world. What are the technological systems I can use to implement that in a computer or a robot or what have you?

Lucas: Okay. Well, do you have anything else you’d like to wrap up on?

Dylan: No, I just want to say thanks for asking hard questions and making me feel uncomfortable, because I think it’s important to do a lot of that as a scientist. In particular, as people working on AI, we should be spending a bit more time being uncomfortable and talking about these things, because it does impact what we end up doing, and it does, I think, impact the trajectories that we put the technology on.

Lucas: Wonderful. So if people want to read about cooperative inverse reinforcement learning, where can we find the paper or other work that you have on that? What do you think are the best resources? What are just general things you’d like to point people towards in order to follow you or keep up to date with AI Alignment?

Dylan: I tweet occasionally about AI alignment and a bit about AI ethics questions; the handle is my first initial and last name, Hadfield-Menell. And if you’re interested in getting a technical introduction to value alignment, I would say take a look at the 2016 paper on cooperative IRL. If you’d like a more general introduction, there’s a blog post from summer 2017 on the BAIR blog.

Lucas: All right, thanks so much Dylan, and maybe we’ll be sitting in a similar room again in two years for Beneficial Artificial Super Intelligence 2021.

Dylan: I look forward to it. Thanks a bunch.

Lucas: Thanks. See you, Dylan. If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

[end of recorded material]

AI Alignment Podcast: Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah

What role does inverse reinforcement learning (IRL) have to play in AI alignment? What issues complicate IRL and how does this affect the usefulness of this preference learning methodology? What sort of paradigm of AI alignment ought we to take up given such concerns?

Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah is the seventh podcast in the AI Alignment Podcast series, hosted by Lucas Perry. For those of you that are new, this series is covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, governance,  ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.

Topics discussed in this episode include:

  • The role of systematic bias in IRL
  • The metaphilosophical issues of IRL
  • IRL’s place in preference learning
  • Rohin’s take on the state of AI alignment
  • What Rohin has changed his mind about
You can learn more about Rohin’s work here and find the Value Learning sequence here. You can listen to the podcast above or read the transcript below.

Lucas: Hey everyone, welcome back to the AI Alignment Podcast series. I’m Lucas Perry and today we will be speaking with Rohin Shah about his work on inverse reinforcement learning and his general take on the state of AI alignment efforts and theory today. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter. He has also been working with effective altruism for several years. Without further ado I give you Rohin Shah.

Hey, Rohin, thank you so much for coming on the podcast. It’s really a pleasure to be speaking with you.

Rohin: Hey, Lucas. Yeah. Thanks for inviting me. I’m glad to be on.

Lucas: Today I think that it would be interesting just to start off by delving into a lot of the current work that you’ve been looking into and practicing over the past few years. In terms of your research, it looks like you’ve been doing a lot of work on practical algorithms for inverse reinforcement learning that take into account, as you say, systematic cognitive biases that people have. It would be interesting if you could just sort of unpack this work that you’ve been doing on this and then contextualize it a bit within the AI alignment problem.

Rohin: Sure. So basically the idea with inverse reinforcement learning is that you can look at the behavior of some agent, perhaps a human, and tell what they’re trying to optimize. What are the things that they care about? What are their goals? In theory this seems like a pretty nice way to do AI alignment, in that intuitively you can just say, “Hey, AI, go look at the actions humans are taking, look at what they say, look at what they do, take all of that in and figure out what humans care about.” And then you could use that, perhaps, as a utility function for your AI system.
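As a minimal sketch of that naive picture (the action set and candidate reward functions below are invented purely for illustration): observe behavior, and keep whichever candidate rewards make that behavior look optimal.

```python
# Naive IRL sketch: assume the demonstrator is optimal, and keep the candidate
# reward functions under which every observed action would have been optimal.

ACTIONS = ["coffee", "tea"]

CANDIDATE_REWARDS = {
    "likes_coffee": {"coffee": 1.0, "tea": 0.0},
    "likes_tea": {"coffee": 0.0, "tea": 1.0},
}

def optimal_action(reward):
    """The action an optimal agent would take under this reward."""
    return max(ACTIONS, key=lambda a: reward[a])

def naive_irl(observed_actions):
    """Keep the rewards under which every observed action was optimal."""
    return [
        name
        for name, reward in CANDIDATE_REWARDS.items()
        if all(a == optimal_action(reward) for a in observed_actions)
    ]

print(naive_irl(["coffee", "coffee"]))  # -> ['likes_coffee']
```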

I think I have become less optimistic about this approach now, for reasons I’ll get into, partly because of my research on systematic biases. Basically, one problem that you have to deal with is the fact that whatever humans are trying to optimize for, they’re not going to do it perfectly. We’ve got all of these sorts of cognitive biases, like the planning fallacy or hyperbolic time discounting, where we tend to be myopic, not looking as far into the long term as we perhaps could.

So assuming that humans are perfectly optimizing the goals that they care about is clearly not going to work. And in fact, if you make that assumption, then whatever reward function you infer, once the AI system is optimizing it, it’s going to simply recover the human performance, because you assumed the human was optimal when you inferred it, which means whatever the humans were doing is probably the behavior that optimizes the reward function that you inferred.

And we’d really like to be able to reach superhuman performance. We’d like our AI systems to tell us how we’re wrong, to get new technologies, to develop things that we couldn’t have done ourselves. And that’s not really something we can do using the sort of naive version of inverse reinforcement learning that just assumes that you’re optimal. So one thing you could try to do is to learn the ways in which humans are biased, the ways in which they make mistakes, the ways in which they plan sub-optimally. And if you could learn that, then you could correct for those mistakes and take them into account when you’re inferring human values.

The example I like to use is a grad student who procrastinates or doesn’t plan well, and as a result near a paper deadline they’re frantically working, but they don’t finish in time and they miss the deadline. If you assume that they’re optimal, optimizing for their goals very well, I don’t know what you’d infer, maybe something like “grad students like to miss deadlines.” Something like that seems pretty odd, and it doesn’t seem like you’d get something sensible out of it. But if you realize that humans are not very good at planning, that they have the planning fallacy and tend to procrastinate for reasons they wouldn’t endorse on reflection, then maybe you’d be able to say, “Oh, this was just a mistake the grad student made. In the future I should try to help them meet their deadlines.”

So that’s the reason that you want to learn systematic biases. My research was basically: let’s just take the hammer of deep learning and apply it to this problem. So not just learn the reward function, but let’s also learn the biases. It turns out, and this was already known, that there is an impossibility result that says you can’t do this in general. So I guess I would phrase the question I was investigating as: what is a weaker set of assumptions than the ones we currently use, such that you can still do some reasonable form of IRL?

Lucas: Sorry. Just stepping back for like half a second. What does this impossibility theorem say?

Rohin: The impossibility theorem says that if you assume that the human is basically running some sort of planner that takes in a reward function and spits out a behavior or a policy, a thing to do over time, then if all you see is the behavior of the human, basically any reward function is compatible with some planner. So you can’t learn anything about that reward function without making more assumptions. And intuitively, this is because for any complex behavior you see, you could either say, “Hey, the human’s optimizing a reward that makes them act like that,” or you could say, “I guess the human is biased and they’re trying to do something else, but they did this instead.”

The sort of extreme version of this is: if you give me an option between apples and oranges and I pick the apple, you could say, “Hey, Rohin probably likes apples and is good at maximizing his reward of getting apples.” Or you could say, “Rohin probably likes oranges and he is just extremely bad at satisfying his preferences. He’s got a systematic bias that always causes him to choose the opposite of what he wants.” And you can’t distinguish between these two cases just by looking at my behavior.
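To make that unidentifiability concrete, here is a toy sketch (my own illustration, not the formal theorem) of the apples and oranges example: the observed choice only pins down the composition of planner and reward, so a rational planner paired with “likes apples” and an anti-rational planner paired with “likes oranges” are indistinguishable from behavior alone.

```python
# Behavior = planner(reward). Two very different (planner, reward) pairs
# produce exactly the same observed behavior.

ACTIONS = ["apple", "orange"]

likes_apples = {"apple": 1.0, "orange": 0.0}
likes_oranges = {"apple": 0.0, "orange": 1.0}

def rational_planner(reward):
    """Picks the best action under the reward."""
    return max(ACTIONS, key=lambda a: reward[a])

def anti_rational_planner(reward):
    """Picks the worst action under the reward."""
    return min(ACTIONS, key=lambda a: reward[a])

behavior_1 = rational_planner(likes_apples)        # 'apple'
behavior_2 = anti_rational_planner(likes_oranges)  # 'apple'
assert behavior_1 == behavior_2  # same observed choice, opposite inferred values
```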

Lucas: Yeah, that makes sense. So we can pivot sort of back in here into this main line of thought that you were on.

Rohin: Yeah. So, with that impossibility result … when I look at it, I note that humans do this all the time: humans just look at other humans and figure out what they want to do. So it seems like there is probably some simple set of assumptions that humans are using to infer what other humans are doing. A simple one would be that when the consequences of something are obvious to humans, humans tend to be close to optimal. Now, how you determine when that’s the case is another question, but if you have something like that, you can rule out the planner that says the human is anti-rational and always chooses the worst possible thing.

Similarly, you might say that as tasks get more and more complex or require more and more computation, the probability that the human chooses the action that best maximizes his or her goals goes down, since the task is more complex and maybe the human doesn’t figure out what the best thing to do is. Maybe with enough of these assumptions we could get some sort of algorithm that actually works.
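One common way to formalize that kind of softened assumption (a standard Boltzmann-rationality model, an assumption on my part rather than necessarily what Rohin's paper uses) is to say the human picks better actions more often than worse ones, with a rationality coefficient that shrinks as the task gets more complex.

```python
import numpy as np

def choice_probabilities(action_values, task_complexity):
    """P(action) proportional to exp(beta * value), with beta decaying in complexity."""
    beta = 5.0 / (1.0 + task_complexity)   # harder task -> behavior closer to random
    logits = beta * np.asarray(action_values, dtype=float)
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

print(choice_probabilities([1.0, 0.0], task_complexity=0.0))   # near-optimal choice
print(choice_probabilities([1.0, 0.0], task_complexity=20.0))  # much closer to 50/50
```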

So we looked at whether, if you make the assumption that the human is often close to rational, plus a few other assumptions about humans behaving or planning similarly on similar tasks, you can maybe, in simplified settings, do IRL better than if you had just assumed that the human was optimal when humans are actually systematically biased. But I wouldn’t say that our results are great. I don’t think I definitively, conclusively said, “This will never work,” nor did I definitively, conclusively say that this is great and we should definitely be putting more resources into it. It’s somewhere in the middle, maybe more on the negative side: this seems like a really hard problem and I’m not sure how we get around it.

Lucas: So I guess just as a point of comparison here, how is it that human beings succeed at this every day in terms of inferring preferences?

Rohin: I think humans have the benefit of being able to model the other person as being very similar to themselves. If I am trying to infer what you are doing, I can say, “Well, if I were in Lucas’s shoes and I were doing this, what would I be optimizing?” And that’s a pretty good answer to what you would be optimizing. Humans are just, in some absolute sense, very similar to each other. We have similar biases. We’ve got similar ways of thinking. And I think we leverage that similarity a lot, using our own self-models as a drop-in approximation of the other person’s planner, in this planner-reward language.

And then we say, “Okay, well, if this other person thought like me and this is what they ended up doing, well then, what must they have been optimizing?” I think you’ll see that when this assumption breaks down humans actually get worse at inferring goals. It’s harder for me to infer what someone in a different culture is actually trying to do. They might have values that are like significantly different from mine.

I’ve been in both India and the US and it often seems to me that people in the US just have a hard time grasping the way that Indians see society and family expectations and things like this. So that’s an example that I’ve observed. It’s probably also true the other way around, but I was never old enough in India to actually think through this.

Lucas: So human beings succeed in inferring the preferences of people who they can model as having similar values to their own, or when they know that the person has similar values to their own. If inferring human preferences through inverse reinforcement learning is not producing the most promising results, then what do you believe to be a stronger way of inferring human preferences?

Rohin: The one thing I’d correct there is that I don’t think humans do it by assuming that people have similar values, just that people think in similar ways. For example, I am not particularly good at dancing. If I see someone doing a lot of hip-hop or something, it’s not that I value hip-hop and so I can infer they value hip-hop. It’s that I know that I do things that I like, and they are doing hip-hop; therefore, they probably like doing hip-hop. But anyway, that’s a minor point.

So first, just because IRL algorithms aren’t doing well now, I don’t think it’s true that IRL algorithms couldn’t do well in the future. It’s reasonable to expect that they would match human performance. That said, I’m not super optimistic about IRL anyway, because even if we do figure out how to get IRL algorithms to make all these implicit assumptions that humans are making, so that we can run them and get what a human would have thought other humans are optimizing, I’m not really happy about then going and optimizing that utility function off into the far future, which is the sort of default assumption we seem to have when using inverse reinforcement learning.

It may be that IRL algorithms are good for other things, but for that particular application, it seems like the utility function you infer is not really going to scale to the things that superintelligence will let us do. Humans just think very differently about how they want the future to go. In some sense, the future is going to be very, very different. We’re going to need to think a lot about how we want the future to go. All of our experience so far has not trained us to be able to think about what we care about in the sort of future setting where we’ve got, as a simple example, the ability to easily copy people if they’re uploaded as software.

If that’s a thing that happens, well, is it okay to clone yourself? How does democracy work? All these sorts of things are somewhat value judgments. If you take egalitarianism and run with it, you basically get that one person can copy themselves millions and millions of times and just determine the outcome of all voting that way. That seems bad, but on our current values, I think that is probably what we want, and we just really haven’t thought this through. Using IRL to infer a utility function that we then ruthlessly optimize in the long term just seems like, by the time the world changes a bunch, the value function that we inferred is going to be weirdly wrong in strange ways that we can’t predict.

Lucas: Why not run continuous updates on it as people update given the change of the world?

Rohin: It seems broadly reasonable. This is the sort of idea you could have about how to use IRL in a more realistic way that actually works. I think that’s perfectly fine. I’m optimistic about approaches that are like, “Okay, we’re going to use IRL to infer a value function or reward function or something, and we’re going to use that to inform what the AI does, but it’s not going to be the end-all utility function. It’s just going to infer what we want now, and the AI system is somehow going to check with us. Maybe it’s got some uncertainty over what the true reward function is. Maybe it only keeps this reward function for a certain amount of time.”

These seem like things that are worth exploring, but I don’t know that we have the correct way to do it. So take the particular case that you proposed, just updating the reward function over time. The classic wireheading question is: how do we make it so that the AI doesn’t say, “Okay, actually, in order to optimize the utility function I have now, it would be good for me to prevent you from changing my utility function, since if you change my utility function, I’m no longer going to achieve my original utility.” So that’s one issue.

The other issue is that maybe it starts making some long-term plans. If it’s planning according to this utility function without expecting changes to the utility function in the future, then it might set up long-term plans that are going to look bad in the future but are hard to stop by then. Like, you make some irreversible change to society because you didn’t realize that something was going to change. These sorts of things suggest you don’t want a single utility function that you’re optimizing, even if you’re updating that utility function over time.

It could be that you have some sort of uncertainty over utility functions, and that might be okay. I’m not sure. I don’t think it’s settled that we don’t want to do something like this. I think it is settled that we don’t want to use IRL to infer a utility function and optimize that one forever. There are certain middle grounds. I don’t know how well those middle grounds work. Intuitively there are going to be some problems, but maybe we can get around those.
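A very rough sketch of that middle ground, keeping a distribution over candidate reward functions and updating it from ongoing human feedback rather than committing to one inferred utility forever (the candidate rewards and likelihood numbers below are invented for illustration):

```python
import numpy as np

candidate_rewards = ["reward_A", "reward_B", "reward_C"]
posterior = np.ones(len(candidate_rewards)) / len(candidate_rewards)  # uniform prior

def update(posterior, likelihoods):
    """Bayesian update: likelihoods[i] = P(observed human feedback | reward i)."""
    unnormalized = posterior * np.asarray(likelihoods, dtype=float)
    return unnormalized / unnormalized.sum()

# Each round, the system observes more human feedback and updates its beliefs,
# rather than treating any single reward function as final.
for likelihoods in [[0.9, 0.4, 0.1], [0.8, 0.5, 0.2]]:
    posterior = update(posterior, likelihoods)
    print(dict(zip(candidate_rewards, [round(float(p), 3) for p in posterior])))
```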

Lucas: Let me try to do a quick summary just to see if I can explain this as simply as possible. There are people, and people have preferences, and a good way to try and infer their preferences is through their observed behavior, except that human beings have cognitive and psychological biases which skew their actions, because they’re not perfectly rational epistemic agents or rational agents. So the value system or reward system that they’re optimizing for is imperfectly expressed through their behavior. If you’re going to infer preferences from behavior, then you have to correct for biases and epistemic and rational failures to try and infer the true reward function. Stopping there, is that sort of a succinct way you’d put it?

Rohin: Yeah, and I think maybe another point that might be the same or might be different is that under our normal definition of what our preferences or our values are, we would say something like, “I value egalitarianism, but it seems predictably true that in the future we’re not going to have a single vote per sentient being,” or something. Essentially what that says is that our preferences, our values, are going to change over time, and they depend on the environment in which we are right now.

So you can either see that as: okay, I have this really big, really global, really long-term utility function that tells me, given my environment, what my narrow values in that environment are. In that case you say, “Well okay, then we’re really super biased, because we only really know our values in the current environment. We don’t know our values in future environments. We’d have to think a lot more for that.” Or you can say, “We can infer our narrow values now, and that has some biases thrown in, but we can probably account for those; then we have to have some sort of story for how we deal with our preferences evolving in the future.”

Those are two different perspectives on the same problem, I would say, and they differ in basically what you’re defining values to be. Is it the thing that tells you how to extrapolate what you want all the way into the future, or is it the thing that tells you how you’re behaving right now in your environment? I think our classical notion of preferences or values, the one that we use when we say “values” in everyday language, is talking about the second kind, the more narrow kind.

Lucas: There’s really a lot there, I think, especially in terms of issues in personal identity over time, commitment to values, and, as you said, different ideas and conceptualizations of value, like what is it that I’m actually optimizing for or care about. Plus population ethics and tons of questions about how people value future versions of themselves, or whether they actually care equally about their value function at all times as it changes with the environment.

Rohin: That’s a great description of why I am nervous around inverse reinforcement learning. You listed a ton of issues, and I’m like, yeah, all of those are really difficult issues. And inverse reinforcement learning is sort of based on the premise that all of that exists, is real and timeless, and that we can infer it, and then maybe we put on some hacks like continuously improving the value function over time to take changes into account. But this does feel like we’re starting with a fundamentally flawed paradigm.

So mostly because it feels like we’ve taken a flawed paradigm to start with and then changed it so that it doesn’t have all the obvious flaws, I’m more optimistic about trying to have a different paradigm for how we want to build AI, which maybe I’ll summarize as: just make AIs that do what we want or what we mean at the current moment in time, and then make sure that they evolve along with us as we evolve in how we think about the world.

Lucas: Yeah. That specific feature is something we were trying to address with inverse reinforcement learning, if the algorithm were updating over time alongside myself. I just want to step back for a moment to try to get an even grander and more conceptual understanding of the globalness of inverse reinforcement learning. From an evolutionary and more cosmological perspective, you can say that from the time of the first self-replicating organisms on the planet until today, across the entire evolutionary tree, there’s sort of a global utility function across all animals, ultimately driven by thermodynamics and the sun shining light on a planet. And this global utility function of all agents across the planet seems very ontologically basic and pure, like what simply empirically exists. Attempting to access that through IRL is interesting, given the difficulties that arise from it. Does that sort of picture seem accurate?

Rohin: I think I’m not super sure what exactly you’re proposing here, so let me try and restate it. If we look at the environment as a whole, or the universe as a whole, or maybe we’re looking at evolution, we see that, hey, evolution seems to have spit out all of these creatures that are interacting in this complicated way, but you can look at all of their behavior and trace it back to this objective, in some sense, of maximizing reproductive fitness. So are we expecting that IRL on this very grand scale would somehow end up with “maximize reproductive fitness”? Is that what … Yeah, I’m not totally sure what implication you’re drawing from this.

Lucas: Yeah. I guess I’m not arguing that there’s going to be some sort of evolutionary thing which is being optimized.

Rohin: IRL does make the assumption that there is something doing an optimization. You usually have to point it towards what that thing is. You have to say, “Look at the behavior of this particular piece of the environment and tell me what it’s optimizing.” Maybe if you’re imagining IRL on this very grand scale, what is the thing you’re pointing it at?

Lucas: Yeah, so to reiterate and specify: pointing IRL at the human species would be like pointing IRL at 7 billion primates. Similarly, I was thinking, what if one pointed IRL at the ecosystem of Earth over time? You could sort of plot this evolving algorithm over time. So I was just noting that accessing this sort of thing, which seems quite ontologically objective and clear, is very interestingly fraught with so many difficulties. In terms of history, it seems like all there really is, is the set of all preferences at each time step, which could be summarized at some sort of global or individual level of algorithm.

Rohin: Got it. Okay. I think I see what you’re saying now. It seems like the intuition is that for ecosystems, the universe, the laws of physics, very simple, very ontologically basic things, there would be something more real about any value function we could infer from them. And I think this is a misunderstanding of what IRL does. IRL fundamentally requires you to have some notion of counterfactuals. You need to have a description of the action space that some agent had, and then when you observe their behavior, you see that they made a choice to take one particular action instead of another.

You need to be able to ask the question of what they could have done instead, which is a counterfactual. Now, with the laws of physics, it’s very unclear what the counterfactual would be. With evolution, you can maybe say something like, “Evolution could have chosen to make a whole bunch of mutations, and it chose this particular one.” And then if you use that particular model, what is IRL going to infer? It will probably infer something like “maximize reproductive fitness.”

On the other hand, if you model evolution as: hey, you can design the best possible organism you can, you can just create an organism out of thin air, then what reward function is it maximizing? It’s super unclear. If you could just poof an organism into existence, you could make something that’s extremely intelligent, very strong, et cetera. And you’re like, well, evolution didn’t do that. It took millions of years to create even humans, so clearly it wasn’t optimizing reproductive fitness, right?

And in fact, I think people often say that evolution is not an optimization process because of things like this. The notion of something doing optimization is very much relative to what you assume its capabilities to be, and in particular what you assume its counterfactuals to be. So if you were talking about this sort of grand scale, ecosystems, the universe, the laws of physics, I would ask you, “What are the counterfactuals? What could the laws of physics have done otherwise, or what could the ecosystem have done if it didn’t do the thing that it did?” Once you have an answer to that, I imagine I could predict what IRL would do. And that part is the part that doesn’t seem ontologically basic to me, which is why I don’t think that IRL on this sort of thing makes very much sense.

Lucas: Okay. The part that seems a little bit funny to me is tracking from physics, or whatever you take to be ontologically basic about the universe, up to the level of whatever our axioms and pre-assumptions for IRL are. What I’m trying to say is, in moving from whatever is ontologically basic to the level of agents, we have some assumptions in our IRL where we’re thinking about agents as having theories of counterfactuals, where they can choose between actions and have some sort of reward or objective function that they’re trying to optimize over time.

It seems sort of metaphysically queer where physics stops … Where we’re going up in levels of abstraction from physics to agents and we … Like physics couldn’t have done otherwise, but somehow agents could have done otherwise. Do you see the sort of concern that I’m raising?

Rohin: Yeah, that’s right. And this is perhaps another reason that I’m more optimistic about the approach of “don’t try to do anything at the grand scale, just try to do something that does the right thing locally in our current time.” But I think that’s true. It definitely feels to me like optimization, the concept, should be ontologically basic and not a property of human thought. There’s something about how a random universe is high entropy whereas the ones that humans construct are low entropy. That suggests that we’re good at optimization.

It seems like it should be independent of humans. On the other hand, any conception of optimization I come up with is either specific to the way humans think about it, or it relies on this notion of counterfactuals. And the laws of physics don’t seem like they have counterfactuals, so I’m not really sure where that comes in. In some sense you can ask, okay, why do we have this notion of counterfactuals and agency, thinking that we could have chosen something else, when we’re basically … In some sense we’re just an algorithm that’s continually thinking about what we could do, trying to make plans.

So we search over this space of things that could be done, and that search is implemented in physics, which has, so to speak, no counterfactuals. But the search itself, which is an abstraction layer above, is something that is running on physics; it is not itself a physics thing. That search is in fact going through multiple options and then choosing one. It is deterministic from the point of view of physics, but from the point of view of the search, it’s not deterministic. The search doesn’t know which option is going to happen. I think that’s why humans have this notion of choice and of agency.

Lucas: Yeah, and just in terms of understanding the universe, it’s pretty interesting how there are these two levels of abstraction, where at the physics level you actually couldn’t have done otherwise, but there’s also this optimization process running on physics, searching over space and time, modeling different world scenarios and then seemingly choosing, and thus creating observed behavior for other agents to try and infer whatever reward function that thing is trying to optimize for. It’s an interesting picture.

Rohin: I agree. It’s definitely the sort of puzzle that keeps you up at night. But I think one particularly important implication of this is that agency is about how a search process thinks about itself. Although it’s not just about that, because I can look at what someone else is doing and attribute agency to them, figure out that they are themselves running an algorithm that chooses between actions. I don’t have a great story for this. Maybe it’s just humans realizing that other humans are just like them.

So this is maybe why we get acrimonious debates about whether evolution has agency, but we don’t get acrimonious debates about whether humans have agency. Evolution is sufficiently different from us that we can look at the way it “chooses” things and say, “Oh well, but we understand how it chooses things.” You could model it as a search process, but you could also model it as: all that’s happening is this deterministic, or mostly deterministic, process of which animals survived and had babies, and that is how things happened. And so therefore it’s not an optimization process; there’s no search, it’s deterministic. And so you have these two conflicting views of evolution.

Whereas I can’t really say, “Hey Lucas, I know exactly, deterministically, how you’re going to do things.” I know this in the sense that, man, there are electrons and atoms and stuff moving around in your brain, and electrical signals, but that’s not going to let me predict what you will do. One of the best models I can have of you is that you’re optimizing for some goal, whereas with evolution I can have a more detailed model. And so maybe that’s why I set aside the model of evolution as an optimizer.

Under this framing it’s like, okay, maybe our views of agency and optimization are just facts about how well we can model the process, which cuts against the “optimization as ontologically basic” idea, and it seems very difficult. It seems like a hard problem to me. I want to reiterate that most of this has just pushed me to say: let’s instead have an AI alignment focus, try to do things that we understand now, and not get into the metaphilosophy problems. If we could just get AI systems that broadly do what we want and ask us for clarification, helping us evolve our thoughts over time, if we can do something like that. I think there are people who would argue that, no, of course we can’t do something like that.

But if we could do something like that, that seems significantly more likely to work than something that has to have answers to all these metaphilosophical problems today. My position is just that this is doable. We should be able to make systems that are of the nature that I described.

Lucas: There are clearly a lot of philosophical difficulties that go into IRL. Now it would be good if we could take a step back and you could summarize your thoughts on inverse reinforcement learning and the place it has in AI alignment.

Rohin: I think my current position is something like: fairly confidently, don’t use IRL to infer a utility function that you then optimize over the long term. In general, I would say don’t have a utility function that you optimize over the long term, because it doesn’t seem like that’s easily definable right now. So that’s one class of things I think we should do. On the other hand, I think IRL is probably good as a tool.

There is this nice property of IRL that you figure out what someone wants and then you help them do it. And this seems more robust than hand-writing the things that we care about in any particular domain. Even in a simple household robot setting, there are tons and tons of preferences that we have, like “don’t break vases.” Something like IRL could infer these sorts of things.
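As a rough sketch of IRL-as-a-tool in that household setting (my own framing with made-up rewards, not a specific algorithm from the episode), you can think of the designer's specified task reward being combined with side preferences inferred from human demonstrations:

```python
# The designer writes down the task reward; preferences that were never written
# down ("don't break vases") are inferred from human behavior and added as
# penalty terms. All numbers here are invented for illustration.

SPECIFIED_TASK_REWARD = {"delivered_coffee": 10.0}

# Pretend these weights came out of an IRL-style inference over household
# demonstrations in which humans consistently avoid knocking things over.
INFERRED_SIDE_PREFERENCES = {"broke_vase": -50.0, "tracked_mud": -2.0}

def total_reward(events):
    """Score a trajectory summary by specified reward plus inferred side preferences."""
    reward = sum(SPECIFIED_TASK_REWARD.get(e, 0.0) for e in events)
    reward += sum(INFERRED_SIDE_PREFERENCES.get(e, 0.0) for e in events)
    return reward

print(total_reward(["delivered_coffee"]))                # 10.0
print(total_reward(["delivered_coffee", "broke_vase"]))  # -40.0
```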

So I think IRL definitely has a place as a tool that helps us figure out what humans want, but I don’t think the full story for alignment is going to rest on IRL in particular. It gets us good behavior in the present, but it doesn’t tell us how to extrapolate into the future. Maybe if you did IRL in a way that let you infer how we want the AI system to extrapolate our values, or used IRL to figure out our meta-preferences about how the algorithm should infer our preferences, something like that could work, but it’s not obvious to me. It seems worth trying at some point.

TL;DR: don’t use it to infer a long-term utility function. Do use it as a tool to get decent behavior in the short term. Maybe also use it as a tool to infer meta-preferences. That seems broadly good, but I don’t know that we know enough about that setting yet.

Lucas: All right. Yeah, that’s all super interesting, and it’s great to hear how the space has unfolded for you and what your views are now. So I think we can pivot here into the AI alignment problem more generally. Now that you’ve moved on from being as excited about IRL, what is capturing your interest currently in the space of AI alignment?

Rohin: The thing that I’m most interested in right now is: can we build an AI system that basically evolves over time with us? I’m thinking of this now as a human-AI interaction problem. You’ve got an AI system, and we want to figure out how to make it so that it broadly helps us, and at the same time figures out what it needs to do based on some sort of data that comes from humans. Now, this doesn’t have to be the human saying something. It could be their behavior. It could be things that they have created in the past. It could be all sorts of things. It could be a reward function that they write down.

But I think the perspective that the things that are easy to infer are the things that are specific to our current environment is pretty important. What I would like to do is build AI systems that infer preferences in the current environment, or things we want in the current environment, and do those reasonably well, but don’t just extrapolate to the future. Let humans adapt to the future, and then figure out what the humans value at that point and do things based on that then.

There are a few ways that you could imagine this going. One is this notion of corrigibility in the sense that Paul Christiano writes about it, not the sense that MIRI writes about it, where the AI is basically trying to help you. And if I have an AI that is trying to help me, I think one of the most obvious things for someone who’s trying to help me to do is to make sure that I remain in effective control of any power or resources that the AI might have, and to ask me if my values change in the future or if what I want the AI to do changes in the future. So that’s one thing that you might hope to do.

Also imagine building a norm-following AI. I think human society basically just runs on norms that we mostly all share and tend to follow. We have norms against particularly bad things like murdering people and stealing. We have norms against shoplifting. We have maybe less strong norms against littering. Unclear. And then we also have norms for things that are not very consequential. We have a norm against randomly knocking over a glass at a restaurant in order to break it. That is also a norm, even though there are quite often times when I’m like, “Man, it would be fun to just break a glass at the restaurant. It’s very cathartic,” but it doesn’t happen very often.

And so if we could build an AI system that could infer and follow those norms, it seems like this AI would behave in a more human-like fashion. This is a pretty new line of thought so I don’t know whether this works, but it could be that such an AI system is simultaneously behaving in a fashion that humans would find acceptable and also lets us do pretty cool, interesting, new things like developing new technologies and stuff that humans can then deploy and the AI doesn’t just unilaterally deploy without any safety checks or running it by humans or something like that.

Lucas: So let’s just back up a little bit here in terms of the picture of AI alignment. We have a system that we do not want to extrapolate too much toward possible future values. It seems that there are all these ways in which we can use AI, first to amplify our own decision making, and then also different methodologies which reflect the way that human beings update their own values and preferences over time, something like what was proposed by, I believe, Paul Christiano and Geoffrey Irving and other people at OpenAI, like alignment through debate.

And there are all these epistemic practices of human beings with regard to world model building and how that affects shifts in values and preferences, also given how the environment changes. So it just seems like tracking all of these things, finding ways in which AI can amplify or participate in those sorts of epistemic practices, right?

Rohin: Yeah. So I definitely think that something like amplification can be thought of as improving our epistemics over time. That seems like a reasonable way to think about it. I haven’t really thought very much about how amplification or debate handle changing environments. They both operate under this general idea that we could have a deliberation tree, and in principle what we want is this exponentially sized deliberation tree where the human goes through all of the arguments and counter-arguments and breaks those down into sub-points in excruciating detail, in a way that no human could ever actually do because it would take way too long.

And then amplification and debate basically show you how to get the outcome that this reasoning process would have given, by using an AI system to assist the human. I don’t know if I would call it improving human epistemics, but more like taking whatever epistemics you already have and running them for a long time. And it’s possible that in that long amount of time you actually figure out how to do better epistemics.

I’m not sure that this perspective really says very much about how preferences change over time. You would hope that it would just naturally be robust to that, in that as the environment changes, your deliberation starts looking different. To go back to my example from before: now suddenly we have uploads, and egalitarianism seems to have some really weird consequences. And then presumably the deliberation tree that amplification and debate are mimicking is going to have a bunch of thoughts about whether we actually want egalitarianism now, what were the moral intuitions that pushed us towards it, is there some equivalent principle that lets us keep our moral intuitions but doesn’t have this weird property where a single person can decide the outcome of an election, et cetera.

I think they were not designed to do this, but by virtue of being based on how a human would think, what a human would do if they had a long time and a lot of helpful tools to think about it, they’re essentially just inheriting these properties from the human. If the human, as the environment changed, would start rethinking their priorities or what they care about, then so too would amplification and debate.

Lucas: This also has me thinking about the meta-preferences and the meta-meta-preferences, and imagining taking a human brain and running it until the end, through decision and rational and logical thought trees over enough time, with enough epistemics and power behind it, to try to navigate its way to the end. It raises interesting questions: is that what we want? Is doing that for every single person and then just aggregating the preferences all together what we want? And what is the role of moral philosophy in thinking here?

Rohin: Well, one thing is that whatever moral philosophy you would do, so would the amplification of you, in theory. I think the benefit of these approaches is that they have this nice property that, in the limit of good AI and idealizations properly mimicking you and so on and so forth, in this sort of nice world where it all works in an ideal way, any consideration you could have or would have, so too would the agent produced by iterated amplification or debate.

And so if you were going to do a bunch of moral philosophy and come to some sort of decision based on that, so would iterated amplification or debate. So I think it’s basically: here is how we build an AI system that solves the problems in the same way that a human would solve them. And then you might worry that, hey, maybe humans themselves are just not very good at solving problems. Look at most humans in the world: they don’t do moral philosophy and don’t extrapolate their values well into the future, and the only reason we have moral progress is because younger generations keep getting born and they have different views than the older generations.

That, I think, could in fact be a problem, but there’s hope that we could train humans to have these nice sorts of properties, good epistemics, such that they would provide good training data for iterated amplification, if there comes a day where we think we can actually train iterated amplification to mimic human explicit reasoning. They do both have the property that they’re only mimicking the explicit reasoning and not necessarily the implicit reasoning.

Lucas: Do you want to unpack that distinction there?

Rohin: Oh, yeah, sure. So both of them require that you take your high-level question and decompose it into a bunch of sub-questions, or sorry, the theoretical model of them has that. This is pretty clear with iterated amplification; it is less clear with debate. At each point you need to have the top-level agent decompose the problem into a bunch of sub-problems. And this basically requires you to be able to decompose tasks into clearly specified sub-tasks, where clearly specified could mean in natural language, but you need to make it explicit in a way that the agent you’re assigning the task to can understand without having access to your mind.

Whereas if I’m doing some sort of programming task or something, often I will just sort of know what direction to go in next, but not be able to cleanly formalize it. So you’ll give me some challenging algorithms question and I’ll be like, “Oh yeah, it kind of seems like dynamic programming is probably the right thing to do here. And maybe if I consider it this particular way, maybe if I put these things on a stack or something.” But even the fact that I’m saying this out loud in natural language is misrepresenting my process.

Really there’s some intuitive, not verbalizable process going on in my head. It somehow navigates the space of possible programs and picks a thing, and I think the reason I can do this is because I’ve been programming for a long time and I’ve trained a bunch of intuitions and heuristics that I cannot easily verbalize as some nice decomposition. So that’s sort of implicit in this thing. If you did want that to be incorporated in iterated amplification, it would have to be incorporated in the base agent, the one that you start with. But if you start with something relatively simple, which I think is often what we’re trying to do, then you don’t get those human abilities and you have to rediscover them in some sense through explicit decompositional reasoning.
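
To make the decomposition idea concrete, here is a minimal, purely illustrative sketch of the recursive structure of iterated amplification. The agent interface used here, with answer, decompose, and combine methods, is a hypothetical stand-in rather than any published implementation; the point is only that every sub-question must be made explicit enough for a weaker agent to handle on its own.

```python
# A toy sketch of the recursive decomposition at the heart of iterated
# amplification. The agent interface (answer, decompose, combine) is a
# hypothetical stand-in for illustration, not an actual published API.

def amplify(question, agent, depth):
    """Answer `question` by splitting it into explicit sub-questions that
    the weaker `agent` can handle, then combining the sub-answers."""
    if depth == 0:
        # Base case: the weak agent answers directly.
        return agent.answer(question)
    # The decomposition has to be explicit: each sub-question must be
    # understandable without access to the decomposer's private intuitions.
    sub_questions = agent.decompose(question)
    sub_answers = [amplify(q, agent, depth - 1) for q in sub_questions]
    return agent.combine(question, sub_answers)
```

Anything the base agent cannot verbalize, like the programming intuitions described above, never enters this loop; it would have to be rediscovered through the explicit decomposition itself.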

Lucas: Okay, cool. Yeah, that’s super interesting. So now to frame all of this again, do you want to sort of just give a brief summary of your general views here?

Rohin: I wish there were a nice way to summarize this. That would mean we’d made more progress. It seems like there’s a bunch of things that people have proposed. There’s amplification and debate, which are very similar, and there’s IRL as a general approach. I think, but I’m not sure, that most of the people working on these would agree that we don’t want to infer a utility function and optimize it for the long term. I think more of them are like, yeah, we want this sort of interactive system with the human and the AI. It’s not clear to me how different these approaches are from what they’re aiming for in amplification and debate.

So here we’re sort of looking at how things change over time and making that a pretty central piece of how we’re thinking about it. Initially the AI is trying to help the human: the human has some sort of reward function, the AI is trying to learn it and help them, but over time this changes and the AI has to keep up with it. And under this framing you want to think a lot about interaction, you want to think about getting as many bits about reward from the human to the AI as possible. Maybe think about control theory and how human data is, in some sense, a control mechanism for the AI.

You’d want to infer norms and the ways that people behave, how people relate to each other, and try to have your AI systems do that as well. So that’s one camp: have the AI interact with humans, behave generally in ways that humans would say are not crazy, and update those over time. And then there’s the other side, which is to have an AI system that takes human explicit reasoning and does it better or does it more, which allows it to do anything that the human would have done. That’s more about taking the thought process that humans go through and putting that at the center. That is the thing that we want to mimic and make better.

The part where our preferences change over time is something that you get for free, in some sense, by mimicking human thought processes or reasoning. In summary, those are the two camps. I am optimistic about both of them and think that people should be doing research on both of them. I don’t really have much more of a perspective than that, I think.

Lucas: That’s excellent. I think that’s a super helpful overview actually. And given that, how do you think that your views of AI alignment have changed over the past few years?

Rohin: I’ll note that I’ve only been in this field for I think 15, 16 months now, so just over a year. I definitely came into it thinking that what we want to do is infer the correct utility function and optimize it, and I have moved away quite strongly from that. I, in fact, recently started writing a value learning sequence, or maybe collating is a better word: I’ve written a lot of posts that still have to come out, but I also took a few posts from other people.

The first part of that sequence is basically arguing that it seems bad to try to define a utility function and then optimize it. So I’m just trying to move away from long-term utility functions in general, or long-term goals, or things like this. That’s probably the biggest update since starting. Other things that I’ve changed: a focus more on norms than on values, trying to infer things that are easy to infer right now in the current environment, and making sure that we update on these over time, as opposed to trying to get the one true thing that depends on us solving all the hard metaphilosophical problems. That’s, I think, another big change in the way I’ve been thinking about it.

Lucas: Yeah. I mean, there are different levels of alignment at their core.

Rohin: Wait, I don’t know exactly what you mean by that.

Lucas: There’s your original point of view, where you said you came into the field thinking infer the utility function and maximize it. And your current view is that you are moving away from that and becoming more partial toward the view that we want to infer norms and current preferences in the present day and optimize those, rather than extrapolating toward some ultimate end-goal and then trying to optimize for that. In aligning in these different ways, isn’t there a lot of room for value drift when you allow the thing to run in the real world, rather than amplifying explicit human thought on a machine?

Rohin: Value drift is an interesting question. In some sense, I do want my values to drift, in that whatever I think today about the correct way the future should go, I probably will not endorse in the future, and I endorse the fact that I won’t endorse it in the future. I do want to learn more and then figure out what to do in the future based on that. You could call that value drift; that is a thing I want to happen. So in that sense value drift wouldn’t be a bad thing. But there’s also a sense in which my values could change in the future in ways that I don’t endorse, and that, maybe, is value drift that is bad.

So yeah, if you have an AI system that’s operating in the real world and changes over time as we humans change, yes, there will be changes in what the AI system is trying to achieve over time. You could call that value drift, but value drift usually has a negative connotation, whereas this process of learning as the environment changes seems to me like a positive thing. It’s a thing I would want to do myself.

Lucas: Yeah, sorry, maybe I wasn’t clear enough. I mean the case of running human beings in the real world, where there are the causes and effects of history and whatever else, and how that will actually change the expression of people over time. Because if you’re running this version of AI alignment where you’re just always optimizing the current set of values in people, the progression of the world and of civilization is only as good as the best of human values and preferences in that moment.

It’s sort of like limited by what humans are in that specific environment and time, right? If you’re running that in the real world versus running some sort of amplified version of explicit human reasoning, don’t you think that they’re going to come to different conclusions?

Rohin: I think the amplified explicit human reasoning, I imagine that it’s going to operate in the real world. It’s going to see changes that happen. It might be able to predict those changes and then be able to figure out how to respond fast, before the changes even happen perhaps, but I still think of amplification as being very much embedded in the real world. Like you’re asking it questions about things that happen in the real world. It’s going to use explicit reasoning that it would have used if a human were in the real world and thinking about the question.

I don’t really see much of a distinction here. I definitely think that even in my setting, where I’m imagining AI systems that evolve over time and change based on that, they are going to be smarter than humans, going to think through things a lot faster, and be able to predict things in advance in the same way that amplified explicit reasoning would. Maybe there are differences, but value drift doesn’t seem like one of them, or at least I cannot predict right now how they will differ along the axis of value drift.

Lucas: So then just sort of again taking a step back to the ways in which your views have shifted over the past few years. Is there anything else there that you’d like to touch on?

Rohin: Oh man, I’m sure there is. My views changed so much because I was just so wrong initially.

Lucas: So most people listening should expect that, given a lot more thought on this subject, their views are likely to be radically different from the ones that they currently have, along with the conceptions that they currently have about AI alignment.

Rohin: Seems true for most listeners, yeah. Not all of them, but yeah.

Lucas: Yeah, I guess it’s just an interesting fact. Do you think this is like an experience of most people who are working on this problem?

Rohin: Probably. I mean, within the first year of working on the problem that seems likely. I mean just in general if you work on the problem, if you start with near no knowledge on something and then you work on it for a year, your views should change dramatically just because you’ve learned a bunch of things and I think that basically explains most of my changes in view.

It’s just actually hard for me to remember all the ways in which I was wrong back in the past. I focused on not using utility functions because I think that’s something even other people in the field still believe right now; that’s where that one came from. But there are plenty of other things that I was notably, easily, demonstrably wrong about that I’m having trouble recalling now.

Lucas: Yeah, and the utility function one I think is a very good example. If it were possible to find all of these in your brain and distill them, I think it would make a very, very good infographic on AI alignment, because those misconceptions are also misconceptions that I’ve had, and I think I’ve seen them in other people too. A lot of the intellectual blunders that you or I have made are probably repeated quite often.

Rohin: I definitely believe that. Yeah, I guess I could talk about the things that I’m going to say very soon in the value learning sequence. Those were definitely updates that I made; one of those was the utility functions thing. Another one was thinking that what we want is for the human-AI system as a whole to be optimizing for some sort of goal. And this opens up a nice space of possibilities where the AI is not optimizing a goal, only the human-AI system together is. Keeping in mind that that is the goal, and not that the AI itself must be optimizing some sort of goal.

The idea of corrigibility itself as a thing that we should be aiming for was a pretty big update for me, took a while for me to get to that one. I think distributional shift was a pretty key concept that I learned at some point and started applying everywhere. One way of thinking about the evolving preferences over time thing is that humans, they’ve been trained on the environment that we have right now and arguably we’ve been trained on the ancestral environment too by evolution, but we haven’t been trained on whatever the future is going to be.

Or for a more current example, we haven’t been trained on social media. Social media is a fairly new thing affecting us in ways that we hadn’t considered in the past, and this is causing us to change how we do things. So in some sense what’s happening is that as we go into the future, we’re encountering a distributional shift, and human values don’t extrapolate well to that distributional shift. What you actually need to do is wait for the humans to get to that point, let them experience it, have their values be trained on this new distribution, and then figure out what they are, rather than trying to do it right now, when their values are just going to be wrong, or not what they would be if they were actually in that situation.

Lucas: Isn’t that sort of summarizing coherent extrapolated volition?

Rohin: I don’t know that coherent extrapolated volition explicitly talks about having the human be in a new environment. I guess you could imagine that CEV considers … If you imagine like a really, really long process of deliberation in CEV, then you could be like, okay what would happen if I were in this environment and all these sorts of things happened. It seems like you would need to have a good model of how the world works and how physics works in order to predict what the environment would be like. Maybe you can do that and then in that case you simulate a bunch of different environments and you think about how humans would adapt and evolve and respond to those environments and then you take all of that together and you summarize it and distill it down into a single utility function.

Plausibly that could work. Doesn’t seem like a thing we can actually build, but as a definition of what we might want, that seems not bad. I think that is me putting the distributional shift perspective on CEV and it was not, certainly not obvious to me from the statement of CEV itself, that you’re thinking about how to mitigate the impact of distributional shift on human values. I think I’ve had this perspective and I’ve put it on CEV and I’m like, yeah, that seems fine, but it was not obvious to me from reading about CEV alone.

Lucas: Okay, cool.

Rohin: I recently posted a comment on the Alignment Forum talking about how, I guess this is sort of in corrigibility too, we want to make an AI system that tries to help us, as opposed to making an AI system that is optimizing the one true utility function. So that was an update I made, basically the same update as the one about aiming for corrigibility. I guess another update I made is that while there is a phase transition, or a sharp change in the problems that we see, when AIs become human-level or superintelligent, I think the underlying causes of the problems don’t really change.

The underlying causes of problems with narrow AI systems are probably similar to the ones that underlie problems with superintelligent systems. Having the wrong reward function leads to problems both in narrow settings and in superintelligent settings. This made me more optimistic about doing work trying to address current problems, but with an eye towards long-term problems.

Lucas: What made you have this update?

Rohin: Thinking about the problems a lot, in particular thinking about how they might happen in current systems as well. So a prediction that I would make is that if it is actually true that superintelligence would end up killing us all, or some really catastrophic outcome like that, then I would predict that before that, we will see some AI system that causes some other, smaller-scale catastrophe, where I don’t know exactly what catastrophe means; it might be something like some humans die, or the power grid goes down for some time, or something like that.

And then before that we will have things that fail in relatively unimportant ways, but in ways that say: here’s an underlying problem that we need to fix with how we build AI systems. If you extrapolate all the way back to today, that looks like, for example, the boat racing example from OpenAI, a reward hacking one. So I generally expect things to be more continuous. Not necessarily slow, but continuous. That update I made because of the posts arguing for slow takeoff from Paul Christiano and AI Impacts.

Lucas: Right. And the view there is sort of that the world will be populated with lower-level ML as we start to ratchet up the capability of intelligence. So a lot of tasks will already be being done by systems that are slightly less intelligent than the current best system. And so whole work ecosystems will already be flooded with AI systems optimizing within those spaces, so there won’t be a lot of room for the first AGI system or whatever to really get a decisive strategic advantage.

Rohin: Yeah, would I make the prediction that we won’t have a system that gets a decisive strategic advantage? I’m not sure about that one. It seems plausible to me that we have one AI system that is improving over time and we use those improvements in society before it becomes superintelligent. But then by the time it becomes superintelligent, it is still the one AI system that is superintelligent, so it does gain a decisive strategic advantage.

An example of this would be if there was just one main AGI project: I would still predict that progress on AI would be continuous, but I would not predict a multipolar outcome in that scenario. The corresponding view is that while I still do use the terminology “first AGI,” because it points at some intuitive concept that I think is useful, it’s a very, very fuzzy concept, and I don’t think we’ll be able to actually point at any particular system and say that was the first AGI. Rather, we’ll point to a broad swath of time and say, “Somewhere in there AI became generally intelligent.”

Lucas: There are going to be all these sort of like isolated meta-epistemic reasoning tools which can work in specific scenarios, which will sort of potentially aggregate in that fuzzy space to create something fully general.

Rohin: Yep. They’re going to be applied in some domains, and then the percentage of domains in which they apply will gradually grow, and eventually we’ll be like, huh, looks like there’s nothing left for humans to do. It probably won’t be a surprise, but I don’t think there will be a particular point where everyone agrees, yep, looks like AI is going to automate everything in just a few years. It’s more like AI will start automating a bunch of stuff, and the amount of stuff it automates will increase over time. Some people will see it coming, will see full automation coming earlier; some people will be like, nah, this is just a simple task that AI can do, it’s still got a long way to go for all the really generally intelligent stuff. People will sign on to “oh yeah, it’s actually becoming generally intelligent” at different points.

Lucas: Right. If you have a bunch of small mammalian level AIs automating a lot of stuff in industry, there would likely be a lot of people whose timelines would be skewed in the wrong direction.

Rohin: I’m not even sure this was a point about timelines. It was just a point about which system you call AGI; I claim this will not have a definitive answer. So that was also an update to how I was thinking. That one, I think, is more generally accepted in the community. And this was more like, well, all of the literature on AI safety that’s publicly available and commonly read by EAs doesn’t really talk about these sorts of points, so I just hadn’t encountered these things when I started out. And then I encountered them, or maybe I thought of them myself, I don’t remember, but once I encountered the arguments I was like, yeah, that makes sense, and maybe I should have thought of that before.

Lucas: In the sequence which you’re writing, do you sort of like cover all of these items which you didn’t think were in the mainstream literature?

Rohin: I cover some of them. The first few things I told you were just me asking myself, what did I say in the sequence? There are a few, I think, that probably aren’t going to be in that sequence, just because there’s a lot of stuff that people have not written down.

Lucas: It’s pretty interesting, because with the way the AI alignment field is evolving, it’s often difficult to have a bird’s-eye view of where it is and to track the avant-garde ideas being formulated in people’s brains and being shared.

Rohin: Yeah. I definitely agree. I was hoping that the Alignment Newsletter, which I write, would help with that. I would say it probably speeds up the process a bit, but it’s definitely not keeping you at the forefront. There are many ideas that I’ve heard about, that I’ve even read documents about, that haven’t made it into the newsletter yet because they haven’t become public.

Lucas: So how many months behind do you think for example, the newsletter would be?

Rohin: Oh, good question. Well, let’s see. There’s a paper that I started writing in May or April that has not made it into the newsletter yet. There’s a paper that I finished and submitted in October, or was it September, possibly September, that has not made it into the newsletter yet. That one will come out soon. That suggests a three-month lag, but I think many others have been longer than that. Admittedly, this is for academic researchers at CHAI. At CHAI we tend to publish using papers and not blog posts, and this results in a longer delay on our side.

Also, the work on relative reachability, for example, I learned about maybe four or five months before she released it, and that’s when it came out in the newsletter. And of course, she’d been working on it for longer. Or AI safety via debate, I think I learned about six or seven months before it was published and came out in the newsletter. So yeah, somewhere between three months and half a year seems likely. For things that I learn from MIRI, it’s possible that they never get into the newsletter because they’re never made public. So yeah, there’s a fairly broad range there.

Lucas: Okay. That’s quite interesting. I think that also sort of gives people a better sense of what’s going on in technical AI alignment because it can seem kind of black boxy.

Rohin: Yeah. I mean, in some sense this is a thing that all fields have. I used to work in programming languages. There, we would often write a paper, submit it, and then go and present it a year later, by which time we had moved on, done a whole other project, and written another paper, and then we’d go back and talk about the old one. I definitely remember sometimes grad students being like, “Hey, I want to give this practice talk.” I’d say, “What’s it about?” It’s some topic, and I’m like, wait, but you did that; I heard about this like two years ago. And they’re like, yep, just got published.

So in that sense, I think AI is faster, and AI alignment is, I think, even faster than AI, because it’s a smaller field and people can talk to each other more, and also because a lot of us write blog posts. Blog posts are great.

Lucas: They definitely play a crucial role within the community in general. So I guess just sort of tying things up a bit more here, pivoting back to a broader view. Given everything that you’ve learned and how your ideas have shifted, what are you most concerned about right now in AI alignment? How are the prospects looking to you and how does the problem of AI alignment look right now to Rohin Shah?

Rohin: I think it looks pretty tractable, pretty good. Most of the problems that I see are, I think, ones that we can see in advance and probably can solve. None of these seems particularly impossible to me. I think I also give more credit to the machine learning community, or the AI community, than other researchers do. I trust our ability, where “our” here means the AI field broadly, to notice what things could go wrong and fix them, in a way that maybe other researchers in AI safety don’t.

I think one of the things that feels most problematic to me right now is the problem of inner optimizers, which I’m told there will probably be a sequence on in the future, because there aren’t great resources on it right now. So basically this is the idea that if you run a search process over a wide space of strategies or options, and you search for something that gets you good external reward or something like that, what you might end up finding is a strategy that is itself a consequentialist agent optimizing for its own internal reward. That internal reward will agree with the external reward on the training data, because that’s why it was selected, but it might diverge as soon as there’s any distribution shift.

And then it might start optimizing against us adversarially, in the same way that you would get if you gave a misspecified reward function to an RL system today. This seems plausible to me. I’ve read a bit more about this and talked to people about it, in things that aren’t yet public but hopefully soon will be. I definitely recommend reading that if it ever comes out. But yeah, this seems like it could be a problem. I don’t think we have any instance of it being a problem yet. It seems hard to detect and I’m not sure how I would fix it right now.
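
As a purely illustrative toy example of the kind of divergence described here (not a model of inner optimizers themselves), the sketch below fits a proxy objective to the true reward on a narrow training range and then optimizes the proxy over a wider range; the two agree on the training data, but the proxy’s favorite action scores badly under the true reward.

```python
# Toy illustration: a proxy objective that matches the true reward on the
# training distribution can diverge badly under distribution shift.
import numpy as np

def true_reward(x):
    # The true reward peaks at x = 1 and falls off afterwards.
    return np.where(x <= 1.0, x, 2.0 - x)

# "Training data" only covers x in [0, 1], where the reward looks linear.
x_train = np.linspace(0.0, 1.0, 50)
proxy = np.poly1d(np.polyfit(x_train, true_reward(x_train), deg=1))

# An optimizer that maximizes the proxy over a wider action space.
x_candidates = np.linspace(0.0, 3.0, 301)
best = x_candidates[np.argmax(proxy(x_candidates))]

print(f"proxy's favorite action: x = {best:.2f}")                         # about 3.0
print(f"true reward there: {float(true_reward(best)):.2f}")               # about -1.0
print(f"true reward at the real optimum: {float(true_reward(1.0)):.2f}")  # 1.0
```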

But I also don’t think that we’ve thought about the problem or I don’t think I’ve thought about the problem that much. I don’t want to say like, “Oh man, this is totally unsolvable,” yet. Maybe I’m just an optimistic person by nature. I mean, that’s definitely true, but maybe that’s biasing my judgment here. Feels like we could probably solve that if it ends up being a problem.

Lucas: Is there anything else here that you would like to wrap up on in terms of AI alignment or inverse reinforcement learning?

Rohin: I want to continue to exhort that we should not be trying to solve all the metaphilosophical problems and we should not be trying to like infer the one true utility function and we should not be modeling an AI as pursuing a single goal over the long-term. That is a thing I want to communicate to everybody else. Apart from that I think we’ve covered everything at a good depth. Yeah, I don’t think there’s anything else I’d add to that.

Lucas: So given that I think rather succinct distillation of what we are trying not to do, could you try and offer an equally succinct distillation of what we are trying to do?

Rohin: I wish I could. That would be great, wouldn’t it? I can tell you that I can’t do that. I could give you a suggestion on what we are trying to do instead, which would be: try to build an AI system that is corrigible, that is doing what we want, but that is going to remain under human control in some sense. It’s going to ask us, take our preferences into account, and not try to go off behind our backs and optimize against us. That is a summary of a path that we could go down that I think is premised on what I would want our AI systems to be like. But that’s unfortunately very sparse on concrete details, because I don’t know those concrete details yet.

Lucas: Right. I think that that sort of perspective shift is quite important. I think it changes the nature of the problem and how one thinks about the problem, even at the societal level.

Rohin: Yeah. Agreed.

Lucas: All right. So thank you so much Rohin, it’s really been a pleasure. If people are interested in checking out some of this work that we have mentioned or following you, where’s the best place to do that?

Rohin: I have a website. It is just RohinShah.com. Subscribing to the Alignment Newsletter is … well, it’s not a great way to figure out what I personally believe. Maybe if you keep reading the newsletter over time and read my opinions for several weeks in a row, then you’d start getting a sense of what Rohin thinks. The website will soon have links to my papers and things like that, but yeah, that’s probably the best way, my website. I do have a Twitter, but I don’t really use it.

Lucas: Okay. So yeah, thanks again Rohin. It’s really been a pleasure. I think that was a ton to think about and I think that I probably have a lot more of my own thinking and updating to do based off of this conversation.

Rohin: Great. Love it when that happens.

Lucas: So yeah. Thanks so much. Take care and talk again soon.

Rohin: All right. See you soon.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

[end of recorded material]

How to Create AI That Can Safely Navigate Our World — An Interview With Andre Platzer

Over the last few decades, the unprecedented pace of technological progress has allowed us to upgrade and modernize much of our infrastructure and solve many long-standing logistical problems. For example, Babylon Health’s AI-driven smartphone app is helping assess and prioritize 1.2 million patients in North London, electronic transfers allow us to instantly send money nearly anywhere in the world, and, over the last 20 years, GPS has revolutionized  how we navigate, how we track and ship goods, and how we regulate traffic.

However, exponential growth comes with its own set of hurdles that must be navigated. The foremost issue is that it’s exceedingly difficult to predict how various technologies will evolve. As a result, it becomes challenging to plan for the future and ensure that the necessary safety features are in place.

This uncertainty is particularly worrisome when it comes to technologies that could pose existential challenges — artificial intelligence, for example.

Yet, despite the unpredictable nature of tomorrow’s AI, certain challenges are foreseeable. Case in point, regardless of the developmental path that AI agents ultimately take, these systems will need to be capable of making intelligent decisions that allow them to move seamlessly and safely through our physical world. Indeed, one of the most impactful uses of artificial intelligence encompasses technologies like autonomous vehicles, robotic surgeons, user-aware smart grids, and aircraft control systems — all of which combine advanced decision-making processes with the physics of motion.

Such systems are known as cyber-physical systems (CPS). The next generation of advanced CPS could lead us into a new era in safety, reducing crashes by 90% and saving the world’s nations hundreds of billions of dollars a year — but only if such systems are themselves implemented correctly.

This is where Andre Platzer, Associate Professor of Computer Science at Carnegie Mellon University, comes in. Platzer’s research is dedicated to ensuring that CPS benefit humanity and don’t cause harm. Practically speaking, this means ensuring that the systems are flexible, reliable, and predictable.

What Does it Mean to Have a Safe System?

Cyber-physical systems have been around, in one form or another, for quite some time. Air traffic control systems, for example, have long relied on CPS-type technology for collision avoidance, traffic management, and a host of other decision-making tasks. However, Platzer notes that as CPS continue to advance, and as they are increasingly required to integrate more complicated automation and learning technologies, it becomes far more difficult to ensure that CPS are making reliable and safe decisions.

To better clarify the nature of the problem, Platzer turns to self-driving vehicles. In advanced systems like these, he notes that we need to ensure that the technology is sophisticated enough to be flexible, as it has to be able to safely respond to any scenario that it confronts. In this sense, “CPS are at their best if they’re not just running very simple [control systems], but if they’re running much more sophisticated and advanced systems,” Platzer notes. However, when CPS utilize advanced autonomy, because they are so complex, it becomes far more difficult to prove that they are making systematically sound choices.

In this respect, the more sophisticated the system becomes, the more we are forced to sacrifice some of the predictability and, consequently, the safety of the system. As Platzer articulates, “the simplicity that gives you predictability on the safety side is somewhat at odds with the flexibility that you need to have on the artificial intelligence side.”

The ultimate goal, then, is to find equilibrium between the flexibility and predictability — between the advanced learning technology and the proof of safety — to ensure that CPS can execute their tasks both safely and effectively. Platzer describes this overall objective as a kind of balancing act, noting that, “with cyber-physical systems, in order to make that sophistication feasible and scalable, it’s also important to keep the system as simple as possible.”

How to Make a System Safe

The first step in navigating this issue is to determine how researchers can verify that a CPS is truly safe. In this respect, Platzer notes that his research is driven by this central question: if scientists have a mathematical model for the behavior of something like a self-driving car or an aircraft, and if they have the conviction that all the behaviors of the controller are safe, how do they go about proving that this is actually the case?

The answer is an automated theorem prover, which is a computer program that assists with the development of rigorous mathematical correctness proofs.

When it comes to CPS, the highest safety standard is such a mathematical correctness proof, which shows that the system always produces the correct output for any given input. It does this by using formal methods of mathematics to prove or disprove the correctness of the control algorithms underlying a system.
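
As a schematic illustration of the kind of statement such a proof establishes, consider a car at position x with velocity v and braking power b that must never pass a stop point S. Written in the style of differential dynamic logic, the logic behind Platzer's verification tools (the notation below is illustrative rather than quoted from his work), the safety theorem might read:

```latex
% Illustrative safety property in the style of differential dynamic logic:
% if the car can still stop within the remaining distance, then no matter
% how many times the braking controller runs, the car never passes S.
v^2 \le 2\,b\,(S - x) \;\wedge\; b > 0
\;\rightarrow\;
\big[\, \big(\, a := -b ;\; \{\, x' = v,\; v' = a \;\&\; v \ge 0 \,\} \,\big)^{*} \,\big]\; x \le S
```

A prover would establish this by showing that the braking dynamics preserve the stopping-distance condition on the left-hand side, which, together with the constraint that the velocity stays non-negative, entails that the position never exceeds S.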

After this proof technology has been identified and created, Platzer asserts that the next step is to use it to augment the capabilities of artificially intelligent learning agents — increasing their complexity while simultaneously verifying their safety.

Eventually, Platzer hopes that this will culminate in technology that allows CPS to recover from situations where the expected outcome didn’t turn out to be an accurate model of reality. For example, if a self-driving car assumes another car is speeding up when it is actually slowing down, it needs to be able to quickly correct this error and switch to the correct mathematical model of reality.

The more seamless such transitions are, the more complex they are to implement. But they are the ultimate amalgamation of safety and flexibility, or, in other words, the ultimate combination of AI and safety proof technology.

Creating the Tech of Tomorrow

To date, one of the biggest developments to come from Platzer’s research is the KeYmaera X prover, which Platzer characterizes as a “gigantic, quantum leap in terms of the reliability of our safety technology, passing far beyond in rigor than what anyone else is doing for the analysis of cyber-physical systems.”

The KeYmaera X prover, which was created by Platzer and his team, is a tool that allows users to easily and reliably construct mathematical correctness proofs for CPS through an easy-to-use interface.

More technically, KeYmaera X is a hybrid systems theorem prover that analyzes the control program and the physical behavior of the controlled system together, in order to provide both efficient computation and the necessary support for sophisticated safety proof techniques. Ultimately, this work builds off of a previous iteration of the technology known as KeYmaera. However, Platzer states that, in order to optimize the tool and make it as simple as possible, the team essentially “started from scratch.”

Emphasizing just how dramatic these most recent changes are, Platzer notes that, in the previous prover, the correctness of the statements depended on some 66,000 lines of code, and each of those 66,000 lines was critical to the correctness of the verdict. According to Platzer, this poses a problem, as it’s exceedingly difficult to ensure that all of the lines are implemented correctly. Although the latest iteration of KeYmaera is ultimately just as large as the previous version, in KeYmaera X, the part of the prover that is responsible for verifying correctness is a mere 2,000 lines of code.

This allows the team to evaluate the safety of cyber-physical systems more reliably than ever before. “We identified this microkernel, this really minuscule part of the system that was responsible for the correctness of the answers, so now we have a much better chance of making sure that we haven’t accidentally snuck any mistakes into the reasoning engines,” Platzer said. Simultaneously, he notes that it enables users to do much more aggressive automation in their analysis. Platzer explains, “If you have a small part of the system that’s responsible for the correctness, then you can do much more liberal automation. It can be much more courageous because there’s an entire safety net underneath it.”
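
This separation of a tiny trusted core from aggressive, untrusted automation follows the classic LCF-style prover architecture. The sketch below is a heavily simplified Python illustration of that general idea, not KeYmaera X's actual code (which is written in Scala and far more sophisticated): only the kernel's handful of rules can produce theorems, so a bug in the search code can at worst fail to find a proof, never certify a false one.

```python
# Heavily simplified LCF-style architecture: a small trusted kernel is the
# only code that derives Theorem objects, so soundness rests on the kernel
# alone, while untrusted automation is free to search however it likes.

class Theorem:
    """A statement derived through the kernel's rules. In a real prover the
    constructor would be hidden inside the kernel module; here we rely on
    convention and never build Theorems outside the Kernel class."""
    def __init__(self, statement):
        self.statement = statement

class Kernel:
    """The trusted core: a few audited axioms and inference rules."""
    def __init__(self, axioms):
        self._axioms = set(axioms)

    def axiom(self, statement):
        if statement not in self._axioms:
            raise ValueError(f"not an axiom: {statement!r}")
        return Theorem(statement)

    def modus_ponens(self, implication, antecedent):
        kind, premise, conclusion = implication.statement
        if kind != "implies" or premise != antecedent.statement:
            raise ValueError("modus ponens does not apply")
        return Theorem(conclusion)

def automation(kernel, goal, facts):
    """Untrusted proof search: it may try anything, but every Theorem it
    returns had to pass through the kernel, so bugs here cannot certify an
    unsound result; they can only fail to find a proof."""
    for fact in facts:
        try:
            implication = kernel.axiom(("implies", fact, goal))
            return kernel.modus_ponens(implication, kernel.axiom(fact))
        except ValueError:
            continue
    return None

# Example: with axioms "rain" and "rain implies wet", the automation can
# only ever produce a Theorem for "wet" by going through the kernel.
kernel = Kernel(axioms={"rain", ("implies", "rain", "wet")})
proof = automation(kernel, "wet", ["rain", "snow"])
print(proof.statement if proof else "no proof found")  # -> wet
```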

For the next stage of his research, Platzer is going to begin integrating multiple mathematical models that could potentially describe reality into a CPS. To explain these next steps, Platzer returns once more to self-driving cars: “If you’re following another driver, you can’t know if the driver is currently looking for a parking spot, trying to get somewhere quickly, or about to change lanes. So, in principle, under those circumstances, it’s a good idea to have multiple possible models and comply with the ones that may be the best possible explanation of reality.”

Ultimately, the goal is to allow the CPS to increase their flexibility and complexity by switching between these multiple models as they become more or less likely explanations of reality. “The world is a complicated place,” Platzer explains, “so the safety analysis of the world will also have to be a complicated one.”

FLI Signs Safe Face Pledge

FLI is pleased to announce that we’ve signed the Safe Face Pledge, an effort to ensure facial analysis technologies are not used as weapons or in other situations that can lead to abuse or bias. The pledge was initiated and led by Joy Buolamwini, an AI researcher at MIT and founder of the Algorithmic Justice League.  

Facial analysis technology isn’t just used by our smartphones and on social media. It’s also found in drones and other military weapons, and it’s used by law enforcement, airports and airlines, public surveillance cameras, schools, businesses, and more. Yet the technology is known to be flawed and biased, often miscategorizing anyone who isn’t a white male. And the bias is especially strong against dark-skinned women.

“Research shows facial analysis technology is susceptible to bias and even if accurate can be used in ways that breach civil liberties. Without bans on harmful use cases, regulation, and public oversight, this technology can be readily weaponized, employed in secret government surveillance, and abused in law enforcement,” warns Buolamwini.

By signing the pledge, companies that develop, sell or buy facial recognition and analysis technology promise that they will “prohibit lethal use of the technology, lawless police use, and require transparency in any government use.”

FLI does not develop or use these technologies, but we signed because we support these efforts, and we hope all companies will take necessary steps to ensure their technologies are used for good, rather than as weapons or other means of harm.

Companies that had signed the pledge at launch include Simprints, Yoti, and Robbie AI. Other early signatories of the pledge include prominent AI researchers Noel Sharkey, Subbarao Kambhampati, Toby Walsh, Stuart Russell, and Raja Chatila, as well as tech authors Cathy O’Neil and Meredith Broussard, and many more.

The Safe Face Pledge commits signatories to:

Show Value for Human Life, Dignity, and Rights

  • Do not contribute to applications that risk human life
  • Do not facilitate secret and discriminatory government surveillance
  • Mitigate law enforcement abuse
  • Ensure your rules are being followed

Address Harmful Bias

  • Implement internal bias evaluation processes and support independent evaluation
  • Submit models on the market for benchmark evaluation where available

Facilitate Transparency

  • Increase public awareness of facial analysis technology use
  • Enable external analysis of facial analysis technology on the market

Embed Safe Face Pledge into Business Practices

  • Modify legal documents to reflect value for human life, dignity, and rights
  • Engage with stakeholders
  • Provide details of Safe Face Pledge implementation

Organizers of the pledge say, “Among the most concerning uses of facial analysis technology involve the bolstering of mass surveillance, the weaponization of AI, and harmful discrimination in law enforcement contexts.” And the first statement of the pledge calls on signatories to ensure their facial analysis tools are not used “to locate or identify targets in operations where lethal force may be used or is contemplated.”

Anthony Aguirre, cofounder of FLI, said, “A great majority of AI researchers agree that designers and builders of AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.  That is, in fact, the 9th Asilomar AI principle. The Safe Face Pledge asks those involved with the development of facial recognition technologies, which are dramatically increasing in power through the use of advanced machine learning, to take this belief seriously and to act on it.  As new technologies are developed and poised for widespread implementation and use, it is imperative for our society to consider their interplay with the rights and privileges of the people they affect — and new rights and responsibilities may have to be considered as well, where technologies are currently in a legal or regulatory grey area.  FLI applauds the multiple initiatives, including this pledge, aimed at ensuring that facial recognition technologies — as with other AI technologies — are implemented only in a way that benefits both individuals and society while taking utmost care to respect individuals’ rights and human dignity.”

You can support the Safe Face Pledge by signing here.


Highlights From NeurIPS 2018

The Top Takeaway from Google’s Attempt to Remove Racial Biases From AI

By Jolene Creighton

Algorithms don’t just decide what posts you see in your Facebook newsfeed. They make millions of life-altering decisions every day. They help decide who moves to the next stage of a job interview, who can take out a loan, and even who’s granted parole.

When one stops to consider the well-known biases that exist in these algorithms, the role that they play in our decision-making processes becomes somewhat concerning.

Ultimately, bias is a problem that stems from the unrepresentative datasets that our systems are trained on. For example, when it comes to images, most of the training data is Western-centric — it depicts Caucasian individuals taking part in traditionally Western activities. Consequently, as Google research previously revealed, if we give an AI system an image of a Caucasian bride in a Western dress, it correctly labels the image as “wedding,” “bride,” and “women.” If, however, we present the same AI system with an image of a bride of Asian descent, it produces results like “clothing,” “event,” and “performance art.”

Of course, this problem is not exclusively a Western one. In 2011, a study found that AI systems developed in Eastern Asia have more difficulty distinguishing between Caucasian faces than between Asian faces.

That’s why, in September of 2018, Google partnered with the NeurIPS conference to launch the Inclusive Images Competition, an event that was created to help encourage the development of less biased AI image classification models.

For the competition, individuals were asked to use Open Images, an image dataset collected from North America and Europe, to train a system that can be evaluated on images collected from a different geographic region.

At this week’s NeurIPS conference, Pallavi Baljekar, a Google Brain researcher, spoke about the success of the project. Notably, the competition was only marginally successful. Although the leading models maintained relatively high accuracy in the first stages of the competition, four out of five top models didn’t predict the “bride” label when applied to the original two bride images.

However, that’s not to say that progress wasn’t made. Baljekar noted that the competition proved that, even with a small and diverse set of data, “we can improve performance on unseen target distributions.”

And in an interview, Pavel Ostyakov, a Deep Learning Engineer at Samsung AI Center and the researcher who took first place in the competition, added that demanding an entirely unbiased AI may be asking for a bit too much. Our AI systems need to be able to “stereotype” to some degree in order to make their classifications. “The problem was not solved yet, but I believe that it is impossible for neural networks to make unbiased predictions,” he said. The need to retain some biases is a sentiment that has been echoed by other AI researchers before.

Consequently, it seems that making unbiased AI systems is going to be a process that requires continuous improvement and tweaking. Yet, despite the fact that we can’t make entirely unbiased AI, we can do a lot more to make them less biased.

With this in mind, today, Google announced Open Images Extended. It’s an extension of Google’s Open Images and is intended to be a dataset that better represents the global diversity we find on our planet. The first set to be added is seeded with over 470,000 images.

On this very long road we’re traveling, it’s a step in the right direction.


The Reproducibility Problem: AI Agents Should be Trained in More Realistic Environments

By Jolene Creighton

Our world is a complex and vibrant place. It’s also remarkably dynamic, existing in a state of near constant change. As a result, when we’re faced with a decision, there are thousands of variables that must be considered.

According to Joelle Pineau, an Associate Professor at McGill University and lead of Facebook’s Artificial Intelligence Research lab in Montreal, this poses a bit of a problem when it comes to our AI agents.

During her keynote speech at the 2018 NeurIPS conference, Pineau stated that many AI researchers aren’t training their machine learning systems in proper environments. Instead of using dynamic worlds that mimic what we see in real life, much of the work that’s currently being done takes place in simulated worlds that are static and pristine, lacking the complexity of realistic environments.

According to Pineau, although these computer-constructed worlds help make research more reproducible, they also make the results less rigorous and meaningful. “The real world has incredible complexity, and when we go to these simulators, that complexity is completely lost,” she said.

Pineau continued by noting that, if we hope to one day create intelligent machines that are able to work and react like humans — artificial general intelligences (AGIs) — we must go beyond the static and limited worlds that are created by computers and begin tackling real world scenarios. “We have to break out of these simulators…on the roadmap to AGI, this is only the beginning,” she said.

Ultimately, Pineau also noted that we will never achieve a true AGI unless we begin testing our systems on more diverse training sets and forcing our intelligent agents to tackle more complex problems. “The world is your test set,” she said, concluding, “I’m here to encourage you to explore the full spectrum of opportunities…this means using separate tasks for training and testing.”

Teaching a Machine to Reason

Pineau’s primary critique was on an area of machine learning that is known as reinforcement learning (RL). RL systems allow intelligent agents to improve their decision-making capabilities through trial and error. Over time, these agents are able to learn the rules that govern good and bad choices by interacting with their environment and receiving numerical reward signals that are based on the actions that they take.
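
As a minimal sketch of that trial-and-error loop (a toy example written for this article, not code from Pineau's talk), the tabular Q-learning agent below learns, purely from numerical reward signals, which action to prefer in a trivially simple simulated environment, a reminder of how little such a pristine world actually asks of the agent.

```python
# Minimal tabular Q-learning on a toy one-dimensional "corridor": the agent
# starts in the middle and receives reward +1 only upon reaching the right end.
import random

N_STATES, GOAL = 5, 4            # states 0..4, reward at state 4
ACTIONS = (-1, +1)               # move left or move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 2                                        # start in the middle
    while s != GOAL:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0       # the numerical reward signal
        # Move the value estimate toward reward + discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, every non-goal state prefers moving right, toward the reward.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```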

Ultimately, RL systems are trained to maximize the numerical reward signals that they receive, so their decisions improve as they try more things and discover what actions yield the most reward. But unfortunately, most simulated worlds have a very limited number of variables. As a result, RL systems have very few things that they can interact with. This means that, although intelligent agents may know what constitutes good decision-making in a simulated environment, when they’re deployed in a realistic environment, they quickly become lost amidst all the new variables.

According to Pineau, overcoming this issue means creating more dynamic environments for AI systems to train on.

To showcase one way of accomplishing this, Pineau turned to Breakout, a game launched by Atari in 1976. The game’s environment is simplistic and static, consisting of a background that is entirely black. In order to inject more complexity into this simulated environment, Pineau and her team inserted videos, which are an endless source of natural noise, into the background.

Pineau argued that, by adding these videos into the equation, the team was able to create an environment that includes some of the complexity and variability of the real world. And by ultimately training reinforcement learning systems to operate in such multifaceted environments, researchers obtain more reliable findings and better prepare RL systems to make decisions in the real world.
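
A rough sketch of the idea (illustrative only, and not the actual code used by Pineau's team): treat the near-black background pixels of each game frame as transparent and composite a frame from an ordinary video behind the sprites, so the agent has to learn to ignore a constantly changing, naturalistic backdrop.

```python
# Illustrative sketch: overlay natural video frames behind an Atari-style
# game screen by replacing its near-black background pixels.
import numpy as np

def add_video_background(game_frame, video_frame, threshold=10):
    """game_frame and video_frame are HxWx3 uint8 arrays of the same shape.
    Pixels darker than `threshold` in every channel are treated as
    background and replaced with the corresponding video pixels."""
    background = np.all(game_frame < threshold, axis=-1)      # H x W mask
    augmented = game_frame.copy()
    augmented[background] = video_frame[background]
    return augmented

# In a training loop, this would be applied to every observation, stepping
# through the video so that the backdrop keeps changing:
#   obs = add_video_background(obs, video_frames[t % len(video_frames)])
```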

In order to help researchers better comprehend exactly how reliable and reproducible their results currently are — or aren’t — Pineau pointed to The 2019 ICLR Reproducibility Challenge during her closing remarks.

The goal of this challenge is to have members of the research community try to reproduce the empirical results submitted to the International Conference on Learning Representations. Then, once all of the attempts have been made, the results are sent back to the original authors. Pineau noted that, to date, the challenge has had a dramatic impact on the findings that are reported. During the 2018 challenge, 80% of authors that received reproducibility reports stated that they changed their papers as a result of the feedback.

You can download a copy of Pineau’s slides here.


Montreal Declaration on Responsible AI May Be Next Step Toward the Development of AI Policy

By Ariel Conn

Over the last few years, as concerns surrounding artificial intelligence have grown, an increasing number of organizations, companies, and researchers have come together to create and support principles that could help guide the development of beneficial AI. With FLI’s Asilomar Principles, IEEE’s treatise on the Ethics of Autonomous and Intelligent Systems, the Partnership on AI’s Tenets, and many more, concerned AI researchers and developers have laid out a framework of ethics that almost everyone can agree upon. However, these previous documents weren’t specifically written to inform and direct AI policy and regulations.

On December 4, at the NeurIPS conference in Montreal, Canadian researchers took the next step, releasing the Montreal Declaration on Responsible AI. The Declaration builds on the current ethical framework of AI, but the architects of the document also add, “Although these are ethical principles, they can be translated into political language and interpreted in legal fashion.”

Yoshua Bengio, a prominent Canadian AI researcher and founder of one of the world’s premiere machine learning labs, described the Declaration saying, “Its goal is to establish a certain number of principles that would form the basis of the adoption of new rules and laws to ensure AI is developed in a socially responsible manner.”

“We want this Declaration to spark a broad dialogue between the public, the experts and government decision-makers,” said UdeM’s rector, Guy Breton. “The theme of artificial intelligence will progressively affect all sectors of society and we must have guidelines, starting now, that will frame its development so that it adheres to our human values and brings true social progress.”

The Declaration lays out ten principles: Well-Being, Respect for Autonomy, Protection of Privacy and Intimacy, Solidarity, Democratic Participation, Equity, Diversity, Prudence, Responsibility, and Sustainable Development.

The primary themes running through the Declaration revolve around ensuring that AI doesn’t disrupt basic human and civil rights and that it enhances equality, privacy, diversity, and human relationships. The Declaration also suggests that humans need to be held responsible for the actions of artificial intelligence systems (AIS), and it specifically states that AIS cannot be allowed to make the decision to take a human life. It also includes a section on ensuring that AIS is designed with the climate and environment in mind, such that resources are sustainably sourced and energy use is minimized.

The Declaration is the result of deliberation that “occurred through consultations held over three months, in 15 different public spaces, and sparked exchanges between over 500 citizens, experts and stakeholders from every horizon.” That it was formulated in Canada is especially relevant given Montreal’s global prominence in AI research.

In his article for the Conversation, Bengio explains, “Because Canada is a scientific leader in AI, it was one of the first countries to see all its potential and to develop a national plan. It also has the will to play the role of social leader.”

He adds, “Generally speaking, scientists tend to avoid getting too involved in politics. But when there are issues that concern them and that will have a major impact on society, they must assume their responsibility and become part of the debate.”


Making an Impact: What Role Should Scientists Play in Creating AI Policy?

By Jolene Creighton

Artificially intelligent systems are already among us. They fly our planes, drive our cars, and even help doctors make diagnoses and treatment plans. As AI continues to impact daily life and alter society, laws and policies will increasingly have to take it into account. Each day, more and more of the world’s experts call on policymakers to establish clear, international guidelines for the governance of AI.

This week, at the 2018 NeurIPS conference, Edward W. Felten, Professor of Computer Science and Public Affairs at Princeton University, took up the call.

During his opening remarks, Felten noted that AI is poised to radically change everything about the way we live and work, stating that this technology is “extremely powerful and represents a profound change that will happen across many different areas of life.” As such, Felten noted that we must work quickly to amend our laws and update our policies so we’re ready to confront the changes that this new technology brings.

However, Felten argued that policy makers cannot be left to dictate this course alone — members of the AI research community must engage with them.

“Sometimes it seems like our world, the world of the research lab or the developer’s or data scientist’s cubicle, is a million miles from public policy…however, we have not only an opportunity but also a duty to be actively participating in public life,” he said.

Guidelines for Effective Engagement

Felten noted that the first step for researchers is to focus on and understand the political system as a whole. “If you look only at the local picture, it might look irrational. But, in fact, these people [policymakers] are operating inside a system that is big and complicated,” he said. To this point, Felten stated that researchers must become better informed about political processes so that they can participate in policy conversations more effectively.

According to Felten, this means the AI community needs to recognize that policy work is valid and valuable, and this work should be incentivized accordingly. He also called on the AI community to create career paths that encourage researchers to actively engage with policymakers by blending AI research and policy work.

For researchers who are interested in pursuing such work, Felten outlined the steps they should take to start an effective dialogue:

  1. Combine knowledge with preference: As a researcher, work to frame your expertise in the context of the policymaker’s interests.
  2. Structure the decision space: Based on the policymaker’s preferences, give a range of options and explain their possible consequences.
  3. Follow-up: Seek feedback on the utility of the guidance that you offered and the way that you presented your ideas.

If done right, Felten said, this protocol allows experts and policymakers to build productive engagement and trust over time.

AI Alignment Podcast: On Becoming a Moral Realist with Peter Singer

Are there such things as moral facts? If so, how might we be able to access them? Peter Singer started his career as a preference utilitarian and a moral anti-realist, and then over time became a hedonic utilitarian and a moral realist. How does such a transition occur, and which positions are more defensible? How might objectivism in ethics affect AI alignment? What does this all mean for the future of AI?

On Becoming a Moral Realist with Peter Singer is the sixth podcast in the AI Alignment series, hosted by Lucas Perry. For those of you who are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

In this podcast, Lucas spoke with Peter Singer. Peter is a world-renowned moral philosopher known for his work on animal ethics, utilitarianism, global poverty, and altruism. He’s a leading bioethicist, the founder of The Life You Can Save, and currently holds positions at both Princeton University and The University of Melbourne.

Topics discussed in this episode include:

  • Peter’s transition from moral anti-realism to moral realism
  • Why emotivism ultimately fails
  • Parallels between mathematical/logical truth and moral truth
  • Reason’s role in accessing logical spaces, and its limits
  • Why Peter moved from preference utilitarianism to hedonic utilitarianism
  • How objectivity in ethics might affect AI alignment

In this interview we discuss ideas contained in the work of Peter Singer. You can learn more about Peter’s work here and find many of the ideas discussed on this podcast in his work The Point of View of the Universe: Sidgwick and Contemporary Ethics. You can listen to the podcast above or read the transcript below.

Lucas: Hey, everyone, welcome back to the AI Alignment Podcast series. I’m Lucas Perry, and today, we will be speaking with Peter Singer about his transition from being a moral anti-realist to a moral realist. In terms of AI safety and alignment, this episode primarily focuses on issues in moral philosophy.

In general, I have found the space of moral philosophy to be rather neglected in discussions of AI alignment where persons are usually only talking about strategy and technical alignment. If it is unclear at this point, moral philosophy and issues in ethics make up a substantial part of the AI alignment problem and have implications in both strategy and technical thinking.

In terms of technical AI alignment, it has implications in preference aggregation and its methodology, in inverse reinforcement learning, and in preference learning techniques in general. It affects how we ought to proceed with inter-theoretic comparisons of value, with idealizing persons or agents in general and what it means to become realized, how we deal with moral uncertainty, and how robust preference learning versus moral reasoning systems should be in AI systems. It has very obvious implications in determining the sort of society we are hoping for right before, during, and right after the creation of AGI.

In terms of strategy, strategy has to be directed at some end and all strategies smuggle in some sort of values or ethics, and it’s just good here to be mindful of what those exactly are.

And with regards to coordination, we need to be clear, on a descriptive account, of different cultures or groups’ values or meta-ethics and understand how to move from the state of all current preferences and ethics onwards given our current meta-ethical views and credences. All in all, this barely scratches the surface, but it’s just a point to illustrate the interdependence going on here.

Hopefully this episode does a little to nudge your moral intuitions around a little bit and impacts how you think about the AI alignment problem. In coming episodes, I’m hoping to pivot into more strategy and technical interviews, so if you have any requests, ideas, or persons you would like to see interviewed, feel free to reach out to me at lucas@futureoflife.org. As usual, if you find this podcast interesting or useful, it’s really a big help if you can help share it on social media or follow us on your preferred listening platform.

As many of you will already know, Peter is a world-renowned moral philosopher known for his work on animal ethics, utilitarianism, global poverty, and altruism. He’s a leading bioethicist, the founder of The Life You Can Save, and currently holds positions at both Princeton University and The University of Melbourne. And so, without further ado, I give you Peter Singer.

Thanks so much for coming on the podcast, Peter. It’s really wonderful to have you here.

Peter: Oh, it’s good to be with you.

Lucas: So just to jump right into this, it would be great if you could just take us through the evolution of your metaethics throughout your career. As I understand, you began giving most of your credence to being an anti-realist and a preference utilitarian, but then over time, it appears that you’ve developed into a hedonic utilitarian and a moral realist. Take us through the evolution of these views and how you developed and arrived at your new ones.

Peter: Okay, well, when I started studying philosophy, which was in the 1960s, I think the dominant view, at least among people who were not religious and didn’t believe that morals were somehow an objective truth handed down by God, was what was then referred to as an emotivist view, that is the idea that moral judgments express our attitudes, particularly, obviously from the name, emotional attitudes, that they’re not statements of fact, they don’t purport to describe anything. Rather, they express attitudes that we have and they encourage others to share those attitudes.

So that was probably the first view that I held, siding with people who were non-religious. It seemed like a fairly obvious option. Then I went to Oxford and I studied with R.M. Hare who was a professor of moral philosophy at Oxford at the time and a well-known figure in the field. His view was also in this general ballpark of non-objectivist or, as we would now say, non-realist theories; non-cognitivist was another term used for them. They didn’t purport to be about knowledge.

But his view was that when we make a moral judgment, we are prescribing something. So his idea was that moral judgments fall into the general family of imperative judgments. So if I tell you shut the door, that’s an imperative. It doesn’t say anything that’s true or false. And moral judgments were a particular kind of imperative according to Hare, but they had this feature that they had to be universalizable. So by universalizable, Hare meant that if you were to make a moral judgment, your prescription would have to hold in all relevantly similar circumstances. And relevantly similar was defined in such a way that it didn’t depend on who the people were.

So, for example, if I were to prescribe that you should be my slave, the fact that I’m the slave master and you’re the slave isn’t a relevantly similar circumstance. If there’s somebody just like me and somebody just like you, that I happen to occupy your place, then the person who is just like me would also be entitled to be the slave master of me ’cause now I’m in the position of the slave.

Obviously, if you think about moral judgments that way, that does put a constraint on what moral judgments you can accept because you wouldn’t want to be a slave, presumably. So I liked this view better than the straightforwardly emotivist view because it did seem to give more scope for argument. It seemed to say look, there’s some kind of constraint that really, in practice, means we have to take everybody’s interests into account.

And I thought that was a good feature about this, and I drew on that in various kinds of applied contexts where I wanted to make moral arguments. So that was my position, I guess, after I was at Oxford, and for some decades after that, but I was never completely comfortable with it. And the reason I was not completely comfortable with it was that there was always a question you could ask about Hare’s view: where does this universalizability constraint on our moral judgments come from? And Hare’s answer was, well, it’s a feature of moral language. It’s implied in, say, using the terms ought or good or bad or duty or obligation. It’s implied that the judgments you are making are universalizable in this way.

And that, in itself, was plausible enough, but it was open to the response that well, in that case, I’m just not gonna use moral language. If moral language requires me to make universalizable prescriptions and that means that I can’t do all sorts of things or can’t advocate all sorts of things that I would want to advocate, then I just won’t use moral language to justify my conduct. I’ll use some other kind of language, maybe prudential language, language of furthering my self-interests. And, moreover, it’s not just that they can do that; tell me what’s wrong with them doing that?

So this is a kind of a question about why act morally. And it wasn’t obvious from Hare’s view what the answer to that would be, and, in particular, it didn’t seem that there would be any kind of answer to the effect that that’s irrational or you’re missing something. It seemed, really, as if it was an open choice that you had whether to use moral language or not.

So as I got further into the problem, I tried to develop arguments that would show that it was a requirement of reason, not just a requirement of moral language, that we universalize our judgments.

And yet, it was obviously a problem fitting that into Hare’s framework, which, as I’ve been saying, was a framework within this general non-cognitivist family. And for Hare, the idea that there are objective reasons for action didn’t really make sense. They were just these desires that we had, which led to us making prescriptions, and then the constraint that we universalize those prescriptions, but he explicitly talked about the possibility of objective prescriptions and said that that was a kind of nonsense, which I think comes out of the general background of the kind of philosophy that came out of logical positivism and the verificationist idea that things that you couldn’t verify were nonsense and so on. And that’s why I was pretty uncomfortable with this, but I didn’t really see bright alternatives to it for some time.

And then, I guess, gradually, I was persuaded by a number of philosophers who were respected that Hare was wrong about rejecting the idea of objective truth in morality. I talked to Tom Nagel and probably most significant was the work of Derek Parfit, especially his work On What Matters, volumes one and two, which I saw in advance in draft form. He circulated drafts of his books to lots of people who he thought might give him some useful criticism. And so I saw that many years before it came out, and the arguments did seem, to me, pretty strong, particularly the objections to the kind of view that I’d held, which, by this time, was no longer usually called emotivism, but was called expressivism, but I think it’s basically a similar view, a view in the ballpark.

And so I came to the conclusion that there is a reasonable case for saying that there are objective moral truths and this is not just a matter of our attitudes or of our preferences universalized, but there’s something stronger going on and it’s, in some ways, more like the objectivity of mathematical truths or perhaps of logical truths. It’s not an empirical thing. This is not something that comes in the world, the natural world of our senses, that you can find or prove empirically. It’s rather something that is rationally self-evident, I guess, to people who reflect on it properly and think about it carefully. So that’s how I gradually made the move towards an objectivist metaethic.

Lucas: I think here, it would be really helpful if you could thoroughly unpack what your hedonistic utilitarian objectivist meta-ethics actually looks like today, specifically getting into the most compelling arguments that you found in Parfit and in Nagel that led you to this view.

Peter: First off, I think we should be clear that being an objectivist about metaethics is one thing. Being a hedonist rather than a preference utilitarian is a different thing, and I’ll describe … There is some connection between them as I’ll describe in a moment, but I could have easily become an objectivist and remained a preference utilitarian or held some other kind of normative moral view.

Lucas: Right.

Peter: So the metaethic view is separate from that. What were the most compelling arguments here? I think one of the things that had stuck with me for a long time and that had restrained me from moving in this direction was the idea that it’s hard to know what you mean when you say that something is an objective truth outside the natural world. So in terms of saying that things are objectively true in science, the truths of scientific investigation, we can say well, there’s all of this evidence for it. No rational person would refuse to believe this once they were acquainted with all of this evidence. So that’s why we can say that that is objectively true.

But that’s clearly not going to work for truths in ethics, which, assuming of course that we’re not naturalists, that we don’t think this can be deduced from some examination of human nature or the world, I certainly don’t think that and the people that are influential on me, Nagel and Parfit in particular, also didn’t think that.

So the only restraining question was, well, what could this really amount to? I had known, going back to the intuitionists in the early 20th century, people like W.D. Ross or, earlier, Henry Sidgwick, who was a utilitarian objectivist philosopher, that people made the parallel with mathematical proofs, that there are mathematical proofs that we see as true by direct insight into their truth, by their self-evidence, but I had been concerned about this. I’d never really done a deep study of the philosophy of mathematics, but I’d been concerned about this because I thought there’s a case for saying that mathematical truths are analytic truths, that they’re truths in virtue of the meanings of the terms and in virtue of the way we define what we mean by the numbers and by equals or the various other terms that we use in mathematics, so that it’s basically just the unpacking of an analytic system.

The philosophers that I respected didn’t think this; the view had been more popular at the time when I was a student and it had stuck with me for a while, and although it’s not disappeared, I think it’s perhaps not as widely held a view now as it was then. So that plus the arguments that were being made about how do we understand mathematical truths, how do we understand the truths of logical inference. We grasp these as self-evident. We find them undeniable, yet this is, again, a truth that is not part of the empirical world, but it doesn’t just seem that it’s an analytic truth either. It doesn’t just seem that it’s the meanings of the terms. It does seem that we know something when we know the truths of logic or the truths of mathematics.

On this basis, it started to seem like the idea that there are these non-empirical truths in ethics as well might be more plausible than I thought it was before. And I also went back and read Henry Sidgwick who’s a philosopher that I greatly admire and that Parfit also greatly admired, and looked at his arguments about what he saw as, what he called, moral axioms, and that obviously makes the parallel with axioms of mathematics.

I looked at them and it did seem to me difficult to deny, that is, claims, for example, that there’s no reason for preferring one moment of our existence to another in itself. In other words, that we shouldn’t discount the future, except for things like uncertainty, but otherwise the future is just as important as the present; and an idea somewhat similar to Hare’s universalizability, but somewhat differently stated by Sidgwick: that if something is right for someone, then it’s right independently of the identities of the people involved. But for Sidgwick, as I say, that was, again, a truth of reason, not simply an implication of the use of particular moral terms. Thinking about that, that started to seem right to me, too.

And, I guess, finally, Sidgwick’s claim that the interests of one individual are no more important than the interests of another, assuming that the good that can be done to each person, that is, the extent of their interests, is similar. Sidgwick’s claim was that people reflecting carefully on these truths can see that they’re true, and I thought about that, and it did seem to me that … it was pretty difficult to deny, not that nobody will deny them, but that they do have a self-evidence about them. That seemed to me to be a better basis for ethics than the views that I’d been holding up to that point, the views that came out of, originally, emotivism and then out of prescriptivism.

I thought there was a reasonable chance that that was right; as you say, I came to give it more credence than I had. It’s not that I’m 100% certain that it’s right by any means, but it’s a plausible view that’s worth defending and trying to see what objections people make to it.

Lucas: I think there are three things here that would be helpful for us to dive in more on. The first thing is, and this isn’t a part of metaethics that I’m particularly acquainted with, so, potentially, you can help guide us through this part a little bit more, this non-naturalism versus naturalism argument. Your view, I believe you’re claiming, is a non-naturalist view: you’re claiming that you cannot deduce the axioms of ethics, or the basis of ethics, from a descriptive or empirical account of the universe?

Peter: That’s right. There certainly are still naturalists around. I guess Peter Railton is a well-known, contemporary, philosophical naturalist. Perhaps Frank Jackson, my Australian friend and colleague. And some of the naturalist views have become more complicated than they used to be. I suppose the original idea of naturalism that people might be more familiar with is simply the claim that there is a human nature and that acting in accordance with that human nature is the right thing to do, so you describe human nature and then you draw from that what are the characteristics that we ought to follow.

That, I think, just simply doesn’t work. I think it has its origins in a religious framework in which you think that God has created our nature with particular purposes that we should behave in certain ways. But the naturalists who defend it, going back to Aquinas even, maintain that it’s actually independent of that view.

If you, in fact, take an evolutionary view of human nature, as I think we should, then our nature is morally neutral. You can’t derive any moral conclusions from what our nature is like. It might be relevant to know what our nature is like in order to know that if you do one thing, that might lead to certain consequences, but it’s quite possible that, for example, our nature is to seek power and to use force to obtain power, that that’s an element of human nature or, on a group level, to go to war in order to have power over others, and yet naturalists wouldn’t wanna say that those are the right things. They would try and give some account as to why some of that’s a corruption of human nature.

Lucas: Putting aside naturalist accounts that involve human nature, what about a purely descriptive or empirical understanding of the world, which includes, for example, sentient beings and suffering, and suffering is like a substantial and real ontological fact of the universe and the potential of deducing ethics from facts about suffering and what it is like to suffer? Would that not be a form of naturalism?

Peter: I think you have to be very careful about how you formulate this. What you said sounds a little bit like what Sam Harris says in his book, The Moral Landscape, which does seem to be a kind of naturalism because he thinks that you can derive moral conclusions from science, including exactly the kinds of things that you’ve talked about, but I think there’s a gap there, and the gap has to be acknowledged. You can certainly describe suffering and you can describe happiness conversely, but you need to get beyond description if you’re going to have a normative judgment. That is if you’re gonna have a judgment that says what we ought to do or what’s the right thing to do or what’s a good thing to do, there’s a step that’s just being left out.

If somebody says sentient beings can suffer pain or they can be happy, this is what suffering and pain are like, this is what being happy is like; therefore, we ought to promote happiness. This goes back to David Hume, who pointed out that various moral arguments describe the world using is, is, is, this is the case, and then, suddenly, but without any explanation, they say and therefore, we ought. It needs to be explained how you get from the is statements to the ought statements.

Lucas: It seems that reason, whatever reason might be and however you might define that, seems to do a lot of work at the foundation of your moral view because it seems that reason is what leads you towards the self-evident truth of certain foundational ethical axioms. Why might we not be able to pull the same sort of move with a form of naturalistic moral realism like Sam Harris develops by simply stating that given a full descriptive account of the universe and given first person accounts of suffering and what suffering is like, that it is self-evidently true that built into the nature of that sort of property or part of the universe is that it ought to be diminished?

Peter: Well, if you’re saying that … There is a fine line, maybe this is what you’re suggesting, between saying from the description, we can deduce what we ought to do and between saying when we reflect on what suffering is and when we reflect on what happiness is, we can see that it is self-evident that we ought to promote happiness and we ought to reduce suffering. So I regard that as a non-naturalist position, but you’re right that the two come quite close together.

In fact, this is one of the interesting features of volume three of Parfit’s On What Matters, which was only published posthumously, but was completed before he died, and in that, he responds to essays that are in a book that I edited called Does Anything Really Matter. The original idea was that he would respond in that volume, but, as often happened with Parfit, he wrote responses at such length that it needed to be a separate volume. It would’ve made the work too bulky to put them together, but Peter Railton had an essay in Does Anything Really Matter, and Parfit responded to it, and then he invited Railton to respond to his response, and, essentially, they are saying that yeah, their views have become closer anyway, there’s been a convergence, which is pretty unusual in philosophy because philosophers tend to emphasize the differences between their views.

Between what Parfit calls his non-natural objectivist view and Railton’s naturalist view, because Railton’s is a more sophisticated naturalist view, the line starts to become a little thin, I agree. But, to me, the crucial thing is that you’re not just saying here’s this description; therefore, we ought to do this. But you’re saying if we understand what we’re talking about here, we can have, as an intuition of self-evidence, the proposition that it’s good to promote this or it’s good to try to prevent this. So that’s the moral proposition, that it is good to do this. And that’s the proposition for which you have to take some other step. You can say it’s self-evident, but you have to take some other step from simply saying this is what suffering is like.

Lucas: Just to sort of capture and understand your view a bit more here, and going back to, I think, mathematics and reason and what reason means to you and how it operates at the foundation of your ethics, I think that a lot of people will sort of get lost or potentially feel it is maybe an arbitrary or cheap move to …

When thinking about the foundations of mathematics, there are foundational axioms, which are self-evidently true, which no one will deny, and then translating that move into the foundations of ethics, into determining what we ought to do, it seems like there would be a lot of people being lost there, there would be a lot of foundational disagreement there. When is it permissible or okay or rational to make that sort of move? What does it mean to say that these really foundational parts of ethics are self-evidently true? How is it not the case that that’s simply an illusion or simply a byproduct of evolution, that we’re confused into thinking that certain fictions we’ve evolved are self-evidently true?

Peter: Firstly, let me say, as I’ve mentioned before, I don’t claim that we can be 100% certain about moral truths, but I do think that it’s a plausible view. One reason, and this relates to what you just mentioned about being a product of evolution, is something that I argued with my co-author Katarzyna de Lazari-Radek in the 2014 book we wrote called The Point of View of the Universe, which is, in fact, a phrase from Sidgwick. That argument is that there are many moral judgments that we make that we know have evolutionary origins, so lots of things that we think of as wrong originated because it would not have helped us to survive, or would not have helped a small tribal group to survive, to allow certain kinds of conduct. And some of those, we might wanna reject today.

We might think, for example, we have an instinctive repugnance of incest, but Jonathan Haidt has shown that even if you describe a case where adult brothers and sisters choose to have sex and nothing bad happens as a result of that, their relationship remains as strong as ever, and they have fun, and that’s the end of it, people still say oh, somehow that’s wrong. They try to make up reasons why it’s wrong. That, I think, is an example of an evolved impulse, which, perhaps, is no longer really apposite because we have effective contraception, and so the evolutionary reasons why we might want to avoid incest are not necessarily there.

But in the case of the kinds of things that I’m talking about and that Sidgwick is talking about, like the idea that everyone’s good is of equal significance, it is hard to perceive why we would’ve evolved to have that attitude because, in fact, it seems harmful to our prospects of survival and reproduction to give equal weight to the interests of complete strangers.

The fact that people do think this, and if you look at a whole lot of different independent, historical, ethical traditions in different cultures and different parts of the world at different times, you do find many thinkers who converge on something like this idea in various formulations. So why do they converge on this given that it doesn’t seem to have that evolutionary justification or explanation as to why it would’ve evolved?

I think that suggests that it may be a truth of reason and, of course, you may then say well, but reason has also evolved, and indeed it has, but I think that reason may be a little different in that we evolved a capacity to reason because it met various specific problem-solving needs and helped us to survive in lots of circumstances. But it may then enable us to see things that have no survival value, just as no doubt simple arithmetic has a survival value, but understanding the truths of higher mathematics doesn’t really have a survival value, so maybe similarly in ethics, there are some of these more abstract universal truths that don’t have a survival value, but for which, nevertheless, the best explanation for why many people seem to come to these views is that they’re truths of reason, and once we’re capable of reasoning, we’re capable of understanding these truths.

Lucas: Let’s start off at reason and reason alone. When moving from reason and thinking, I guess, alongside here about mathematics for example, how is one moving specifically from reason to moral realism and what is the metaphysics of this kind of moral realism in a naturalistic universe without anything supernatural?

Peter: I don’t think that it needs to have a very heavyweight metaphysical presence in the universe. Parfit actually avoided the term realism in describing his view. He called it non-naturalistic normative objectivism because he thought that realism carried this idea that it was part of the furniture of the universe, as philosophers say, that the universe consists of the various material objects, but in addition to that, it consists of moral truths, as if they’re somehow sort of floating out there in space, and that’s not the right way to think about it.

I’d say, rather, the right way to think about it is as, you know, we do with logical and mathematical truths: once beings are capable of a certain kind of thought, they can move towards these truths. They have the potential and capacity for thinking along these lines. One of the claims that I would make as a consequence of my acceptance of objectivism in ethics, a rationally based objectivism, is that the morality that we humans have developed on Earth, at this more abstract, universal level anyway, is something that aliens from another galaxy could also have achieved if they had similar capacities of thought or maybe greater capacities of thought. It’s always a possible logical space, you could say, or a rational space, that is there and that beings may be able to discover once they develop those capacities.

You can see mathematics in that way, too. It’s one of a number of possible ways of seeing mathematics and of seeing logic, but they’re just timeless things, in some way truths or laws, if you like, but they don’t exist in the sense in which the physical universe exists.

Lucas: I think that’s really a very helpful way of putting it. So the claim here is that through reason, one can develop the axioms of mathematics and then eventually develop quantum physics and other things. And similarly, when reason is applied to thinking about what one ought to do, or to thinking about the status of sentient creatures, one is applying logic and reason to this rational space, and this rational space has truths in the same way that mathematics does?

Peter: Yes, that’s right. It has perhaps only a very small number of truths, and fairly abstract truths, Sidgwick came up with three axioms, but they are truths. That’s the important aspect. They’re not just particular attitudes, which beings who evolved as Homo sapiens all have and are likely to understand and accept, but which beings who evolved in a different galaxy in a quite different way would not accept. My claim is that if they are also capable of reasoning, if evolution had again produced rational beings, they would be able to see the truths in the same way as we can.

Lucas: So spaces of rational thought and of logic, which can or cannot be explored, seem very conceptually queer to me, such that I don’t even really know how to think about it. I think that one would worry that one is applying reason, whatever reason might be, to a fictional space. I mean you were discussing earlier that some people believe mathematics to be simply the formalization of what is analytically true about the terms and judgments and the axioms and then it’s just a systematization of that and an unpacking of it from beginning into infinity. And so, I guess, it’s unclear to me how one can discern spaces of rational inquiry which are real from ones which are anti-real or which are fictitious. Does that make sense?

Peter: It’s a problem. I’m not denying that there is something mysterious, I think maybe my former professor, R.M. Hare, would have said queer … No, it was John Mackie, actually, John Mackie was also at Oxford when I was there, who said these must be very queer things if there are some objective moral truths. I’m not denying that it’s something that, in a way, would be much simpler if we could explain everything in terms of empirical examination of the natural world and say there’s only that plus there are formal systems. There are analytic systems.

But I’m not persuaded that that’s a satisfactory explanation of mathematics or logic either. Those who are convinced that this is a satisfactory way of explaining logic and mathematics may well think that they don’t need this explanation of ethics either, but if we need to appeal to something outside the natural realm to understand some of the other things about the way we reason, then perhaps ethics is another candidate for this.

Lucas: So just drawing parallels again here with mathematics ’cause I think it’s the most helpful. Mathematics is incredible for helping us to describe and predict the universe. The president of the Future of Life Institute, Max Tegmark, develops an idea of potential mathematical Platonism or realism where the universe can be understood primarily as, and sort of ontologically, a mathematical object within, potentially, a multiverse because as we look into the properties and features of quarks and the building blocks of the universe, all we find is more mathematical properties and mathematical relationships.

So within the philosophy of math, there are certainly, it seems, open questions about what math is and what the relation of mathematics is to the fundamental metaphysics and ontology of the universe and potential multiverse. So in terms of ethics, what information or insight do you think we’re missing that could further inform our view that there potentially is objective morality, or whatever that means, or inform us that there is a space of moral truths which can be arrived at by non-anthropocentric minds, like the alien minds you said could also arrive at the moral truths as they could also arrive at mathematical truths?

Peter: So what further insight would show that this was correct, other, presumably, than the arrival of aliens who start swapping mathematical theorems with us?

Lucas: And have arrived at the same moral views. For example, if they show up and they’re like hey, we’re hedonistic consequentialists and we’re really motivated to-

Peter: I’m not saying they’d necessarily be hedonistic consequentialists, but they would-

Lucas: I think they should be.

Peter: That’s a different question, right?

Lucas: Yeah, yeah, yeah.

Peter: We haven’t really discussed steps to get there yet, so I think they’re separate questions. My idea is that they would be able to see that if we had similar interests to the ones that they did, then those interests ought to get similar weight, that they shouldn’t ignore our interests just because we’re not members of whatever civilization or species they are. I would hope that if they are rationally sophisticated, they would at least be able to see that argument, right?

Some of them, just as with us, might see the argument and then say yes, but I love the tastes of your flesh so much I’m gonna kill you and eat you anyway. So, like us, they may not be purely rational beings. We’re obviously not purely rational beings. But if they can get here and contact us somehow, they should be sufficiently rational to be able to see the point of the moral view that I’m describing.

But that wasn’t a very serious suggestion about waiting for the aliens to arrive, and I’m not sure that I can give you much of an answer to say what further insights are relevant here. Maybe it’s interesting to try and look at this cross-culturally, as I was saying, and to examine the way that great thinkers of different cultures and different eras have converged on something like this idea despite the fact that it seems unlikely to have been directly produced by evolution in the same way that our other more emotionally driven moral reactions are.

Peter: I don’t know that the argument can go any further, and it’s not completely conclusive, but I think it remains plausible. You might say well, that’s a stalemate. Here are some reasons for thinking morality’s objective and other reasons for rejecting that, and that’s possible. That happens in philosophy. We get down to bedrock disagreements and it’s hard to move people with different views.

Lucas: What is reason? One could also view reason as some human-centric bundle of both logic and intuitions, and one can be mindful that the intuitions, which are sort of bundled with this logic, are almost arbitrary consequences of evolution. So what is reason fundamentally and what does it mean that other reasonable agents could explore spaces of math and morality in similar ways?

Peter: Well, I would argue that there are common principles that don’t depend on our specific human nature and don’t depend on the path of our evolution. I accept, to some extent, that the path of our evolution has given us the capacity to solve various problems through thought, that that is what our reason amounts to, and that, therefore, we have insight into these truths that we would not have if we did not have that capacity. This kind of reasoning you can think of as something that goes beyond specific problem-solving skills to insights into laws of logic, laws of mathematics, and laws of morality as well.

Lucas: When we’re talking about axiomatic parts of mathematics and logics and, potentially, ethics here as you were claiming with this moral realism, how is it that reason allows us to arrive at the correct axioms in these rational spaces?

Peter: We developed the ability, when we’re presented with these things, to consider whether we can deny them or not, whether they are truly self-evident. We can reflect on them, we can talk to others about them, we can consider biases that we might have that might explain why we believe them and see whether there are any such biases, and once we’ve done all that, we’re left with the insight that some things we can’t deny.

Lucas: I guess I’m just sort of poking at this idea of self-evidence here, which is doing a lot of work in the moral realism. Whether or not something is self-evident, at least to me, it seems like a feeling, like I just look at the thing and I’m like clearly that’s true, and if I get a little bit meta, I ask okay, why is it that I think that this thing is obviously true? Well, I don’t really know, it just seems self-evidently true. It just seems so and this, potentially, just seems to be a consequence of evolution and of being imbued with whatever reason is. So I don’t know if I can always trust my intuitions about things being self-evidently true. I’m not sure how to navigate my intuitions and views of what is self-evident in order to come upon what is true.

Peter: As I said, it’s possible that we’re mistaken, that I’m mistaken in these particular instances. I can’t exclude that possibility, but it seems to me that there’s the hypothesis that we hold these views because they are self-evident; I’ve looked for evolutionary explanations and, as I’ve said, not really found them, so that’s as far as I can go with that.

Lucas: Just moving along here a little bit, and I’m becoming increasingly mindful of your time, would you like to cover briefly this sort of shift that you had from preference utilitarianism to hedonistic utilitarianism?

Peter: So, again, let’s go back to my autobiographical story. For Hare, the only basis for making moral judgments was to start from our preferences and then to universalize them. There could be no arguments about something else being intrinsically good or bad, whether it was happiness or whether it was justice or freedom or whatever because that would be to import some kind of objective claims into this debate that just didn’t have a place in his framework, so all I could do was take my preferences and prescribe them universally, and, as I said, that involved putting myself in the position of the others affected by my action and asking whether I could still accept it.

When you do that, and if you, let’s say your action affects many people, not just you and one other, what you’re really doing is you’re trying to sum up how this would be from the point of view of every one of these people. So if I put myself in A’s position, would I be able to accept this? But then I’ve gotta put myself in B’s position as well, and C, and D, and so on. And to say can I accept this prescription universalized is to say if I were living the lives of all of those people, would I want this to be done or not? And that’s a kind of, as they say, a summing of the extent to which doing this satisfies everyone’s preferences net on balance after deducting, of course, the way in which it thwarts or frustrates or is contrary to their preferences.

So this seemed to be the only way in which you could go further with Hare’s views as he eventually worked them out and changed them a little bit over the years, in his later formulations. So it was a kind of preference utilitarianism that it led to, and I was reasonably happy with that, and I accepted the idea that this meant that what we ought to be doing is to maximize the satisfaction of preferences and avoid thwarting them.

And it gives you, in many cases, of course, somewhat similar conclusions to what you would say if what we wanna do is maximize happiness and minimize suffering or misery because for most people, happiness is something that they very much desire and misery is something that they don’t want. Some people might have different preferences that are not related to that, but for most people, they will probably come down some way or other to how it relates to their well-being, their interests.

There are certainly objections to this, and some of the objections relate to preferences that people have when they’re not fully informed about things. And Hare’s view was that, in fact, the preferences that we should universalize are the preferences people should have when they are fully informed and when they’re thinking calmly, they’re not, let’s say, angry with somebody and therefore they have a strong preference to hit him in the face, even though this will be bad for them and bad for him.

So the preference view sort of then took this further step of saying it’s the preferences that you would have if you were well informed and rational and calm, and that seemed to solve some problems with preference utilitarianism, but it gave rise to other problems. One of the problems was, well, does this mean that if somebody is misinformed in a way that you can be pretty confident they’re never gonna be correctly informed, you should still do what they would want if they were correctly informed?

An example of this might be someone who’s a very firm religious believer and has been all their life, and let’s say one of their religious beliefs is that having sex outside marriage is wrong because God has forbidden it, let’s say, it’s contrary to the commandments or whatever, but given that, let’s say, let’s just assume, there is no God, therefore, a priori there’s no commandments that God made against sex outside marriage, and given that if they didn’t believe in God, they would be happy to have sex outside marriage, and this would make them happier, and would make their partner happy as well, should I somehow try to wangle things so that they do have sex outside marriage even though, as they are now, they prefer not to?

And that seems a bit of a puzzle, really. It seems highly paternalistic to ignore their preferences, which are based on their beliefs, even though you’re convinced that those beliefs are false. So there are puzzles and paradoxes like that. And then there was another argument that does actually, again, come out of Sidgwick, although I didn’t find it in Sidgwick until I read it in other philosophers later.

Again, I think Peter Railton is one who uses this, and that is that if you’re really asking what people would do if they were rational and fully informed, you have to make judgments about what is a rational and fully informed view in this situation. And that might involve even the views that we’ve just been discussing, that if you were rational, you would know what the objective truth was and you would want to do it. So, at that level, a preference view actually seems to amount to a different view, an objectivist view, where you would have to actually know what things were good.

So, as I say, it had a number of internal problems, even just if you assume the meta-ethic that I was taking from Hare originally. But then if, as happened with me, you become convinced that there can be objective moral truths, this, in some ways, opened up the field to other possible ideas as to what was intrinsically good because now you could argue that something was intrinsically good even if it was not something that people preferred, and in that light, I went back to reading some of the classical utilitarians, again, particularly, Sidgwick and his arguments for why happiness rather than the satisfaction of desires is the ultimate value, something that is of intrinsic value, and it did seem to overcome these problems with preference utilitarianism that had been troubling me.

It certainly had some paradoxes of its own, some things that it seemed not to handle as well, but after thinking about it, again, I decided that it was more likely than not that a hedonistic view was the right view. I wouldn’t put it stronger than that. I still think preference utilitarianism has some things to be said for it, and there are also, of course, views that say yes, happiness is intrinsically good and suffering is intrinsically bad, but they’re not the only things that are intrinsically good or bad, things like justice or freedom or whatever. There are various other candidates that people have put forward, many of them, in fact, as being objectively good or bad. So those are also possibilities.

Lucas: When you mentioned that happiness or certain sorts of conscious states of sentient creatures can be seen as intrinsically good or valuable, keeping in mind the moral realism that you hold, what is the metaphysical status of experiences in the universe given this view? Is it that happiness is good based off of the application of reason and the rational space of ethics? Unpack the ontology of happiness and the metaphysics here a bit.

Peter: Well, of course it doesn’t change what happiness is. That’s to say that it’s of intrinsic value, but that is the claim that I’m making. That the world is a better place if it has more happiness in it and less suffering in it. That’s a judgment that I’m making about the state of the universe. Obviously, there have to be beings who can be happy or can be miserable, and that requires a conscious mind, but the judgment that the universe is better with more happiness and less suffering is mind independent. I think … Let’s imagine that there were beings that could feel pain and pleasure, but could not make any judgments about anything of value. They’re like some non-human animals, I guess. It would still be the case that the universe was better if those non-human animals suffered less and had more pleasure.

Lucas: Right. Because it would be a sort of intrinsic quality or property of the experience that it be valuable or disvaluable. So yeah, thanks so much for your time, Peter. It’s really been wonderful and informative. If people would like to follow you or check you out somewhere, where can they go ahead and do that?

Peter: I have a website, which actually I’m in the process of reconstructing a bit, but it’s Petersinger.info. There’s a Wikipedia page. If they wanna look at things that I’m involved in, they can look at thelifeyoucansave.org, which is the nonprofit organization that I’ve founded that recommends effective charities that people can donate to. That probably gives people a bit of an idea. There’s books that I’ve written that are discussing these things. I probably mentioned The Point of View of the Universe, which goes into the things we’ve discussed today, probably more thoroughly than anything else. For people who don’t wanna read a big book, there’s also, in Oxford University Press’s Very Short Introductions series, the book on utilitarianism, again co-authored by the same co-author as The Point of View of the Universe, Katarzyna de Lazari-Radek and myself, and that’s just a hundred page version of some of these arguments we’ve been discussing.

Lucas: Wonderful. Well, thanks again, Peter. We haven’t ever met in person, but hopefully I’ll catch you around the Effective Altruism conference track sometime soon.

Peter: Okay, hope so.

Lucas: Alright, thanks so much, Peter.

Hey, it’s post-podcast Lucas here and just wanted to chime in with some of my thoughts and tie this all into AI thinking. For me, the most consequential aspect of moral thought in this space and moral philosophy, generally, is how much disagreement there is between people who’ve thought long and hard about this issue, what an enormous part of AI alignment this makes up, and the effects different moral and meta-ethical views have on preferred AI alignment methodology.

Current general estimates by AI researchers put human-level AI on the decade-to-century timescale, with about a 50% probability by mid-century and with that obviously increasing over time, and it’s quite obvious that moral philosophy, ethics, and issues of value and meaning will not be solved on that timescale. So if we assume a worst-case success story, where technical alignment and coordination and strategy issues continue in their standard, rather morally messy way, with how we currently unreflectively deal with things, where moral information isn’t taken very seriously, then I’m really hoping that technical alignment and coordination succeed well enough for us to create a very low-level aligned system that we’re able to pull the brakes on while we work hard on issues of value, ethics, and meaning: the end towards which that AGI will be aimed. Otherwise, it seems very clear that, given all of this shared moral uncertainty, we risk value drift or catastrophically suboptimal or even negative futures.

Turning to Peter’s views that we discussed here today, if axioms of morality are accessible through reason alone, as the axioms of mathematics appear to be, then we ought to consider the implications here for how we want to progress with AI systems and AI alignment more generally.

If we take human beings to be agents of limited or semi-rationality, then we could expect that some of us, or some fraction of us, have gained access to what might potentially be core axioms of the logical space of morality. When AI systems are trained on human data in order to infer and learn human preferences, given Peter’s view, this could be seen as a way of learning the moral thinking of imperfectly rational beings. This, or any empirical investigation, given Peter’s views, would not be able to arrive at any clear moral truth; rather, it would find areas where semi-rational beings like ourselves generally tend to converge in this space.

This would be useful or potentially passable up until AGI, but if such a system is to be fully autonomous and safe, then a more robust form of alignment is necessary. Putting aside whatever reason might be and how it gives rational creatures access to self-evident truths and rational spaces, if the AGI we create is one day a fully rational agent, then it, perhaps, would arrive at the self-evident truths of mathematics and logic, and even morality, just as aliens on another planet might if they are fully rational, as is Peter’s view. If so, this would potentially be evidence of this view being true, and we can also reflect here that an AGI, from this point of using reason to have insight into the core truths of logical spaces, could reason much better and more impartially than any human, and so could fully explore and realize universal truths of morality.

At this point, we would essentially have a perfect moral reasoner on our hands with access to timeless universal truths. Now the question would be could we trust it and what would ever be sufficient reasoning or explanation given to humans by this moral oracle that would satisfy and satiate us of our appetites and desires to know moral truth and to be sure that we have arrived at moral truth?

It’s above my pay grade what rationality or reason actually is and might be prior to certain logical and mathematical axioms, how such a truth-seeking meta-awareness can grasp these truths as self-evident, or whether the self-evidence of the truths of mathematics and logic is programmed into us by evolution trying and failing over millions of years. But maybe that’s an issue for another time. Regardless, we’re doing philosophy, computer science, and poli-sci on a deadline, so let’s keep working on getting it right.

If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

[end of recorded material]

Cognitive Biases and AI Value Alignment: An Interview with Owain Evans

At the core of AI safety lies the value alignment problem: how can we teach artificial intelligence systems to act in accordance with human goals and values?

Many researchers interact with AI systems to teach them human values, using techniques like inverse reinforcement learning (IRL). In theory, with IRL, an AI system can learn what humans value and how to best assist them by observing human behavior and receiving human feedback.
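
To make this concrete, here is a minimal, illustrative sketch of the kind of inference IRL rests on; it is not any particular researcher's actual method. It assumes the observed human is a Boltzmann-rational chooser whose probability of picking an option grows with that option's reward, models the reward as linear in hand-chosen features, and fits the weights to observed choices by gradient ascent on the log-likelihood. The feature names and numbers are invented for illustration.

```python
# Toy preference inference in the spirit of IRL (illustrative only).
# Assumption: a Boltzmann-rational chooser picks option i with probability
# proportional to exp(reward_i), where reward_i is linear in known features.
import numpy as np

# Each row is an option's feature vector: [tasty, healthy] (made-up values).
options = np.array([
    [1.0, 0.0],   # greasy snack
    [0.2, 1.0],   # salad
    [0.6, 0.6],   # sandwich
])

observed_choices = [0, 2, 1, 2, 2, 1, 2, 0, 2, 1]  # indices of chosen options

def choice_probs(weights):
    """Softmax (Boltzmann) choice probabilities under linear rewards."""
    rewards = options @ weights
    exp_r = np.exp(rewards - rewards.max())
    return exp_r / exp_r.sum()

weights = np.zeros(2)
learning_rate = 0.5
for _ in range(500):
    probs = choice_probs(weights)
    # Log-likelihood gradient: chosen features minus expected features.
    expected_features = probs @ options
    grad = sum(options[c] - expected_features for c in observed_choices)
    weights += learning_rate * grad / len(observed_choices)

print("inferred reward weights (tasty, healthy):", weights.round(2))
print("implied choice probabilities:", choice_probs(weights).round(2))
```

Full IRL extends this to sequential behavior in an environment rather than one-shot choices, but the core move is the same: recover a reward function that best explains the observed behavior, then use it to predict what the person wants.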

But human behavior doesn’t always reflect human values, and human feedback is often biased. We say we want healthy food when we’re relaxed, but then we demand greasy food when we’re stressed. Not only do we often fail to live according to our values, but many of our values contradict each other. We value getting eight hours of sleep, for example, but we regularly sleep less because we also value working hard, caring for our children, and maintaining healthy relationships.

AI systems may be able to learn a lot by observing humans, but because of our inconsistencies, some researchers worry that systems trained with IRL will be fundamentally unable to distinguish between value-aligned and misaligned behavior. This could become especially dangerous as AI systems become more powerful: inferring the wrong values or goals from observing humans could lead these systems to adopt harmful behavior.

Distinguishing Biases and Values

Owain Evans, a researcher at the Future of Humanity Institute, and Andreas Stuhlmüller, president of the research non-profit Ought, have explored the limitations of IRL in teaching human values to AI systems. In particular, their research exposes how cognitive biases make it difficult for AIs to learn human preferences through interactive learning.

Evans elaborates: “We want an agent to pursue some set of goals, and we want that set of goals to coincide with human goals. The question then is, if the agent just gets to watch humans and try to work out their goals from their behavior, how much are biases a problem there?”

In some cases, AIs will be able to understand patterns of common biases. Evans and Stuhlmüller discuss the psychological literature on biases in their paper, Learning the Preferences of Ignorant, Inconsistent Agents, and in their online book, agentmodels.org. An example of a common pattern discussed in agentmodels.org is “time inconsistency.” Time inconsistency is the idea that people’s values and goals change depending on when you ask them. In other words, “there is an inconsistency between what you prefer your future self to do and what your future self prefers to do.”

Examples of time inconsistency are everywhere. For one, most people value waking up early and exercising if you ask them before bed. But come morning, when it’s cold and dark out and they didn’t get those eight hours of sleep, they often value the comfort of their sheets and the virtues of relaxation. From waking up early to avoiding alcohol, eating healthy, and saving money, humans tend to expect more from their future selves than their future selves are willing to do.
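
A minimal sketch of the kind of hyperbolic-discounting model used to formalize time inconsistency (the utility numbers and the two-hour delay below are invented for illustration): because near-term costs and rewards are discounted much less steeply than distant ones, the same "wake up early and exercise" tradeoff flips sign depending on when it is evaluated.

```python
def hyperbolic_discount(delay_hours, k=0.3):
    """Hyperbolic discount factor: value is scaled by 1 / (1 + k * delay)."""
    return 1.0 / (1.0 + k * delay_hours)

# Made-up utilities for the "wake up early and exercise" decision.
EXERCISE_BENEFIT = 10.0   # enjoyed ~2 hours after the alarm (post-workout glow)
COMFORT_OF_BED = 7.0      # enjoyed immediately at the moment of the alarm

def net_value_of_exercising(hours_until_alarm):
    """Net value of committing to exercise, judged `hours_until_alarm` in advance."""
    benefit = EXERCISE_BENEFIT * hyperbolic_discount(hours_until_alarm + 2)
    cost = COMFORT_OF_BED * hyperbolic_discount(hours_until_alarm)
    return benefit - cost

print("the night before (8h ahead):", round(net_value_of_exercising(8), 2))  # > 0: plans to exercise
print("when the alarm rings (0h):  ", round(net_value_of_exercising(0), 2))  # < 0: stays in bed
```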

With systematic, predictable patterns like time inconsistency, IRL could make progress with AI systems. But often our biases aren’t so clear. According to Evans, deciphering which actions coincide with someone’s values and which actions spring from biases is difficult or even impossible in general.

“Suppose you promised to clean the house but you get a last minute offer to party with a friend and you can’t resist,” he suggests. “Is this a bias, or your value of living for the moment? This is a problem for using only inverse reinforcement learning to train an AI — how would it decide what are biases and values?”


Learning the Correct Values

Despite this conundrum, understanding human values and preferences is essential for AI systems, and developers have a very practical interest in training their machines to learn these preferences.

Already today, popular websites use AI to learn human preferences. With YouTube and Amazon, for instance, machine-learning algorithms observe your behavior and predict what you will want next. But while these recommendations are often useful, they have unintended consequences.

Consider the case of Zeynep Tufekci, an associate professor at the School of Information and Library Science at the University of North Carolina. After watching videos of Trump rallies to learn more about his voter appeal, Tufekci began seeing white nationalist propaganda and Holocaust denial videos on her “autoplay” queue. She soon realized that YouTube’s algorithm, optimized to keep users engaged, predictably suggests more extreme content as users watch more videos. This led her to call the website “The Great Radicalizer.”

This value misalignment in YouTube algorithms foreshadows the dangers of interactive learning with more advanced AI systems. Instead of optimizing advanced AI systems to appeal to our short-term desires and our attraction to extremes, designers must be able to optimize them to understand our deeper values and enhance our lives.

Evans suggests that we will want AI systems that can reason through our decisions better than humans can, understand when we are making biased decisions, and “help us better pursue our long-term preferences.” However, this may mean that AIs sometimes suggest things that seem bad to humans at first blush.

One can imagine an AI system suggesting a brilliant, counterintuitive modification to a business plan, and the human just finds it ridiculous. Or maybe an AI recommends a slightly longer, stress-free driving route to a first date, but the anxious driver takes the faster route anyway, unconvinced.

To help humans understand AIs in these scenarios, Evans and Stuhlmüller have researched how AI systems could reason in ways that are comprehensible to humans and can ultimately improve upon human reasoning.

One method (invented by Paul Christiano) is called “amplification,” where humans use AIs to help them think more deeply about decisions. Evans explains: “You want a system that does exactly the same kind of thinking that we would, but it’s able to do it faster, more efficiently, maybe more reliably. But it should be a kind of thinking that if you broke it down into small steps, humans could understand and follow.”

A second concept is called “factored cognition” – the idea of breaking sophisticated tasks into small, understandable steps. According to Evans, it’s not clear how generally factored cognition can succeed. Sometimes humans can break down their reasoning into small steps, but often we rely on intuition, which is much more difficult to break down.
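
As a toy illustration of the idea (not the authors’ actual research setting), the sketch below totals a long expense list by recursively splitting the job into steps small enough for a person to check individually; the expense list and the step size are invented for illustration, and real factored-cognition research targets far more open-ended questions.

```python
def solve(items, depth=0):
    """Total a list by recursively splitting it into steps small enough to check.

    Each printed line is one small, human-verifiable step: either summing at
    most two numbers, or adding together two already-verified sub-answers.
    """
    if len(items) <= 2:                    # small enough to verify at a glance
        answer = sum(items)
        print("  " * depth + f"leaf: sum({items}) = {answer}")
        return answer
    mid = len(items) // 2
    left = solve(items[:mid], depth + 1)
    right = solve(items[mid:], depth + 1)
    answer = left + right                  # the combining step is also tiny
    print("  " * depth + f"combine: {left} + {right} = {answer}")
    return answer

# A made-up expense list: no single step requires holding the whole problem in
# mind, but the chain of steps reproduces the full computation.
expenses = [12.50, 3.99, 7.25, 18.00, 4.75, 9.10, 2.40, 6.65]
print("total:", solve(expenses))
```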


Specifying the Problem

Evans and Stuhlmüller have started a research project on amplification and factored cognition, but they haven’t solved the problem of human biases in interactive learning – rather, they’ve set out to precisely lay out these complex issues for other researchers.

“It’s more about showing this problem in a more precise way than people had done previously,” says Evans. “We ended up getting interesting results, but one of our results in a sense is realizing that this is very difficult, and understanding why it’s difficult.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Podcast: AI and Nuclear Weapons – Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

In 1983, Soviet military officer Stanislav Petrov prevented what could have been a devastating nuclear war by trusting his gut instinct that the algorithm in his early-warning system wrongly sensed incoming missiles. In this case, we praise Petrov for choosing human judgment over the automated system in front of him. But what will happen as the AI algorithms deployed in the nuclear sphere become much more advanced, accurate, and difficult to understand? Will the next officer in Petrov’s position be more likely to trust the “smart” machine in front of him?

On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official, and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is professor of political science at the University of Pennsylvania, and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.

Topics discussed in this episode include:

  • The sophisticated military robots developed by Soviets during the Cold War
  • How technology shapes human decision-making in war
  • “Automation bias” and why having a “human in the loop” is much trickier than it sounds
  • The United States’ stance on automation with nuclear weapons
  • Why weaker countries might have more incentive to build AI into warfare
  • How the US and Russia perceive first-strike capabilities
  • “Deep fakes” and other ways AI could sow instability and provoke crisis
  • The multipolar nuclear world of US, Russia, China, India, Pakistan, and North Korea
  • The perceived obstacles to reducing nuclear arsenals

Publications discussed in this episode include:

You can listen to the podcast above and read the full transcript below. Check out our previous podcast episodes on SoundCloud, iTunes, GooglePlay, and Stitcher.

Ariel: Hello, I am Ariel Conn with the Future of Life Institute. I am just getting over a minor cold and while I feel okay, my voice may still be a little off so please bear with any crackling or cracking on my end. I’m going to try to let my guests Paul Scharre and Mike Horowitz do most of the talking today. But before I pass the mic over to them, I do want to give a bit of background as to why I have them on with me today.

September 26th was Petrov Day. This year marked the 35th anniversary of the day that basically World War III didn’t happen. On September 26th in 1983, Petrov, who was part of the Soviet military, got notification from the automated early warning system he was monitoring that there was an incoming nuclear attack from the US. But Petrov thought something seemed off.

From what he knew, if the US were going to launch a surprise attack, it would be an all-out strike and not just the five weapons that the system was reporting. Without being able to confirm whether the threat was real or not, Petrov followed his gut and reported to his commanders that this was a false alarm. He later became known as “the man who saved the world” because there’s a very good chance that the incident could have escalated into a full-scale nuclear war had he not reported it as a false alarm.

Now this 35th anniversary comes at an interesting time as well because last month in August, the United Nations Convention on Conventional Weapons convened a meeting of a Group of Governmental Experts to discuss the future of lethal autonomous weapons. Meanwhile, also on September 26th, governments at the United Nations held a signing ceremony to add more signatures and ratifications to last year’s treaty, which bans nuclear weapons.

It does feel like we’re at a bit of a turning point in military and weapons history. On one hand, we’ve seen rapid advances in artificial intelligence in recent years and the combination of AI weaponry has been referred to as the third revolution in warfare after gunpowder and nuclear weapons. On the other hand, despite the recent ban on nuclear weapons, the nuclear powers which have not signed the treaty are taking steps to modernize their nuclear arsenals.

This begs the question: what happens if artificial intelligence is added to nuclear weapons? Can we trust automated and autonomous systems to make the right decision as Petrov did 35 years ago? To consider these questions and many others, I have Paul Scharre and Mike Horowitz with me today. Paul is the author of Army of None: Autonomous Weapons in the Future of War. He is a former Army Ranger and Pentagon policy official, currently working as Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security.

Mike Horowitz is professor of political science and the Associate Director of Perry World House at the University of Pennsylvania. He’s the author of The Diffusion of Military Power: Causes and Consequences for International Politics, and he’s an adjunct Senior Fellow at the Center for a New American Security.

Paul and Mike first, thank you so much for joining me today.

Paul: Thank you, thanks for having us.

Mike: Yeah, excited for the conversation.

Ariel: Excellent. So before we get too far into this, I was hoping you could talk a little bit about what the current status is of artificial intelligence in weapons, and nuclear weapons more specifically: is AI being used in nuclear weapon systems today? In 2015, Russia announced a nuclear submarine drone called Status-6; I’m curious what the status of that is. Are other countries doing anything with AI in nuclear weapons? That’s a lot of questions, so I’ll turn that over to you guys now.

Paul: Okay, all right, let me jump in first and then Mike can jump right in and correct me. You know, I think if there’s anything that we’ve learned from science fiction from War Games to Terminator, it’s that combining AI and nuclear weapons is a bad idea. That seems to be the recurring lesson that we get from science fiction shows. Like many things, the sort of truth here is less dramatic but far more interesting actually, because there is a lot of automation that already exists in nuclear weapons and nuclear operations today and I think that is a very good starting point when we think about going forward, what has already been in place today?

The Petrov incident is a really good example of this. On the one hand, if the Petrov incident captures one simple point, it’s the benefit of human judgment. One of the things that Petrov talks about is that when evaluating what to do in this situation, there was a lot of extra contextual information that he could bring to bear that was outside of what the computer system itself knew. The computer system knew that there had been some flashes that the Soviet satellite early warning system had picked up, that it interpreted as missile launches, and that was it.

But when he was looking at this, he was also thinking about the fact that it’s a brand new system, they just deployed this Oko, the Soviet early warning satellite system, and it might be buggy as all technology is, as particularly Soviet technology was at the time. He knew that there could be lots of problems. But also, he was thinking about what would the Americans do, and from his perspective, he said later, we know because he did report a false alarm, he was able to say that he didn’t think it made sense for the Americans to only launch five missiles. Why would they do that?

If you were going to launch a first strike, it would be overwhelming. From his standpoint, sort of this didn’t add up. That contributed to what he said ultimately was sort of 50/50 and he went with his gut feeling that it didn’t seem right to him. Of course, when you look at this, you can ask well, what would a computer do? The answer is, whatever it was programmed to do, which is alarming in that kind of instance. But when you look at automation today, there are lots of ways that automation is used and the Petrov incident illuminates some of this.

For example, automation is used in early warning systems, both radars and satellite, infrared and other systems to identify objects of interest, label them, and then cue them to human operators. That’s what the computer automated system was doing when it told Petrov there were missile launches; that was an automated process.

We also see in the Petrov incident the importance of the human-automation interface. He talks about there being a flashing red screen, it saying “missile launch” and all of these things being, I think, important factors. We think about how this information is actually conveyed to the human, and that changes the human decision-making as part of the process. So there were partial components of automation there.

In the Soviet system, there have been components of automation in the way the launch orders are conveyed, in terms of rockets that would be launched and then fly over the Soviet Union, now Russia, to beam down launch codes. This is, of course, contested, but it reportedly came out after the end of the Cold War that there was even some talk of, and according to some sources actual deployment of, a semi-automated Dead Hand system called Perimeter. The system could be activated by the Soviet leadership in a crisis, and then, if the leadership in Moscow was taken out and did not check in after a certain period of time to show that they were still communicating, launch codes would be passed down to a bunker with a Soviet officer in it, a human who would make the final call to convey automated launch orders. So there was still a human in the loop, but it was one human instead of the Soviet leadership, to launch a retaliatory strike if that leadership had been taken out.

Then certainly, when you look at some of the actual delivery vehicles, things like bombers, there’s a lot of automation involved, particularly for stealth bombers, where a lot of automation is required just to be able to fly the aircraft. The weapons release, though, is controlled by people.

You’re in a place today where all of the weapons decision-making is controlled by people, but they may be making decisions based on information that’s been given to them through automated processes and filtered through automated processes. Then once humans have made these decisions, those orders may be conveyed and passed along to other people or through other automated processes as well.

Mike: Yeah, I think that that’s a great overview, and I would add two things to give some additional context. First, in some ways the nuclear weapons enterprise is already among the most automated for the use of force because the stakes are so high. When countries are thinking about using nuclear weapons, whether it’s the United States or Russia or other countries, it’s usually because they perceive an existential threat. Countries have already attempted to build in significant automation and redundancy to try to make their threats more credible.

The second thing is, I think Paul is absolutely right about the Petrov incident, but the other thing it demonstrates to me, which I think we forget sometimes, is that we’re fond of talking about technological change and the way that technology can shape how militaries act and shape the nuclear weapons complex, but it’s organizations and people that make choices about how to use technology. They’re not just passive actors, and different organizations make different kinds of choices about how to integrate technology depending on their standard operating procedures, depending on their institutional history, depending on bureaucratic priorities. It’s important, I think, not to just look at something like AI in a vacuum but to try to understand the way that different nuclear powers, say, might think about it.

Ariel: I don’t know if this is fair to ask but how might the different nuclear powers think about it?

Mike: From my perspective, I think an interesting thing you’re seeing now is the difference between how the United States has talked about autonomy in the nuclear weapons enterprise and how some other countries have. US military leaders have been very clear that they have no interest in autonomous systems, for example, armed with nuclear weapons. Of all the things one might use autonomous systems for, it’s one of the few areas where US military leaders have actually been very explicit.

I think in some ways, that’s because the United States is generally very confident in its second strike deterrent, and its ability to retaliate even if somebody else goes first. Because the United States feels very confident in its second strike capabilities, that makes the, I think, temptation of full automation a little bit lower. In some ways, the more a country fears that its nuclear arsenal could be placed at risk by a first strike, the stronger its incentives to operate faster and to operate even if humans aren’t available to make those choices. Those are the kinds of situations in which autonomy would potentially be more attractive.

In comparisons of nuclear states, it’s generally the weaker one from a nuclear weapons perspective that, I think, will, all other things being equal, be more inclined to use automation, because they fear the risk of being disarmed through a first strike.

Paul: This is such a key thing, which is that when you look at what is still a small number of countries that have nuclear weapons, they have very different strategic positions, different sizes of arsenals, different threats that they face, different degrees of survivability, and very different risk tolerances. Certainly within the American thinking about nuclear stability, there’s a clear strain of thought about what stability means. Many countries may see this very, very differently, and you can see this even during the Cold War, where you had approximate parity in the kinds of arsenals between the US and the Soviet Union, but they still thought about stability very differently.

The semi-automated Dead Hand system, Perimeter, is a great example of this. When this came out afterwards, from a US standpoint of thinking about risk, people were just aghast, and it is a bit terrifying to think about something that is even semi-automated, that might have just one human involved. But from the Soviet standpoint, this made an incredible amount of strategic sense. And not for the Dr. Strangelove reason of wanting to tell the enemy in order to deter them, which is how I think Americans might tend to think about this, because they didn’t actually tell the Americans.

The real rationale on the Soviet side was to reduce the pressure on their leaders to make a use-it-or-lose-it decision with their arsenal. If there was something like a Petrov incident, where there were some indications of a launch and some ambiguity about whether there was a genuine American first strike, but they were concerned that their leadership in Moscow might be taken out, they could activate this system and trust that if there was in fact an American first strike that took out the leadership, there would still be a sufficient retaliation, instead of feeling like they had to rush to retaliate.

Countries are going to see this very differently, and that’s of course one of the challenges in thinking about stability: not to fall into the trap of mirror-imaging.

Ariel: This brings up actually two points that I have questions about. I want to get back to the stability concept in a minute but first, one of the things I’ve been reading a bit about is just this idea of perception and how one country’s perception of another country’s arsenal can impact how their own military development happens. I was curious if you could talk a little bit about how the US perceives Russia or China developing their weapons and how that impacts us and the same for those other two countries as well as other countries around the world. What impact is perception having on how we’re developing our military arsenals and especially our nuclear weapons? Especially if that perception is incorrect.

Paul: Yeah, I think the origins of the idea of nuclear stability really speak to this. The idea came out in the 1950s among American strategists when they were looking at the US nuclear arsenal in Europe, and they realized that it was vulnerable to a first strike by the Soviets: American airplanes sitting on the tarmac could be attacked by a Soviet first strike that might wipe out the US arsenal, and knowing this, the Americans might in a crisis feel compelled to launch their aircraft sooner. That creates a use-it-or-lose-it incentive, right? Use the aircraft, launch them, versus have them wiped out.

If the Soviets knew this, then that perception alone, that the Americans might launch their aircraft if things started to get heated, might incentivize the Soviets to strike first. Schelling has a quote about them striking us to prevent us from striking them to prevent them from striking us. This gunslinger dynamic of everyone reaching for their guns to draw first because someone else might do so is not just a technical problem; it’s also one of perception. I think it’s baked right into this whole idea, and it plays out both on slower time scales, when you look at arms race stability and arms race dynamics and what countries invest in, building more missiles and more bombers because of concern about the threat from someone else, and also in the more immediate sense of crisis stability, the actions that leaders might take immediately in a crisis to anticipate and prepare for what they fear others might do as well.

Mike: I would add on to that, that I think it depends a little bit on how accurate you think the information that countries have is. Imagine your evaluation of a country is based, classically, on their capabilities and their intentions. Generally, we think you have a decent sense of a country’s capabilities, while intentions are hard to measure. Countries assume the worst, and that’s what leads to the kind of dynamics that Paul is talking about.

I think the perception of other countries’ capabilities, I mean there’s sometimes a tendency to exaggerate the capabilities of other countries, people get concerned about threat inflation, but I think that’s usually not the most important programmatic driver. There’s been significant research now on the correlates of nuclear weapons development, and it tends to be security threats that are generally pretty reasonable in that you have neighbors or enduring rivals that actually have nuclear weapons, and that you’ve been in disputes with and so you decide you want nuclear weapons because nuclear weapons essentially function as invasion insurance, and that having them makes you a lot less likely to be invaded.

And that’s a lesson the United States by the way has taught the world over and over, over the last few decades you look at Iraq, Libya, et cetera. And so I think the perception of other countries’ capabilities can be important for your actual launch posture. That’s where I think issues like speed can come in, and where automation could come in maybe in the launch process potentially. But I think that in general, it’s sort of deeper issues that are generally real security challenges or legitimately perceived security challenges that tend to drive countries’ weapons development programs.

Paul: This issue of perception of intention in a crisis is just absolutely critical, because there is so much uncertainty, and of course there’s usually something that precipitates a crisis, so leaders don’t want to back down; there’s usually something at stake other than avoiding nuclear war that they’re fighting over. You see many aspects of this coming up during the much-analyzed Cuban Missile Crisis, where you see Kennedy and his advisors trying to ascertain what different actions the Cubans or Soviets take mean for their intentions and their willingness to go to war, but then conversely you see a lot of concern by Kennedy’s advisors about actions that the US military takes that may not be directed by the president, that are accidents, slippages or friction in the system, and then worrying that the Soviets over-interpret these as deliberate moves.

I think right there you see a couple of components where you could see automation and AI being potentially useful. One which is reducing some of the uncertainty and information asymmetry: if you could find ways to use the technology to get a better handle on what your adversary was doing, their capabilities, the location and disposition of their forces and their intention, sort of peeling back some of the fog of war, but also increasing command and control within your own forces. That if you could sort of tighten command and control, have forces that were more directly connected to the national leadership, and less opportunity for freelancing on the ground, there could be some advantages there in that there’d be less opportunity for misunderstanding and miscommunication.

Ariel: Okay, so again, I have multiple questions that I want to follow up with and they’re all in completely different directions. I’m going to come back to perception because I have another question about that but first, I want to touch on the issue of accidents. Especially because during the Cuban Missile Crisis, we saw an increase in close calls and accidents that could have escalated. Fortunately, they didn’t, but a lot of them seemed like they could very reasonably have escalated.

I think it’s ideal to think that we can develop technology that can help us minimize these risks, but I kind of wonder how realistic that is. Something else that you mentioned earlier with tech being buggy, it does seem as though we have a bad habit of implementing technology while it is still buggy. Can we prevent that? How do you see AI being used or misused with regards to accidents and close calls and nuclear weapons?

Mike: Let me jump in here, I would take accidents and split it into two categories. The first are cases like the Cuban Missile Crisis where what you’re really talking about is miscalculation or escalation. Essentially, a conflict that people didn’t mean to have in the first place. That’s different I think than the notion of a technical accident, like a part in a physical sense, you know a part breaks and something happens.

Both of those are potentially important and both of those are potentially influenced by… AI interacts with both of those. If you think about challenges surrounding the robustness of algorithms, the risk of hacking, the lack of explainability, Paul’s written a lot about this, and that I think functions not exclusively, but in many ways on the technical accident side.

The miscalculation side, the piece of AI I actually worry about the most are not uses of AI in the nuclear context, it’s conventional deployments of AI, whether autonomous weapons or not, that speed up warfare and thus cause countries to fear that they’re going to lose faster because it’s that situation where you fear you’re going to lose faster that leads to more dangerous launch postures, more dangerous use of nuclear weapons, decision-making, pre-delegation, all of those things that we worried about in the Cold War and beyond.

I think the biggest risk from an escalation perspective, at least for my money, is actually the way that the conventional uses of AI could cause crisis instability, especially for countries that don’t feel very secure, that don’t think that their second strike capabilities are very secure.

Paul: I think that your question about accidents gets to really the heart of what do we mean by stability? I’m going to paraphrase from my colleague Elbridge Colby, who does a lot of work on nuclear issues and  nuclear stability. What you really want in a stable situation is a situation where war only occurs if one side truly seeks it. You don’t get an escalation to war or escalation of crises because of technical accidents or miscalculation or misunderstanding.

There could be multiple different kinds of causes that might lead you to war, and one of those might even be perverse incentives: a deployment posture, for example, that might lead you to say, “Well, I need to strike first because of a fear that they might strike me,” and you want to avoid that kind of situation. I think there’s lots to be said for human involvement in all of these things, and I want to say right off the bat that humans bring to bear the ability to understand judgment and context that AI systems today simply do not have. At least we don’t see that in development based on the state of the technology today. Maybe it’s five years away, 50 years away, I have no idea, but we don’t see that today. I think that’s really important to say up front. Having said that, when we’re thinking about the way that these nuclear arsenals are designed in their entirety, the early warning systems, the way that data is conveyed throughout the system and presented to humans, the way the decisions are made, the way that those orders are then conveyed to launch delivery vehicles, it’s worth looking at new technologies and processes and saying, could we make it safer?

We have had a terrifying number of near misses over the years. No actual nuclear use because of accidents or miscalculation, but it’s hard to say how close we’ve been and this is I think a really contested proposition. There are some people that can look at the history of near misses and say, “Wow, we are playing Russian roulette with nuclear weapons as a civilization and we need to find a way to make this safer or disarm or find a way to step back from the brink.” Others can look at the same data set and say, “Look, the system works. Every single time, we didn’t shoot these weapons.”

I will just observe that we don’t have a lot of data points or a long history here, so I think there should be huge error bars on whatever we suggest about the future, and we have very little data at all about how people actually make decisions about false alarms in a crisis. We’ve had some instances where there have been false alarms, like the Petrov incident. There have been a few others, but we don’t really have a good understanding of how people would respond to that in the midst of a heated crisis like the Cuban Missile Crisis.

When you think about using automation, there are ways that we might try to make this entire socio-technical architecture of responding to nuclear crises and making a decision about reacting, safer and more stable. If we could use AI systems to better understand the enemy’s decision-making or the factual nature of their delivery platforms, that’s a great thing. If you could use it to better convey correct information to humans, that’s a good thing.

Mike: Paul, I would add, if you can use AI to buy decision-makers time, if essentially the speed of processing means that humans then feel like they have more time, which you know decreases their cognitive stress somehow, psychology would suggest, that could in theory be a relevant benefit.

Paul: That’s a really good point, and Thomas Schelling, again, talks about the real key role that time plays here, which is a driver of potentially rash actions in a crisis. Because, you know, if you have a false alert of your adversary launching a missile at you, which has happened a couple of times on both sides, at least two instances each on the American and Soviet sides, during the Cold War and immediately afterwards.

If you have sort of this false alarm but you have time to get more information, to call them on a hotline, to make a decision, then that takes the pressure off of making a bad decision. In essence, you want to sort of find ways to change your processes or technology to buy down the rate of false alarms and ensure that in the instance of some kind of false alarm, that you get kind of the right decision.

But you would conversely also want to increase the likelihood that if policymakers did make a rational decision to use nuclear weapons, it’s actually conveyed, because that is of course part of the essence of deterrence: knowing that if you were to use these weapons, the enemy would respond in kind, and that’s what, in theory, deters use.

Mike: Right, what you want is no one to use nuclear weapons unless they genuinely mean to, but if they genuinely mean to, we want that to occur.

Paul: Right, because that’s what’s going to prevent the other side from doing it. There is this paradox, what Scott Sagan refers to in his book on nuclear accidents as the “always/never dilemma”: that the weapons are always used when it’s intentional but never used by accident or miscalculation.

Ariel: Well, I’ve got to say I’m hoping they’re never used intentionally either. I’m not a fan, personally. I want to touch on this a little bit more. You’re talking about all these ways that the technology could be developed so that it is useful and does hopefully help us make smarter decisions. Is that what you see playing out right now? Is that how you see this technology being used and developed in militaries or are there signs that it’s being developed faster and possibly used before it’s ready?

Mike: I think in the nuclear realm, countries are going to be very cautious about using algorithms, autonomous systems, whatever terminology you want to use, to make fundamental choices or decisions about use. To the extent that there’s risk in what you’re suggesting, I think that those risks are probably, for my money, higher outside the nuclear enterprise simply because that’s an area where militaries I think are inherently a little more cautious, which is why if you had an accident, I think it would probably be because you had automated perhaps some element of the warning process and your future Petrovs essentially have automation bias. They trust the algorithms too much. That’s a question, they don’t use judgment as Paul was suggesting, and that’s a question of training and doctrine.

For me, it goes back to what I suggested before about how technology doesn’t exist in a vacuum. The risks to me depend on training and doctrine in some ways as much about the technology itself but actually, the nuclear weapons enterprise is an area where militaries in general, will be a little more cautious than outside of the nuclear context simply because the stakes are so high. I could be wrong though.

Paul: I don’t really worry too much that you’re going to see countries set up a process that would automate entirely the decision to use nuclear weapons. That’s just very hard to imagine. This is the most conservative area where countries will think about using this kind of technology.

Having said that, I would agree that there are lots more risks outside of the nuclear launch decision that could pertain to nuclear operations or could be in a conventional space that has spillover to nuclear issues. Some of them could involve the use of AI in early warning systems and then the automation bias risk: how that information is conveyed to people in a way that doesn’t capture the nuance of what the system is actually detecting, with the potential for accidents if people over-trust the automation. There are plenty of examples of humans over-trusting automation in a variety of settings.

But some of these could be just far afield, in things that are not military at all, right? So look at a technology like AI-generated deep fakes and imagine a world where now, in a crisis, someone releases a video or audio of a national political leader making some statement, and that further inflames the crisis and perhaps introduces uncertainty about what someone might do. That’s actually really frightening; that could be a catalyst for instability, and it could be outside of the military domain entirely. Hats off to Phil Reiner, who works on these issues in California and who has raised this one about deep fakes.

But I think that there’s a host of ways that you could see this technology raising concerns about instability that might be outside of nuclear operations.

Mike: I agree with that. I think the biggest risks here are from the way that a crisis, the use of AI outside the nuclear context, could create or escalate a crisis involving one or more nuclear weapons states. It’s less AI in the nuclear context, it’s more whether it’s the speed of war, whether it’s deep fakes, whether it’s an accident from some conventional autonomous system.

Ariel: That sort of comes back to a perception question that I didn’t get a chance to ask earlier, and that is: something else I read is that there’s a risk that if a country’s consumer or tech industry is designing AI capabilities, other countries can perceive those capabilities as automatically being used in weaponry, or more specifically, nuclear weapons. Do you see that as being an issue?

Paul: If you’re in general concerned about militaries importing commercially driven technology like AI into the military space and using it, I think it’s reasonable to think that militaries are going to look for technology to get advantages. The one thing that I would say might help calm some of those fears is that the best friend for someone who’s concerned about that is the slowness of the military acquisition process, which moves at a glacial pace and is actually a huge hindrance to a lot of technology adoption.

I think it’s valid to ask for any technology how its use would affect, positively or negatively, global peace and security, and if something looks particularly dangerous, to have a conversation about that. I think it’s great that there are a number of researchers in different organizations thinking about this. I think it’s great that FLI has raised this, and there are good people at RAND: Ed Geist and Andrew Lohn have written a report on AI and nuclear stability; Laura Saalman and Vincent Boulanin at SIPRI work on this, funded by the Carnegie Corporation; and Phil Reiner, who I mentioned a second ago, I blanked on his organization, it’s Technology for Global Security. But thinking about a lot of these challenges, I wouldn’t leap to assume that just because something is out there, militaries are always going to adopt it. Militaries have their own strategic and bureaucratic interests at stake that are going to influence what technologies they adopt and how.

Mike: I would add to that, if the concern is that countries see US consumer and commercial advances and then presume there’s more going on than there actually is, maybe, but I think it’s more likely that countries like Russia and China and others think about AI as an area where they can generate potential advantages. These are countries that have trailed the American military for decades and have been looking for ways to potentially leap ahead or even just catch up. There are also more autocratic countries that don’t trust their people in the first place and so I think to the extent you see incentives for development in places like Russia and China, I think those incentives are less about what’s going on in the US commercial space and more about their desire to leverage AI to compete with the United States.

Ariel: Okay, so I want to shift slightly but also still continuing with some of this stuff. We talked about the slowness of the military to take on new acquisitions and transform, I think, essentially. One of the things that to me, it seems like we still sort of see and I think this is changing, I hope it’s changing, is treating a lot of military issues as though we’re still in the Cold War. When I say I’ve been reading stuff, a lot of what I’ve been reading has been coming from the RAND report on AI and nuclear weapons. And they talk a lot about bipolarism versus multipolarism.

If I understand this correctly, bipolarism is a bit more like what we saw with the Cold War where you have the US and allies versus Russia and whoever. Basically, you have that sort of axis between those two powers. Whereas today, we’re seeing more multipolarism where you have Russia and the US and China and then there’s also things happening with India and Pakistan. North Korea has been putting itself on the map with nuclear weapons.

I was wondering if you can talk a bit about how you see that impacting how we continue to develop nuclear weapons, how that changes strategy and what role AI can play, and correct me if I’m wrong in my definitions of multipolarism and bipolarism.

Mike: Sure. I mean, when you talk about a bipolar nuclear situation during the Cold War, essentially what that reflects is that the United States and the then-Soviet Union had the only two nuclear arsenals that mattered: either the United States or the Soviet Union could essentially destroy any other country in the world, even after absorbing a hit from that country’s nuclear arsenal. Whereas since the end of the Cold War, you’ve had several other countries, including China, as well as India, Pakistan, and to some extent now North Korea, who have not just developed nuclear arsenals but developed more sophisticated nuclear arsenals.

That’s part of the ongoing debate in the United States (whether it’s even debated is, I think, a question): whether the United States is now vulnerable to China’s nuclear arsenal, meaning the United States could no longer launch a first strike against China. In general, you’ve ended up in a more multipolar nuclear world, in part because I think the United States and Russia, for their own reasons, spent a few decades not really investing in their underlying nuclear weapons complex, and I think the fear of a developing multipolar nuclear structure is one reason why the United States, under the Obama Administration and then continuing in the Trump Administration, has ramped up its efforts at nuclear modernization.

I think AI could play in here in some of the ways that we’ve talked about, but I think AI in some ways is not the star of the show. The star of the show remains the desire by countries to have secure retaliatory capabilities and on the part of the United States, to have the biggest advantage possible when it comes to the sophistication of its nuclear arsenal. I don’t know what do you think, Paul?

Paul: I think to me the way that the international system and the polarity, if you will, impact this issue is mostly that cooperation gets much harder as the number of actors that need to cooperate increases, when the “n” goes from 2 to 6 or 10 or more. AI is a relatively diffuse technology; while there are only a handful of actors internationally at the leading edge, the technology proliferates fairly rapidly and so will be widely available to many different actors to use.

To the extent that there are some types of applications of AI that might be seen as problematic in the nuclear context, either in nuclear operations or related or incidental to them, it’s much harder to try to control that when you have to get more people on board to agree. For example, I’ll make this up hypothetically: let’s say that there are only two global actors who could make high-resolution deep fake videos. You might say, “Listen, let’s agree not to do this in a crisis, or let’s agree not to do this for manipulative purposes to try to stoke a crisis.” When anybody could do it on a laptop, then forget about it, right? That’s a world we’ve got to live with.

You certainly see this historically when you look at different arms control regimes. There was a flurry of arms control during the Cold War, both bilateral between the US and USSR and also multilateral agreements that those two countries led, because you had a bipolar system. You saw attempts earlier in the 20th century to do arms control that collapsed because of some of these dynamics.

During the ’20s, the naval treaties governing the number and tonnage of battleships that countries built collapsed because there was one defector, initially Japan, who thought they’d gotten a raw deal in the treaty, defecting and then others following suit. We’ve seen this since the end of the Cold War with the end of the Anti-Ballistic Missile Treaty, and now the degradation of the INF Treaty, with Russia cheating on it and INF being under threat. The concern is that you have both the United States and Russia reacting to what other countries were doing: in the case of the Anti-Ballistic Missile Treaty, the US being concerned about ballistic missile threats from North Korea and Iran and deploying limited missile defense systems, and then Russia being concerned that that was either secretly aimed at them or might have the effect of reducing their posture, and the US withdrawing entirely from the ABM Treaty to be able to do that. That’s sort of one unraveling.

In the case of the INF Treaty, Russia is looking at what China, which is not a signatory to INF, is building, and is now building missiles that violate the INF Treaty. That’s a much harder dynamic when you have multiple different countries at play and countries having to respond to security threats that may be diverse and asymmetric from different actors.

Ariel: You’ve touched on this a bit already but especially with what you were just talking about and getting various countries involved and how that makes things a bit more challenging what specifically do you worry about if you’re thinking about destabilization? What does that look like?

Mike: I would say destabilization for whom is the operative question, in that there’s been a lot of empirical research now suggesting that the United States never really fully bought into mutually assured destruction. The United States sort of gave lip service to the idea while still pursuing avenues for nuclear superiority, even during the Cold War, and in some ways a United States that somehow felt like its nuclear deterrent was inadequate would be a United States that probably invested a lot more in capabilities that one might view as destabilizing, if the United States perceived challenges from multiple different actors.

But I would tend to think about this in the context of individual pairs of states or small groups of states: essentially, you know, China worries about America’s nuclear arsenal, and India worries about China’s nuclear arsenal, and Pakistan worries about India’s nuclear arsenal, and all of them would be terribly offended that I just said that. These relationships are complicated, and in some ways what generates instability is, I think, a combination of deteriorating political relations and a decreased feeling of security as the technological sophistication of the arsenals of potential adversaries grows.

Paul: I think I’m less concerned about countries improving their arsenals or military forces over time to try to gain an edge on adversaries. I think that’s sort of a normal process that militaries and countries do. I don’t think it’s particularly problematic to be honest with you, unless you get to a place where the amount of expenditure is so outrageous that it creates a strain on the economy or that you see them pursuing some race for technology that once they got there, there’s sort of like a winner-take-all mentality, right, of, “Oh, and then I need to use it.” Whoever gets to nuclear weapons first, then uses nuclear weapons and then gains an upper hand.

That creates incentives, once you achieve the technology, for launching a preventive war, which I think is going to be very problematic. Otherwise, upgrading our arsenal, improving it, I think is a normal kind of behavior. I’m more concerned about how you either use technology beneficially or avoid certain kinds of applications of technology that might create risks of accidents and miscalculation in a crisis.

For example, as we’re seeing countries acquire more drones and deploy them in military settings, I would love to see an international norm against putting nuclear weapons on a drone, on an uninhabited vehicle. I think that it is more problematic from a technical risk standpoint, and a technical accident standpoint, than certainly using them on an aircraft that has a human on board or on a missile, which doesn’t have a person on board but is a one-way vehicle. It wouldn’t be sent on patrol.

While I think it’s highly unlikely that, say, the United States would do this, in fact, they’re not even making their next generation B-21 Bomber uninhabited-

Mike: Right, the US has actively moved to not do this, basically.

Paul: Right, US Air Force generals have spoken out repeatedly saying they want no part of such a thing. We haven’t seen the US voice this concern really publicly in any formal way, and I actually think it could be beneficial to say it more concretely in, for example, a speech by the Secretary of Defense. That might signal to other countries, “Hey, we actually think this is a dangerous thing.” I could imagine other countries maybe having a different calculus, or seeing more advantages capability-wise to using drones in this fashion, but I think that could be dangerous and harmful. That’s just one example.

Automation bias is something I’m actually really deeply concerned about. As we use AI in tools to gain information, and as the way these tools function becomes more complicated and more opaque to humans, you could run into a situation where people get a false alarm but they have begun to over-trust the automation. I think that’s actually a huge risk, in part because you might not see it coming, because people would say, “Oh, humans are in the loop. Humans are in charge, it’s no problem.” But in fact, we’re conveying information to people in a way that leads them to surrender judgment to the machines, even if that’s just using automation in information collection and has nothing to do with nuclear decision-making.

Mike: I think that those are both right, though I think I may be skeptical in some ways about our ability to generate norms around not putting nuclear weapons on drones.

Paul: I knew you were going to say that.

Mike: Not because I think it’s a good idea, like it’s clearly a bad idea but the country it’s the worst idea for is the United States.

Paul: Right.

Mike: If a North Korea, or an India, or a China thinks that they need that to generate stability and that makes them feel more secure to have that option, I think it will be hard to talk them out of it if their alternative would be say, land-based silos that they think would be more vulnerable to a first strike.

Paul: Well, I think it depends on the country, right? I mean countries are sensitive at different levels to some of these perceptions of global norms of responsible behavior. Like certainly North Korea is not going to care. You might see a country like India being more concerned about sort of what is seen as appropriate responsible behavior for a great power. I don’t know. It would depend upon sort of how this was conveyed.

Mike: That’s totally fair.

Ariel: Man, I have to say, all of this is not making it clear to me why nuclear weapons are that beneficial in the first place. We don’t have a ton of time so I don’t know that we need to get into that but a lot of these threats seem obviously avoidable if we don’t have the nukes to begin with.

Paul: Let’s just respond to that briefly, so I think there’s two schools of thought here in terms of why nukes are valuable. One is that nuclear weapons reduce the risk of conventional war and so you’re going to get less state-on-state warfare, that if you had a world with no nuclear weapons at all, obviously the risk of nuclear armageddon would go to zero, which would be great. That’s not a good risk for us to be running.

Mike: Now the world is safe for major conventional war.

Paul: Right, but then you’d have more conventional war like we saw in World War I and World War II, and that led to tremendous devastation, so that’s one school of thought. There’s another one that basically says that the only thing nuclear weapons are good for is to deter others from using nuclear weapons. That’s what former Secretary of Defense Robert McNamara has said, and he’s certainly by no means a radical leftist. There’s certainly a strong school of thought among former defense and security professionals that getting to global zero would be good, but even if people agreed that’s definitely where we want to go, and that maybe it’s worth a trade-off of greater conventional war to take away the threat of armageddon, how you get there in a safe way is certainly not at all clear.

Mike: The challenge is that when you go down to lower numbers, and we talked before about how the United States and Russia have had the most significant nuclear arsenals both in terms of numbers and sophistication, the lower the numbers go, the more small numbers matter, and so the more the arsenal of every nuclear power becomes important. And because countries don’t trust each other, it could increase the risk that somebody essentially guns to be number one as you get closer to zero.

Paul: Right.

Ariel: I guess one of the things that isn’t obvious to me, even if we’re not aiming for zero, let’s say we’re aiming to decrease the number of nuclear weapons globally to be in the hundreds, and not, what, we’re at 15,000-ish at the moment? I guess I worry that it seems like a lot of the advancing technology we’re seeing with AI and automation, but possibly not, maybe this would be happening anyway, it seems like it’s also driving the need for modernization and so we’re seeing modernization happening rather than a decrease of weapons happening.

Mike: I think the drive for modernization, I think you’re right to point that out as a trend. Part of it is simply the age of the arsenals for some of these countries, including the United States, and the age of components. You have components designed to have a lifespan of, say, 30 years that have been used for 60 years, and the people that built some of those components in the first place have now mostly passed away. It’s even hard to build some of them again.

I think it’s totally fair to say that emerging technologies including AI could play a role in shaping modernization programs. Part of the incentive for it I think has simply to do with a desire for countries, including but not limited to the United States, to feel like their arsenals are reliable, which gets back to perception, what you raised before, though that’s self-perception in some ways more than anything else.

Paul: I think Mike’s right that reliability is what’s motivating modernization, primarily, right? It’s a concern that these things are aging, they might not work. If you’re in a situation where it’s unclear if they might work, then that could actually reduce deterrents and create incentives for others to attack you and so you want your nuclear arsenal to be reliable.

There’s probably a component of that too, that as people are modernizing, they’re trying to seek advantage over others. I think it’s worth taking a step back and looking at where we are today, with this legacy of the Cold War and the nuclear arsenals that are in place: how confident are we that mutual deterrence won’t lead to nuclear war in the future? I’m not super confident. I’m sort of in the camp that when you look at the history of near-miss accidents, it’s pretty terrifying, and there’s probably a lot of luck at play.

From my perspective, as we think about going forward, there’s certainly an argument to be made for “let it all go to rust,” and if you could get countries to do that collectively, all of them, maybe there’d be big advantages there. If that’s not possible, and countries are modernizing their arsenals for the sake of reliability, then maybe take a step back and think about how you redesign these systems to be more stable, to increase deterrence, and to reduce the risk of false alarms and accidents overall, sort of “soup to nuts” when you’re looking at the architecture.

I do worry that that’s not a major feature when countries are looking at modernization. They’re thinking about increasing the reliability of their systems working, the “always” component of the “always-never” dilemma, and about getting an advantage on others, but there may not be enough thought going into the “never” component: how do we ensure that we continue to buy down the risk of accidents or miscalculation?

Ariel: I guess the other thing I would add that I guess isn’t obvious is, if we’re modernizing our arsenals so that they are better, why doesn’t that also mean smaller? Because we don’t need 15,000 nuclear weapons.

Mike: I think there are actually people out there that view effective modernization as something that could enable reductions. Some of that depends on politics and depends on other international relations kinds of issues, but I certainly think it’s plausible that the end result of modernization could make countries feel more confident in nuclear reductions, all other things equal.

Paul: I mean, the US and Russia have certainly been working slowly to reduce their arsenals through a number of treaties. There was a big push in the Obama Administration to look for ways to continue to do so, but countries are going to want these to be mutual reductions, right? Not unilateral.

At a certain level of the US and Russian arsenals going down, you’re going to get tied into what China’s doing and the size of their arsenal becoming relevant, and you’re also going to get tied into other strategic concerns for some of these countries when it comes to other technologies like space-based weapons or anti-space weapons or hypersonic weapons. The negotiations become more complicated.

That doesn’t mean that they’re not valuable or worth doing, because while stability should be the goal, having fewer weapons overall is helpful in the sense that if there is, God forbid, some kind of nuclear exchange, there’s just less destructive capability overall.

Ariel: Okay, and I’m going to end it on that note because we are going a little bit long here. There are quite a few more questions that I wanted to ask; I don’t think we even got into actually defining what AI in nuclear weapons systems looks like. So I really appreciate you guys joining me today and answering the questions that we were able to get to.

Paul: Thank you.

Mike: Thanks a lot. Happy to do it and happy to come back anytime.

Paul: Yeah, thanks for having us. We really appreciate it.

[end of recorded material]

AI Alignment Podcast: Moral Uncertainty and the Path to AI Alignment with William MacAskill

How are we to make progress on AI alignment given moral uncertainty?  What are the ideal ways of resolving conflicting value systems and views of morality among persons? How ought we to go about AI alignment given that we are unsure about our normative and metaethical theories? How should preferences be aggregated and persons idealized in the context of our uncertainty?

Moral Uncertainty and the Path to AI Alignment with William MacAskill is the fifth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

In this podcast, Lucas spoke with William MacAskill. Will is a professor of philosophy at the University of Oxford and is a co-founder of the Center for Effective Altruism, Giving What We Can, and 80,000 Hours. Will helped to create the effective altruism movement and his writing is mainly focused on issues of normative and decision theoretic uncertainty, as well as general issues in ethics.

Topics discussed in this episode include:

  • Will’s current normative and metaethical credences
  • The value of moral information and moral philosophy
  • A taxonomy of the AI alignment problem
  • How we ought to practice AI alignment given moral uncertainty
  • Moral uncertainty in preference aggregation
  • Moral uncertainty in deciding where we ought to be going as a society
  • Idealizing persons and their preferences
  • The most neglected portion of AI alignment

In this interview we discuss ideas contained in the work of William MacAskill. You can learn more about Will’s work here, and follow him on social media here. You can find Gordon Worley’s post here and Rob Wiblin’s previous podcast with Will here. You can hear more in the podcast above or read the transcript below.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast series at the Future of Life Institute. I’m Lucas Perry, and today we’ll be speaking with William MacAskill on moral uncertainty and its place in AI alignment. If you’ve been enjoying this series and finding it interesting or valuable, it’s a big help if you can share it on social media and follow us on your preferred listening platform.

Will is a professor of philosophy at the University of Oxford and is a co-founder of the Center for Effective Altruism, Giving What We Can, and 80,000 Hours. Will helped to create the effective altruism movement and his writing is mainly focused on issues of normative and decision theoretic uncertainty, as well as general issues in ethics. And so, without further ado, I give you William MacAskill.

Yeah, Will, thanks so much for coming on the podcast. It’s really great to have you here.

Will: Thanks for having me on.

Lucas: So, I guess we can start off. You can tell us a little bit about the work that you’ve been up to recently in terms of your work in the space of metaethics and moral uncertainty just over the past few years and how that’s been evolving.

Will: Great. My PhD topic was on moral uncertainty, and I’m just putting the finishing touches on a book on this topic. The idea here is to appreciate the fact that we very often are just unsure about what we ought, morally speaking, to do. It’s also plausible that we ought to be unsure about what we ought morally to do. Ethics is a really hard subject, there’s tons of disagreement, it would be overconfident to think, “Oh, I’ve definitely figured out the correct moral view.” So my work focuses on not really the question of how unsure we should be, but instead what should we do given that we’re uncertain?

In particular, I look at the issue of whether we can apply the same sort of reasoning that we apply to uncertainty about matters of fact to moral uncertainty. In particular, can we use what is known as “expected utility theory”, which is very widely accepted as at least approximately correct for empirical uncertainty? Can we apply that in the same way in the case of moral uncertainty?
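To make that analogy concrete, here is a minimal sketch of the “maximize expected choiceworthiness” idea this line of reasoning leads to. The notation is purely illustrative, not Will’s own, and it quietly assumes the theories’ scales can be compared at all, which is the intertheoretic-comparison problem that comes up later in the conversation.

```latex
% Minimal sketch: expected utility reasoning carried over to moral
% uncertainty. C(T_i) is the agent's credence in moral theory T_i, and
% CW_i(A) is how choiceworthy T_i rates option A (assuming, for the sake
% of illustration, that the theories' scales are comparable).
\[
  \mathrm{EC}(A) = \sum_{i} C(T_i)\,\mathrm{CW}_i(A),
  \qquad
  A^{*} \in \arg\max_{A} \mathrm{EC}(A).
\]
```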

Lucas: Right. And so coming on here, you also have a book on moral uncertainty that you’ve been working on, which is as yet unpublished. Have you just been expanding this exploration in that book, diving deeper into it?

Will: That’s right. There’s actually been very little written on the topic of moral uncertainty, at least in modern times, at least relative to its importance. I would think of this as a topic that should be studied as much as consequentialism or contractualism or Kantianism is studied. But in modern times there’s really only one book that’s been written on the topic, and that was published 18 years ago now. What we want this to be is, firstly, just a kind of definitive introduction to the topic. It’s co-authored, with me as lead author, along with Toby Ord and Krister Bykvist, laying out both what we see as the most promising path forward in terms of addressing some of the challenges that face an account of decision-making under moral uncertainty, some of the implications of taking moral uncertainty seriously, and also just some of the unanswered questions.

Lucas: Awesome. So I guess, just moving forward here, you already did a podcast with Rob Wiblin of 80,000 Hours. So I guess we can avoid covering a lot of the basics here about your views on using the expected utility calculus in moral reasoning and moral uncertainty in order to decide what one ought to do when one is not sure what one ought to do. People can go ahead and listen to that podcast, which I’ll provide a link to in the description.

It would also be good just to get a general sense of where your metaethical partialities generally tend to lie right now. So what sort of metaethical positions do you tend to give the most credence to?

Will: Okay, well that’s a very well put question ’cause, as with all things, I think it’s better to talk about degrees of belief rather than absolute belief. Normally if you ask a philosopher this question, they’ll say, “I’m a nihilist,” or “I’m a moral realist,” or something, but I think it’s better to split your credences. So I think I’m about 50/50 between nihilism or error theory and something that’s non-nihilistic.

Where by nihilism or error theory, I just mean the view that any positive moral statement, normative statement, or evaluative statement is false. That includes “you ought to maximize happiness,” or “if you want a lot of money, you ought to become a banker,” or “pain is bad.” On this view, all of those things are false; all positive normative or evaluative claims are false. So it’s a very radical view, and we can talk more about that if you’d like.

In terms of the rest of my credence, the view that I’m kind of most sympathetic towards in the sense of the one that occupies most of my mental attention is a relatively robust form of moral realism. It’s not clear whether it should be called kind of naturalist moral realism or non-naturalist moral realism, but the important aspect of it is just that goodness and badness are kind of these fundamental moral properties and are properties of experience.

The things that are of value are things that supervene on conscious states, in particular good states or bad states, and the way we know about them is just by direct experience with them. Just being acquainted with a state like pain gives us a reason for thinking we ought to have less of this in the world. So that’s my kind of favored view, in the sense that it’s the one I’d be most likely to defend in the seminar room.

And then I give somewhat less credence to a couple of views. One is a view called “subjectivism”, which is the idea that what you ought to do is determined in some sense by what you want to do. The simplest view there would just be that when I say, “I ought to do X,” that just means I want to do X in some way. A more sophisticated version would be ideal subjectivism, where when I say I ought to do X, it means some very idealized version of myself would want me to want to do X, perhaps if I had unlimited knowledge and much greater computational power and so on. I’m a little less sympathetic to that than many people I know, and we can go into that.

And then a final view that I’m also less sympathetic towards is non-cognitivism, which would be the idea that our moral statements aren’t even attempting to express propositions. So when I say, “Murder is wrong,” what I’m doing is just expressing some emotion of mine, like, “Yuk. Murder. Ugh,” in the same way that when I said that, I wasn’t expressing any proposition, just some sort of pro or negative attitude. And again, I don’t find that terribly plausible, again for reasons we can go into.

Lucas: Right, so those first two views were cognitivist views, which makes them fall under a semantic theory where you think that people are making true or false statements when they’re claiming moral facts. And the error theory and your moral realism are both metaphysical views, which I think is probably what we’ll mostly be interested in here in terms of the AI alignment problem.

There are other issues in metaethics, for example having to do with semantics, as you just discussed. You feel as though you give some credence to non-cognitivism, but there are also justification views, so issues in moral epistemology: how one can come to know moral facts, and why one ought to follow them if there are such facts. Where do you fall in that camp?

Will: Well, I think all of those views are quite well tied together, so what sort of moral epistemology you have depends very closely, I think, on what sort of meta-ethical view you have, and I actually think, often, is intimately related as well to what sort of view in normative ethics you have. So my preferred philosophical world view, as it were, the one I’d defend in a seminar room, is classical utilitarian in its normative view, so the only thing that matters is positive or negative mental states.

In terms of its moral epistemology, the way we access what is of value is just by experiencing it, in just the same way we access conscious states. There are also some things you can’t get merely from experience; why is it, for example, that we should maximize the sum of good experiences rather than the product? That’s a view that you’ve got to arrive at by reasoning rather than just purely from experience.

Part of my epistemology does appeal to whatever this spooky ability we have to reason about abstract affairs is, but it’s the same sort of faculty that is used when we think about mathematics or set theory or other areas of philosophy. If, however, I had some different view, supposing I were a subjectivist, well then moral epistemology looks very different. You’re actually just reflecting on your own values, maybe looking at what you would actually do in different circumstances and so on, reflecting on your own preferences, and that’s the right way to come to the right kind of moral views.

There’s also another meta-ethical view called “constructivism” that I’m definitely not the best person to talk about. But on that view, which again isn’t really a realist view, we just have a bunch of beliefs and intuitions, and the correct moral view is just the best systematization of those beliefs and intuitions, in the same way that linguistics is a science but is fundamentally based on what our linguistic intuitions are; it’s just a kind of systematization of them.

On that view, then, moral epistemology would be about reflecting on your own moral intuitions. You just got all of this data, which is the way things seem like to you, morally speaking, and then you’re just doing the systematization thing. So I feel like the question of moral epistemology can’t be answered in a vacuum. You’ve got to think about your meta-ethical view of the metaphysics of ethics at the same time.

Lucas: I’m pretty interested in poking a little bit more into that 50% credence you give to your moral realist view, which is super interesting because it’s a view that people tend not to have, I guess, in the AI, computer science, rationality, and EA spaces. There tend to be, I guess, a lot of moral anti-realists in this space.

In my last podcast, I spoke with David Pearce, and he also seemed to have a view like this, and I’m wondering if you can unpack yours a little bit. He believed that suffering and pleasure disclose the in-built pleasure/pain axis of the universe. You can think of minds as objective features of the world, because they in fact are objective features of the world, and the phenomenology and experience of each person is objective in the same way that someone could objectively be experiencing redness, and in the same sense they could be objectively experiencing pain.

It seems to me, and I don’t fully understand the view, but the claim is that there is some sort of in-built quality or property to the hedonic qualia of suffering or pleasure that discloses their in-built value.

Will: Yeah.

Lucas: Could you unpack it a little bit more about the metaphysics of that and what that even means?

Will: It sounds like David Pearce and I have quite similar views. I think relying heavily on the analogy with, or very close analogy with, consciousness is going to help. Imagine you’re kind of a robot scientist: you don’t have any conscious experiences, but you’re doing all this fancy science and so on, and then you write out the book of the world, and I’m like, “Hey, there’s this thing you missed out: conscious experience.” And you, the robot scientist, would say, “Wow, that’s just insane. You’re saying that some bits of matter have this first person subjective feel to them? Why on earth would we ever believe that? That’s just so out of whack with the naturalistic understanding of the world.” And it’s true. It just doesn’t make any sense given what we know now. It’s a very strange phenomenon to exist in the world.

And so one of the arguments that motivates error theory is this idea that, well, if values were to exist, they would just be so weird, what Mackie calls “queer”. They’re just so strange that, by a principle of Occam’s razor of not adding strange things into our ontology, we should assume they don’t exist.

But that argument would work in the same way against conscious experience, and the best response we’ve got is to say, no, I know I’m conscious, and I can just tell by introspecting. I think we can run the same sort of argument when it comes to a property of consciousness as well, namely the goodness or badness of certain conscious experiences.

So now I just want you to go totally a-theoretic. Imagine you’ve not thought about philosophy at all, or even science at all, and I were to rip off one of your fingernails or something, and then I say, “Is that experience bad?” And you would say yes.

Lucas: Yeah, it’s bad.

Will: And I would ask, how confident are you? You’re more confident that this pain is bad than that you even have hands, perhaps. That’s at least how it seems to be for me. So then it seems like, yeah, we’ve got this thing that we’re actually incredibly confident of, which is the badness of pain, or at least the badness of pain for me, and so that’s what initially gives the case for thinking, okay, well, that’s at least one objective moral fact: that pain is bad, or at least that pain is bad for me.

Lucas: Right, so the step where I think that people will tend to get lost in this is when … I thought the part about Occam’s razor was very interesting. I think that most people are anti-realistic because they use Occam’s razor there and they think that what the hell would a value even be anyway in the third person objective sense? Like, that just seems really queer, as you put it. So I think people get lost at the step where the first person seems to simply have a property of badness to it.

I don’t know what that would mean if one has a naturalistic, reductionist view of the world. There seems to be just entropy, noise, and quarks, and maybe qualia as well. It’s not clear to me how we should think about properties of qualia, and whether or not one can derive “ought” statements from “is” statements about the properties of qualia.

Will: One thing I want to be very clear on is just it definitely is the case that we have really no idea on this view. We are currently completely in the dark about some sort of explanation of how matter and forces and energy could result in goodness or badness, something that ought to be promoted. But that’s also true with conscious experience as well. We have no idea how on earth matter could result in kind of conscious experience. At the same time, it would be a mistake to start denying conscious experience.

And then we can ask: okay, we don’t really know what’s going on, but we accept that there’s conscious experience. And then I think if you were again just to completely pre-theoretically start categorizing the different conscious experiences that we have, we’d say that some are red and some are blue, some are maybe more intense, some are dimmer than others, and you’d maybe classify them into sights and sounds and other sorts of experiences.

I think also a very natural classification would be the ones that are good and the ones that are bad, and then when we cash that out further, I think it’s not merely the case that when we say, oh, this is good or this is bad, it means what we want or what we don’t want; instead it’s what we think we have reason to want or reason not to want. It seems to give us evidence for those sorts of things.

Lucas: I guess my concern here is just that I worry that words like “good” and “bad” or “valuable” or “disvaluable” … I feel some skepticism about whether or not they disclose some sort of intrinsic property of the qualia. I’m also not sure what the claim here is about the nature of and kinds of properties that qualia can have attached to them. I worry that goodness and badness might be some sort of evolutionary fiction which enhances our fitness but doesn’t actually disclose some sort of intrinsic metaphysical quality or property of the experience.

Will: One thing I’ll say is, again, remember that I’ve got this 50% credence on error theory. So in general, for all these questions (maybe this is just some evolutionary fiction, things just seem bad but they’re not actually, and so on), I actually think those are good arguments, and so that should give us some degree of confidence in this idea that actually nothing matters at all.

But underlying a lot of my views is this more general argument that if you’re unsure between two views, one on which nothing matters at all and we’ve got no reasons for action, and another on which we do have some reasons for action, then you can just ignore the one that says you’ve got no reasons for action, ’cause you’re not going to do badly by its lights no matter what you do. If I were to go around shooting everybody, that wouldn’t be bad or wrong on nihilism.

So if there are arguments, such as, I think, an evolutionary argument, that push us in the direction of error theory, in a sense we can put them to the side, ’cause what we ought to do is just say, yeah, we take that really seriously and give a high credence to error theory, but now ask: after all those arguments, which views most plausibly still bear force?

So this is why, with the kind of evolutionary worry, I’m just like, yes, but suppose it’s the case that there actually are good and bad experiences. Presumably conscious experiences themselves are useful in some evolutionary way that, again, we don’t really understand. I think, presumably, good and bad experiences are also useful in some evolutionary way that we don’t fully understand, perhaps because they have a tendency to motivate at least beings like us, and that in fact seems to be a key aspect of making a goodness or badness claim: it’s at least somehow tied up with the idea of motivation.

And then when I talk about ascribing a property to a conscious experience, I really don’t mean anything more than whatever it is that we mean when we say that this experience is red-seeming or this experience is blue-seeming. I mean, again, it’s an open philosophical question what we even mean by properties, but in the same way, this is bad-seeming, this is good-seeming.

Before I got into thinking about philosophy and naturalism and so on, would I have thought those things are kind of on a par? I think I would’ve done, so it’s at least a pre-theoretically justified view to think, yeah, there just is this axiological property of my experience.

Lucas: This has made me much more optimistic. I think after my last podcast I was feeling quite depressed and nihilistic, and hearing you give this sort of non-naturalistic or naturalistic moral realist account is cheering me up a bit about the prospects of AI alignment and value in the world.

Will: I mean, I think you shouldn’t get too optimistic. I’m almost certainly wrong-

Lucas: Yeah.

Will: … it just sort of is my favorite view. But take any philosopher: what’s the chance that they’ve got the right views? Very low, probably.

Lucas: Right, right. I think I also need to be careful here, in that human beings have this sort of psychological bias where we give a special metaphysical status, and kind of meaning and motivation, to things which we take to be objective. I guess there’s also some sort of motivation that I need to be mindful of that seeks to make value objective or more meaningful and foundational in the universe.

Will: Yeah. The thing that I think should make you feel optimistic, or at least motivated, is this argument that if nothing matters, it doesn’t matter that nothing matters. It just really ought not to affect what you do. You may as well act as if things do matter, and in fact we can have this project of trying to figure out if things matter, and that maybe could be an instrumental goal, a kind of purpose for life: to get to a place where we really can figure out if it has any meaning. I think that sort of argument can at least give one grounds for getting out of bed in the morning.

Lucas: Right. I think there’s this philosophy paper that I saw, but I didn’t read, that was like, “nothing Matters, but it does matter”, with the one lower case M and then another capital case M, you know.

Will: Oh, interesting.

Lucas: Yeah.

Will: It sounds a bit like 4:20 ethics.

Lucas: Yeah, cool.

Moving on here into AI alignment. Before we get into that, there’s something else it would be interesting to hear you speak a little bit more about: what even is the value of moral information and moral philosophy, generally? Is this all just a bunch of BS, or how can it be interesting and/or useful in our lives, and in science and technology?

Will: Okay, terrific. I mean, and this is something I write about in a paper I’m working on now and also in the book, as well.

So, yeah, I think the stereotype of the philosopher engaged in intellectual masturbation, not doing really much for the world at all, is quite a prevalent stereotype. I’ll not comment on whether that’s true for certain areas of philosophy. I think it’s definitely not true for certain areas within ethics. What is true is that philosophy is very hard, ethics is very hard. Most of the time when we’re trying to do this, we make very little progress.

If you look at the long-run history of thought in ethics and political philosophy, the influence is absolutely huge. Even just take Aristotle, Locke, Hobbes, Mill, and Marx. The influence of political philosophy and moral philosophy there, it shaped thousands of years of human history. Certainly not always for the better, sometimes for the worse, as well. So, ensuring that we get some of these ideas correct is just absolutely crucial.

Similarly, even in more recent times … Obviously not as influential as these other people, but also it’s been much less time so we can’t predict into the future, but if you consider Peter Singer as well, his ideas about the fact that we may have very strong obligations to benefit those who are distant strangers to us, or that we should treat animal welfare just on a par with human welfare, at least on some understanding of those ideas, that really has changed the beliefs and actions of, I think, probably tens of thousands of people, and often in really quite dramatic ways.

And then when we think about well, should we be doing more of this, is it merely that we’re influencing things randomly, or are we making things better or worse? Well, if we just look to the history of moral thought, we see that most people in most times have believed really atrocious things. Really morally abominable things. Endorsement of slavery, distinctions between races, subjugation of women, huge discrimination against non-heterosexual people, and, in part at least, it’s been ethical reflection that’s allowed us to break down some of those moral prejudices. And so we should presume that we have very similar moral prejudices now. We’ve made a little bit of progress, but do we have the one true theory of ethics now? I certainly think it’s very unlikely. And so we need to think more if we want to get to the actual ethical truth, if we don’t wanna be living out moral catastrophes in the same way as we would if we kept slaves, for example.

Lucas: Right, I think we do want to do that, but I think that a bit later in the podcast we’ll get into whether or not that’s even possible, given economic, political, and militaristic forces acting upon the AI alignment problem and the issues with coordination and race to AGI.

Just to start to get into the AI alignment problem, I just wanna offer a little bit of context. It is implicit in the AI alignment problem, or value alignment problem, that AI needs to be aligned to some sort of ethic or set of ethics; this includes preferences or values or emotional dispositions, or whatever you might believe them to be. And so it seems that, generally, in terms of moral philosophy, there are really two methods by which to arrive at an ethic. One is simply going to be through reason, and one is going to be through observing human behavior or artifacts, like books, movies, stories, or other things that we produce, in order to infer and discover the observed preferences and ethics of people in the world.

The latter side of alignment methodologies is empirical and involves the agent interrogating and exploring the world in order to understand what humans care about and value, as if values and ethics were simply a physical by-product of the world and of evolution. The former is where ethics is arrived at through reason alone, and involves the AI or the AGI potentially going about ethics as a philosopher would, where one engages in moral reasoning about metaethics in order to determine what is correct. From the point of view of ethics, there is potentially only what humans empirically do believe, and then there is what we may or may not be able to arrive at through reason alone.

So, it seems that one or both of these methodologies can be used when aligning an AI system. And again, the distinction here is simply between sort of preference aggregation or empirical value learning approaches, or methods of instantiating machine ethics, reasoning, or decision-making in AI systems so they become agents of morality.

So, what I really wanna get into with you now is how metaethical uncertainty influences our decision over the methodology of value alignment: over whether we are to prefer an empirical preference learning or aggregation type approach, or one which involves imbuing moral epistemology and ethical metacognition and reasoning into machine systems so they can discover what we ought to do. And how moral uncertainty, and metaethical uncertainty in particular, operates within both of these spaces once you’re committed to some view, or to both of these views. And then we can get into issues in intertheoretic comparisons and how they arise here at many levels, the ideal way we should proceed if we could do what would be perfect, and again, what is actually likely to happen given race dynamics and political, economic, and militaristic forces.

Will: Okay, that sounds terrific. I mean, there’s a lot to cover there.

I think it might be worth me saying just maybe a couple of distinctions I think are relevant and kind of my overall view in this. So, in terms of distinction, I think within what broadly gets called the alignment problem, I think I’d like to distinguish between what I’d call the control problem, then kind of human values alignment problem, and then the actual alignment problem.

Where the control problem is just, can you get this AI to do what you want it to do? Where that’s maybe relatively narrowly construed, I want it to clean up my room, I don’t want it to put my cat in the bin, that’s kinda control problem. I think describing that as a technical problem is kind of broadly correct.

Second is then what gets called aligning AI with human values. For that, it might be the case that just having the AI pay attention to what humans actually do and infer their preferences that are revealed on that basis, maybe that’s a promising approach and so on. And that I think will become increasingly important as AI becomes larger and larger parts of the economy.

This is kind of already what we do when we vote for politicians who represent at least large chunks of the electorate. They hire economists who undertake kind of willingness-to-pay surveys and so on to work out what people want, on average. I do think that this is maybe more normatively loaded than people might often think, but at least you can understand that, just as the control problem is I have some relatively simple goal, which is, what do I want? I want this system to clean my room. How do I ensure that it actually does that without making mistakes that I wasn’t intending? This is kind of broader problem of, well you’ve got a whole society and you’ve got to aggregate their preferences for what kind of society wants and so on.

But I think, importantly, there’s this third thing which I called a minute ago, the actual alignment problem, so let’s run with that. Which is just working out what’s actually right and what’s actually wrong and what ought we to be doing. I do have a worry that because many people in the wider world, often when they start thinking philosophically they start endorsing some relatively simple, subjectivist or relativist views. They might think that answering this question of well, what do humans want, or what do people want, is just the same as answering what ought we to do? Whereas for kind of the reductio of that view, just go back a few hundred years where the question would have been, well, the white man’s alignment problem, where it’s like, “Well, what do we want, society?”, where that means white men.

Lucas: Uh oh.

Will: What do we want them to do? So similarly, unless you’ve got the kind of such a relativist view that you think that maybe that would have been correct back then, that’s why I wanna kind of distinguish this range of problems. And I know that you’re kind of most interested in that third thing, I think. Is that right?

Lucas: Yeah, so I think I’m pretty interested in the second and the third thing, and I just wanna unpack a little bit of your distinction between the first and the second. So, the first was what you called the control problem, and you called the second just the plurality of human values and preferences and the issue of aligning to that in the broader context of the world.

It’s unclear to me, at least, how I get the AI to put a strawberry on the plate, or to clean up my room and not kill my cat, without the second thing having been done.

There is a sense in which, at a very low level, you’re working on technical AI alignment, which involves working on the MIRI approach with agent foundations, and trying to work on constrained optimization and corrigibility and docility and robustness and security and all of those sorts of things that people work on, and the concrete problems in AI safety, stuff like that. But it’s unclear to me where that sort of stuff stops being limited to the control problem and where it begins requiring the system to be able to learn my preferences through interacting with me, and thereby is already participating in the second case, participating in AI alignment more generally, rather than being a low level controlled system.

Will: Yeah, and I should say that on this side of things I’m definitely not an expert, not really the person to be talking to, but I think you’re right. There’s going to be some big gray area or transition between systems. So there’s one that might be cleaning my room, or, let’s just say it’s playing some sort of game. Unfortunately I forget the example; it was in a blog post from OpenAI, an example of the alignment problem in the wild, or something. Just a very simple example of an AI playing a game, and you say, “Well, get as many points as possible.” What you really want it to do is win a certain race, but what it ends up doing is driving this boat round and round in circles, because that’s the way of maximizing the number of points.

Lucas: Reward hacking.

Will: Reward hacking, exactly. That would be a failure of the control problem, the first one in our sense. And then I believe there are going to be gray areas, where perhaps it’s a certain sort of AI system where the whole point is that it’s just implementing what I want. And that might be very contextually determined, might depend on what my mood is that day. That might be a much, much harder problem and will involve studying what I actually do and so on.

We could go into the question of whether you can solve the problem of cleaning a room without killing my cat. Whether that is possible to solve without solving much broader questions, maybe that’s not the most fruitful avenue of discussion.

Lucas: So, let’s put aside this first case which involves the control problem, we’ll call it, and let’s focus on the second and the third, where again the second is defined as sort of the issue of the plurality of human values and preferences which can be observed, and then the third you described as us determining what we ought to do and tackling sort of the metaethics.

Will: Yeah, just tackling the fundamental question of, “Where ought we to be headed as a society?” One just extra thing to add onto that is that’s just a general question for society to be answering. And if there are kind of fast, or even medium-speed, developments in AI, perhaps suddenly we’ve gotta start answering that question, or thinking about that question even harder in a more kind of clean way than we have before. But even if AI were to take a thousand years, we’d still need to answer that question, ’cause it’s just fundamentally the question of, “Where ought we to be heading as a society?”

Lucas: Right, and so going back a little bit to the little taxonomy that I had developed earlier, it seems like your second case would come down to metaethical questions, which sit behind and influence the empirical issues with preference aggregation and there being a plurality of values. And the third case would be what would be arrived at through reason and, I guess, the reason of many different people.

Will: Again, it’s gonna involve questions of metaethics as well, where, on my metaethical view, it would actually just involve interacting with conscious experiences. And that’s a critical aspect of coming to understand what’s morally correct.

Lucas: Okay, so let’s go into the second one first and then let’s go into the third one. And while we do that, it would be great if we could be mindful of problems in intertheoretic comparison and how they arise as we go through both. Does that sound good?

Will: Yeah, that sounds great.

Lucas: So, would you like to just sort of unpack, starting with the second view, the metaethics behind that, issues in how moral realism versus moral anti-realism will affect how the second scenario plays out, and other sorts of crucial considerations in metaethics that will affect the second scenario?

Will: Yeah, so for the second scenario, which again, to be clear, is the aggregating of the variety of human preferences across a variety of contexts and so on, is that right?

Lucas: Right, so that the agent can be fully autonomous and realized in the world that it is sort of an embodiment of human values and preferences, however construed.

Will: Yeah, okay, so here I do think all the metaethics questions are gonna play a lot more of a role in the third question. So again, it’s funny, it’s very similar to what mainstream economists often think they’re doing when it comes to cost-benefit analysis. Let’s even start with the individual case. Even there, it’s not a purely descriptive enterprise. Again, let’s not even talk about AI: you’re just looking out for me. You and I are friends and you want to do me a favor in some way. How do you make a decision about how to do me that favor, how to benefit me in some way? Well, you could just look at the things I do and then infer on the basis of that what my utility function is. So perhaps every morning I go and rob a convenience store and then I buy some heroin and then I shoot up and-

Lucas: Damn, Will!

Will: That’s my day. Yes, it’s a confession. Yeah, you’re the first to hear it.

Lucas: It’s crazy, in Oxford huh?

Will: Yeah, Oxford University is wild.

You see that behavior on my part and you might therefore conclude, “Wow, what Will really likes is heroin. I’m gonna do him a favor and buy him some heroin.” Now, that seems commonsensically pretty ridiculous, assuming I’m demonstrating all sorts of behavior that looks like it’s very bad for me, that looks like a compulsion and so on. So instead, what we’re really doing is not merely maximizing the utility function that’s given by my revealed preferences; we have some deeper idea of what’s good for me or what’s bad for me.

Perhaps that comes down to just what I would want to want, or what I would want myself to want to want. Perhaps you can do it in terms of what are called second-order or third-order preferences: what idealized Will would want. But that is not totally clear. Firstly, it’s really hard to know what idealized Will would want; you’re gonna have to start doing at least a little bit of philosophy there. Because I tend to favor hedonism, I think that an idealized version of my friend would want the best possible experiences. That might be very different from what they think an idealized version of themselves would want, because perhaps they have some objective list account of well-being, and they think that what they would also want is knowledge for its own sake and appreciating beauty for its own sake and so on.

So even there, I think you’re gonna get into pretty tricky questions about what is good or bad for someone. And then after that you’ve got the question of preference aggregation, which is also really hard, both in theory and in practice. Do you just take strengths of preferences across absolutely everybody and then add them up? Well, firstly you might worry that you can’t actually make these comparisons of strengths of preferences between people. Certainly if you’re just looking at people’s revealed preferences, it’s really opaque how you would say, if I prefer coffee to tea and you vice versa, who has the stronger preference. Perhaps we could look at behavioral facts to try and at least anchor that, but it’s still non-obvious that what we ought to do when we’re looking at everybody’s preferences is just maximize the sum, rather than perhaps give some extra weighting to people who are more badly off, give more priority to their interests. So those are the theoretical issues.

And then secondly, there are just practical issues of implementing that, where you actually need to ensure that people aren’t faking their preferences. And there’s a well known literature in voting theory that says that basically any aggregation system you have, any voting system, is going to be manipulable in some way: you’re gonna be able to get a better result for yourself, at least in some circumstances, by misrepresenting what you really want.

Again, these are kind of issues that our society already faces, but they’re gonna bite even harder when we’re thinking about delegating to artificial agents.
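As a purely illustrative aside, here is a minimal Python sketch of the two aggregation rules being contrasted here: a straight utilitarian sum of preference strengths versus a rule that gives extra weight to whoever is worse off. The names, the toy numbers, and the square-root weighting are assumptions chosen only for illustration, not anything proposed in the conversation.

```python
# Toy sketch (assumed numbers): utilitarian summation vs. a prioritarian
# rule that gives extra weight to the worse off.
import math

# How well off each person would be under each of two candidate policies.
wellbeing = {
    "alice": {"policy_a": 10, "policy_b": 4},
    "bob":   {"policy_a": 1,  "policy_b": 6},
}

def utilitarian(option):
    # Simple sum of everyone's wellbeing under the option.
    return sum(person[option] for person in wellbeing.values())

def prioritarian(option):
    # A concave transform (square root) makes gains to the badly off
    # count for more than equal gains to the well off.
    return sum(math.sqrt(person[option]) for person in wellbeing.values())

for option in ("policy_a", "policy_b"):
    print(option, utilitarian(option), round(prioritarian(option), 2))

# The utilitarian sum favors policy_a (11 vs 10), while the prioritarian
# rule favors policy_b (about 4.45 vs 4.16) because it boosts Bob's claim.
```

Neither rule settles the interpersonal-comparison worry Will raises (the numbers simply assume the comparisons can be made), and, as he notes, any such scheme can in principle be gamed by people misreporting how strong their preferences are.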

Lucas: There’s two levels to this that you’re elucidating. The first is that you can think of the AGI as something which can do favors for everybody in humanity, so there are issues, empirically and philosophically and in terms of understanding other agents, about what sort of preferences that AGI should be maximizing for each individual, say constrained by what is legal and what is generally converged upon as being good or right. And then there are issues with preference aggregation, which come up more given that we live in a resource-limited universe and world, where not all preferences can coexist and there has to be some sort of cancellation between different views.

And so, in terms of this higher level of preference aggregation, I wanna step back here to metaethics and difficulties of intertheoretic comparison. It would seem that your moral realist view would affect how the weighting would potentially be done. Because it seemed like before you were alluding to the fact that if your moral realist view were true, then the way we could determine what we ought to do, or what is good and true about morality, would be through exploring the space of all possible experiences, right, so we can discover moral facts about experiences.

Will: Mm-hmm (affirmative).

Lucas: And then in terms of preference aggregation, there would be people who would be right or wrong about what is good for them or the world.

Will: Yeah, I guess this is, again why I wanna distinguish between these two types of value alignment problem, where on the second type, which is just kind of, “What does society want?” Societal preference aggregation. I wasn’t thinking of it as there being kind of right or wrong preferences.

In just the same way as there’s this question of, “I want the system to do X,” but there’s a question of, “Do I want that?” or “How do you know that I want that?”, there’s a question of, “How do you know what society wants?” That’s a question in its own right that’s separate from that third alignment issue I was raising, which then starts to bake in: well, if people have various moral preferences, views about how the world ought to be, some are right and some are wrong. And in a way you shouldn’t just do some aggregation over all those different views, because ideally you should give no weight to the ones that are wrong, and if any are true, they get all the weight. It’s not really about preference aggregation in that way.

Though, if you think about it as everyone is making certain sort of guess at the moral truth, then you could think of that like a kind of judgment aggregation problem. So, it might be like data or input for your kind of moral reasoning.

Lucas: I think I was just sort of conceptually slicing this a tiny bit different from you. But that’s okay.

So, staying on this second view, it seems like there’s obviously going to be a lot of empirical issues and issues in understanding persons and idealized versions of themselves. Before we get in to intertheoretic comparison issues here, what is your view on coherent extrapolated volition, sort of, being the answer to this second part?

Will: I don’t really know that much about it. From what I do know, it always seemed under-defined. As I understand it, the key idea is just, you take everyone’s idealized preferences in some sense, and then I think what you do is just take a sum of what everyone’s preference is. I’m personally quite in favor of the summation strategy. I think we can make interpersonal comparisons of strengths of preferences, and I think summing people’s preferences is the right approach.

We can use certain kinds of arguments that also have application in moral philosophy, like the idea of, “If you didn’t know who you were going to be in society, how would you want to structure things?” And if you’re a rational, self-interested agent maximizing expected utility, then you’ll arrive at the utilitarian aggregation function, so you’ll maximize the sum of preference strengths.
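As a hedged aside, the structure of that argument (essentially Harsanyi’s veil-of-ignorance argument in miniature) can be written in one line; the uniform chance of being each person is the standard assumption behind “not knowing who you’ll be,” added here for illustration rather than taken from the episode.

```latex
% With a uniform 1/n chance of being each of the n people, a rational
% expected-utility maximizer ranks outcomes x by
\[
  \mathbb{E}[u \mid x] = \sum_{i=1}^{n} \frac{1}{n}\, u_i(x)
  = \frac{1}{n} \sum_{i=1}^{n} u_i(x),
\]
% which induces the same ranking as the utilitarian sum \(\sum_i u_i(x)\).
```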

But then, if we’re doing this idealized preference thing, all the devil’s going to be in the details of, “Well, how are you doing this idealization?” Because my preferences, for what they are, are certainly incomplete, they’re almost certainly cyclical, and who knows, maybe there are even some preferences I have that are areflexive as well, and probably contradictory. So there are questions about what it means to idealize, and that’s where a lot of the work is, I think.

Lucas: So I guess, just two things here. What is the timeline, and the actual real-world working relationship, between the second problem that you’ve identified and the third problem that you’ve identified? And what is the role and work that preferences are doing here for you within the context of AI alignment, given that you’re sort of partial to a form of hedonistic consequentialism?

Will: Okay, terrific, ’cause this is kind of important framing.

In terms of answering this alignment problem, the deep one of just where societies ought to be going, I think the key thing is to punt on it. The key thing is to get us to a position where we can think about and reflect on this question, and really for a very long time, so I call this the long reflection. Perhaps it’s a period of a million years or something. We’ve got a lot of time on our hands; time really isn’t the scarce commodity here. So there are various stages to getting into that state.

The first is to reduce extinction risks down basically to zero, put us a position of kind of existential security. The second then is to start developing a society where we can reflect as much as possible and keep as many options open as possible.

Something that wouldn’t be keeping a lot of options open would be, say we’ve solved what I call the control problem, we’ve got these kind of lapdog AIs that are running the economy for us, and we just say, “Well, these are so smart, what we’re gonna do is just tell it, ‘Figure out what’s right and then do that.'” That would really not be keeping our options open. Even though I’m sympathetic to moral realism and so on, I think that would be quite a reckless thing to do.

Instead, what we want is to have gotten to this position of real security. Maybe also along the way we’ve fixed the various particularly bad problems of the present, poverty and so on, and now what we want to do is just keep our options open as much as possible and then gradually work on improving our moral understanding, where that’s supplemented by AI systems.

I think there’s tons of work that I’d love to see developing how this would actually work, but I think the best approach would be to get the artificially intelligent agents to be doing moral philosophy, giving us arguments, perhaps creating new moral experiences that they think could be informative and so on, but letting the actual decision-making or judgments about what is right and wrong be left up to us. Or at least have some kind of graduated thing where we gradually transition the decision-making more and more from human agents to artificial agents, and maybe that’s over a very long time period.

What I think of as the control problem and that second-level alignment problem, those are issues you face when you’re just addressing the question of, “Okay, well, we’re now gonna have an AI-run economy,” but you’re not yet needing to address the question of what’s actually right or wrong. And then my main thing there is just that we should get ourselves into a position where we can take as long as we need to answer that question and have as many options open as possible.

Lucas: I guess here given moral uncertainty and other issues, we would also want to factor in issues with astronomical waste into how long we should wait?

Will: Yeah. That’s definitely informing my view, where it’s at least plausible that morality has an aggregative component, and if so, the sheer vastness of the future really comes into play, because we’ve got half a billion to a billion years left on Earth, a hundred trillion years before the stars burn out, and then, I always forget these numbers, but I think something like a hundred billion stars in the Milky Way and ten trillion galaxies.

With just vast resources at our disposal, the future could be astronomically good. It could also be astronomically bad. What we want to ensure is that we get to the good outcome, and given the time scales involved, even what seems like an incredibly long delay, like a million years, is actually just very little time indeed.

Lucas: In half a second I want to jump into whether or not this is actually likely to happen, given race dynamics and that human beings are kind of crazy. The sort of timeline here is that we’re solving the technical control problem up until and on our way to AGI and what might be superintelligence, and then we are also idealizing everyone’s values and lives in a way such that they have more information, they can think more, they have more free time, and they become idealized versions of themselves, given constraints where values cancel each other out and things we might end up deeming impermissible.

After that is where this period of long reflection takes place, and the dynamics and mechanics of that seem to be open questions. It seems that first comes computer science and global governance and coordination and strategy issues, and then comes a long period of philosophy.

Will: Yeah, then comes the million years of philosophy, so I guess it’s not very surprising that a philosopher would suggest this. The dynamics of the setup are an interesting question, and a super important one.

One thing you could do is just say, “Well, we’ve got ten billion people alive today, let’s say. We’re gonna divide the universe into ten billionths, so maybe that’s a thousand galaxies each or something.” And then you can trade after that point. I think that would get a pretty good outcome. There’s questions of whether you can enforce it or not into the future. There’s some arguments that you can. But maybe that’s not the optimal process, because especially if you think that “Wow! Maybe there’s actually some answer, something that is correct,” well, maybe a lot of people miss that.

I actually think if we did that and there is some correct moral view, then I would hope that incredibly well informed people who have this vast amount of time, and perhaps intellectually augmented people and so on, would converge on that answer, and if they didn’t, then that would make me more suspicious of the idea that there is a real fact of the matter. But it’s still early days; we’d really want to think a lot about what goes into the setup of that kind of long reflection.

Lucas: Given this account that you’ve just given about how this should play out in the long term or what it might look like, what is the actual probability do you think that this will happen given the way that the world actually is today and it’s just the game theoretic forces at work?

Will: I think I’m going to be very hard pressed to give a probability. I don’t think I know even what my subjective credence is. But speaking qualitatively, I’d think it would be very unlikely that this is how it would play out.

Again, I’m like Brian and Dave in that I think if you look at just history, I do think moral forces have some influence. I wouldn’t say they’re the largest influence. I think probably randomness explains a huge amount of history, especially when you think about how certain events are just very determined by actions of individuals. Economic forces, technological forces, and environmental changes are huge as well. It is hard to think that it’s likely that such a well-orchestrated dynamic would occur. But I do think it’s possible, and I think we can increase the chance of it happening through the kind of careful actions that people like FLI are taking at the moment.

Lucas: That seems like the ideal scenario, absolutely, but I’m also worried that people don’t like to listen to moral philosophers, or that potentially selfish government forces and the like will end up taking over and controlling things, which would be kind of sad for the cosmic endowment.

Will: That’s exactly right. If there was some hard takeoff and sudden leap to artificial general intelligence, which I think is relatively unlikely but is possible, I think that’s probably the most scary scenario, because it means that a huge amount of power is suddenly in the hands of a very small number of people. You could end up with the very long-run future of humanity being determined by the idiosyncratic preferences of just a small number of people, so a lot would depend on whether those people’s preferences are good or bad.

With a slower takeoff, where there are many decades of AGI development and it gradually gets incorporated into the economy, I think there’s somewhat more hope. Society will be a lot more prepared, and it’s less likely that something very bad will happen. But my default presumption is that something very carefully coordinated among multiple nations and billions of people is just not going to happen. We have managed to do things that have involved international cooperation and amazing levels of operational expertise and coordination in the past; I think the eradication of smallpox is perhaps a good example of that. But it’s something that we don’t see very often, at least not now.

Lucas: It looks like we need a Peter Singer of AI safety, or some other philosopher who has a tremendous impact on politics and society, to spread this sort of vision throughout the world so that it’s more likely to be realized. Is that perhaps the most likely path?

Will: Yeah. I think if a wide number of the political leaders, even if just political leaders of US, China, Russia, all were on board with global coordination on the issue of AI, or again, whatever other transformative technology might really upend things in the 21st century, and were on board with “How important it is that we get to this kind of period of long reflection where we can really figure out where we’re going,” then that alone would be very promising.

Then the question of just how promising that is depends a lot on the robustness of … Even if you’re a moral realist, there’s a question of how likely it is that people will get to the correct moral view. It could be the case that it’s this kind of strong attractor, where even if you’ve got nothing as clean-cut as the long reflection I was describing, and instead some really messy process, perhaps various wars and something that looks like feudal society, where anyone would say that civilization looks pretty chaotic, maybe even given that, with enough time and enough reasoning power, people would still converge on the same moral view.

I’m probably not as optimistic as that, but it’s at least a view that you could hold.

Lucas: In terms of the different factors that go into the AI alignment problem and the different levels you’ve identified, first, second, and third, which level do you think is lacking the most resources and attention right now? Are you most worried about the control problem, that first level? Or are you more worried about global coordination and governance at the second level, or moral philosophy at the third?

Will: Again, flagging that I’m sure I’m biased on this, but I’m currently by far the most worried about the third level. That’s for a couple of reasons. One is that I think the vast majority of the world are simple subjectivists or relativists, so the idea that we ought to be engaging in real moral thinking about where we go as a society and how we use our cosmic endowment, as you put it, my strong default is that that question just never even really gets raised.

Lucas: You don’t think most people are theological moral realists?

Will: Yeah. I guess it’s true that I’m just thinking about-

Lucas: Our bubble?

Will: My bubble, yeah. Well educated westerners. Most people in the world at least would say they’re theological moral realists. One thought is just that my default is that some sort of relativism will hold sway and people will just not really pay enough attention to think about what they ought to do. A second relevant thought is that I think the best possible universe is plausibly really, really good, like astronomically better than alternative extremely good universes.

Lucas: Absolutely.

Will: It’s also the case that even slight differences in moral view might lead you to optimize for extremely different things. Take the toy example of preference utilitarianism versus hedonistic utilitarianism, which you might think of as two very similar views. In the actual world there’s not that much difference between them, because we roughly know what makes people better off: improving their conscious experiences, which is also generally what they want. But when you’re technologically unconstrained, it’s plausible to me that the optimal configuration of things will look really quite different between those two views. I guess I think the default is that we get it very badly wrong, and it will require really sustained work to ensure we get it right, if it’s the case that there is a right answer.

Lucas: Is there anything with regards to issues in intertheoretic comparisons, or anything like that at any one of the three levels which we’ve discussed today that you feel we haven’t sufficiently covered or something that you would just like to talk about?

Will: Yeah. I know that one of your listeners was asking whether I thought they were solvable even in principle, by some superintelligence, and I think they are. I think they are if other issues in moral philosophy are solvable. I think that’s particularly hard, but I think ethics in general is very hard.

I also think it is the case that whatever output we have at the end of this kind of long deliberation, again it’s unlikely we’ll get to credence 1 in a particular view, so we’ll have some distribution over different views, and we’ll want to take that into account. Maybe that means we do some kind of compromise action.

Maybe that means we just distribute our resources in proportion with our credence in different moral views. That’s again one of these really hard questions that we’ll want if at all possible to punt on and leave to people who can think about this in much more depth.

Then in terms of aggregating societal preferences, that’s more like the problem of interpersonal comparisons of preference strength, which is formally somewhat isomorphic but is at least a distinct issue.

Lucas: The second and third levels are where the intertheoretic problems are really going to arise, and at that second level, where the AGI is potentially working to idealize our values, there’s again the open question of whether, in the real world, there will be moral philosophers at the table, in politics, or among whoever has control over the AGI at that point, to work on and think more deeply about intertheoretic comparisons of value at that level and timescale. So I’m thinking a little more about what we ought to do, or what we should do realistically, given the likelihood that this sort of thinking will or will not be at the table.

Will: My default is just that the crucial thing is to ensure that this thinking is more likely to be at the table. I think it is important to think about “Well, what ought we to do then?” if we think it’s very likely that things go badly wrong. Maybe it’s not the case that we should just be aiming to push for the optimal thing, but for some kind of second-best strategy.

I think at the moment we should just be trying to push for the optimal thing. In particular, that’s in part because of my view that an optimal universe is just so much better than even an extremely good one, so I think we ought to be really trying to maximize the chance that we can figure out what the right answer is and then implement it. But it would be interesting to think about it more.

Lucas: For sure. I guess just wrapping up here, did you ever have the chance to look at those two LessWrong posts by Worley?

Will: Yeah, I did.

Lucas: Did you have any thoughts or comments on them? If people are interested you can find links in the description.

Will: I read the posts, and I was very sympathetic in general to what he was thinking through. In particular the principle of philosophical conservatism. Hopefully I’ve shown that I’m very sympathetic to that, so trying to think “What are the minimal assumptions? Would this system be safe? Would this path make sense on a very, very wide array of different philosophical views?” I think the approach I’ve suggested, which is keeping our options open as much as possible and punting on the really hard questions, does satisfy that.

I think one of his posts was talking about “Should we assume moral realism or assume moral antirealism?” It seems like our views differed a little bit there, where I’m more worried that everyone’s going to assume some sort of subjectivism and relativism, and that there might be some moral truth out there that we’re missing and we never think to find it, because we decide that what we’re interested in is maximizing X, so we program agents to build X and then just go ahead with it, whereas actually the thing that we ought to have been optimizing for is Y. But broadly speaking, I think this question of trying to be as ecumenical as possible, philosophically speaking, makes a lot of sense.

Lucas: Wonderful. Well, it’s really been a joy speaking, Will. Always a pleasure. Is there anything that you’d like to wrap up on, anywhere people can follow you or check you out on social media or anywhere else?

Will: Yeah. You can follow me on Twitter @WillMacAskill, and if you want to read more of my work you can find me at williammacaskill.com.

Lucas: To be continued. Thanks again, Will. It’s really been wonderful.

Will: Thanks so much, Lucas.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

[end of recorded material]

Making AI Safe in an Unpredictable World: An Interview with Thomas G. Dietterich

Our AI systems work remarkably well in closed worlds. That’s because these environments contain a set number of variables, making the worlds perfectly known and perfectly predictable. In these micro environments, machines only encounter objects that are familiar to them. As a result, they always know how they should act and respond. Unfortunately, these same systems quickly become confused when they are deployed in the real world, as many objects aren’t familiar to them. This is a bit of a problem because, when an AI system becomes confused, the results can be deadly.

Consider, for example, a self-driving car that encounters a novel object. Should it speed up, or should it slow down? Or consider an autonomous weapon system that sees an anomaly. Should it attack, or should it power down? Each of these examples involves life-and-death decisions, and they reveal why, if we are to deploy advanced AI systems in real-world environments, we must be confident that they will behave correctly when they encounter unfamiliar objects.

Thomas G. Dietterich, Emeritus Professor of Computer Science at Oregon State University, explains that solving this identification problem begins with ensuring that our AI systems aren’t too confident — that they recognize when they encounter a foreign object and don’t misidentify it as something that they are acquainted with. To achieve this, Dietterich asserts that we must move away from (or, at least, greatly modify) the discriminative training methods that currently dominate AI research.

However, to do that, we must first address the “open category problem.”

 

Understanding the Open Category Problem

When driving down the road, we can encounter a near infinite number of anomalies. Perhaps a violent storm will arise, and hail will start to fall. Perhaps our vision will become impeded by smoke or excessive fog. Although these encounters may be unexpected, the human brain is able to easily analyze new information and decide on the appropriate course of action — we will recognize a newspaper drifting across the road and, instead of abruptly slamming on the brakes, continue on our way.

Because of the way that they are programmed, our computer systems aren’t able to do the same.

“The way we use machine learning to create AI systems and software these days generally uses something called ‘discriminative training,’” Dietterich explains, “which implicitly assumes that the world consists of only, say, a thousand different kinds of objects.” This means that, if a machine encounters a novel object, it will assume that it must be one of the thousand things that it was trained on. As a result, such systems misclassify all foreign objects.

This is the “open category problem” that Dietterich and his team are attempting to solve. Specifically, they are trying to ensure that our machines don’t assume that they have encountered every possible object, but are, instead, able to reliably detect — and ultimately respond to — new categories of alien objects.

Dietterich notes that, from a practical standpoint, this means creating an anomaly detection algorithm that assigns an anomaly score to each object detected by the AI system. That score must be compared against a set threshold and, if the anomaly score exceeds the threshold, the system will need to raise an alarm. Dietterich states that, in response to this alarm, the AI system should take a pre-determined safety action. For example, a self-driving car that detects an anomaly might slow down and pull off to the side of the road.
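
To make that mechanism concrete, here is a minimal, hypothetical Python sketch of the pattern described above: each detected object receives an anomaly score, the score is compared against a preset threshold, and crossing the threshold triggers a pre-determined safety action. The function names, scoring model, and safety action are illustrative assumptions, not the team's actual implementation.

```python
# Illustrative sketch only; the score function, threshold, and safety action
# are placeholders, not the actual system described in the interview.
from typing import Any, Callable, Iterable

def monitor(objects: Iterable[Any],
            anomaly_score: Callable[[Any], float],
            threshold: float,
            safety_action: Callable[[], None],
            handle_normally: Callable[[Any], None]) -> None:
    """Flag any object whose anomaly score exceeds the alarm threshold."""
    for obj in objects:
        score = anomaly_score(obj)      # e.g., output of an anomaly detection model
        if score > threshold:           # object does not resemble any known category
            safety_action()             # e.g., slow down and pull over
        else:
            handle_normally(obj)        # proceed with the usual classification pipeline
```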

 

Creating a Theoretical Guarantee of Safety

There are two challenges to making this method work. First, Dietterich asserts that we need good anomaly detection algorithms. Previously, in order to determine what algorithms work well, the team compared the performance of eight state-of-the-art anomaly detection algorithms on a large collection of benchmark problems.

The second challenge is to set the alarm threshold so that the AI system is guaranteed to detect a desired fraction of the alien objects, such as 99%. Dietterich says that formulating a reliable setting for this threshold is one of the most challenging research problems because there are, potentially, infinite kinds of alien objects. “The problem is that we can’t have labeled training data for all of the aliens. If we had such data, we would simply train the discriminative classifier on that labeled data,” Dietterich says.

To circumvent this labeling issue, the team assumes that the discriminative classifier has access to a representative sample of “query objects” that reflect the larger statistical population. Such a sample could, for example, be obtained by collecting data from cars driving on highways around the world. This sample will include some fraction of unknown objects, and the remaining objects belong to known object categories.

Notably, the data in the sample is not labeled. Instead, the AI system is given an estimate of the fraction of aliens in the sample. And by combining the information in the sample with the labeled training data that was employed to train the discriminative classifier, the team’s new algorithm can choose a good alarm threshold. If the estimated fraction of aliens is known to be an over-estimate of the true fraction, then the chosen threshold is guaranteed to detect the target percentage of aliens (i.e. 99%).
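
One rough way to picture how such a threshold might be chosen is sketched below. It compares the empirical distribution of anomaly scores on the clean training data with the distribution on the unlabeled mixture sample, backs out an estimate of how the aliens score, and picks the largest threshold that still flags the target fraction of them. This is only a simplified illustration of the idea; the actual algorithm adds finite-sample corrections to obtain its probabilistic guarantee, and the variable names here are assumptions.

```python
# Simplified illustration; not the published algorithm. Assumes higher
# anomaly scores indicate more alien-looking objects.
import numpy as np

def choose_alarm_threshold(clean_scores, mixture_scores,
                           alien_fraction, target_detection=0.99):
    clean = np.sort(np.asarray(clean_scores, dtype=float))
    mix = np.sort(np.asarray(mixture_scores, dtype=float))

    def ecdf(sorted_vals, t):
        # Fraction of values less than or equal to t.
        return np.searchsorted(sorted_vals, t, side="right") / len(sorted_vals)

    valid = []
    for t in np.unique(mix):
        # Mixture CDF = (1 - a) * clean CDF + a * alien CDF, so solve for the
        # estimated fraction of aliens scoring at or below threshold t.
        est_alien_cdf = (ecdf(mix, t)
                         - (1 - alien_fraction) * ecdf(clean, t)) / alien_fraction
        if est_alien_cdf <= 1 - target_detection:
            valid.append(t)

    # Largest threshold that still (by this estimate) catches the target
    # fraction of aliens; fall back to alarming on everything if none works.
    return max(valid) if valid else float("-inf")
```

Intuitively, when aliens tend to score higher than familiar objects, over-estimating the alien fraction makes the estimated alien CDF err on the high side, which pushes the chosen threshold down and keeps the detection rate at or above the target.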

Ultimately, the above is the first method that can give a theoretical guarantee of safety for detecting alien objects, and a paper reporting the results was presented at ICML 2018. “We are able to guarantee, with high probability, that we can find 99% of all of these new objects,” Dietterich says.

In the next stage of their research, Dietterich and his team plan to begin testing their algorithm in a more complex setting. Thus far, they’ve been looking primarily at classification, where the system looks at an image and classifies it. Next, they plan to move to controlling an agent, like a robot or self-driving car. “At each point in time, in order to decide what action to choose, our system will do a ‘look ahead search’ based on a learned model of the behavior of the agent and its environment. If the look ahead arrives at a state that is rated as ‘alien’ by our method, then this indicates that the agent is about to enter a part of the state space where it is not competent to choose correct actions,” Dietterich says. In response, as previously mentioned, the agent should execute a series of safety actions and request human assistance.
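
In spirit, that look-ahead check could look something like the toy sketch below: before committing to an action, roll a learned dynamics model forward a few steps and reject any action whose predicted states score as alien; if no action passes, fall back to the safety action. The model interface, horizon, and utility function are all hypothetical, not the project's actual code.

```python
# Toy sketch of look-ahead with an alien-state check; the model API,
# utility function, and horizon are illustrative assumptions.
def choose_action(state, candidate_actions, model, anomaly_score,
                  threshold, utility, safety_action, horizon=3):
    best_action, best_value = safety_action, float("-inf")
    for action in candidate_actions:
        s, competent = state, True
        for _ in range(horizon):              # short rollout of the same action
            s = model.predict(s, action)      # learned model of agent + environment
            if anomaly_score(s) > threshold:  # predicted state rated "alien"
                competent = False             # agent may not act correctly here
                break
        if competent and utility(s) > best_value:
            best_action, best_value = action, utility(s)
    return best_action  # falls back to the safety action if nothing looks safe
```

In the interview's framing, hitting an alien state during such a rollout is also the point at which the agent would request human assistance rather than trusting its own policy.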

But what does this safety action actually consist of?

 

Responding to Aliens

Dietterich notes that, once something is identified as an anomaly and the alarm is sounded, the nature of this fall back system will depend on the machine in question, like whether the AI system is in a self-driving car or autonomous weapon.

To explain how these secondary systems operate, Dietterich turns to self-driving cars. “In the Google car, if the computers lose power, then there’s a backup system that automatically slows the car down and pulls it over to the side of the road.” However, Dietterich clarifies that stopping isn’t always the best course of action. One may assume that a car should come to a halt if an unidentified object crosses its path; however, if the unidentified object happens to be a blanket of snow on a particularly icy day, hitting the brakes gets more complicated. The system would need to factor in the icy roads, any cars that may be driving behind, and whether those cars can brake in time to avoid a rear-end collision.

But if we can’t predict every eventuality, how can we expect to program an AI system so that it behaves correctly and in a way that is safe?

Unfortunately, there’s no easy answer; however, Dietterich clarifies that there are some general best practices: “There’s no universal solution to the safety problem, but obviously there are some actions that are safer than others. Generally speaking, removing energy from the system is a good idea,” he says. Ultimately, Dietterich asserts that all the work related to programming safe AI boils down to determining how we want our machines to behave under specific scenarios, and he argues that we need to rearticulate how we characterize this problem, and focus on accounting for all the relevant factors, if we are to develop a sound approach.

Dietterich notes that “when we look at these problems, they tend to get lumped under a classification of ‘ethical decision making,’ but what they really are is problems that are incredibly complex. They depend tremendously on the context in which they are operating, the human beings, the other innovations, the other automated systems, and so on. The challenge is correctly describing how we want the system to behave and then ensuring that our implementations actually comply with those requirements.” And he concludes, “the big risk in the future of AI is the same as the big risk in any software system, which is that we build the wrong system, and so it does the wrong thing. Arthur C. Clarke in 2001: A Space Odyssey had it exactly right. The HAL 9000 didn’t ‘go rogue;’ it was just doing what it had been programmed to do.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

European Parliament Passes Resolution Supporting a Ban on Killer Robots


The European Parliament passed a resolution on September 12, 2018 calling for an international ban on lethal autonomous weapons systems (LAWS). The resolution was adopted with 82% of the members voting in favor of it.

Among other things, the resolution calls on its Member States and the European Council “to develop and adopt, as a matter of urgency … a common position on lethal autonomous weapon systems that ensures meaningful human control over the critical functions of weapon systems, including during deployment.”

The resolution also urges Member States and the European Council “to work towards the start of international negotiations on a legally binding instrument prohibiting lethal autonomous weapons systems.”

This call for urgency comes shortly after recent United Nations talks where countries were unable to reach a consensus about whether or not to consider a ban on LAWS. Many hope that statements such as this from leading government bodies could help sway the handful of countries still holding out against banning LAWS.

Daan Kayser of PAX, one of the NGO members of the Campaign to Stop Killer Robots, said, “The voice of the European parliament is important in the international debate. At the UN talks in Geneva this past August it was clear that most European countries see the need for concrete measures. A European parliament resolution will add to the momentum toward the next step.”

The countries that took the strongest stances against a LAWS ban at the recent UN meeting were the United States, Russia, South Korea, and Israel.

 

Scientists’ Voices Are Heard

Also mentioned in the resolution were the many open letters signed by AI researchers and scientists from around the world, who are calling on the UN to negotiate a ban on LAWS.

Two sections of the resolution stated:

“having regard to the open letter of July 2015 signed by over 3,000 artificial intelligence and robotics researchers and that of 21 August 2017 signed by 116 founders of leading robotics and artificial intelligence companies warning about lethal autonomous weapon systems, and the letter by 240 tech organisations and 3,089 individuals pledging never to develop, produce or use lethal autonomous weapon systems,” and

“whereas in August 2017, 116 founders of leading international robotics and artificial intelligence companies sent an open letter to the UN calling on governments to ‘prevent an arms race in these weapons’ and ‘to avoid the destabilising effects of these technologies.’”

Toby Walsh, a prominent AI researcher who helped create the letters, said, “It’s great to see politicians listening to scientists and engineers. Starting in 2015, we’ve been speaking loudly about the risks posed by lethal autonomous weapons. The European Parliament has joined the calls for regulation. The challenge now is for the United Nations to respond. We have several years of talks at the UN without much to show. We cannot let a few nations hold the world hostage, to start an arms race with technologies that will destabilize the current delicate world order and that many find repugnant.”

The Risks Posed By Lethal Autonomous Weapons

The following article was originally posted on Metro.

Killer robots. It’s a phrase that’s terrifying, but also one that most people think of as still in the realm of science fiction. Yet weapons built with artificial intelligence (AI) – weapons that could identify, target, and kill a person all on their own – are quickly moving from sci-fi to reality.

To date, no weapons exist that can specifically target people. But there are weapons that can track incoming missiles or locate enemy radar signals, and these weapons can autonomously strike these non-human threats without any person involved in the final decision. Experts predict that in just a few years, if not sooner, this technology will be advanced enough to use against people.

Over the last few years, delegates at the United Nations have debated whether to consider banning killer robots, more formally known as lethal autonomous weapons systems (LAWS). This week delegates met again to consider whether more meetings next year could lead to something more tangible – a political declaration or an outright ban.

Meanwhile, those who would actually be responsible for designing LAWS — the AI and robotics researchers and developers — have spent these years calling on the UN to negotiate a treaty banning LAWS. More specifically, nearly 4,000 AI and robotics researchers called for a ban on LAWS in 2015; in 2017, 137 CEOs of AI companies asked the UN to ban LAWS; and in 2018, 240 AI-related organizations and nearly 3,100 individuals took that call a step further and pledged not to be involved in LAWS development.

And AI researchers have plenty of reasons for their consensus that the world should seek a ban on lethal autonomous weapons. Principal among these is that AI experts tend to recognize how dangerous and destabilizing these weapons could be.

The weapons could be hacked. The weapons could fall into the hands of “bad actors.” The weapons may not be as “smart” as we think and could unwittingly target innocent civilians. Because the materials necessary to build the weapons are cheap and easy to obtain, military powers could mass-produce these weapons, increasing the likelihood of proliferation and mass killings. The weapons could enable assassinations or, alternatively, they could become weapons of oppression, allowing dictators and warlords to subdue their people.

But perhaps the greatest risk posed by LAWS is their potential to ignite a global AI arms race.

For now, governments insist they will ensure that testing, validation, and verification of these weapons is mandatory. However, these weapons are not only technologically novel but also transformative; they have been described as the third revolution in warfare, following gunpowder and nuclear weapons. LAWS have the potential to become the most powerful types of weapons the world has seen.

Varying degrees of autonomy already exist in weapon systems around the world, and levels of autonomy and advanced AI capabilities in weapons are increasing rapidly. If one country were to begin substantial development of a LAWS program — or even if the program is simply perceived as substantial by other countries — an AI arms race would likely be imminent.

During an arms race, countries and AI labs will feel increasing pressure to find shortcuts around safety precautions. Once that happens, every threat mentioned above becomes even more likely, if not inevitable.

As stated in the Open Letter Against Lethal Autonomous Weapons:

The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

Most countries here have expressed their strong desire to move from talking about this topic to reaching an outcome. There have been many calls from countries and groups of countries to negotiate a new treaty to either prohibit LAWS and/or affirm meaningful human control over the weapons. Some countries have suggested other measures such as a political declaration. But a few countries – especially Russia, the United States, South Korea, Israel, and Australia – are obfuscating the process, which could lead us closer to an arms race.

This is a threat we must prevent.

State of California Endorses Asilomar AI Principles


On August 30, the State of California unanimously adopted legislation in support of the Future of Life Institute’s Asilomar AI Principles.

The Asilomar AI Principles are a set of 23 principles intended to promote the safe and beneficial development of artificial intelligence. The principles – which include research issues, ethics and values, and longer-term issues – emerged from a collaboration between AI researchers, economists, legal scholars, ethicists, and philosophers in Asilomar, California in January of 2017.

The Principles are the most widely adopted effort of their kind. They have been endorsed by AI research leaders at Google DeepMind, Google Brain, Facebook, Apple, and OpenAI. Signatories include Demis Hassabis, Yoshua Bengio, Elon Musk, Ray Kurzweil, the late Stephen Hawking, Tasha McCauley, Joseph Gordon-Levitt, Jeff Dean, Tom Gruber, Anthony Romero, Stuart Russell, and more than 3,800 other AI researchers and experts.

With ACR 215 passing the State Senate with unanimous support, the California Legislature has now been added to that list.

Assemblyman Kevin Kiley, who led the effort, said, “By endorsing the Asilomar Principles, the State Legislature joins in the recognition of shared values that can be applied to AI research, development, and long-term planning — helping to reinforce California’s competitive edge in the field of artificial intelligence, while assuring that its benefits are manifold and widespread.”

The third Asilomar AI principle indicates the importance of constructive and healthy exchange between AI researchers and policymakers, and the passing of this resolution highlights the value of that endeavor. While the principles do not establish enforceable policies or regulations, the action taken by the California Legislature is an important and historic show of support across sectors towards a common goal of enabling safe and beneficial AI.

The Future of Life Institute (FLI), the nonprofit organization that led the creation of the Asilomar AI Principles, is thrilled by this latest development, and encouraged that the principles continue to serve as guiding values for the development of AI and related public policy.

“By endorsing the Asilomar AI Principles, California has taken a historic step towards the advancement of beneficial AI and highlighted its leadership of this transformative technology,” said Anthony Aguirre, cofounder of FLI and physics professor at the University of California, Santa Cruz. “We are grateful to Assemblyman Kevin Kiley for leading the charge and to the dozens of co-authors of this resolution for their foresight on this critical matter.”

Profound societal impacts of AI are no longer merely a question of science fiction, but are already being realized today – from facial recognition technology, to drone surveillance, and the spread of targeted disinformation campaigns. Advances in AI are helping to connect people around the world, improve productivity and efficiencies, and uncover novel insights. However, AI may also pose safety and security threats, exacerbate inequality, and constrain privacy and autonomy.

“New norms are needed for AI that counteract dangerous race dynamics and instead center on trust, security, and the common good,” says Jessica Cussins, AI Policy Lead for FLI. “Having the official support of California helps establish a framework of shared values between policymakers, AI researchers, and other stakeholders. FLI encourages other governmental bodies to support the 23 principles and help shape an exciting and equitable future.”

Podcast: Artificial Intelligence – Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins

Experts predict that artificial intelligence could become the most transformative innovation in history, eclipsing both the development of agriculture and the industrial revolution. And the technology is developing far faster than the average bureaucracy can keep up with. How can local, national, and international governments prepare for such dramatic changes and help steer AI research and use in a more beneficial direction?

On this month’s podcast, Ariel spoke with Allan Dafoe and Jessica Cussins about how different countries are addressing the risks and benefits of AI, and why AI is such a unique and challenging technology to effectively govern. Allan is the Director of the Governance of AI Program at the Future of Humanity Institute, and his research focuses on the international politics of transformative artificial intelligence. Jessica is an AI Policy Specialist with the Future of Life Institute, and she’s also a Research Fellow with the UC Berkeley Center for Long-term Cybersecurity, where she conducts research on the security and strategy implications of AI and digital governance.

Topics discussed in this episode include:

  • Three lenses through which to view AI’s transformative power
  • Emerging international and national AI governance strategies
  • The risks and benefits of regulating artificial intelligence
  • The importance of public trust in AI systems
  • The dangers of an AI race
  • How AI will change the nature of wealth and power

Papers and books discussed in this episode include:

You can listen to the podcast above and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, GooglePlay, and Stitcher.

 

Ariel: Hi there, I’m Ariel Conn with the Future of Life Institute. As we record and publish this podcast, diplomats from around the world are meeting in Geneva to consider whether to negotiate a ban on lethal autonomous weapons. As a technology that’s designed to kill people, it’s no surprise that countries would consider regulating or banning these weapons, but what about all other aspects of AI? While most, if not all, AI researchers are designing the technology to improve health, ease strenuous or tedious labor, and generally improve our well-being, most researchers also acknowledge that AI will be transformative, and if we don’t plan ahead, those transformations could be more harmful than helpful.

We’re already seeing instances in which bias and discrimination have been enhanced by AI programs. Social media algorithms are being blamed for impacting elections; it’s unclear how society will deal with the mass unemployment that many fear will be a result of AI developments, and that’s just the tip of the iceberg. These are the problems that we already anticipate and will likely arrive with the relatively narrow AI we have today. But what happens as AI becomes even more advanced? How can people, municipalities, states, and countries prepare for the changes ahead?

Joining us to discuss these questions are Allan Dafoe and Jessica Cussins. Allan is the Director of the Governance of AI program at the Future of Humanity Institute, and his research focuses on the international politics of transformative artificial intelligence. His research seeks to understand the causes of world peace, particularly in the age of advanced artificial intelligence.

Jessica is an AI Policy Specialist with the Future of Life Institute, where she explores AI policy considerations for near and far term. She’s also a Research Fellow with the UC Berkeley Center for Long-term Cybersecurity, where she conducts research on the security and strategy implications of AI and digital governance. Jessica and Allan, thank you so much for joining us today.

Allan: Pleasure.

Jessica: Thank you, Ariel.

Ariel: I want to start with a quote, Allan, that’s on your website and also on a paper that you’re working on that we’ll get to later, where it says, “AI will transform the nature of wealth and power.” And I think that’s sort of at the core of a lot of the issues that we’re concerned about in terms of what the future will look like and how we need to think about what impact AI will have on us and how we deal with that. And more specifically, how governments need to deal with it, how corporations need to deal with it. So, I was hoping you could talk a little bit about the quote first and just sort of how it’s influencing your own research.

Allan: I would be happy to. So, we can think of this as a proposition that may or may not be true, and I think we could easily spend the entire time talking about the reasons why we might think it’s true and the character of it. One way to motivate it, as I think has been the case for people, is to consider that it’s plausible that artificial intelligence would at some point be human-level in a general sense, and to recognize that that would have profound implications. So, you can start there, as, for example, if you were to read Superintelligence by Nick Bostrom, you sort of start at some point in the future and reflect on how profound this technology would be. But I think you can also motivate this with much more near-term perspective and thinking of AI more in a narrow sense.

So, I will offer three lenses for thinking about AI and then I’m happy to discuss it more. The first lens is that of general purpose technology. Economists and others have looked at AI and seen that it seems to fit the category of general purpose technology: classes of technologies that provide a crucial input to many important economic, political, military, and social processes, and that are likely to generate complementary innovations in other areas. And general purpose technologies are also often used as a concept to explain economic growth, so you have things like the railroad or steam power or electricity or the motor vehicle or the airplane or the computer, which seem to change these processes that are important, again, for the economy or for society or for politics in really profound ways. And I think it’s very plausible that artificial intelligence not only is a general purpose technology, but is perhaps the quintessential general purpose technology.

And so in a way that sounds like a mundane statement. General purpose, it will sort of infuse throughout the economy and political systems, but it’s also quite profound because when you think about it, it’s like saying it’s this core innovation that generates a technological revolution. We could say a lot about that, and maybe I should, just to give a bit more color: I think Kevin Kelly has a nice quote where he says, “Everything that we formerly electrified, we will now cognitize. There’s almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.” We could say a lot more about general purpose technologies and why they’re so transformative to wealth and power, but I’ll move on to the other two lenses.

The second lens is to think about AI as an information and communication technology. You might think this is a subset of general purpose technologies. So, other technologies in that reference class would include the printing press, the internet, and the telegraph. And these are important because they change, again, sort of all of society and the economy. They make possible new forms of military, new forms of political order, new forms of business enterprise, and so forth. So we could say more about that, and those have important properties related to inequality and some other characteristics that we care about.

But I’ll just move on to the third lens, which is that of intelligence. Unlike every other general purpose technology, which applied to energy, production, communication, or transportation, AI is a new kind of general purpose technology. It changes the nature of our cognitive processes, it enhances them, it makes them more autonomous, and it generates new cognitive capabilities. And I think it’s that lens that makes it seem especially transformative. In part because the key role that humans play in the economy is increasingly as cognitive agents, so we are now building powerful complements to us, but also substitutes for us, and so that gives rise to the concerns about labor displacement and so forth. But also, innovations in intelligence are hard to forecast, both in how they will work and in what their implications will be for everything, and so that makes it especially hard to see through the mist of the future and what it will bring.

I think there’s a lot of interesting insights that come from those three lenses, but that gives you a sense of why AI could be so transformative.

Ariel: That’s a really nice introduction to what we want to talk about, which is, I guess, okay so then what? If we have this transformative technology that’s already in progress, how does society prepare for that? I’ve brought you both on because you deal with looking at the prospect of AI governance and AI policy, and so first, let’s just look at some definitions, and that is, what is the difference between AI governance and AI policy?

Jessica: So, I think that there are no firm boundaries between these terms. There’s certainly a lot of overlap. AI policy tends to be a little bit more operational, a little bit more finite. We can think of direct government intervention more for the sake of public service. I think governance tends to be a slightly broader term, can relate to industry norms and principles, for example, as well as government-led initiatives or regulations. So, it could be really useful as a kind of multi-stakeholder lens in bringing different groups to the table, but I don’t think there’s firm boundaries between these. I think there’s a lot of interesting work happening under the framework of both, and depending on what the audience is and the goals of the conversation, it’s useful to think about both issues together.

Allan: Yeah, and to that I might just add that governance has a slightly broader meaning, so whereas policy often sort of connotes policies that companies or governments develop intentionally and deploy, governance refers to those, but also sort of unintended policies or institutions or norms and just latent processes that shape how the phenomenon develops. So how AI develops and how it’s deployed, so everything from public opinion to the norms we set up around artificial intelligence and sort of emergent policies or regulatory environments. All of that you can group within governance.

Ariel: One more term that I want to throw in here is the word regulation, because a lot of times, as soon as you start talking about governance or policy, people start to worry that we’re going to be regulating the technology. So, can you talk a little bit about how that’s not necessarily the case? Or maybe it is the case.

Jessica: Yeah, I think what we’re seeing now is a lot of work around norm creation and principles of what ethical and safe development of AI might look like, and that’s a really important step. I don’t think we should be scared of regulation. We’re starting to see examples of policies come into place. A big important example is the GDPR that we saw in Europe that regulates how data can be accessed and used and controlled. We’re seeing increasing examples of these kinds of regulations.

Allan: Another perspective on these terms is that in a way, regulation is a subset, a very small subset, of what governance consists of. So regulation might be especially deliberate attempts by government to shape market behavior or other kinds of behavior, and clearly regulation is sometimes not only needed, but essential for safety and to avoid market failure and to generate growth and other sorts of benefits. But regulation can be very problematic, as you sort of alluded to, for a number of reasons. In general, with technology — and technology’s a really messy phenomenon — it’s often hard to forecast what the next generation of technology will look like, and it’s even harder to forecast what the implications will be for different industries, for society, for political structures.

And so because of that, designing regulation can often fail. It can be misapplied to sort of an older understanding of the technology. Often, the formation of regulation may not be done with a really state-of-the-art understanding of what the technology consists of, and then because technology, and AI in particular, is often moving so quickly, there’s a risk that regulation is sort of out of date by the time it comes into play. So, there are real risks of regulation, and I think a lot of policymakers are aware of that, but also markets do fail and there are really profound impacts of new technologies not only on consumer safety, but in fairness and other ethical concerns, but also more profound impacts, as I’m sure we’ll get to, like the possibility that AI will increase inequality within countries, between people, between countries, between companies. It could generate oligopolistic or monopolistic market structures. So there are these really big challenges emerging from how AI is changing the market and how society should respond, and regulation is an important tool there, but it needs to be done carefully.

Ariel: So, you’ve just brought up quite a few things that I actually do want to ask about. I think the first one that I want to go to is this idea that AI technology is developing a lot faster than the pace of government, basically. How do we deal with that? How do you deal with the fact that something that is so transformative is moving faster than a bureaucracy can handle it?

Allan: This is a very hard question. We can introduce a concept from economics, which is useful, and that is of an externality. So, an externality is some process that when two market actors transact, I buy a product from a seller, it impacts on a third party, so maybe we produce pollution or I produce noise or I deplete some resource or something like that. And policy often should focus on externalities. Those are the sources of market failure. Negative externalities are the ones like pollution that you want to tax or restrict or address, and then positive externalities like innovation are ones you want to promote, you want to subsidize and encourage. And so one way to think about how policy should respond to AI is to look at the character of the externalities.

If the externalities are local and if the sort of relevant stakeholder community is local, then I think a good general policy is to allow a local authority to develop to the lowest level that you can, so you want municipalities or even smaller groups to implement different regulatory environments. The purpose for that is not only so that the regulatory environment is adapted to the local preferences, but also you generate experimentation. So maybe one community uses AI in one way and another employs it in another way, and then over time, we’ll start seeing which approaches work better than others. So, as long as the externalities are local, then that’s, I think, what we should do.

However, many of these externalities are at least national, but most of them actually seem to be international. Then it becomes much more difficult. So, if the externalities are at the country level, then you need country level policy to optimally address them, and then if they’re transnational, international, then you need to negotiate with your neighbors to converge on a policy, and that’s when you get into much greater difficulty because you have to agree across countries and jurisdictions, but also the stakes are so much greater if you get the policy wrong, and you can’t learn from the sort of trial and error of the process of local regulatory experimentation.

Jessica: I just want to push back a little bit on this idea. I mean, if we take regulation out of it for a second and think about the speed at which AI research is happening and kind of policy development, the people that are conducting AI research, it’s a human endeavor, so there are people making decisions, there are institutions that are involved that rely upon existing power structures, and so this is already kind of embedded in policy, and there are political and ethical decisions just in the way that we’re choosing to design and build this technology from the get-go. So all of that’s to say that thinking about policy and ethics as part of that design process I think is really useful and just to not have them as always opposing factors.

One of the things that can really help in this is just improving those communication channels between technologists and policymakers so there isn’t such a wide gulf between these worlds and these conversations that are happening and also bringing in social scientists and others to join in on those conversations.

Allan: I agree.

Ariel: I want to take some of these ideas and look at where we are now. Jessica, you put together a policy resource that covers a lot of efforts being made internationally looking at different countries, within countries, and then also international efforts, where countries are working together to try to figure out how to address some of these AI issues that will especially be cropping up in the very near term. I was wondering if you could talk a little bit about what the current state of AI policy is today.

Jessica: Sure. So this is available publicly. This is futureoflife.org/ai-policy. It’s also available on the Future of Life homepage. And the idea here is that this is a living resource document, so this is being updated regularly and it’s mapping AI policy developments as they’re happening around the world, so it’s more of an empirical exercise in that way, kind of seeing how different groups and institutions, as well as nations, are framing and addressing these challenges. So, in most cases, we don’t have concrete policies on the ground yet, but we do have strategies, we have frameworks for addressing these challenges, and so we’re mapping what’s happening in that space and hoping that it encourages transparency and also collaboration between actors, which we think is important.

There are three complementary components to this resource. The first one is a map of national and international strategies, and that includes 27 countries and 6 international initiatives. The second resource is a compilation of AI policy challenges, and this is broken down into 14 different issues, so this ranges from economic impacts and technological unemployment to issues like surveillance and privacy or political manipulation and computational propaganda, and if you click on each of these different challenges, it actually links you with relevant policy principles and recommendations. So, the idea is if you’re a policymaker or you’re interested in this, you actually have some guidance. What are people in the field thinking about ways to address these challenges?

And then the third resource there is a set of reading lists. There are dozens of papers, reports, and articles that are relevant to AI policy debates. We have seven different categories here that include things like AI policy overviews or papers that delve into the security and existential risks of AI. So, this is a good starting place if you’re thinking about how to get involved in AI policy discussions.

Ariel: Can you talk a little bit about some of maybe the more interesting programs that you’ve seen developing so far?

Jessica: So, I mean the U.S. is really interesting right now. There’s been some recent developments. The 2019 National Defense Authorization Act was just signed last week by President Trump, and so this actually made official a new national security commission on artificial intelligence. So we’re seeing the kind of beginnings of a national strategy for AI within the U.S. through these kinds of developments that don’t really resemble what’s happening in other countries. This is part of the defense department, much more tailored to national defense and national security, so there’s going to be 15 commission members looking at a range of different issues, but particularly with how they relate to national defense.

We also have a new joint AI center in the DoD that will be looking at an ethical framework but for defense technologies using AI, so if you compare this kind of focus to what we’ve seen in France, for example, they have a national strategy for AI. It’s called AI for Humanity, and there’s a lengthy report that goes into numerous different kinds of issues; they’re talking about ecology and sustainability, about transparency, much more of a focus on having state-led developments kind of pushing back against the idea that we can just leave this to the private sector to figure out, which is really where the U.S. is going in terms of the consumer uses of AI. Trump’s priorities are to remove regulatory barriers as it relates to AI technology, so France is markedly different and they want to push back against the company control of data and the uses of these technologies. So, that’s kind of an interesting difference we’re seeing.

Allan: I would like to add that I think Jessica’s overview of global AI policy looks like a really useful resource. There are links to most of the key readings that I would want to direct someone to, so I really recommend people check that out. And then specifically, I just want to respond to the remark Jessica made about the U.S. approach of letting companies have freer rein in developing AI versus the French approach. Especially well articulated by Macron in his Wired interview is the insight that you’re unlikely to be able to develop AI successfully if you don’t have the trust of important stakeholders, and that mostly means the citizens of your country.

And I think Facebook has realized that and is working really hard to regain the trust of citizens and users. In general, if AI products are being deployed in an ecosystem where people don’t trust them, that’s going to handicap the deployment of those AI services. There will be barriers to their use, and there will be opposition regulation that will not necessarily be the most efficient way of generating AI that’s fair or safe or respects privacy. So I think this conversation between different governmental authorities, the public, NGOs, researchers, and companies, about what good AI is, what norms we should expect from AI, and how we communicate that, is really important, and it’s against U.S. national interests not to have that conversation and not develop that trust.

Ariel: I’d actually like to stick with this subject for a minute because trust is something that I find rather fascinating, actually. How big a risk is it, do you think, that the public could decide, “We just don’t trust this technology and we want it to stop,” and if they did decide that, do you think it would actually stop? Or do you think there’s enough government and financial incentive to continue promoting AI that the public trust may not be as big a deal as it has been for some other technologies?

Jessica: I certainly don’t think that there’s going to be a complete stop from the companies that are developing this technology, but certainly responses from the public and from their employees can shift behavior. We’re seeing at Google and at Amazon that protests from employees can lead to changes. In the case of Google, employees were upset about the involvement with the U.S. military on Project Maven and didn’t want their technology to be used in that kind of weaponized way, and that led Google to publish their own AI ethics principles, which included specifically that they would not renew that contract and that they would not pursue autonomous weapons. There is certainly a back and forth that happens between the public, employees of companies, and where the technology is going. I think we should feel empowered to be part of that conversation.

Allan: Yeah, I would just second that. Investments in AI and in research and development will not stop, certainly globally, but there are still a lot of interests that could be substantially harmed by a breakdown in trust, including the public interest in the development of valuable AI services and growth. AI services really depend on trust. You see this with the big AI companies that rely on having a large user base and generating a lot of data. The algorithms often depend on lots of user interaction and a large user base to do well, and that only works if users are willing to share their data, if they trust that their data is protected and being used appropriately, and if there are not political movements to prevent the accumulation and use of data in ways that are inefficient or not in the interest of the public.

So, that’s one of the big areas, but I think there are a lot of other ways in which a breakdown in trust would harm the development of AI. It will make it harder for startups to get going. Also, as Jessica mentioned, AI researchers are not just in it for the money. A lot of them have real political convictions, and if they don’t feel like their work is doing good or if they have ethical concerns with how their work is being used, they are likely to switch companies or express their concerns internally, as we saw at Google. I think this is really crucial for a country from the national interest perspective. If you want to have a healthy AI ecosystem, you need to develop a regulatory environment that works, but also have relationships with key companies and the public that are informed and stay within the bounds of the public interest, in terms of the full range of ethical and other concerns people have.

Jessica: Two quick additional points on this issue of trust. The first is that policymakers should not assume that the public will necessarily trust their reaction and their approach to dealing with this, and there are differences in the public policy processes that can enable greater trust. So, for example, I think there’s a lot to learn from the way that France went about developing their strategy. It took place over the course of a year with hundreds of interviews, extremely consultative with members of the public, and that really encourages buy-in from a range of stakeholders, which I think is important. If we’re gonna be establishing policies that stick around, we need that buy-in not only from industry but also from the publics that are implicated and impacted by these technologies.

A second point is just the importance of norms in creating cultures of trust, and I don’t want to overstate this, but it’s sort of a first step, and I think we also need monitoring services, we need accountability, we need ways to actually check that these norms aren’t just kind of disappearing into the ether but are upheld in some way. That being said, they are an important first step, and so I think things like the Asilomar AI principles, which were again a very consultative process, developed by a large number of people and iterated upon, and only those that had quite a lot of consensus made it into the final principles. We’ve seen thousands of people sign onto those. We’ve seen them being referenced around the world, so those kinds of initiatives are important in kind of helping to establish frameworks of trust.

Ariel: While we’re on this topic, you’ve both been sort of getting into the roles of different stakeholders in developing policy and governance, and I’d like to touch on that more explicitly. We have, obviously, governments; we have corporations, academia, NGOs, individuals. What are the different roles that these different stakeholders play, and do you have tips for how these different stakeholders can try to help implement better and more useful policy?

Allan: Maybe I’ll start and then turn it over to Jessica for the comprehensive answer. I think there’s lots that could be said here, and really most actors should be involved in multiple ways. The one I want to highlight is that I think the leading AI companies are in a good position to be leaders in shaping norms, best practice, technical understanding, and recommendations for policies and regulation. We’re actually quite fortunate that many of them are doing an excellent job with this, so I’ll just call out one that I think is commendable in the extent to which it’s being a good corporate citizen, and that’s Alphabet. I think they’ve developed their self-driving car technology in the right way, which is to say, carefully. Their policy towards patents is, I think, more in the public interest, in that they oppose offensive patent litigation and have really invested in opposing it. You can also tell a business case story for why they would do that. I think they’ve supported really valuable AI research that otherwise groups like FLI or other public interest funding sources would want to support. As an example, I’ll offer Chris Olah at Google Brain, who has done work on transparency and legibility of neural networks. This is highly technical but also extremely important for safety in the near and long term. This is the kind of thing that we’ll need to figure out to have confidence that really advanced AI is safe and working in our interest, but also in the near term for understanding things like, “Is this algorithm fair, what was it doing, and can we audit it?”

And then one other researcher I would flag, also at Google Brain, is Moritz Hardt, who has done some excellent work on fairness. So here you have Alphabet supporting AI researchers who are doing really, I think, frontier work on the ethics of AI and developing technical solutions. And then of course, Alphabet’s been very good with user data, and in particular, DeepMind, I think, has been a real leader in safety, ethics, and AI for good. The reason I’m saying this is because I think we should develop a strong norm that says, “Companies who are the leading beneficiaries of AI services in terms of profit have a social responsibility to exemplify best practice,” and we should call out the ones who are doing a good job and also the ones that are doing a bad job, and encourage the ones that are not doing a good job to do better, first through norms and then later through other instruments.

Jessica: I absolutely agree with that. I think that we are seeing a lot of leadership from companies and small groups as well, not just the major players. Just a couple days ago, an AI marketing company released an AI ethics policy and said, “Actually, we think every AI company should do this, and we’re gonna start and say that we won’t use negative emotions to exploit people, for example, and that we’re gonna take action to avoid prejudice and bias.” I think these are really important ways to establish best practices, exactly as you said.

The only other thing I would say is that more than other technologies in the past, AI is really being led by a small handful of companies at the moment in terms of the major advances. So I think that we will need some external checks on some of the processes that are happening. If we kind of analyze the topics that come up, for example, in the AI ethics principles coming from companies, not every issue is being talked about. I think there certainly is an important role for governments and academia and NGOs to get involved and point out those gaps and help kind of hold them accountable.

Ariel: I want to transition now a little bit to talk about Allan, some of the work that you are doing at the Governance of AI program. You also have a paper that I believe will be live when this podcast goes live. I’d like you to talk a little bit about what you’re doing there and also maybe look at this transition of how we go from governance of this narrow AI that we have today to looking at how we deal with more advanced AI in the future.

Allan: Great. So the Governance of AI Program is a unit within the Future of Humanity Institute at the University of Oxford. The Future of Humanity Institute was founded by Nick Bostrom, and he’s the Director, and he’s also the author of Superintelligence. So you can see a little bit from that why we’re situated there. The Future of Humanity Institute is actually full of really excellent scholars thinking about big issues, as the title would suggest. And many of them converged on AI as an important thing to think through, an important phenomenon to think through, for the highest stakes considerations. Almost no matter what is important to you, over the time scale of say, four decades and certainly further into the future, AI seems like it will be really important for realizing or failing to realize those things that are important to you.

So, we are primarily focused on the highest stakes governance challenges arising from AI, and that’s often what we’re indicating when we talk about transformative AI: we’re really trying to focus on the kinds of AI, the developments in AI, and maybe this is several decades in the future, that will radically transform wealth and power and safety and world order and other values. However, I think you can motivate a lot of this work by looking at near-term AI, so we could talk about a lot of developments in near-term AI and how they suggest the possibilities for really transformative impacts. I’ll talk through a few of those or just mention a few.

One that we’ve touched on a little bit is labor displacement and inequality. This is not science fiction to talk about the impact of automation and AI on inequality. Economists are now treating this as a very serious hypothesis, and I would say the bulk of belief within the economics community is that AI will at least pose displacement challenges to labor, if not more serious challenges in terms of persistent unemployment.

Second is the issue of inequality: there are a number of features of AI that seem like they could increase inequality. The main one that I’ll talk about is that digital services in general, but AI in particular, have what seems like a natural global monopoly structure. And this is because the provision of an AI service, like a digital service, often has a very low marginal cost. So it’s effectively free for Netflix to give me a movie. In a market like that for Netflix or for Google Search or for Amazon e-commerce, the competition is all in the fixed cost of developing the really good AI “engine,” and then whoever develops the best one can outcompete and sort of capture the whole market. And then the size of the market really depends on whether there’s cultural or consumer heterogeneity.

All of this is to say, we see these AI giants: the three in China and the handful in the U.S. Europe, for example, is really concerned that it doesn’t have an AI giant, and is wondering how to produce an AI champion. And it’s plausible that a combination of factors means it’s actually going to be very hard for Europe to generate the next AI champion. So this has important geopolitical implications, economic implications, implications for the welfare of citizens in these countries, and implications for tax.

Everything I’m saying right now is really, I think, motivated by near-term and quite credible possibilities. We can then look to other possibilities, which seem more like science fiction but are happening today. For example, the possibilities around surveillance and control from AI and from autonomous weapons, I think, are profound. So, if you have a country or any authority, that could be a company as well, that is able to deploy surveillance systems that can be surveilling your online behavior, for example your behavior on Facebook or your behavior at the workplace. When I leave my chair, if there’s a camera in my office, it can watch if I’m working and what I’m doing, and then of course my behavior in public spaces and elsewhere, then the authority can really get a lot of information on the person who’s being surveilled. And that could have profound implications for the power relations between governments and publics or companies and publics.

And this is the fundamental problem of politics: how do you build this leviathan, this powerful organization, so that it doesn’t abuse its power? We’ve done pretty well in many countries developing institutions to discipline the leviathan so that it doesn’t abuse its power, but AI is now providing a dramatically more powerful surveillance tool, and then a coercion tool, and so that could, at the least, enable leaders of totalitarian regimes to really reinforce their control over their country. More worryingly, it could lead to a sort of authoritarian sliding in countries that are less robustly democratic, and even countries that are pretty democratic might still worry about how it will shift power between different groups. And that’s another issue area where the stakes are tremendous, but we’re not invoking radical advances in AI to get there.

And there’s actually some more that we could talk about, such as strategic stability, but I’ll skip it. Those are the challenges from near-term AI, AI as we see it today or as it’s likely to be in five years. But AI is developing quickly, and we really don’t know how far it could go, or how quickly. And so it’s important to also think about surprises. Where might we be in 10, 15, 20 years? This is obviously very difficult, but I think, as you’ve mentioned, because it’s moving so quickly, it’s important that some people, scholars and policymakers, are looking down the tree a little bit farther to try to anticipate what might be coming and what we could do today to steer in a better direction.

So, at the Governance of AI Program, we work on every aspect of the development and deployment and regulation and norms around AI that we see as bearing on the highest stakes issues. And this document that you mentioned, it’s entitled AI Governance: A Research Agenda, is an attempt to articulate the space of issues that people could be working on that we see as potentially touching on these high stakes issues.

Ariel: One area that I don’t think you mentioned that I would like to ask about is the idea of an AI race. Why is that a problem, and what can we do to try to prevent an AI race from happening?

Allan: There’s this phenomenon that we might call the AI race, which has many layers and many actors, and this is the phenomenon where actors (those could be an AI researcher, they could be a lab, they could be a firm, they could be a country or even a region like Europe) perceive that they need to work really hard, invest resources, and move quickly to gain an advantage in AI — in AI capabilities, in AI innovations, deploying AI systems, entering a market — because if they don’t, they will lose out on something important to them. So, that could be, for the researchers, it could be prestige, right? “I won’t get the publication.” For firms it could be both prestige and maybe financial support. It could be a market. You might capture or fail to capture a really important market.

And then for countries, there’s a whole host of motivations, everything from making sure there are industries in our country for our workers, to having companies that pay tax revenue; the idea is that if we have an AI champion, then we will have more taxable revenue but also other advantages. There’ll be more employment. Maybe we can have a good relationship with that champion and that will help us in other policy domains. And then, of course, there are the military considerations: if AI becomes an important complement to other military technologies, or even crucial technology in itself, then countries worry about falling behind and being inferior, and are always looking towards what might be the next source of advantage. So, that’s another driver for this sense that countries want to not fall behind and get ahead.

Jessica: We’re seeing competing interests at the moment. There are nationalistic kinds of tendencies coming up. We’re seeing national strategies emerging from all over the world, and there’s really strong economic and military motivations for countries to take this kind of stance. We’ve got Russian President Vladimir Putin telling students that whoever leads artificial intelligence will be the ruler of the world. We’ve got China declaring a national policy that they intend to be the global leader in AI by 2030, and other countries as well. Trump has said that he intends for the U.S. to be the global leader. The U.K. has said similar things.

So, there’s a lot of that kind of rhetoric coming from nations at the moment, and they do have economic and military motivations to say that. They’re competing for a relatively small number of AI researchers and a restricted talent pool, and everybody’s searching for that competitive advantage. That being said, as we see AI develop, particularly from more narrow applications to potentially more generalized ones, the need for international cooperation, as well as more robust safety and reliability controls, is really going to increase, and so I think there are some emerging signs of international efforts that are really important to look to, and hopefully we’ll see that outweigh some of the competitive race dynamics that we’re seeing now.

Allan: The crux of the problem is that everyone is driving to achieve this performance achievement; they want to have the next most powerful system, and if there’s any other value that they or society might care about that’s in the way or that involves a trade-off, they have an incentive to trade away some of that value to gain a performance lead. Take things we see today, like privacy: maybe countries that have a stricter privacy policy will have trouble generating an AI champion. Some look to China and see that maybe China has an AI advantage because it has such a cohesive national culture and a close relationship between government and the private sector, as compared with, say, the United States, where you can see a real conflict at times between, say, Alphabet and parts of the U.S. government, which I think the petition around Project Maven really illustrates.

So, values you might lose include privacy, or maybe not developing autonomous weapons according to the ethical guidelines that you would want. There are other concerns that put people’s lives at stake: if you’re rushing to market with a self-driving car that isn’t sufficiently safe, then people can die. In small numbers those are independent risks, but if, say, the risk you’re deploying is that the self-driving car system itself is hackable at scale, then you might be generating a new weapon of mass destruction. So, there are these accident risks or malicious use risks that are pretty serious, and then when you really start looking towards AI systems that would be very intelligent, hard for us to understand because they’re opaque, complex, and fast-moving when they’re plugged into financial systems, energy grids, cyber systems, and cyber defense, there’s an increasing risk that we won’t even know what risks we’re exposing ourselves to because of these highly complex, interdependent, fast-moving systems.

And so if we could sort of all take a breath and reflect a little bit, that might be more optimal from everyone’s perspective. But because there’s this perception of a prize to be had, it seems likely that we are going to be moving more quickly than is optimal. It’s a very big challenge. It won’t be easily solved, but in my view, it is the most important issue for us to be thinking about and working towards over the coming decades, and if we solve it, I think we’re much more likely to develop beneficial advanced AI, which will help us solve all our other problems. So I really see this as the global issue of our era to work on.

Ariel: We sort of got into this a little bit earlier, but what are some of the other countries that have policies that you think maybe more countries should be implementing? And maybe more specifically, if you could speak about some of the international efforts that have been going on.

Jessica: Yeah, so an interesting thing we’re seeing from the U.K. is that they’ve established a Centre for Data Ethics and Innovation, and they’re really making an effort to prioritize ethical considerations of AI. I think it remains to be seen exactly what that looks like, but that’s an important element to keep in mind. Another interesting thing to watch: Estonia is working on an AI law at the moment, trying to make very clear guidelines so that when companies come in and want to work on AI technology in that country, they know exactly what framework they’ll be working in, and they actually see that as something that can help encourage innovation. I think that’ll be a really important one to watch, as well.

But there’s a lot of great work happening. There are task forces emerging, and not just at the federal level, at the local level, too. New York now has an algorithm monitoring task force that is actually trying to see where algorithms are being used in public services and trying to encourage accountability about where those exist, so that’s a really important thing that potentially could spread to other states or other countries.

And then you mentioned international developments, as well. There are important things happening here. The E.U. is certainly a great example of this right now: 25 European countries signed a Declaration of Cooperation on AI. This is a plan, a strategy to actually work together to improve research and work collectively on the kinds of social, security, and legal issues that come up around AI. There’s also the Charlevoix Common Vision for the Future of AI, signed at the G7 meeting. Again, it’s not regulatory, but it sets out a vision that includes things like promoting human-centric AI and fostering public trust, supporting lifelong learning and training, as well as supporting women and underrepresented populations in AI development. So, those kinds of things, I think, are really encouraging.

Ariel: Excellent. And was there anything else that you think is important to add that we didn’t get a chance to discuss today?

Jessica: Just a couple things. There are important ways that government can shape the trajectory of AI that aren’t just about regulation. For example, deciding how to leverage government investment really changes the trajectory of what AI is developed, what kinds of systems people prioritize. That’s a really important policy lever that is different from regulation that we should keep in mind. Another one is around procurement standards. So, when governments want to bring AI technologies into government services, what are they going to be looking for? What are the best practices that they require for that? So, those are important levers.

Another issue is somewhat taken for granted in this conversation, but just to state it: shaping AI for a safe and beneficial future can’t rely on technical fixes alone. These systems are really built by people, and we’re making choices about how and where they’re deployed and for what purposes, so these are social and political choices. This has to be a multidisciplinary process and involve governments along with industry and civil society, so it’s really encouraging to see these kinds of conversations take place.

Ariel: Awesome. I think that’s a really nice note to end on. Well, so Jessica and Allan, thank you so much for joining us today.

Allan: Thank you, Ariel, it was a real pleasure. And Jessica, it was a pleasure to chat with you. And thank you for all the good work coming out of FLI promoting beneficial AI.

Jessica: Yeah, thank you so much, Ariel, and thank you Allan, it’s really an honor to be part of this conversation.

Allan: Likewise.

Ariel: If you’ve been enjoying the podcasts, please take a moment to like them, share them, and follow us on whatever platform you’re listening to us on. And I will be back again next month with a new pair of experts.

[end of recorded material]

Governing AI: An Inside Look at the Quest to Ensure AI Benefits Humanity


Finance, education, medicine, programming, the arts — artificial intelligence is set to disrupt nearly every sector of our society. Governments and policy experts have started to realize that, in order to prepare for this future, in order to minimize the risks and ensure that AI benefits humanity, we need to start planning for the arrival of advanced AI systems today.

Although we are still in the early moments of this movement, the landscape looks promising. Several nations and independent firms have already started to strategize and develop polices for the governance of AI. Last year, the UAE appointed the world’s first Minister of Artificial Intelligence, and Germany took smaller, but similar, steps in 2017, when the Ethics Commission at the German Ministry of Transport and Digital Infrastructure developed the world’s first set of regulatory guidelines for automated and connected driving.

This work is notable; however, these efforts have yet to coalesce into a larger governance framework that extends beyond national boundaries. Nick Bostrom’s Strategic Artificial Intelligence Research Center seeks to assist in resolving this issue by understanding, and ultimately shaping, the strategic landscape of long-term AI development on a global scale.

 

Developing a Global Strategy: Where We Are Today

The Strategic Artificial Intelligence Research Center was founded in 2015 with the knowledge that, to truly circumvent the threats posed by AI, the world needs a concerted effort focused on tackling unsolved problems related to AI policy and development. The Governance of AI Program (GovAI), co-directed by Bostrom and Allan Dafoe, is the primary research program that has evolved from this center. Its central mission, as articulated by the directors, is to “examine the political, economic, military, governance, and ethical dimensions of how humanity can best navigate the transition to such advanced AI systems.” In this respect, the program is focused on strategy — on shaping the social, political, and governmental systems that influence AI research and development — as opposed to focusing on the technical hurdles that must be overcome in order to create and program safe AI.

To develop a sound AI strategy, the program works with social scientists, politicians, corporate leaders, and artificial intelligence/machine learning engineers to address questions of how we should approach the challenge of governing artificial intelligence. In a recent 80,000 Hours podcast with Rob Wiblin, Dafoe outlined how the team’s research shapes up from a practical standpoint, asserting that the work focuses on answering questions that fall under three primary categories:

  • The Technical Landscape: This category seeks to answer all the questions that are related to research trends in the field of AI with the aim of understanding what future technological trajectories are plausible and how these trajectories affect the challenges of governing advanced AI systems.
  • AI Politics: This category focuses on questions that are related to the dynamics of different groups, corporations, and governments pursuing their own interests in relation to AI, and it seeks to understand what risks might arise as a result and how we may be able to mitigate these risks.
  • AI Governance: This category examines positive visions of a future in which humanity coordinates to govern advanced AI in a safe and robust manner. This raises questions such as how this framework should operate and what values we would want to encode in a governance regime.

The above categories provide a clearer way of understanding the various objectives of those invested in researching AI governance and strategy; however, these categories are fairly large in scope. To help elucidate the work they are performing, Jade Leung, a researcher with GovAI and a DPhil candidate in International Relations at the University of Oxford, outlined some of the specific workstreams that the team is currently pursuing.

One of the most intriguing areas of research is the Chinese AI Strategy workstream. This line of research examines things like China’s AI capabilities vis-à-vis other countries, official documentation regarding China’s AI policy, and the various power dynamics at play in the nation, with an aim of understanding, as Leung summarizes, “China’s ambition to become an AI superpower and the state of Chinese thinking on safety, cooperation, and AGI.” Ultimately, GovAI seeks to outline the key features of China’s AI strategy in order to understand one of the most important actors in AI governance. The program published Deciphering China’s AI Dream in March of 2018, a report that analyzes new features of China’s national AI strategy, and it plans to build upon this research in the near future.

Another workstream is Firm-Government Cooperation, which examines the role that private firms play in relation to the development of advanced AI and how these players are likely to interact with national governments. In a recent talk at EA Global San Francisco, Leung focused on how private industry is already playing a significant role in AI development and why, when considering how to govern AI, private players must be included in strategy considerations as a vital part of the equation. The description of the talk succinctly summarizes the key focal areas, noting that “private firms are the only prominent actors that have expressed ambitions to develop AGI, and lead at the cutting edge of advanced AI research. It is therefore critical to consider how these private firms should be involved in the future of AI governance.”

Other work that Leung highlighted includes modeling technology race dynamics and analyzing the distribution of AI talent and hardware globally.

 

The Road Ahead

When asked how much confidence she has that AI researchers will ultimately coalesce and be successful in their attempts to shape the landscape of long-term AI development internationally, Leung was cautious with her response, noting that far more hands are needed. “There is certainly a greater need for more researchers to be tackling these questions. As a research area as well as an area of policy action, long-term safe and robust AI governance remains a neglected mission,” she said.

Additionally, Leung noted that, at this juncture, although some concrete research is already underway, a lot of the work is focused on framing issues related to AI governance and, in so doing, revealing the various avenues in need of research. As a result, the team doesn’t yet have concrete recommendations for specific actions governing bodies should commit to, as further foundational analysis is needed. “We don’t have sufficiently robust and concrete policy recommendations for the near term as it stands, given the degrees of uncertainty around this problem,” she said.

However, both Leung and Dafoe are optimistic and assert that this information gap will likely change — and rapidly. Researchers across disciplines are increasingly becoming aware of the significance of this topic, and as more individuals begin researching and participating in this community, the various avenues of research will become more focused. “In two years, we’ll probably have a much more substantial research community. But today, we’re just figuring out what are the most important and tractable problems and how we can best recruit to work on those problems,” Dafoe told Wiblin.

The assurances that a more robust community will likely form soon are encouraging; however, questions remain regarding whether this community will come together with enough time to develop a solid governance framework. As Dafoe notes, we have never witnessed an intelligence explosion before, so we have no examples to look to for guidance when attempting to develop projections and timelines regarding when we will have advanced AI systems.

Ultimately, the lack of projections is precisely why we must significantly invest in AI strategy research in the immediate future. As Bostrom notes in Superintelligence: Paths, Dangers, Strategies, AI is not simply a disruptive technology; it is likely the most disruptive technology humanity will ever encounter: “[Superintelligence] is quite possibly the most important and most daunting challenge humanity has ever faced. And — whether we succeed or fail — it is probably the last challenge we will ever face.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Edit: The title of the article has been changed to reflect the fact that this is not about regulating AI.

AI Alignment Podcast: The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce

What role does metaethics play in AI alignment and safety? How might paths to AI alignment change given different metaethical views? How do issues in moral epistemology, motivation, and justification affect value alignment? What might be the metaphysical status of suffering and pleasure?  What’s the difference between moral realism and anti-realism and how is each view grounded?  And just what does any of this really have to do with AI?

The Metaethics of Joy, Suffering, and AI Alignment is the fourth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

In this podcast, Lucas spoke with David Pearce and Brian Tomasik. David is a co-founder of the World Transhumanist Association, currently rebranded Humanity+. You might know him for his work on The Hedonistic Imperative, a book focusing on our moral obligation to work towards the abolition of suffering in all sentient life. Brian is a researcher at the Foundational Research Institute. He writes about ethics, animal welfare, and future scenarios on his website “Essays On Reducing Suffering.” 

Topics discussed in this episode include:

  • What metaethics is and how it ties into AI alignment or not
  • Brian and David’s ethics and metaethics
  • Moral realism vs antirealism
  • Emotivism
  • Moral epistemology and motivation
  • Different paths to and effects on AI alignment given different metaethics
  • Moral status of hedonic tones vs preferences
  • Can we make moral progress and what would this mean?
  • Moving forward given moral uncertainty
In this interview we discuss ideas contained in the work of Brian Tomasik and David Pearce. You can learn more about Brian’s work here and here, and David’s work here. You can hear more in the podcast above or read the transcript below.

Lucas: Hey, everyone. Welcome back to the AI Alignment podcast series with the Future of Life Institute. Today, we’ll be speaking with David Pearce and Brian Tomasik. David is a co-founder of the World Transhumanist Association, rebranded humanity plus, and is a prominent figure within the transhumanism movement in general. You might know him from his work on the Hedonistic Imperative, a book which explores our moral obligation to work towards the abolition of suffering in all sentient life through technological intervention.

Brian Tomasik writes about ethics, animal welfare, and far-future scenarios from a suffering-focused perspective on his website reducing-suffering.org. He has also helped found the Foundational Research Institute, which is a think tank that explores crucial considerations for reducing suffering in the long-term future. If you have been finding this podcast interesting or useful, remember to follow us on your preferred listening platform and share the episode on social media. Today, Brian, David, and I speak about metaethics, key concepts and ideas in the space, explore the metaethics of Brian and David, and how this all relates to and is important for AI alignment. This was a super fun and interesting episode, and I hope that you find it valuable. With that, I give you Brian Tomasik and David Pearce.

Thank you so much for coming on the podcast.

David: Thank you Lucas.

Brian: Glad to be here.

Lucas: Great. We can start off with you David and then, you Brian and just giving a little bit about your background, the intellectual journey that you’ve been on and how that brought you here today.

David: Yes. My focus has always been on the problem of suffering, a very ancient problem; Buddhism and countless other traditions have been preoccupied by the problem of suffering. I’m also a transhumanist, and what transhumanism brings to the problem of suffering is the idea that it’s possible to use technology, in particular biotechnology, to phase out suffering, not just in humans but throughout the living world, and ideally replace it with gradients of intelligent wellbeing. Transhumanism is a very broad movement embracing not just radical mood enrichment but also super longevity and super intelligence. This is what brings me, and us, here today: there is no guarantee that human preoccupations with the problem of suffering are going to overlap with those of posthuman superintelligence.

Lucas: Awesome, and so you, Brian.

Brian: I’ve been interested in utilitarianism since I was 18, when I discovered the word. I immediately looked it up and was interested to see that the philosophy mirrored some of the things that I had been thinking about up to that point. I became interested in animal ethics and the far future. A year after that, I discovered David’s writings on the Hedonistic Imperative, which, along with other factors, helped inspire me to care more about suffering relative to the creation of happiness. Since then, I’ve been what you might call suffering-focused, which means I think that the reduction of suffering has more moral priority than other values. I’ve written about both animal ethics, including wild animal suffering, as well as risks of astronomical future suffering, what are called s-risks. You had a recent podcast episode with Kaj Sotala about s-risks.

In general, I think one important thing to think about when designing AI is what sorts of outcomes could result in large amounts of suffering. We should try to steer away from those possible future scenarios.

Lucas: Given our focus on AI alignment, I’d like to just offer a little bit of context. Today, this episode will be focusing on ethics. The AI alignment problem is traditionally seen as something predominantly technical. While a large portion of it is technical, the end towards which the technical AI is aimed, or the ethics imbued or embodied within it, is still an open and difficult question. Broadly, just to have everything defined here, we can understand ethics as a method of seeking to understand what we ought to do and what counts as moral or good.

The end goal of AI safety is to create beneficial intelligence, not undirected intelligence. What beneficial exactly entails is still an open question that largely exists in the domain of ethics. Even if all the technical issues surrounding the creation of an artificial general intelligence or superintelligence are solved, we will still face deeply challenging ethical questions that will have tremendous consequences for earth-originating intelligent life. This is what is meant when it is said that we must do philosophy or ethics on a deadline. In that spirit, we’re going to be focusing this podcast today on metaethics, and particularly the metaethics of David Pearce and Brian Tomasik, which also happen to be ethical views that are popular, I would say, among people interested in the AI safety community.

I think that Brian and David have enough disagreements that this should be pretty interesting. Again, just going back to this idea of ethics: given this goal, ethics can be seen as a lens through which to view safe AI design. It’s also a cognitive architecture to potentially be instantiated in AI through machine ethics. That would potentially make AIs ethical reasoners, ethical decision-makers, or both. Ethics can also be developed, practiced, and embodied by AI researchers and their collaborators, and can also be seen as a discipline through which we can guide AI research and adjudicate its impacts in the world.

There is an ongoing debate about what the best path forward is for generating ethical AI: whether it’s the project of machine ethics through bottom-up or top-down approaches, or a broad project of AI safety and AI safety engineering where we seek out corrigibility and docility, and alignment, and security in machine systems, or probably even some combination of the two. It’s unclear what the outcome of AI will be, but what is more certain is that AI promises to produce and make relevant both age-old and novel moral considerations through areas such as algorithmic bias, technological disemployment, autonomous weapons, privacy, big data systems, and even possible phenomenal states in machines.

We’ll even see new ethical issues with what might potentially one day be super intelligence and beyond. Given this, I think I’d like to just dive in first with you Brian and then, with you David. If you could just get into what the foundation is of your moral view? Then, afterwards, we can dive into the metaethics behind it.

Brian: Sure. At bottom, the reason that I place foremost priority on suffering is emotion: basically, the emotional experience of having suffered intensely myself from time to time and having empathy when I see others suffering intensely. That experience of either feeling it yourself or seeing others in extreme pain carries a moral valence, or a spiritual sensation you might call it, that seems different from the sensation I feel from anything else. It seems just obvious at an emotional level that, say, torture or being eaten alive by a predatory animal has more moral urgency than anything else. That’s the fundamental basis. You can also try to make theoretical arguments to come to the same conclusion. For example, people have tried to advance what’s called the asymmetry, which is the intuition that it’s bad to create a new being who will suffer a lot, but it’s not wrong, or at least not nearly as wrong, to fail to create a being that will be happy.

From that perspective, you might care more about preventing the creation of suffering beings than about creating additional happy beings. You can also advance the idea that maybe preferences are always a negative debt that has to be repaid. Maybe when you have a preference that’s a bad thing and then, it’s only by fulfilling the preference that you erase the bad things. This would be similar to the way in which Buddhism says that suffering arises from craving. The goal is to cease the cravings which can be done either through the fulfilling the cravings, giving the organism what the organism wants or not having the cravings in the first place. Those are some potential theoretical frameworks from which to also derive a suffering-focused ethical view. For me personally, the emotional feeling is the most important basis.

David: I would very much like to echo what Brian was saying there. I mean, there is something about the nature of intense suffering that one can’t communicate to someone who hasn’t suffered, someone who is, for example, born with congenital analgesia or insensitivity to pain, but there is something that is self-intimatingly nasty and disvaluable about suffering. However, evolution hasn’t engineered us to care impartially about the suffering of all sentient beings. My suffering and that of my genetic kin tends to matter far more to me than anything else. Insofar as we aspire to become transhuman and posthuman, we should be aspiring to this godlike perspective that takes into account the suffering of all sentient beings, and recognizes that the egocentric illusion is a genetically adaptive lie.

How does this tie in to the question of posthuman superintelligence? Of course, there are very different conceptions of what posthuman superintelligence is going to be. I’ve always had what you might say is a more traditional conception, in which posthuman superintelligence is going to be our biological descendants enhanced by AI, but nonetheless still our descendants. However, there are what might crudely be called two other conceptions of posthuman superintelligence. One is this Kurzweilian fusion of humans and our machines, such that the difference between humans and our machines ceases to be relevant.

There’s another conception of superintelligence, which you might say is in some ways the most radical: the intelligence explosion that was first conceived by I.J. Good but has been developed by Eliezer Yudkowsky, MIRI, and most recently by Nick Bostrom, which conceives of some kind of runaway, recursively self-improving AI, with no guarantee that the upshot of this intelligence explosion is going to be in any way congenial to human values as we understand them. I’m personally skeptical about the intelligence explosion in this sense, but yes, it’s worth clarifying what one means by posthuman superintelligence.

Lucas: Wonderful. Right before we dive into the metaethics behind these views and their potential relationship with AI alignment, and broaden the discussion to include ethics and some of these key terms, I’d just like to touch on the main branches of ethics to provide some context and mapping for us. Generally, ethics is understood to have three branches: metaethics, normative ethics, and applied ethics. Traditionally, applied ethics is viewed as the application of normative and metaethical views to specific cases and situations to determine the moral status of said case or situation in order to decide what ought to be done.

An example of that might be applying one’s moral views to factory farming to determine whether or not it is okay to factory farm animals for their meat. The next branch, moving upwards in abstraction, is normative ethics, which examines and deconstructs or constructs the principles and ethical systems we use for assessing the moral worth and permissibility of specific actions and situations. This branch is traditionally viewed as the formal ethical structures that we apply to certain situations; people are familiar with deontological ethics, consequentialism or utilitarianism, and virtue ethics. These are all normative ethical systems.

What we’ll be discussing today is primarily metaethics. Metaethics seeks to understand morality and ethics itself: the nature of ethical statements, attitudes, motivation, properties, and judgments. It seeks to understand whether or not ethics relates to objective truths about the world and about people, or whether it’s simply subjective, or whether all ethical statements are in fact false. It seeks to understand what people mean when they express ethical judgments or statements. This gets into things like ethical uncertainty and justification theories, substantial theories, and semantic theories of ethics.

Obviously, these are all the intricacies of the end towards which AI may be aimed. Even the epistemology of metaethics and ethics in general has major implications for what AIs might or might not be able to discover about ethics. Again, today we’ll just be focusing on metaethics and the metaethics behind David and Brian’s views. To structure this a little bit and start to use the formal language of metaethics: as a bit of background, semantic theories in ethics seek to address the question of the linguistic meaning of moral terms or judgments.

These are primarily concerned with whether or not moral statements contain truth values or are arbitrary and subjective. There are other branches within semantic theories, but these are the two main ones. The first is noncognitivism. Noncognitivism refers to a group of theories which hold that moral statements are neither true nor false because they do not express genuine propositions. These noncognitivist positions usually come in forms like emotivism, where people think that when someone expresses a moral view or attitude, like “suffering is wrong,” they’re simply voicing an emotion, something like “boo, suffering,” or expressing that suffering merely bothers them or is bad to them, rather than making some sort of true or false claim about the world. Standing in contrast to noncognitivism is cognitivism, which refers to a set of theories which hold that moral sentences express genuine propositions. That means that they can have truth values.

This is to say that they are capable of being true or false. Turning back to Brian and David’s views: how would you each characterize your moral positions as you’ve expressed them thus far? Would you hold yourself to a cognitivist view or a noncognitivist view? I guess we can start with you, David.

David: Yes. I would just say it’s built into the nature of, let’s say, agony that agony is disvaluable. Now, you might say that there is nothing in the equations of physics and science that says anything over and above the experience itself, something like redness. Yes, redness is subjective; it’s mind-dependent. Yet unless one thinks minds don’t exist in the physical universe, redness is nonetheless an objective feature of the natural physical world. I would say that, for reasons we simply don’t understand, the pleasure-pain axis discloses the world’s inbuilt metric of value and disvalue. It’s not an open question whether something like agony is disvaluable to the victim.

Now, of course, someone might say, “Well, yes. Agony is disvaluable to you but it’s not disvaluable to me.” I would say that this reflects an epistemological limitation, and that insofar as you could access what it is like to be me when I’m in agony, you would appreciate why agony is objectively disvaluable.

Lucas: Right. The view here is a cognitivist view where you think that it is true to say that there is some intrinsic property or quality to suffering or joy that makes it I guess analytically true that it is valuable or disvaluable.

David: Yes. Well, one has to be very careful about using a term like “analytically,” because someone might say that God is talking to them and that it is analytically true that these voices are the voices of God. One needs to be careful not to smuggle in too much. It is indeed very mysterious what this hybrid descriptive-evaluative state of finding something valuable or disvaluable could be. The intrinsic nature of the physical is very much an open question. I think there are good, powerful reasons for thinking that reality is exhaustively described by the equations of physics, but the intrinsic nature of that stuff, the essence of the physical, the fire in the equations, is controversial. Physics itself is silent.

Lucas: Right. I guess here, you would describe yourself given these views as a moral realist or an objectivist.

David: Yes, yes.

Brian: Just to jump in before we get to me. Couldn’t you say that your view is still based on mind-dependence, because of the point that if somebody else were hooked up to you, that person would appreciate the badness of suffering? That’s still just dependent on that other mind’s judgment. Even if you had somebody who could mind meld with the whole universe and experience all suffering at once, that would still be dependent on that mind; that mind is judging it to be a bad thing. Isn’t it still mind-dependent ultimately?

David: Mind-dependent, but I would say that minds are features of the physical world. Obviously one can argue for some kind of dualism, but I’m a monistic physicalist, at least as a working assumption.

Brian: I think objective moral value usually … the definition is usually that it’s not mind-dependent. Although, maybe it just depends what definition we’re using.

David: Yes. It’s rather like “physicalism,” which is often used as a stylistic variant of “materialism.” One can be a non-materialist physicalist and an idealist. As I said, minds are objective features of the physical world. I at least tentatively take seriously the idea that our experience discloses the intrinsic nature of the physical. This is obviously a controversial opinion. It’s associated with someone like Galen Strawson or, more recently, Phil Goff, but it stretches back via Grover Maxwell and Russell, ultimately to Schopenhauer. A much more conventional view, of course, would be that the intrinsic nature of the physical, the fire in the equations, is non-experiential, and then at some time during the late pre-Cambrian something happened: not just an organizational but an ontological eruption into the fabric of the world, first-person experience.

Lucas: Just to echo what Brian was saying: the traditional objectivist or moral realist view is that science is the project of interrogating third-person facts, what is simply true regardless of what we think about it. In some ways, I think the traditional moral realist view is that if morality deals with objective facts, then these facts are third-person, objectively true, and can be discovered through the methods and tools of ethics, in the same way that a mathematical realist would say that one does not invent certain geometric objects; rather, one discovers them through the application of mathematical reasoning and logic.

David: Yes. I think it’s very tempting to think of first-person facts as having some kind of second-rate ontological status, but as far as I’m concerned, first-person facts are real. If someone is in agony or experiencing redness, these are objective facts about the physical world.

Lucas: Brian, would you just like to jump in with the metaethics behind your own view that you discussed earlier?

Brian: Sure. On cognitivism versus noncognitivism, I don’t have strong opinions, because I think some of the debate is just about how people use language, which is not a metaphysically fundamental issue; it’s just however humans happen to use language. If I had to say something, my answer to the cognitivism versus noncognitivism question would be that it’s probably messy. Humans do talk about moral statements the way they talk about other factual statements. We use reasoning and we care about maintaining logical consistency among sets of moral statements. We treat them as regular factual statements in that regard. There may also be a sense in which moral statements strongly express certain emotions. I think probably most people don’t really think about it too much.

It’s like people know what they mean when they use moral statements, and they don’t have a strong theory of exactly how to describe what they mean. One analogy that you could use is that moral statements are like swear words. They’re used to make people feel more strongly about something or express how strongly you feel about something. People think that they don’t just refer to one’s emotions, even at a subjective level. If you say my moral view is that suffering is bad, that feels different than saying I like ice cream, because there’s a deeper, more spiritual or more fundamental sensation that comes along with the moral statements that doesn’t come along with the “I like ice cream” statements.

I think metaphysically, that doesn’t reflect anything fundamental. It just means that we feel differently about moral statements and thoughts than about nonmoral ones; subjectively, it feels different. I think most people just feel that difference, and exactly how you cash out whether that’s cognitive or noncognitive is a semantic dispute. My metaphysical position is anti-realism. I think that moral statements are mind-dependent. They ultimately reflect our own preferences, even if they may be very spiritual and deep, fundamental preferences. I think Occam’s Razor favors this view, because it would add complexity to the world for there to be independent moral truths; I’m not even sure what that would mean. For similar reasons, I reject mathematical truths and anything non-physicalist. I think moral truths, mathematical truths and so on can all be thought of as fictional constructions that we make. We can reason within these fictional universes of ethics and mathematics that we construct using physical thought processes. That’s my basic metaphysical stance.

Lucas: Just stepping back to the cognitivism and noncognitivism issue, I guess I was specifically interested in you. When you were expressing your own moral view earlier, did you find that it was a mixture of expressing your own emotions and also trying to express truth claims? Or, given your anti-realism, do you think that you’re only expressing emotions when you’re conveying your moral view?

Brian: I think of myself very much as an emotivist. It’s very clear to me that what I’m doing when I do ethics is what emotivists say people are doing. Yes, since I don’t believe in moral truth, it would not make sense for me to be gesturing at moral truths, except maybe insofar as my low-level brain wiring intuitively thinks in those terms.

David: Just to add to this: although it is possible to imagine, say, something like spectrum inversion, color inversion, or some people who like ice cream and some people who hate ice cream, one thing it isn’t possible to do is imagine a civilization with an inverted pleasure-pain axis. It seems to just be a basic fact about the world that unbearable agony and despair are experienced as disvaluable, and even cases that might appear to contradict this, say that of the masochist, in fact merely confirm the claim, because the masochist enjoys the intensely rewarding release of endogenous opioids when undergoing activities that might otherwise be humiliating or painful.

Lucas: Right. David, it seems you’re making a claim about there being a perfect convergence in the space of all possible minds, with the pleasure-pain axis having the same sort of function. I guess I’m potentially just missing, or pointing out, the gap between that and your cognitivist objectivism?

David: It seems to be built into the nature of, let’s say, agony or despair itself that it is disvaluable. It’s not “I’m in agony; is this valuable or not?” It’s not an open question, whereas with anything else, however abhorrent you or I might regard it, one can still treat it as an open question and ask: is child abuse or slavery really disvaluable? Whereas in the case of agony, it’s built into the nature of the experience itself.

Lucas: I can get behind that. I think that sometimes, when I’m feeling less nihilistic about morality, I am committed to that view. Just to push back a little bit here, though: in the space of all possible minds, I think I can imagine a mind which has a moral judgment and commitment to the maximization of suffering within itself and within the world. It’s simply … it’s perfect in that sense. It’s perfect at maximizing suffering for itself and the world, and its judgment and moral epistemology are very brittle, such that it will never change or deviate from this. How would you deal with something like that?

David: Is it possible? One can certainly imagine a culture in which displays of machismo and the ability to cope with great suffering are highly valued and conspicuously displayed. This would be fitness-enhancing, but nonetheless it doesn’t really challenge the sovereignty of the pleasure-pain axis as the axis of value and disvalue. I would struggle to conceive of some kind of intelligence that values its own despair or agony.

Brian: From my perspective, I agree with what Lucas is saying, depending on how you define things. One definition of suffering could be that part of the definition is the desire to avoid it. From that perspective, you could say it’s not possible for an agent to seek something that it, by definition, wants to avoid. But I think you could have systems where different parts are in conflict, so you could have a hedonic assessment system that outputs a signal that this is suffering, but then another system chooses to favor the suffering. Humans even have something like this when we override our own suffering. We might have hedonic systems that say going out into the cold is painful, but then we have other systems or other signals that override that avoidance response and cause us to go out into the cold anyway for the sake of something else. You could imagine the wiring being such that it wasn’t just enduring pain for some greater good, but that the motivational system was actively seeking to cause the hedonic system more experiences of pain. It’s just that that would be highly nonadaptive, so we don’t see it anywhere in nature.

David: I would agree with what Brian says there. Yes, very much so.

Lucas: Okay. Given these views you guys have expressed, I’m starting to get a better sense of them. Another branch of metaethics that we might explore, to see how it fits in with your theories, is justification theory. These are attempts at understanding moral epistemology and the motivation for acting in accordance with morality; they try to answer the questions of how moral judgments are to be supported or defended and, if possible, how one makes moral progress. In terms of AI and value alignment, whether one is an anti-realist as Brian is, or an objectivist as David is, completely changes the path forward for AI alignment and value alignment. If we are realists, as David is, then a sufficiently robust and correct moral epistemology in an AI system could essentially realize the hedonistic imperative as David sees it, where you would have an optimization process extending out from planet Earth, maximizing the objectively good hedonic states of all possible sentient beings. I guess it’s a little unclear to me how this fits in with David’s theory, or how David’s theory would be implemented.

David: There is a real problem with any theory of value that makes sovereign either the minimization of suffering or classical utilitarianism. Both Buddhism and negative utilitarianism appear to have this apocalyptic implication: if our overriding responsibility is to minimize suffering, isn’t the cleanest, quickest, most efficient way to eliminate suffering to sterilize the planet, which is now technically feasible? Though one can in theory imagine cosmic rescue missions if there is sentience elsewhere, there is apparently this not-so-disguised apocalyptic implication. When Buddha allegedly says, “I teach one thing and one thing only: suffering and the end of suffering,” well, in his day there was no way to destroy the world. Today, there is.

Much less discussed, indeed I haven’t seen it adequately discussed at all in the scholarly literature, is a disguised implication of a classical utilitarian ethic that gives this symmetry to pleasure and pain: that we ought to be launching something like a utilitronium shockwave, where utilitronium is matter and energy optimized for pure bliss, and “shockwave” alludes to its velocity of propagation. Humans are perhaps extremely unlikely to launch a utilitronium shockwave, even if and when we’re in a position to do so. But if one imagines a notional artificial superintelligence with a classical utilitarian utility function, why wouldn’t that superintelligence launch a utilitronium shockwave that maximizes the cosmic abundance of positive value within our cosmological horizon?

Personally, I would imagine a future of gradients of intelligent bliss. I think it is in fact sociologically highly likely that post-human civilization will have an elevated hedonic range. Very crudely and schematically, if ours runs from minus 10 through zero to plus 10, I can imagine a future civilization of, let’s say, plus 70 to plus 100, or plus 90 to plus 100. From the perspective of classical utilitarianism, and classical utilitarianism, or at least some watered-down version of it, is arguably the dominant secular ethic in academia and elsewhere, that kind of civilization is suboptimal. It’s not optimally moral; apparently one has this obligation to launch this kind of cosmic orgasm, so to speak.

Lucas: Right. Just pushing back a little on the first thing you said there, about the very negative scenario, which people tend to see as an implication of a suffering-focused ethic, that there can’t be any suffering if there are no sentient beings: that to me isn’t very plausible, because it discounts the possibility of future wellbeing. I take the view that we actually do have a moral responsibility to create more happy beings, and I see a symmetry between suffering and wellbeing. I don’t have a particularly suffering-focused ethic with an asymmetry such that we should alleviate suffering prior to maximizing wellbeing. David, before we jump into these justification theories, maybe you can unpack a little whether or not you see an asymmetry between suffering and wellbeing.

David: I think there’s an asymmetry. There’s this fable by Ursula Le Guin, the short story The Ones Who Walk Away from Omelas. We’re invited to imagine this city of delights, a vast city of incredible, wonderful pleasures, but the existence of Omelas, this city of delights, depends on the torment and abuse of a single child. The question is, would you walk away from Omelas, and what does walking away from Omelas entail? Now, personally, I am someone who would walk away from Omelas. The world does not have an off switch, an off button, and I think whether one is a Buddhist or a negative utilitarian, or someone who believes in suffering-focused ethics, rather than considering these theoretical apocalyptic scenarios it is more fruitful to work with secular and religious life lovers to phase out the biology of suffering in favor of gradients of intelligent wellbeing, because one of the advantages of hedonic recalibration, i.e. ratcheting up hedonic set points, is that it doesn’t ask people to give up their existing values and preferences, with some complications.

To take a rather trivial example: imagine 100 people supporting 100 different football teams. There’s simply no way to reconcile their conflicting preferences, but what one can do, if one ratchets up everyone’s hedonic set point, is improve quality of life. Focusing on ratcheting up hedonic set points rather than trying to reconcile the irreconcilable is, I think, the potential way forward.

Brian: There are a lot of different points to comment on. I agree with David that negative utilitarians should not aim for world destruction, for several reasons. One being that it would make people turn against the cause of suffering reduction; it’s important that other people not regard that cause as something to be appalled by. For example, animal rights terrorists plausibly give the animal rights movement a pretty bad name and may set back the cause of animal rights by doing that. Negative utilitarians would almost certainly not succeed anyway, so the most likely outcome is that they would hurt their own cause.

As far as David’s suggestion of improving wellbeing to reduce disagreements among competing football teams, I think that would potentially help; giving people greater wealth and equality in society can reduce some tensions. But I think there will always be some insatiable appetites, especially from moral theories. For example, a classical utilitarian has an insatiable appetite for computational resources. Egoists and other moral agents may have their own insatiable appetites. We see that in the case of humans trying to acquire wealth beyond what is necessary for their own happiness. I think there will always be agents who want to acquire as many resources as possible; the power maximizers will tend to acquire power. So I think we still have additional issues of coordination, and of social science being used to control the thirst for power among certain segments of society.

Lucas: Sorry, just to get this clear: it sounds like you guys are both committed to different forms of hedonic consequentialism, but you’re bringing up preferences and other sorts of things. Is there room for ultimate metaphysical value of preferences within your ethics? Or, within your ethical theories, are preferences simply epistemically and functionally useful indicators of what will often lead to positive hedonic states in agents?

Brian: Personally, I care to some degree about both preferences and hedonic wellbeing. Currently, I care somewhat more about hedonic wellbeing, just based on … from my meta-ethical standpoint, it’s ultimately my choice what I want to care about, and I happen to care a lot about hedonic suffering when I imagine it. From a different standpoint, you can argue that ultimately the golden rule, for example, commits you to caring about whatever it is another organism cares about, whether that’s hedonic wellbeing or some arbitrary wish. A deathbed wish would be a good example of a preference that doesn’t have hedonic content to it. The question is whether you think it’s important to keep deathbed wishes even after a person has died, ignoring side effects in terms of later generations realizing that promises are not being kept.

I think even ignoring those side effects, a deathbed wish does have some moral importance, based on the idea that if I had a deathbed wish, I would strongly want it to be carried out. If you are acting the way you want others to treat you, then you should care to some degree about other people’s deathbed wishes. But since I’m more emotionally compelled by extreme hedonic pain, that’s what I give the most weight to.

Lucas: What would your view be of an AI or machine intelligence, whatever its computational architecture might look like, which has a very strong preference that a bit be flipped one way rather than another? It just keeps flipping a bit back and forth, and then you would have a preference utilitronium shockwave going out into the world. It seems intuitive to me, and I guess this previous example does the work for me here, that we only care about preferences insofar as they have hedonic effects. I’ll bite the bullet on the deathbed wish thing: ignoring side effects, if someone wishes for something and then dies, I don’t think that we need to actually carry it out if we don’t think it will maximize hedonic wellbeing.

Brian: Ignoring the side effects, sure, though there are probably good hedonistic reasons to fulfill deathbed wishes, so that current people will not be afraid that their own wishes won’t be kept. As far as the bit flipping goes, I think a bit-flipping agent’s preference does have moral significance, but I weigh organisms in proportion to the sophistication of their minds. I care more about a single human than a single ant, for example, because a human has more sophisticated cognitive machinery. It can do more kinds of … have more kinds of thoughts about its own mental states. When a human has a preference, there’s more stuff going on within its brain to back that up, so to speak. A very simple computer program that has a very simple preference to flip a bit doesn’t matter very much to me, because there’s not a lot of substance behind that preference. You could think of it as an extremely simple mind.

Lucas: What if it’s a super intelligence that wants to keep flipping bits?

Brian: In that case, I would give it significant weight, because it has so much substance in its mind. It probably has lots of internal processes that are reflecting on its own welfare, so to speak. Yeah, if it’s a very sophisticated mind, I would give that significant weight. It might not override the preferences of seven billion humans combined, though. I tend to give less than linear weight to larger brains: as the size of the brain increases, I don’t scale the moral weight of the organism exactly linearly. That also blunts the utility monster conclusion.
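
To make Brian’s sublinear weighting concrete, here is a minimal sketch. The power-law form, the exponent, and the numbers are purely hypothetical illustrations; Brian does not commit to any particular formula in the discussion.

```python
def moral_weight(brain_size: float, alpha: float = 0.5) -> float:
    """Toy sublinear weighting of a mind by its 'size' or sophistication.

    alpha < 1 means doubling brain size less than doubles moral weight,
    which blunts utility-monster-style conclusions. Both the power-law
    form and alpha = 0.5 are illustrative assumptions, not Brian's view.
    """
    return brain_size ** alpha

# A mind 10,000x "larger" gets only 100x the weight under alpha = 0.5.
print(moral_weight(1.0), moral_weight(10_000.0))
```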

Lucas: Given Brian’s metaethics, his being an anti-realist and, as I view him, an emotivist, I guess the reasons or arguments that you could provide against his view wouldn’t refer back to anything metaphysically objective. David, wouldn’t you say that in the end it would just be your personal emotional choice whether or not to find something compelling here?

David: It’s to do with the nature of first-person facts. What is it that the equations of physics ultimately describe? If you think, or at least take seriously the conjecture, that subjectivity is the essence of the physical, the fire in the equations, then yeah, it’s just objectively the case that first-person agony is disvaluable. Here we get into some very controversial issues. I would just like to go back to one thing Brian was saying about sophistication. I don’t think it’s plausible that, let’s say, a pilot whale is more cognitively sophisticated than humans, but it’s very much an open question whether a pilot whale, with a substantially larger brain, substantially larger neocortex, and substantially larger pain and pleasure centers, undergoes experiences of greater intensity than humans do. Therefore, other things being equal, I would say that the profoundly aversive states undergone by the whale matter more than those of a human. It’s not the level of sophistication or complexity that counts.

Lucas: Do you want to unpack a little bit your view about the hedonics versus the preferences, and whether or not preferences have any weight in your view?

David: Only indirect weight. Ultimately, as I said, I think what matters is the pleasure-pain axis, and preferences only matter insofar as they impact that. Thanks to natural selection, we have countless millions and billions of preferences being manufactured all the time, and as social primates, countless preferences conflict with each other. There is simply no way to reconcile a lot of them, whereas one can continue to enrich and enhance wellbeing. So yeah, sure, other things being equal, satisfy people’s preferences. But in so many contexts, from politics, the Middle East, and interpersonal relationships, to people’s desire to be the world-famous this, that or the other, it is logically impossible to satisfy a vast number of preferences.

Lucas: I think it would be interesting and useful to dive into justification theories, like moral epistemology and ethical motivation. I want to turn to Brian now. Brian, I’m curious to know whether, given your view of anti-realism and suffering-focused ethics, it’s possible to make moral progress, and what it means to make moral progress. How does one navigate the realm of moral issues in your view, given the metaethics that you hold? Why ought I or others follow your ethics, or not?

Brian: Moral progress, I think, can be thought of like this: many people have a desire to improve their own moral views, using standards of improvement that they choose. For example, a common standard would be that the moral views I will hold after learning more are views I will generally defer to as the better ones. There might be some exceptions, especially if you get too much into some subject area that distorts your thinking relative to the way it was before. Basically, you can think of brain-state changes as either being approved of or not approved of by the current state. Moral progress would consist of making updates to your brain that you approve of, like installing updates to a computer that you choose to install.

That’s what moral progress would be. Basically, you designate which changes you want to happen, and then if those happen according to the rules, it’s progress relative to what my current state thought. You can have failures of goal preservation. The example that Eliezer Yudkowsky gives is that if you give Gandhi a pill that would make him want to kill people, he should not take it, because that would change his goals in a way that his current goals don’t approve of. That would be moral anti-progress relative to Gandhi’s current goals. Yeah, that’s how I would think of it. Different people have different preferences about how much of what you could call preference idealization they want.

Preference idealization is the idea of imagining what preferences you would hold if you knew more, were smarter, had more experiences, and so on. Different people can want different amounts of preference idealization. There are some people who say, “I have almost no idea what I currently value, and I want to defer to an artificial intelligence to help me figure that out.” In my case, it’s very clear to me that extreme suffering is what I want to continue to value, and if I changed from that stance, that would be a failure of goal preservation relative to my current values. There are still questions on which I do have significant uncertainty, in the sense that I would defer to my future self.
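
As a loose illustration of the “approved update” idea Brian describes (the Gandhi-pill test), here is a minimal sketch. The value representation, the approval rule, and all the numbers are invented for illustration; they are not Brian’s or MIRI’s formalism.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    values: dict  # e.g. {"nonviolence": 1.0, "spin_cotton": 0.2}

    def approves(self, proposed_values: dict) -> bool:
        """Toy approval rule: reject any update that weakens a value the
        current self marks as core (weight >= 0.9). Purely illustrative."""
        core = {k for k, w in self.values.items() if w >= 0.9}
        return all(proposed_values.get(k, 0.0) >= self.values[k] for k in core)

    def update(self, proposed_values: dict) -> None:
        # "Moral progress" here just means: only take updates the current
        # state endorses; Gandhi refuses the pill that makes him want to kill.
        if self.approves(proposed_values):
            self.values = proposed_values

gandhi = Agent({"nonviolence": 1.0, "spin_cotton": 0.2})
gandhi.update({"nonviolence": 0.0, "kill_people": 1.0})  # rejected by the test
print(gandhi.values)  # unchanged: {'nonviolence': 1.0, 'spin_cotton': 0.2}
```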

For example, the question of how to weigh different brain complexities against each other is something where I still have significant uncertainty. The question of how much weight to give to what are called higher-order theories of consciousness versus first-order theories, basically how much you think that high-level thoughts are an important component of what consciousness is, is another issue where I have significant moral uncertainty. There are issues where I want to learn more, think more, and have more other people think about it before I make up my mind fully. Then, why should you hold my moral view? The real answer is because I want you to, and I’ll try to come up with arguments to make it sound more convincing to you.

David: I find subjectivism troubling. The football team I support is Manchester United. I wouldn’t take a pill that induced me to support Manchester City, because that would subvert my values in some sense. Nonetheless, ultimately, support for Manchester United is arbitrary. Is support for the reduction of suffering merely akin to one’s support for, let’s say, Manchester United?

Brian: I think metaphysically they’re the same. It feels very different; there’s more of a spiritual quality, like your whole being is behind the reduction of suffering in a way that’s not true for football teams. But ultimately, there’s no metaphysical difference.

David: Intentional objects are ultimately arbitrary; natural selection has shaped us to care about certain intentional objects. This is philosophy jargon for the things we care about, whether it’s football or politics, or anything. Nonetheless, unlike these arbitrary intentional objects, it just seems to be built into the nature of agony or despair that they are disvaluable. It’s simply not possible to instantiate such states and find it an open question whether they’re disvaluable or not.

Brian: I don’t know if we want to debate that now, but I think it is possible. I mean, we already have examples of one organism finding the suffering of another organism to be positively valuable.

David: They are not mirror-touch synesthetes. They do not accurately perceive what is going on, and insofar as one does, either as a mirror-touch synesthete or by doing the equivalent of a Vulcan mind meld or something like that, one is not going to perceive the disvaluable as valuable. It’s an epistemological limitation.

Brian: My objection to that is that it depends how you hook up the wires between the two minds. If you hook up one person’s suffering to another person’s suffering, then the second person will say it’s also bad. If you hook up one person’s suffering neurons to another person’s pleasure neurons, then the second person will say it’s good. It just depends how you hook up the wires.

David: It’s not all or nothing, but if one is, let’s say, a mirror-touch synesthete today, and someone stubs their toe and you have an experience of pain, it’s simply not possible to take pleasure in their stubbing their toe. I think if one does have this notional God’s-eye perspective, an impartial view from nowhere, one will act accordingly.

Brian: I disagree with that because I think you can always imagine just reversing the motivational wires so to speak. Just flip the wire that says this is bad. Flip it to saying this is good in terms of the agent’s motivation.

David: Right. Yes. I was trying to visualize what this would entail.

Brian: Even in the synesthete example, just imagine a brain where the same stimulus that currently, in normal humans, triggers negative emotional responses instead has the neurons hooked up to the positive emotional responses.

David: Once again, wouldn’t this be an epistemological limitation rather than some deep metaphysical truth about the world?

Brian: Well, it depends how you define epistemology, but you could be a psychopath who correctly predicts another organism’s behavior but doesn’t care. You can have a difference between beliefs and motivations. The beliefs could correctly recognize the suffering, I think, but the motivations could have the wires flipped such that there’s motivation to cause more of the suffering.

David: It’s just that I would say that the psychopath has an epistemological limitation, in that the psychopath does not adequately take into account other perspectives. In that sense, the psychopath lacks an adequate theory of mind. The psychopath is privileging one particular here and now over other here and nows, which is not metaphysically sustainable.

Brian: It might be a definitional dispute like whether you can consider having proper motivation to be part of epistemological accuracy or not. It seems that you’re saying if you’re not properly motivated to reduce … you don’t have proper epistemological access to it by definition.

David: Yes, one has to be extremely careful with using this term “by definition”. I would say that we are all to some degree sociopathic. One is quasi-sociopathic towards one’s future self, for example, insofar as one, let’s say, doesn’t prudently save but squanders money and so on. We are far more psychopathic towards other sentient beings, because one is failing to fully take into account their perspective. It’s a hardwired epistemological limitation. One thing I would very much agree with Brian on is moral uncertainty: being prepared to reflect, take into account other perspectives, and allow for the possibility that one can be wrong. It’s not always possible to have the luxury of moral reflection and uncertainty.

If a kid is drowning, hopefully one dashes into the water to save the kid. Is this the right thing to do? Well, what happens if the kid, and this is a real story, happens to be a toddler who grows up to be Adolf Hitler and plunges the world into war? One doesn’t know the long-term consequences of one’s actions. Wherever possible, yes, one urges reflection and caution. In the context of a discussion or debate one isn’t always carefully qualifying one’s uncertainty and agnosticism, but in a more deliberative context perhaps one should certainly do so.

Lucas: Let’s bring it back a little bit to the ethical epistemology and ethical motivation behind your hedonistic imperative, given your objectivism. I guess here it’d also be interesting if you could explore key metaphysical and physical uncertainties, what more we might learn about the universe, and how, such that your view would be further informed.

David: Happy to launch into a long spiel about my view. One thing I think is really worth stressing is that one doesn’t need to buy into any form of utilitarianism or suffering-focused ethics to believe that we can and should phase out the biology of involuntary suffering. It’s common to all manner of secular and religious views that we should, other things being equal, be minimizing suffering and reducing unnecessary suffering, and this is one thing that technology, biotechnology, allows us to do: support for something like universal access to preimplantation genetic screening, phasing out factory farming and shutting slaughterhouses, going on to essentially reprogram the biosphere.

It doesn’t involve a commitment to one particular ethical or meta-ethical view. It’s like pain-free surgery, anesthesia: you don’t need to sign up to a particular ethical theory to recognize it’s a good thing. I suppose my interest is very much in building bridges with other ethical traditions. I am happy to go into some of my own personal views, but I just don’t want to tie this idea that we can use biotech to get rid of suffering to anything quirky or idiosyncratic to me. I have a fair number of idiosyncratic views.

Lucas: It would be interesting if you’d explain whether or not you think that super intelligences or AGI will necessarily converge on what you view to be objective morality or if that is ultimately down to AI researchers to be very mindful of implementing.

David: I think there are real risks here when one starts speaking as though posthuman superintelligence is going to end up endorsing a version of one’s own views and values, which a priori, if one thinks about it, is extremely unlikely. One needs to ask what one means by posthuman superintelligence. If posthuman superintelligence consists of our biological descendants, I think it will have a recognizable descendant of the pleasure-pain axis, though I think it will be ratcheted up so that, say, experience below hedonic zero is impossible.

In that sense, I do see a convergence. By contrast, if one has a conception of posthuman superintelligence on which it may not be sentient, may not be experiential at all, then there is no guarantee that such a regime would be friendly to anything recognizably human in its values.

Lucas: The crux here is that there are different ways of doing value alignment. One is descriptive: a superintelligence gains enough information about the set of all values that human beings have and aligns to those, or to some fraction of those, or to some idealized version of those through something like coherent extrapolated volition. Another is where we embed a moral epistemology within the machine system, so that the machine becomes an ethical reasoner, almost a moral philosopher in its own right. It seems that given your objectivist ethics, with that moral epistemology it would be able to converge on what is true. Do these different paths forward make sense to you? It also seems that the role of mind-melding is very crucial and core to the realization of the correct ethics in your view.

David: With some people, their hearts sink when the topic of machine consciousness crops up, because they know it’s going to be a long, inconclusive philosophical discussion, with a shortage of any real empirical tests. I will just state it: I do not think a classical digital computer is capable of phenomenal binding, therefore it will not understand the nature of consciousness or pleasure and pain, and I see value and disvalue as bound up with the pleasure-pain axis. In that sense, I think what we’re calling machine artificial general intelligence is, in one sense, invincibly ignorant. I know a lot of people would disagree with this description, but if you consider how humans, or at least some humans, spend a lot of their time thinking about, talking about, and exploring consciousness in all its varieties, in some cases exploring psychedelia, what are we doing? There is a vast range of cognitive domains that are completely cognitively inaccessible to digital computers.

Lucas: Putting aside the issue of machine consciousness, it seems that being able to access hedonic states first-person plays an extremely foundational motivational, or at least epistemological, role in your ethics, David.

David: Yes. I mean part of intelligence involves being able to distinguish the important from the trivial, which ultimately as far as I can see boils down to the pleasure-pain axis. Digital zombies have no conception of what is important or what is trivial I would say.

Lucas: Why would that be, if a true zombie in the David Chalmers sense is functionally isomorphic to a human? Presumably that zombie would properly care about suffering, because all of its functional behavior is the same. Do you think that in the real world, digital computers can’t do the same functional computation that a human brain does?

David: None of us have the slightest idea how one would set about programming a computer to do the kinds of things that humans are doing when they talk about and discuss consciousness, when they take psychedelics, or when they discuss the nature of the self. I’m not saying workarounds are impossible. I just don’t think they’re spontaneously going to happen.

Brian: I agree. Just like building intelligence itself, it requires a lot of engineering to create those features of humanlike psychology.

Lucas: I don’t see why it would be physically or technically impossible to instantiate an emulation of that architecture, or an architecture basically identical to it, in a machine system. I don’t understand why a computer architecture or substrate is really so different from a biological architecture or substrate that this would be impossible.

David: It’s whether one feels the force of the binding problem or not. The example one can give: imagine the population of the USA as skull-bound minds, and imagine them implementing any kind of computation you like, with ultra-fast electromagnetic signaling, far faster than the chemical signaling of the CNS as normally conceived. Nonetheless, short of a breakdown of monistic physicalism, there is simply no way that the population of the USA is spontaneously going to become a unified subject of experience that apprehends perceptual objects. Essentially, all you have is a micro-experiential zombie. The question is, why are 86 billion odd membrane-bound, supposedly classical neurons any different?

Why aren’t we micro-experiential zombies? One way to appreciate, I think, the force and adaptive role of phenomenal binding is to look at syndromes where binding even partially breaks down, such as simultanagnosia, where the subject can only see one thing at once; or motion blindness, akinetopsia, where one can’t apprehend motion; or severe forms of schizophrenia, where there is no longer any unitary self. Somehow, right now, you instantiate a unitary world simulation populated by multiple phenomenally bound dynamical objects, and this is tremendously fitness-enhancing.

The question is how a bunch of membrane-bound nerve cells, a pack of neurons, can carry out what is classically impossible. One can probe the CNS with today’s comparatively coarse-grained neuroscans and find individual feature processors: edge detectors, motion detectors, color detectors. Apparently, there are no perceptual objects there. How is it that right now your mind/brain is capable of running this egocentric world simulation in almost real time? It’s an astonishing computational feat. I argue for a version of quantum mind, but one needn’t buy into this to recognize that it’s a profound, unsolved problem. I mean, why aren’t we like the population of the USA?

Lucas: Just to bring this back to the AI alignment problem, putting aside issues of phenomenal binding and consciousness for a moment, and putting aside also the conception that superintelligence is likely to be some sort of biological instantiation: if we imagine the more mainstream AI safety approach, the MIRI idea of there being simply a machine superintelligence, it seems that in your view, David, and I think this elucidates how one’s meta-ethical views are intertwined in the end with what is true about consciousness and computation, it would be close to or perhaps impossible to actually do AI alignment or value alignment on a machine superintelligence.

David: It is possible to do value alignment, but I think the real worry is that if you take the MIRI scenario seriously, this recursively self-improving software that will somehow … this runaway intelligence, there’s no knowing where it may lead. But MIRI, as far as I know, have a very different conception of the nature of consciousness and value. I’m not aware that they tackle the binding problem. I just don’t see that unitary subjects of experience, or values, or a pleasure-pain axis are spontaneously going to emerge from software. It seems to involve some form of strong emergence.

Lucas: Right. I guess to tie this back and ground it a bit: it seems that the portion of your metaethics that is informed by empirical facts about consciousness and minds in general is the view that the phenomenal pleasure-pain axis is what has intrinsic goodness or badness to it, because it is foundationally, physically, and objectively the pleasure-pain axis of the universe, the fire in the equations, I guess, as you say. Without access to phenomenal hedonic states, one will ultimately go awry in one’s ethics, given that that is the core of value.

David: Yeah. In theory, an intelligent digital computer or robot could impartially pave the cosmos with either dolorium or hedonium without actually understanding the implications of what it was doing. Hedonium, or utilitronium, being matter and energy optimized for pure bliss; dolorium being matter and energy optimized for, for lack of a better word, pure misery or despair. The system in question would not understand the implications of what it was doing. I know a lot of people do think that, well, sooner or later, classical digital computers, our machines, are going to wake up. I don’t think it’s going to happen, and we’re not talking about hypothetical quantum computers next century and beyond, but simply an expansion of today’s programmable digital computers. I think they’re zombies and will remain zombies.

Lucas: Fully autonomous agents which are very free and superintelligent in relation to us will, in your view, require fundamental access to that which is valuable, which is phenomenal states, the phenomenal pleasure-pain axis. Without that, it’s missing a key epistemological ingredient, and it will fail at value alignment.

David: Yes, yeah, yeah. It simply does not understand the nature of the world. It’s rather like claiming that a system is intelligent but doesn’t understand the second law of thermodynamics. It’s not a full-spectrum superintelligence.

Lucas: I guess my open question there would be whether or not it would be possible for something to lack access to fundamental hedonic states but still be something of a Bodhisattva, with a robust moral epistemology that was heading in the right direction toward what might be objective.

David: The system in question would not understand the implications of what it was doing.

Lucas: Right. It wouldn’t understand the implications but if it got set off in that direction and it was simply achieving the goal, then I think in some cases we might call that value aligned.

David: Yes. One can imagine … sorry, Brian, do intervene when you’re ready, but one could imagine, for example, being skeptical of the possibility of interstellar travel for biological humans, but programming systems to go out across the cosmos, or at least within our cosmological horizon, and convert matter and energy into pure bliss. One needn’t assume that this will apply to our little bubble of civilization, but what do we do about inert matter and energy elsewhere in the galaxy? One can leave it as it is, or, if one is, let’s say, a classical utilitarian, one could convert it into pure bliss. One can send out probes. One could restructure, reprogram matter and energy in that way.

That would be a kind of compromise solution in one sense: keep complexity within our tiny little bubble of civilization but convert the rest of the accessible cosmos into pure bliss. Though that would not, strictly speaking, maximize the abundance of positive value in our Hubble volume, it could come extraordinarily close to it from a classical utilitarian perspective.

Lucas: Brian, do you have anything to add here?

Brian: I disagree on many, many points here. I think digital computation is capable of processing functionally similar enough to what the brain does. Even if that weren’t the case, a paperclip maximizer with a very different architecture would still have a very sophisticated model of human emotions; its motivations wouldn’t be hooked up to those emotions, but it would understand, in every other sense of the word “understand”, human pleasure and pain. So I see it more as a challenge of hooking up the motivation properly. As far as my thoughts on alignment in general based on my metaethics, I tend to agree with the default approach, like the MIRI approach, which is unsurprising because MIRI is also anti-realist on metaethics. That approach sees the task as taking human values and somehow translating them into the AI, and that could happen in a variety of different ways: learning human values implicitly from examples, perhaps in combination with top-down programming of certain ethical axioms.

That gets into exactly how you do alignment, and there are lots of approaches to that. The basic idea, that you need to specifically replicate the complexity of human values in machines, along with the complexity of the way humans reason, because it won’t be there by default, is in any case shared between my opinion and that of the mainstream AI alignment approach.

Lucas: Do you take a view then similar to that of coherent extrapolated volition?

Brian: In case anybody doesn’t know, coherent extrapolated volition is Eliezer Yudkowsky’s idea of giving the AI the meta … you could call it a metaethics. It’s a meta-rule for learning values: take humanity and think about what humanity would want to want if it were smarter, knew more, had more positive interactions with each other, and thought faster, and then try to identify points of convergence among the values of different idealized humans. In terms of theoretical things to aim for, I think CEV is one reasonable target, for reasons of cooperation among humans. If I controlled the world, I would prefer to have the AI implement my own values rather than humanity’s values, because I care more about my values. Some human values are truly abhorrent to me, and others seem at least unimportant to me.

In terms of getting everybody together to not fight endlessly over the outcome of AI in this theoretical scenario, CEV would be a reasonable target to strive for. In practice, I think that’s unrealistic; a pure CEV is unrealistic because the world does not listen to moral philosophers to any significant degree. In practice, things are determined by politics, economic power, technological and military power, and forces like that. Those determine most of what happens in the world. I think we may see approximations to CEV that are much more crude. You could say that democracy is an approximation to CEV in the sense that different people with different values, at least in theory, discuss their differences and then come up with a compromise outcome.

Something like democracy, maybe power-weighted democracy in which more powerful actors have more influence, will be what ends up happening. The philosopher’s dream of idealizing values to perfection is unfortunately not going to happen. We can push in directions that are slightly more reflective; we can push society towards slightly more reflection, slightly more cooperation, and things like that.
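
As a rough illustration of the “power-weighted democracy” Brian contrasts with idealized CEV, here is a minimal sketch. Scoring outcomes by a weighted sum of each actor’s preferences, and every name and number below, is a hypothetical simplification for exposition, not a description of how CEV or any real aggregation scheme is specified.

```python
def power_weighted_choice(preferences, power):
    """Pick the outcome with the highest power-weighted support.

    preferences: {actor: {outcome: support in [0, 1]}}
    power:       {actor: relative influence (economic, military, etc.)}
    Equal power behaves like one-actor-one-vote; unequal power skews
    the chosen outcome toward the stronger actors.
    """
    outcomes = {o for prefs in preferences.values() for o in prefs}
    def score(outcome):
        return sum(power[a] * prefs.get(outcome, 0.0)
                   for a, prefs in preferences.items())
    return max(outcomes, key=score)

prefs = {
    "bloc_a": {"reflective_process": 0.9, "status_quo": 0.2},
    "bloc_b": {"reflective_process": 0.3, "status_quo": 0.8},
}
print(power_weighted_choice(prefs, {"bloc_a": 1.0, "bloc_b": 1.0}))  # equal power
print(power_weighted_choice(prefs, {"bloc_a": 1.0, "bloc_b": 5.0}))  # unequal power
```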

David: A couple of points. First, to use an example we touched on before: what would be the coherent extrapolated volition of all the world’s football supporters? There’s simply no way to reconcile all their preferences. One may say that fully informed football supporters wouldn’t waste their time passionately supporting one team or another, but essentially I’m not sure that the notion of coherent extrapolated volition makes sense there. Of course, there are more serious issues than football. The second thing concerns the nature of value, regardless of one’s metaphysical stance on whether one is a realist or an anti-realist about value: I think it is possible via biotechnology to create states that are empirically, subjectively far more valuable than anything physiologically feasible today.

Take Prince Myshkin in Dostoevsky’s The Idiot. Dostoevsky was himself a temporal lobe epileptic, and he said, “I would give my whole life for this one instant.” Essentially, there are states of consciousness that are empirically super-valuable, and rather than attempting to reconcile irreconcilable preferences, insofar as we aspire to long-term, full-spectrum superintelligence, perhaps we should be aiming to create these super-valuable states. I’m not sure whether it’s really morally obligatory. As I said, my own focus is on the overriding importance of phasing out suffering, but for someone who does give some weight, or equal weight, to positively valuable experiences, there is a vast range of valuable experience that is completely inaccessible to humans and that could be engineered via biotechnology.

Lucas: A core difference here is that given Brian’s anti-realism, AI alignment or value alignment would in the end be left to those powers he described in order to resolve irreconcilable preferences. That is, unless human preferences converge strongly enough, after enough time and information, that there are no longer irreconcilable preferences, which I guess I would suppose is probably wrong.

Brian: Which is wrong?

Lucas: That it would be wrong to think that human beings’ preferences would converge strongly enough that there would no longer be irreconcilable preferences after coherent extrapolated volition.

Brian: Okay, I agree.

Lucas: I’m saying that in the end, value alignment would be left up to economic forces, military forces, and other forces to determine what comes out of it. In David’s view, it would come down to whether we could get the epistemology right: if we could know enough about value, the pleasure-pain axis, and the metaphysical status of phenomenal states, then value alignment would be to capitalize on that. I didn’t mean to interrupt you, Brian. Do you want to jump in there?

Brian: I was going to say the same thing you did: I agree with David that there would be irreconcilable differences, and in fact many different parameters of the CEV algorithm would probably affect the outcome. One example is that people tend to crystallize their moral values as they age. You could imagine that somebody who was presented with utilitarianism as a young person would be more inclined toward that, whereas if that person had been presented with deontology as a young person, they would prefer deontology as they got older. So the output could be sensitive to seemingly arbitrary factors, such as the order in which you are presented with moral views, what else is going on in your life at the time you confront a given moral view, or 100 other inputs. CEV is really a class of algorithms; depending on how you tune the parameters, you could get substantially different outcomes.

Yeah, CEV is an improvement even if there’s no obvious unique target. As I said, in practice, we won’t even get pure CEV but we’ll get some kind of very rough power-weighted approximation similar to our present world of democracy and competition among various interest groups for control.

Lucas: Just to explain how I’m feeling so far. I mean Brian, I’m very sympathetic to your view but I’m also very sympathetic to David’s view. I hover somewhere in between. I like this point that David made where he quoted Russell, something along the lines that one ought to be careful when discussing ethical metaphysics such that one is not simply trying to make one’s own views and preferences objective.

David: Yeah. When one speaks, just in general, about the nature of, for example, posthuman superintelligence, think of the way that today the very notion of intelligence is a contested term. Simply sticking the word “super” in front of it, well, just how illuminating is that? When I read someone’s account of superintelligence, I’m really reading an account of what kind of person they are, their intellect and their values. I’m sure that when I discuss the nature of full-spectrum superintelligence, I can’t see the extent to which I’m simply articulating my own limitations.

Lucas: I guess for me here, to get all my partialities out of the way, I hope that objectivism is true, because I think that it makes value alignment way less messy. In the end, we could have something actually good and beautiful, though I don’t know whether that is some preference of mine that might be objective, or just simply wrong or confused. The descriptive picture that I think Brian is committed to, which gives rise to the MIRI and Tomasik form of anti-realism, is just one where in the beginning there was entropy and noise, and many generations of stars fusing atoms into heavier elements. One day one of these disks turned into a planet, a sun shone some light on it, and the planet began to produce people. There’s an optimization process there, which seems to be ultimately driven by entropy, and morality seems to simply be a part of this optimization process, which just works to facilitate and mediate the relations between angry, mean primates like ourselves.

Brian: I would point out there’s also a lot of spandrel to morality, in my opinion, especially these days, now that we’re not heavily optimized by biological pressures. This whole conversation that we’re having right now is a spandrel in the sense that it’s just an outgrowth of certain abilities that we evolved, but it’s not at all adaptive in any direct sense.

Lucas: Right. In this view, it really just seems like morality and suffering and all of this are just byproducts of the screaming entropy and noise of whatever led to this universe. At the same time, and I think this is the part that people committed to MIRI-style anti-realism, and to relativism and skepticism about ethics in general, maybe are not tapping into enough, this objective process is producing a very real and objective phenomenal self and story, which is caught up in suffering, where suffering is really suffering and it really sucks to suffer. It all seems at face value true, in that moment of suffering, that this is real. The suffering is real. The suffering is bad. It’s pretty horrible.

Or this bliss is something that I would never give up, and if the rest of the universe were this bliss, that would just be the most amazing thing ever. There’s this very subjective, phenomenal, experiential thing that the universe produces, the subjective phenomenal story and narrative that we live. It seems there’s just this huge tension between that, the clear suffering of suffering and just being a human being, and the anti-realism.

Brian: I’m not sure there’s a tension, because the anti-realist agrees that humans experience suffering as meaningful and experience it as the most important thing imaginable. There’s not really a tension, and you can explore why humans quest for objectivity. There seems to be a certain glow that attaches to things when you say they’re objectively moral; that’s just a weird quirk of human brains. I would say that ultimately we can choose to care about what we care about, whether it’s subjective or not. I often say that even if objective truth exists, I don’t necessarily care what it says, because I care about what I care about. It could turn out that objective truth orders you to torture squirrels; if it does, then I’m not going to follow the objective truth. On reflection, I’m not unsatisfied at all with anti-realism, because what more could you want than what you want?

Lucas: David, feel free to jump in if you’d like.

David: Well, there’s this temptation to oscillate between two senses of the word “subjective”: subjective in the sense of being neither true nor false, and subjective in the sense of first-person experience. My being in agony, or your being in agony, or someone being in despair is, as I said, as much an objective property of reality as the rest mass of the electron. What we can be doing is working in such ways as to increase, in theory to maximize, the amount of subjective value in the world, regardless of whether or not one believes that this has any transcendent significance, with the proviso that there is a risk that if one aims, strictly speaking, to maximize subjective value, one gets the utilitronium shockwave. If one instead aims, as I personally advocate, for a civilization of superintelligent bliss, one is not asking people to give up their core values and preferences, unless one of those core values and preferences is to keep hedonic set points unchanged. That’s not very intellectually satisfying, but it’s … this is the idea if one is working towards some kind of consensus, a compromise.

Lucas: I think now I want to get into ethical uncertainty a bit more, specifically with regards to meta-ethical uncertainty. I think that, given the kinds of people that we are, even if we disagree about realism versus anti-realism or ascribe different probabilities to each view, we might pretty strongly converge on how we ought to do value alignment, given the kinds of moral considerations that we have. I’m just curious to explore a little bit more what you guys are most uncertain about, what it would take to change your minds, what new information you would be looking for that might challenge or make you revise your metaethical views, and how we might want to proceed with AI alignment given our metaethical uncertainty.

Brian: Can you do those one by one?

Lucas: Yeah, for sure, if I can remember everything I just said. First, to start off: what are you guys most uncertain about within your meta-ethical theories?

Brian: I’m not very uncertain meta-ethically. I can’t actually think of what would convince me to change my metaethics, because as I said, even if it turned out that metaphysically moral truth was a thing out there in some way, whatever that would mean, I wouldn’t care about it except for instrumental reasons. For example, if it was a god, then you’d have to instrumentally care about god punishing you or something, but in terms of what I actually care about, it would not be connected to moral truth. It would have to be some sort of revision of the way I conceive of my own values; I’m not sure what it would look like for me to be meta-ethically uncertain.

Lucas: There’s a branch of metaethics which tackles this issue of moral commitment to meta-ethical views: if some sort of meta-ethical claim is true, why ought I to follow what is metaethically true? In your view, Brian, why ought you not to follow, or why ought it not matter for you to follow, what is meta-ethically true if there end up being objective moral facts?

Brian: The squirrel example is a good illustration: if ethics turned out to be “you must torture as many squirrels as possible”, then screw moral truth. I don’t see what this abstract metaphysical thing has to do with what I care about myself. Basically, my ethics comes from empathy, seeing others in pain, wanting that to stop. Unless moral truth somehow gives insight about that, like maybe moral truth is somehow based on that kind of empathy in a sophisticated way, it would just be like another person giving me thoughts on morality. The metaphysical nature of it would be irrelevant. It would only be useful insofar as it appealed to my own emotions and sense of what morality should be for me.

David: If I might interject, undercutting my position on negative utilitarianism and suffering-focused ethics: I think it quite likely that a posthuman superintelligence, an advanced civilization with a hedonic range ratcheted right up to 70 to 100 or something like that, would look back on anyone articulating the kind of view that I am, that anyone who believes in suffering-focused ethics does, and see it as some kind of depressive psychosis. One intuitively assumes that our successors will be wiser than we are, and perhaps they will be in many ways. Yet in another sense, I think we should be aspiring to ignorance: once we have done absolutely everything in our power to minimize, mitigate, abolish, and prevent suffering, I think we should forget it ever existed. I hope that eventually any experience below hedonic zero will be literally inconceivable.

Lucas: Just to jump to you here, David: what are you most meta-ethically uncertain about?

David: It’s this worry that what one is doing, however much one is pronouncing about the nature of reality or the future of intelligent life in the universe and so on, is really some kind of disguised autobiography. Given that, as for quite a number of people, pain and suffering have sadly loomed larger in my life than pleasure, I may be turning this into a deep metaphysical truth about the universe. This potentially undercuts my view. As I said, I think there are arguments against the symmetry view: suffering is self-intimatingly bad, whereas there is nothing self-intimatingly bad about being an insentient system or a system that is really content. Nonetheless, yeah, I take seriously the possibility that all I’m doing is obliquely expressing my own limitations of perspective.

Lucas: Given these uncertainties and the difficulty and expected impact of AI alignment, and if we’re again committing ourselves to this MIRI view of an intelligence explosion with quickly recursively self-improving AI systems, how would you both, if you were the king of AI strategy, go about applying your metaethics and working on the AI alignment problem, and thinking about strategy, given your uncertainties and your views?

Brian: I should mention that my most probable scenario for AI is a slow takeoff in which lots of components of intelligence emerge piece by piece rather than a localized intelligence explosion. But if it were a hard takeoff, a localized intelligence explosion, then yeah, I think the diversity of approaches that people are considering is what I would pursue as well. It seems to me you have to somehow learn values, because in the same way that we’ve discovered that teaching machines by learning is more powerful than teaching them by hard-coding rules, you probably have to mostly learn values as well, although there might be hard coding mixed in. Yeah, I would just pursue a variety of approaches, in the way that the current community is doing.

I support the fact that there is also a diversity of short-term versus long-term focus. Some people are working on concrete problems. Others are focusing on issues like decision theory and logical uncertainty and so on, because I think some of those foundational issues will be very important. For example, decision theory could make a huge difference to the AI’s effectiveness, as well as to issues of what happens in conflict situations. Yeah, I think a diversity of approaches is valuable. I don’t have specific advice on when I would recommend tweaking current approaches. I guess I expect that the concrete problems work will mostly be done automatically by industry, because those are the kinds of problems that you need to solve to make AI work at all. If anything, I might invest more in the kind of long-term approaches that practical applications are likely to ignore or at least put off until later.

David: Yes, because my background assumptions are different, it’s hard for me to deal with your question. If one believes that subjects of experience that could suffer could simply emerge at different levels of abstraction, I don’t really know how to tackle this, because this strikes me as a form of strong emergence. One of the reasons why philosophers don’t like strong emergence is that essentially all bets are off. Imagine if life hadn’t been reducible to molecular biology and hence, ultimately, to chemistry and physics. Yeah, I’m probably not the best person to answer your question.

In terms of real moral focus, I would like to see the molecular signature of unpleasant experience identified and then, essentially, made completely off limits, making it biologically impossible for any sentient being to suffer. If one also believes that there are or could be subjects of experience that somehow emerge in classical digital computers, then, yeah, I’m floundering; my theory of mind and reality would be wrong.

Lucas: Touching on the paper that Kaj Sotala wrote on suffering risks, I think that a lot of different value systems would also converge with you on your view, David. Whether or not we take the view of realism or anti-realism, I think that most people would agree with you. I think the issue comes about, again, with preference conflicts, where some people, and I think this might even be a widespread view in Catholicism, view suffering as really important because it teaches you things and/or has some special metaphysical significance in relation to God. Within the anti-realist view, with Brian’s view, just dealing with varying preferences on whether or not we should be able to suffer is something I just don’t want to deal with.

Brian: Yeah, that illustrates what I was saying about preferring my values over the collective values of humanity. That’s one example.

David: I don’t think it would be disputed that sometimes suffering can teach lessons. The question is whether there are any lessons that couldn’t be functionally replaced by something else. There’s this idea that we can just offload the nasty side of life onto software. In the case of pain and nociception, one knows that software systems can be programmed or trained up to avoid noxious stimuli without the nasty raw feels, so should we be doing the same thing for organic biological robots too? When it comes to this question of suffering, one can have quite fierce and lively disputes with someone who says that, yeah, they want to retain the capacity to suffer. This is very different from involuntary suffering. I think that quite often someone can see that, no, they wouldn’t want to force another sentient being to suffer against their will. It should be a matter of choice.

Lucas: To tie this all into AI alignment again, really the point of this conversation is that, again, we’re doing ethics on a deadline. If you survey the top 100 AI safety researchers or AI researchers in the world, you’ll see that they give a probability distribution for the likelihood of human-level artificial intelligence with about a 50% probability by 2050. This, many suspect, will have enormous implications for Earth-originating intelligent life and our cosmic endowment. The normative, descriptive, and applied ethical practices that we engage in are all embodiments of, and consequential to, the sorts of meta-ethical views we hold, which may not even be explicit. I think many people don’t really think about metaethics very much. I think that many AI researchers probably don’t think about metaethics very much.

The end towards which AI will be aimed will largely be a consequence of some aggregate of meta-ethical views and assumptions, or the meta-ethical views and assumptions of a select few. I guess, Brian and David, just to tie this all together, what do you guys view as the practicality of metaethics in general, and in terms of technology and AI alignment?

Brian: As far as what you said about metaethics determining the outcome, I would say maybe the implicit metaethics will determine the outcome, but I think, as we discussed before, 90-some percent of the outcome will be determined by ordinary economic and political forces. Most people in politics in general don’t think about metaethics explicitly, but they still engage in the process and have a big impact on the outcome. I think the same will be true in AI alignment. People will push for things they want to push for, and that’ll mostly determine what happens. It’s possible that metaethics could inspire people to be more cooperative, depending on how it’s framed. CEV as a practical metaethics could potentially inspire cooperation if it’s seen as an ideal to work towards, although the extent to which it can actually be achieved is questionable.

Sometimes you might have a naïve view where a moral realist assumes that a superintelligent AI would necessarily converge to the moral truth, or at least that a superintelligent AI could identify the moral truth, and then maybe all you need to do is program the AI to care about the moral truth once it discovers it. Those particular naïve approaches, I think, would produce the wrong outcomes, because there would be no moral truth to be found. I think it’s important to be wary of the assumption that a superintelligence will figure it out on its own and we don’t need to do the hard work of loading complex human values ourselves. It seems like the current AI alignment community largely recognizes this; they recognize that there’s a lot of hard work in loading values and it won’t just happen automatically.

David: In terms of metaethics, consider the case of pain-free surgery, surgical anesthesia. When it was first introduced in the mid 19th century, it was controversial for about 15 years. There were powerful voices who spoke against it, but nonetheless a consensus emerged very rapidly, and now almost all of us take anesthesia for granted for major surgery. It didn’t require a consensus on the nature of value and metaethics; it was just the obvious thing to do given our nature. I would hope that eventually something similar will happen not just for physical pain but also for psychological pain. Just as we now take it for granted that it was the right thing to do to eradicate smallpox, and no one is seriously suggesting that we bring smallpox back, that didn’t depend on a consensus on metaethics.

I would hope that for experience below hedonic zero, whose precise molecular signature we will possibly be able to find, a consensus will emerge that we should phase it out too. Sorry, this isn’t much in the way of practical guidance to today’s roboticists and AI researchers, but I suppose I’m just expressing my hope here.

Lucas: No, I think I share that. I think that we have to do ethics on a deadline, but I think that there are certain ethical questions whose deadline is much longer, or which don’t necessarily have a real concrete deadline. I like your example of pain-free surgery and anesthesia.

Brian: In my view, metaethics is mostly useful for people like us or other philosophers and effective altruists who can inform our own advocacy. We want to figure out what we care about and then, we go for it and push for that. Then, maybe to some extent, it may diffuse through society in certain ways but in the start, it’s just helping us figure out what we want to push for.

Lucas: There’s an extent to which the evolution of human civilization has also been an evolution of metaethical views, which are consciously or unconsciously being developed. Brian, your view is simply that 90% of what has causal efficacy over what happens in the end is going to be military and economic forces, just the raw optimization forces at work on this planet.

Brian: Also politics and memetic spandrels. For example, people talk about the rise of postmodernism as a replacement of metaethical realism with anti-realism in popular culture. I think that is a real development. One can question to what extent it matters; maybe it’s correlated with things like a decline in religiosity, which matters more. But I think that is one good example of how metaethics can actually go popular and mainstream.

Lucas: Right. Just to bring this back, in terms of the AI alignment problem, I try, or at least I’d like, to be a bit more optimistic about how much causal efficacy each part of our thinking has over the AI alignment problem. I tend not to think that 90% of it will in the end be due to rogue impersonal forces like you’re discussing. I think that everyone, no matter who you are, stands to gain from more metaethical thinking, whether you take realist or anti-realist views. The expression of your values, or whatever you think your values might be, whether they’re conventional, relative, or arbitrary in your view, or whether they relate to some objectivity, is much less likely to go well, in a reasonable and good way, without sufficient metaethical thinking and discussion.

David: One thing I would very much hope is that before, for example, radiating out across the cosmos, we would sort out our problems on Earth and in the solar system first. Regardless of whether one is secular or religious, a classical or a negative utilitarian, let’s not start thinking about colonizing nearby solar systems or anything like that yet. If one is an optimist, one may be thinking of opportunities forgone, but at least wait a few centuries. I think in a fundamental sense we do not understand the nature of reality, and not understanding the nature of reality comes with not understanding the nature of value and disvalue, or the experience of value and disvalue, as Brian might put it.

Brian: Unfortunately, I’m more pessimistic than David. I think the forces of expansion will be hard to stop as they always have been historically. Nuclear weapons are something that almost everybody wishes hadn’t been developed and yet they were developed. Climate change is something that people would like to stop but it has a force of its own due to the difficulty of coordination. I think the same will be true for space colonization and AI development as well that we can make tweaks around the edges but the large trajectory will be determined by the runaway economic and technological situation that we find ourselves in.

David: I fear Brian may be right. I used to sometimes think about the possibility of so-called cosmic rescue missions, if the rare earth hypothesis is false and suffering Darwinian life exists within our cosmological horizon. I used to imagine this idea that we would radiate out and prevent suffering elsewhere. A, I suspect the rare earth hypothesis is true, but B, I suspect that even if suffering life forms do exist elsewhere within our Hubble volume, it’s probably more likely that humans or our successors would go out and just create more suffering. It’s a rather dark and pessimistic view; in my more optimistic moments I think we will phase out suffering altogether in the next few centuries, but these are guesses really.

Lucas: Ultimately, with AI being the most powerful optimization process, or the seed optimization process, to radiate out from Earth, we’re dealing with potential astronomical waste, astronomical value, or astronomical disvalue. If we tie this again into moral uncertainty and start thinking about William MacAskill’s work on moral uncertainty, where we do something like expected value calculations with regard to our moral uncertainty, we can try to be very mathematical about it and consider the amount of matter and energy that we are dealing with here, given a superintelligent optimization process coming from Earth.

I think that tying this all together and considering it all should potentially play an important role in our AI strategy. I definitely feel very sympathetic to Brian’s view that in the end it might all simply come down to these impersonal economic, political, militaristic, and memetic forces that exist. Given moral uncertainty, given meta-ethical uncertainty, and given the amount of matter and energy that is at stake, potentially some portion of AI strategy should go into circumventing those forces, or trying to get around them, or decreasing their effects and their hold on AI alignment.

Brian: Yeah. I think it’s tweaks around the edges, as I said, unless these approaches become very mainstream, but I think the prior probability that AI alignment of the type that you would hope for becomes worldwide mainstream is low, because the prior probability that any given thing becomes worldwide mainstream is low. You can certainly influence local communities who share those ideals, and they can try to influence things to the extent possible.

Lucas: Right. I mean, maybe something potentially more sinister is that it doesn’t need to become worldwide if there’s a singleton scenario, or if the power and control over the AI is concentrated within a tiny organization, or some smaller organization which has the power and autonomy to do this kind of thing.

Brian: Yeah, I guess I would again say the probability that you will influence those people would be low. Personally, I would imagine it would be either within a government or a large corporation. Maybe we have disproportionate impact on AI developers relative to the average human. Especially as AI becomes more powerful, I would expect more and more actors to try to have an influence. Our proportional influence would decline.

Lucas: Well, I feel very pessimistic after all this. Morality is not real and everything’s probably going to shit because economics and politics are going to drive it all in the end, huh?

David: It’s also possible that we’re heading for a glorious future of super human bliss beyond the bounds of every day experience and that this is just the fag end of Darwinian life.

Lucas: All right. David, as I think you say, one day we might have thoughts as beautiful as sunsets.

David: What a beautiful note to end on.

Lucas: I hope that one day we have thoughts as beautiful as sunsets and that suffering is a thing of the past whether that be objective or subjective within the context of an empty cold universe of just entropy. Great. Well, thank you so much Brian and David. Do you guys have any more questions or anything you’d like to say or any plugs, last minute things?

Brian: Yeah, I’m interested in promoting research on how you should tweak AI trajectories if you are foremost concerned about suffering. A lot of this work is being done by the Foundational Research Institute, which aims to avert s-risks especially as they are related to AI. I would encourage people interested in futurism to think about suffering scenarios in addition to extinction scenarios. Also, people who are interested in suffering-focused ethics to become more interested in futurism and thinking about how they can affect long-term trajectories.

David: Visit my websites urging the use of biotechnology to phase out suffering in favor of gradients of intelligent bliss for all sentient beings. I’d also like just to say yeah, thank you Lucas for this podcast and all the work that you’re doing.

Brian: Yeah, thanks for having us on.

Lucas: Yeah, thank you. Two Bodhisattvas if I’ve ever met them.

David: If only.

Lucas: Thanks so much guys.

If you enjoyed this podcast, please subscribe. Give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

[end of recorded material]

Podcast: Six Experts Explain the Killer Robots Debate

Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated.

In this month’s podcast, Ariel spoke with experts from a variety of perspectives on the current status of lethal autonomous weapons systems (LAWS), where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre (3:40), artificial intelligence professor Toby Walsh (40:51), Article 36 founder Richard Moyes (53:30), Campaign to Stop Killer Robots founder Mary Wareham and Bonnie Docherty of Human Rights Watch (1:03:38), and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro (1:32:39).

Topics discussed in this episode include:

  • the history of semi-autonomous weaponry in World War II and the Cold War (including the Tomahawk Anti-Ship Missile)
  • how major military powers like China, Russia, and the US are imbuing weapons with AI today
  • why it’s so difficult to define LAWS and draw a line in the sand
  • the relationship between LAWS proliferation and war crimes
  • FLI’s recent pledge, where over 200 organizations and over 2800 individuals pledged not to assist in developing or using LAWS
  • comparing LAWS to blinding lasers and chemical weapons
  • why there is hope for the UN to address this issue

Publications discussed in this episode include:

You can listen to the podcast above, and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, GooglePlay, and Stitcher.

If you work with artificial intelligence in any way, and if you believe that the final decision to take a life should remain a human responsibility rather than falling to a machine, then please consider signing this pledge, either as an individual or on behalf of your organization.

Ariel: Hello. I’m Ariel Conn with the Future of Life Institute. As you may have seen, this month we announced a pledge against lethal autonomous weapons. The pledge calls upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. But in the meantime signatories agree that they will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. At the time of this recording, over 220 AI-related organizations and over 2800 individuals have signed. Signatories include Google DeepMind and its founders, University College London, the XPRIZE Foundation, Clearpath Robotics, Silicon Valley Robotics, the European Association for Artificial Intelligence — and many other AI societies and organizations from around the world. Additionally, people who signed include Elon Musk, Google’s head of research and machine learning Jeff Dean, many other prominent AI researchers, such as Stuart Russell, Toby Walsh, Meredith Whitaker, Anca Dragan, Yoshua Bengio, and even politicians, like British MP Alex Sobel.

But why? We’ve all seen the movies and read the books about AI gone wrong, and yet most of the signatories agree that the last thing they’re worried about is malicious AI. No one thinks the Terminator is in our future. So why are so many people in the world of AI so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated. For the longer answer, we have this podcast.

For this podcast, I spoke with six of the leading experts in autonomous weapons. You’ll hear from defense expert Paul Scharre, who recently released the book Army of None: Autonomous Weapons and the Future of War. We discuss the history of autonomous and semi-autonomous weaponry, which dates back to WWII, as well as some of the more nuanced issues today that often come up for debate. AI researcher Toby Walsh looks at lethal autonomous weapons from a more technical perspective, considering the impact of autonomous weapons on society, and also the negative effects they could have for AI researchers if AI technology is used to kill people. Richard Moyes, with Article 36, coined the phrase meaningful human control, which is what much of the lethal autonomous weapons debate at the United Nations now focuses on. He describes what that means and why it’s important. Mary Wareham and Bonnie Docherty joined from Human Rights Watch, and they’re also with the Campaign to Stop Killer Robots. They talk about the humanitarian impact of lethal autonomous weapons and they explain the process going on at the United Nations today as efforts move toward a ban. Finally, my interviews end with Peter Asaro with the International Committee for Robot Arms Control and also the Campaign to Stop Killer Robots. Peter considers the issue of lethal autonomous weapons from an ethical and legal standpoint, looking at the impact killer robots could have on everything from human dignity to war crimes.

But I’ll let each of them introduce themselves better when their interviews begin. And because this podcast is so long, in the description, we’ve included the times that each interview starts, so that you can more easily jump around or listen to sections as you have time.

One quick, final point to mention is that everyone was kind enough to join at the last minute, which means not all of the audio is perfect. Most of it is fine, but please bear with us if you can hear people chattering in the background or any other similar imperfections.

And now for the first interview with Paul Scharre.

Paul: I’m Paul Scharre. I’m a senior fellow and director of the Technology and National Security Program at the Center for a New American Security. We’re a Washington, D.C.-based national security think tank that’s an independent bipartisan research organization.

Ariel: You have a background in weaponry. You were in the military, correct?

Paul: Yeah. I served about five and a half years in the US Army as a Ranger and a civil affairs team leader. I did multiple tours to Iraq and Afghanistan, and then I worked for several years after that in the Pentagon in the Office of the Secretary of Defense, where I actually worked on policy issues for emerging weapons technologies, including autonomous weapons.

Ariel: Okay. One of the very first questions that I want to start with is, how do you define an autonomous weapon?

Paul: That’s sort of the million-dollar question in a lot of ways. I don’t want to imply that all of the debate around autonomous weapons is a misunderstanding of semantics. That’s not true at all. There are clearly people who have very different views on what to do about the technology, but it is a big complicating factor because I have certainly seen, especially at the United Nations, very heated disagreements where it’s clear that people are just talking past each other in terms of what they’re envisioning.

When you say the term “autonomous weapon,” it conjures all sorts of different ideas in people’s minds, some people envisioning super advanced intelligent machines that have human-like or superhuman intelligence, something like a Terminator or Cylon from science fiction. The other people are envisioning something that might be very simple and doable today, like a Roomba with a gun on it.

Both of those things are probably really bad ideas but for very different kinds of reasons. And I think that that’s a complicating factor. So one of the dimensions of autonomy that people tend to get fixated on is how smart the weapon system is. I actually don’t think that that’s a useful way to define an autonomous weapon. Sometimes I’ll hear people say things like, “Well, this is not an autonomous weapon. This is an automated weapon because of the level of sophistication.” I don’t think that’s very helpful.

I think it’s much better, actually, to focus on the functions that the weapon is performing on its own. This is similar to the approach that the International Committee of the Red Cross has, which focuses on critical functions in weapons systems. The way that I define it in my book is I basically define an autonomous weapon as one that can complete an entire engagement cycle on its own. That is to say, it has all of the functionality needed to search for targets, to identify them, to make a decision about whether or not to attack them, and then to start the engagement and carry through the engagement all by itself.

So there’s no human in this loop, this cognitive loop, of sensing and deciding and acting out on the battlefield all by itself. That defines it in such a way that there are some things — and this is where it gets into some of the tricky definitional issues — there are weapons that have been around since World War II that I would call semi-autonomous weapons that have some degree of autonomy, that have some sensors on board. They can detect the enemy, and they can make some rudimentary kinds of actions, like maneuvering towards the enemy.

Militaries generally call these “homing munitions.” They’re torpedoes or air-to-air missiles or surface-to-air, air-to-ground missiles. They have sensors on them that might use sonar or radar or acoustic signatures. They can sense that the enemy is there, and then they use those sensors to maneuver towards the enemy to strike the target. These are generally launched by people at targets where the human knows there’s a target there.

These were originally invented in World War II by the Germans to hit Allied ships in the submarine wars in the Atlantic in World War II. You can imagine there’s a technical challenge trying to hit a moving target of a ship that’s moving. In a submarine, you’re trying to fire a torpedo at it and you might miss. So the first versions of these had microphones that could listen to the sound of the propellers from Allied ships and then steer towards where the sound was greatest so they could hit the ship.

In those cases — and this is still the case in the ones that are used today — humans see the target or have some indication of the target, maybe from a radar or sonar signature. And humans say, “There’s something out there. I want to launch this weapon to go attack it.” Those have been around for 70 years or so. I bring them up because there are some people who sometimes say, “Well, look. These autonomous weapons already exist. This is all a bunch of hullaballoo about nothing.”

I don’t think that’s really true. I think that a lot of the weapons systems that you see concern about going forward, would be things that will be quite qualitatively different, things that are going out over a wide area and searching for targets on their own, where humans don’t necessarily know where the enemy is. They might have some suspicion that the enemy might be in this area at this point in time, but they don’t know, and they launch the weapon to then find the enemy. And then, without radioing back to a human for approval, that weapon is delegated the authority to attack on its own.

By and large, we don’t see weapons like this in existence today. There are some exceptions. The Israeli Harpy drone or loitering munition is an exception. There were a couple experimental US systems in the ’80s and ’90s that are no longer in service. But this isn’t something that is in widespread use. So I do think that the debate about where we’re going in the future is at least a very valid one, and we are on the cusp of, potentially, things that will be quite different than anything we’ve seen before in warfare.

Ariel: I want to ask a quick question about the Harpy and any other type of weapon similar to that. Have those actually been used to kill anyone yet, to actually identify a target and kill some enemy? Or are they still just being used for identifying and potentially targeting people, but it’s still a human who is making the final decision?

Paul: That’s a great question. To the best of my knowledge, the Israeli Harpy has not been used in its fully autonomous mode in combat. So a couple things about how the Harpy functions. First of all, it doesn’t target people per se; it targets radars. Now, having said that, if a person is standing next to a radar that it targets, you’re probably going to be killed. But it’s not looking for individual persons. It’s looking for radar signatures and then zeroing in on them.

I mention that as important for two reasons. One, sometimes in some of the concerns that people raise about autonomous weapons, it can sometimes be unclear, at least to a listener, whether they are concerned about specifically weapons that would target humans or any weapon that might target anything on the battlefield. So that’s one consideration.

But, also, from sort of a practicality standpoint, it is easier to identify radar signatures more accurately than people who, of course, in many modern conflicts are not wearing uniforms or insignia or the things that might clearly identify them as a combatant. So a lot of the issues around distinction and accurately discriminating between combatants and noncombatants are harder for weapons that would target people.

But the answer to the question is a little bit tricky because there was an incident a couple years ago where a second-generation version of the Harpy called the Harop, or Harpy II, was used in the Nagorno-Karabakh region in the conflict there between Azerbaijan and Armenia. I think it was used by Azerbaijan and used to attack what looked like — I believe it was a bus full of fighters.

Now, by all accounts, the incident was one of actual militants being targeted — combatants — not civilians. But here was a case where it was clearly not a radar. It was a bus that would not have been emitting radar signatures. Based on my understanding of how the technology works, the Harop, the Harpy II, has a human-in-the-loop mode. The first-generation Harpy, as far as I understand, is all autonomous. The second-generation version definitely has a human-in-the-loop mode. It looks like it’s not clear whether it also has an autonomous version.

In writing the book, I reached out to the manufacturer for more details on this, and they were not particularly forthcoming. But in that instance, it looks like it was probably directed by a human, that attack, because as far as we know, the weapon does not have the ability to autonomously target something like a bus.

Ariel: Okay.

Paul: That’s a really long-winded answer. This is what actually makes this issue super hard sometimes because they depend a lot on the technical specifications of the weapon, which a) are complicated and b) are not always very transparent. Companies are not always very transparent publicly about how their weapons systems function.

One can understand why that is. They don’t want adversaries to come up with methods of fooling them and countermeasures. On the other hand, for people who are interested in understanding how companies are pushing the bounds of autonomy, that can be very frustrating.

Ariel: One of the things that I really like about the way you think is that it is very nuanced and takes into account a lot of these different issues. I think it’s tempting and easy to just call for a ban, and I don’t want to make it sound like I’m being lazy, because I personally support banning lethal autonomous weapons. But I think it’s a really complicated issue, and so I’d like to know more: what are your thoughts on a ban?

Paul: There are two areas on this topic where I think it gets really complicated and really tricky. You might start with a broad principle like, “Humans should be making decisions about lethal force,” or, “Only humans should be deciding to take human life.” There are two areas where, when you try to put that into practice, you really run into some serious challenges.

And I’m not saying that makes it impossible, but to get to answers you have to really roll up your sleeves and get into some of the details of the issue. One is, how do you translate a broad concept like that into the technical specifications of a weapon? If you start with an idea and say, “Well, only humans should be responsible for taking human life,” that seems like a reasonable idea.

How do you translate that into technical guidance that you give weapons developers over what they can and cannot build? That’s actually really hard, and I say that as having done this when I worked at the Pentagon and we tried to write guidance that was really designed to be internal to the US Defense Department and to give guidance to defense companies and to military researchers on what they could build.

It was hard to translate some of these abstract concepts like, “Humans should decide the targets,” to technical ideas. Well, what does that mean for how long the weapon can loiter over a target area or how big its sensor field should be or how long it can search for? You have to try to figure out how to put those technical characteristics into practice.

Let me give you two examples of a weapon to illustrate how this can be challenging. You might imagine a weapon today where a human says, “Ah, here’s an enemy target. I want to take that target out.” They launch a missile, and the missile flies towards the target. Let’s say it’s a tank. The missile uses a millimeter-wave seeker on the tank. It’s an active seeker; it sends out millimeter-wave radar signals to see the tank, illuminate it and sort of highlight it against the background, and then zero in on the tank, because the tank’s moving and they need the sensor to hit the moving tank.

If the weapon and the sensor can only search for a very limited space in time and geography, then you’ve constrained the autonomy enough that the human is still in control of what it’s targeting. But as you start to open that aperture up, and maybe it’s no longer that it’s searching for one minute in a one-kilometer area, it’s now searching for eight hours over 1,000 kilometers, now you have a completely different kind of weapon system. Now it’s one that’s much more like … I make the analogy in the book of the difference between a police dog that might be set loose to go chase down a suspect, where the human says, “There’s the suspect. Dog, go get them,” versus a mad dog roaming the streets attacking anyone at will.

You have two different paradigms, but where do you draw the line in between? And where do you say, “Well, is 1 minute of loiter time, is it 2 minutes, is it 10 minutes, is it 20 minutes? What’s the geographic area?” It’s going to depend a lot on the target, the environment, what kind of clutter is in the environment. What might be an appropriate answer for tanks in an urban combat setting might be very different than naval ships on the high seas or submarines underwater or some other target in a different environment.

So that’s one challenge, and then the other challenge, of course, which is even more contested, is just sort of, “What’s the feasibility of a ban and getting countries to come together to actually agree to things?” because, ultimately, countries have militaries because they don’t trust each other. They don’t trust international law to constrain other countries from aggressive action. So regardless of whether you favor one country or another, you consider yourself an American or a Russian or a Chinese or a French or Israeli or Guinean or someone else, countries in general, they have militaries because they don’t trust others.

Even if you get countries to sign up to a ban, it’s a major challenge to get them to actually adhere to it, because countries are always fearful about others breaking the rules, cheating, and getting the upper hand.

Ariel: We have had other bans. We’ve banned biological weapons, chemical weapons, landmines, space weapons. Do you see this as different somehow?

Paul: Yeah. So one of the things I go through in my book is, as comprehensive as I can come up with, a list of all of the attempts to regulate and control emerging technologies dating back to antiquity, dating back to ancient Indian prohibitions and Hindu Laws of Manu or the Mahabharata on poisoned and barbed arrows and fire-tip weapons.

It’s really a mixed bag. I like to say that there’s sort of enough examples of both successes and failures for people to pick whichever examples they want for whatever side they’re arguing for because there are many examples of successful bans. And I would say they’re largely successful. There are some examples of isolated incidences of people not adhering to them. Very few bans are universally adhered to. We certainly have Bashar al-Assad using chemical weapons in Syria today.

But bans that have been largely successful and that they’ve at least had a major effect in reducing these weapons include landmines, cluster munitions, blinding lasers, biological weapons, chemical weapons, using the environment as a weapon, placing nuclear weapons on the seabed or in orbit, placing any weapons of any kind on the moon or Antarctica, various regulations during the Cold War, anti-ballistic missile systems, intermediate-range nuclear ground-launch missiles, and then, of course, regulations on a number of nuclear weapons.

So there are a lot of successful examples. Now, on the other side of the coin, there are failed attempts to ban, famously, the crossbow, and that’s often brought up in these conversations. But in more recent memory, attempts of the 20th century to ban and regulate aircraft and air-delivered weapons, submarine warfare, of course the failure of attempts to ban poison gas in World War I. So there are examples on other sides of the ledger as well.

One of the things that I try to do in my book is get beyond sort of just picking examples that people like, and say, “Well, is there a pattern here? Are there some common conditions that make certain bans more likely to succeed or fail?” There’s been great scholarship done by some others before me that I was able to build on. Rebecca Crootof and Sean Welsh have done work on this trying to identify some common patterns.

I think that that’s a … If you want to look at this analytically, that’s a fruitful place to start, is to say, “Why do some bans succeed and some fail?” And then, when you’re looking at any new technology, whether it’s autonomous weapons or something else, where do they fall on this spectrum, and what does that suggest about the feasibility of certain attempts at regulation versus others?

Ariel: Can you expand on that a little bit? What have you found, or what have they found in terms of patterns for success versus failure for a ban?

Paul: I think there’s a couple criteria that seem to matter. One is the clarity of a ban is really crucial. Everyone needs to have a clear agreement on what is in and what is out. The simpler and clearer the definition is, the better. In some cases, this principle is actually baked into the way that certain treaties are written. I think the ban on cluster munitions is a great example of this, where the Cluster Munition Convention has a very, very simple principle in the treaty. It says, “Cluster munitions are banned,” full stop.

Now, if you go into the definition, there’s all sorts of nuance about what constitutes a cluster munition or not. That’s where they get into some of the horse trading with countries ahead of time. But sort of the principle is: no cluster munitions. The archetype of this importance of clarity comes in the success of the European powers in refraining from using chemical weapons against each other in World War II. All sides had them. They didn’t use them on the battlefield against each other. Of course, Germany used them in the Holocaust, and there were some other isolated incidents in World War II of use against others who didn’t have them.

But the European powers all had tens of thousands of tons of mustard gas stockpiled, and they didn’t use it against each other. At the outset of World War II, there were also attempts to restrain aerial bombing of cities. It was widely viewed as reprehensible. It was also illegal under international law at the time, and there were attempts on all sides to refrain from that. At the outset of the war, in fact, they did, and Hitler actually put a directive to the Luftwaffe. I talk about this a little bit in the book, although unfortunately, a lot of the detail on some of this stuff got cut for space, which I was disappointed by.

Hitler put a directive to the Luftwaffe saying that they were not to engage in bombing of civilian targets, a terror bombing, in Britain, they were only to engage in bombing military targets, not because he was a humanitarian, because he was concerned about Britain retaliating. This attempt at restraint failed when, in the middle of the night, a German bomber strayed off course and bombed central London by mistake. In retaliation, Churchill ordered the bombing of Berlin. Hitler was incensed, gave a speech the following day announcing the launch of the London Blitz.

So here’s an example where there was some slippage in the principle of what was allowed and what was not, and so you had a little bit of accidental crossing of the line in conflict. So the sharper and clearer this line is, the better. You could extrapolate from that and say it’s likely that if, for example, what World War II powers had agreed to in World War II was that they could only use poison gas against military targets but not against civilian targets, that it would have quickly escalated to civilian targets as well.

In the context of autonomous weapons, that’s one of the arguments why you’ve seen some advocates of a ban say that they don’t support what is sometimes called a partition treaty, which is something that would create a geographic partition that would say you could only use autonomous weapons outside of populated areas. What some advocates of a ban have said is, “Look, that’s never going to hold in combat.” That sounds good. I’ve heard some international humanitarian lawyers say that, “Oh, well, this is how we solve this problem.” But in practice, I agree that’s not likely to be very feasible.

So clarity’s important. Another factor is the relative military value of a weapon versus its perceived horribleness. I think, again, a good case in point here is the difference between the international community’s success in largely getting most countries to give up chemical weapons and the lack of success on nuclear weapons. Nuclear weapons by any reasonable measure are far more terrible in terms of their immediate and long-lasting effects on human life and the environment, but they have much more military value, at least perceived military value. So countries are much more reluctant to give them up.

So that’s another factor, and then there are some other ones that I think are fairly straightforward but also matter, things like the access to the weapon and the number of actors that are needed to get agreement. If only two countries have the technology, it’s easier to get them on board than if it’s widely available and everyone needs to agree. But I think those are some really important factors that are significant.

One of the things that actually doesn’t matter that much is the legality of a weapons treaty. I’m not saying it doesn’t matter at all, but you see plenty of examples of legally binding treaties that are violated in wartime, and you see some examples, not a ton, but some examples of mutual restraint among countries when there is no legally binding agreement or sometimes no agreement at all, no written agreement. It’s sort of a tacit agreement to refrain from certain types of competition or uses of weapons.

All of those, I think, are really important factors when you think about the likelihood of a ban actually succeeding on any weapons — not just autonomous weapons, any weapons — but the likelihood of a ban actually succeeding in wartime.

Ariel: I’m probably going to want to come back to this, but you mentioned something that reminded me of another question that I had for you. And that is, in your book, you mentioned … I don’t remember what the weapon was, but it was essentially an autonomous weapon that the military chose not to use and then ended up giving up because it was so costly, and ultimately they didn’t trust it to make the right decisions.

I’m interested in this idea of the extent to which we trust the weapons to do whatever it is that they’re tasked with if they’re in some sort of autonomous mode, and I guess where we stand today with various weapons and whether military will have increasing trust in their weapons in the future.

Paul: The case study I think you’re referring to was an anti-ship missile called the Tomahawk anti-ship missile, or TASM, that was in service by the US Navy in the 1980s. That I would classify as an autonomous weapon. It was designed to go over the horizon to attack Soviet ships, and it could fly a search pattern. I think, actually, in the book I included the graphic of the search pattern that it would fly to look for Soviet ships.

The concern was that the way this would work in anti-surface warfare is the navy would send out patrol aircraft because they’re much faster. They have much longer range than ships. And they would scout for other enemy ships. The principle in a wartime environment is patrol aircraft would find a Soviet ship and then radio back to a destroyer the Soviet ship’s location, and the destroyer would launch a missile.

Now, the problem was, by the time the missile got there, the ship would have moved. So the ship would now have what the military would call an area of uncertainty that the ship might be in. They wouldn’t have the ability to continuously track the ship, and so what they basically would do was the missile would fly a search pattern over this area of uncertainty, and when it found the ship, it would attack it.

Now, at the time in the 1980s, the technology was not particularly advanced, and it wasn’t very good at discriminating between different kinds of ships. So one of the concerns was that if there happened to be another kind of ship in the area that was not an enemy combatant, the missile still might attack it if it was within this search pattern area. Again, it’s originally cued by a human that had some indication of something there, but there was enough uncertainty that it flies this pattern on its own. And I call it an autonomous weapon only for that reason, because there was a great amount of uncertainty about sort of what it might hit and whether it might do so accurately, and because, once launched, it would sort of find and attack all on its own.

So it was never used, and there was great hesitance about it being used. I interview a retired US Navy officer who was familiar with it at the time, and he talks about how they didn’t trust that its targeting was good enough that, once they let it loose, it would hit the right target. Moreover, there was a secondary problem, which is that it might hit the wrong target, sort of a false positive, if you will, but it also might miss the Soviet ship, in which case they would have simply wasted a weapons system.

That’s another problem that militaries have: missiles are costly, and they don’t have very many of them in their inventory. Particularly if it’s something like a ship or an aircraft, there are only so many they can physically carry on board. So they don’t want to waste them for no good reason, which is another practical, operational consideration. So eventually it was taken out of service, for what I understand to be all of these reasons, and that’s a little bit of guesswork, I should say, as to why it was taken out of service. I don’t have any official documentation saying that, but that’s at least, I think, a reasonable assumption about some of the motivating factors, based on talking to people who were familiar with it at the time.

One of the things that I think is an important dynamic that I talk about in the book, which is that, that is really an acute problem, the wasting the weapon problem for missiles that are not recoverable. You launch it, you’re not going to get it back. If the enemy’s not there, then you’ve just wasted this thing. That changes dramatically if you have a drone that can return back. Now, all of the concerns about it hitting the wrong target and civilian casualties, those still exist and those are very much on the minds of at least Western military professionals who are concerned about civilian casualties and countries that care about the rule of law more broadly.

But this issue of wasting the weapon is less of an issue when you have something that’s recoverable and you can send it out on patrol. So I think it’s possible, and this is a hypothesis, but it’s possible that as we see more drones and combat drones in particular being put into service and intended to be used in contested areas where they may have jammed communications, that we start to see that dynamic change.

To your question about trust, I guess I’d say that there is a lot of concern about trust in these systems, at least among the military professionals that I talk to in the United States and in other Allied countries, NATO countries or Australia or Japan. In fact, and I’m going to make a broad generalization here, so forgive me, but in general I would say that I see much more confidence in the technology coming from the engineers who are building these systems at military research labs or at defense companies than from the military professionals in uniform who have to push the button and use them; the latter are a little bit more skeptical of wanting to actually trust these systems and delegate what they see as their responsibility to a machine.

Ariel: What do you envision, sort of if we go down current trajectories, as the future of weaponry specifically as it relates to autonomous weaponry and potentially lethal autonomous weaponry? And to what extent do you think that international agreements could change that trajectory? And maybe, even, to what extent to you think countries might possibly even appreciate having guidelines to work within?

Paul: I’ll answer that, but let me first make an observation about most of the dialogue in the space. There’s sort of two different questions wrapped up in there. What is the likely outcome of a future of autonomous weapons? Is it a good future or a bad future? And then another one is, what is the feasibility of some kind of international attention to control or regulate or limit these weapons? Is that possible or unlikely to succeed?

What I tend to hear is that people on all sides of this issue tend to cluster into two camps. They tend to either say, “Look, autonomous weapons are horrible and they’re going to cause all these terrible effects. But if we just all get together, we can ban them. All we need to do is just … I don’t know what’s wrong with countries. We need to sit down. We need to sign a treaty and we’ll get rid of these things and our problems will be solved.”

Other people in the opposite camp say, “Bans don’t work, and anyways, autonomous weapons would be great. Wouldn’t they be wonderful? They could make war so great, and humans wouldn’t make mistakes anymore, and no innocent people would be killed, and war would be safe and humane and pristine.” Those things don’t necessarily go together. So it’s entirely possible that they don’t. Like, if you sort of imagine a two-by-two matrix, it’s really convenient that everybody’s views fit into those boxes very harmoniously, but it may not work out that way.

I suspect that, on the whole, autonomous weapons that have no human control over targeting are not likely to make war better. It’s hard for me to say that would be a better thing. I can see why militaries might want them in some instances. I think some of the claims about their military value might be overblown, but there are certainly some situations where you can imagine they’d be valuable. I think it kind of remains to be seen how valuable and in what context, but you can imagine that.

But in general, I think that humans add a lot of value to making decisions about lethal force, and we should be very hesitant to take humans away from those decisions. I also am somewhat skeptical of the feasibility of actually achieving restraint on these topics. I think it’s very unlikely the way the current international dynamics are unfolding, which is largely focused on humanitarian concerns and on berating countries, telling them that they are not going to be able to build weapons that comply with international humanitarian law.

I just don’t think that’s a winning argument. I don’t think that resonates with most of the major military powers. So I think that when you look at, actually, historical attempts to ban weapons, that right now what we’re seeing is a continuation of the most recent historical playbook, which is that elements of civil society have kind of put pressure on countries to ban certain weapons for humanitarian reasons. I think it’s actually unusual when you look at the broader historical arc. Most attempts to ban weapons were driven by great powers and not by outsiders, and most of them centered on strategic concerns, concerns about someone getting an unfair military advantage, or weapons making war more challenging for militaries themselves or making life more challenging for combatants themselves.

Ariel: When you say that it was driven by powers, do you mean you’d have, say, two powerful countries and they’re each worried that the other will get an advantage, and so they agree to just ban something in advance to avoid that?

Paul: Yeah. There are a couple of time periods that seem most relevant here. One would be a flurry of attempts to control weapons that came out of the Industrial Revolution around the dawn of the 20th century. These included air balloons, or basically air-delivered weapons from balloons or airplanes, submarines, poison gas, what were called fulminating projectiles (you could think of projectiles or bullets that have fire in them or are burning, or exploding bullets), and sawback bayonets. There was some restraint on their use in World War I; it was never written down, but there seems to be a historical record of some restraint there.

That was one time period, and it was all driven by the great powers of the day. So these were generally driven by the major European powers, and then Japan as Japan came rising on the international stage and was particularly involved as a naval power in the naval treaties. The Washington Naval Treaty is another example of this, an attempt to control a naval arms race.

And then, of course, there were a flurry of arms control treaties during the Cold War driven by the US and the USSR. Some of them were bilateral. Many of them were multilateral but driven principally by those two powers. So that’s not to say there’s anything wrong with the current model of NGOs in civil society pushing for bans, because it has worked: it worked for landmines and cluster munitions. I’m just not sure that the same conditions apply in this instance, in large part because in those cases there was real humanitarian harm that was demonstrated.

So you could really, I think, fairly criticize countries for not taking action because people were being literally maimed and killed every day by landmines and cluster munitions, whereas here it’s more hypothetical, and so you see people extrapolating to all sorts of possible futures, with some people saying, “Well, this is going to be terrible,” and other people saying, “Oh, wouldn’t it be great? It’d be wonderful.”

I’m just not sure that the current playbook that some people are using, which is to sort of generate public pressure, will work when the weapons are still hypothetical. And, frankly, they sound like science fiction. There was this recent open letter that FLI was involved in, and I was sitting in the break room at CNN before doing a short bit on this and talking to someone about this. They said, “Well, what are you going on about?” I said, “Well, some AI scientists wrote a letter saying they weren’t going to build killer robots.”

I think to many people it just doesn’t sound like a near-term problem. That’s not to say that it’s not a good thing that people are leaning into the issue. I think it’s great that we’re seeing people pay attention to the issue and anticipate it and not wait until it happens. But I’m also just not sure that the public sentiment to put pressure on countries will manifest. Maybe it will. It’s hard to say, but I don’t think we’ve seen it yet.

Ariel: Do you think in terms of considering this to be more near term or farther away, are military personnel also in that camp of thinking that it’s still farther away, or within militaries is it considered a more feasible technology in the near term?

Paul: I think it depends a little bit on how someone defines the problem. If they define an autonomous weapon as requiring human-level intelligence, then I think there’s wide agreement, at least within military circles. I can’t say wide agreement overall; there are probably a lot of listeners to this podcast who have varying views on where they think that threshold might be.

But in military circles, I think there’s a perception that that’s just not a problem in the near term at all. If what you mean is something that is relatively simple but can go over a wide area and identify targets and attack them, I think many military professionals would say that the technology is very doable today.

Ariel: Have you seen militaries striving to create that type of weaponry? Are we moving in that direction, or do you see this as something that militaries are still hesitating to move towards?

Paul: That’s a tricky question. I’ll give you my best shot at understanding the answer to that, because I think it’s a really important one, and part of it is I just don’t know, because there’s not great transparency in what a lot of countries are doing. I have a fairly reasonable understanding of what’s going on in the United States but much less so in other places, and certainly in authoritarian regimes like Russia and China, it’s very hard to glean from the outside what they’re doing or how they’re thinking about some of these issues.

I’d say that almost all major military powers are racing forward to invest in more robotics and autonomous artificial intelligence. I think for many of them, they have not yet made a decision whether they will cross the line to weapons that actually choose their own targets, to what I would call an autonomous weapon. I think for a lot of Western countries, they would agree that there’s a meaningful line there. They might parse it in different ways.

The only two countries that have really put any public guidance out on this are the United States and the United Kingdom, and they actually define autonomous weapon in quite different ways. So it’s not clear from that how they will treat the issue going forward. US defense leaders have said publicly on numerous occasions that their intention is to keep a human in the loop, but then they also will often caveat that and say, “Well, look. If other countries don’t, we might be forced to follow suit.”

So it’s sort of in the loop for now, but it’s not clear how long “for now” might be. It’s not clear to me whether countries like Russia and China even see the issue in the same light, whether they even see a line in the same place. And at least some of the public statements out of Russia, for example, talking about fully roboticized units, or some Russian defense contractors claiming to have built autonomous weapons that can do targeting on their own, would suggest that they may not even see the line in the same way.

In fairness, that is a view that I hear among some military professionals and technologists. I don’t want to say that’s the majority view, but it is at least a significant viewpoint where people will say, “Look, there’s no difference between that weapon, an autonomous weapon that can choose its own targets, and a missile today. It’s the same thing, and we’re already there.” Again, I don’t totally agree, but that is a viewpoint that’s out there.

Ariel: Do you think that the fact that countries have these differing viewpoints is a good reason to put more international pressure on developing some sort of regulations to try to bring countries in line, bring everyone onto the same page?

Paul: Yeah. I’m a huge supporter of the process that’s been going on with the United Nations. I’m frustrated, as many are, about the slowness of the progress. Part of this is a function of diplomacy, but part of this is just that they haven’t been meeting very often. When you add up all of the times over the last five years, it’s maybe five or six weeks of meetings. It’s just not very much time they spend together.

Part of it is, of course … Let’s be honest. It’s deliberate obstinacy on the part of many nations who want to slow the progress of talks. But I do think it would be beneficial if countries could come to some sort of agreement about rules of the road, about what they would see as appropriate in terms of where to go forward.

My view is that we’ve gotten the whole conversation off on the wrong foot by focusing on this question of whether or not to have a legally binding treaty, whether or not to have a ban. If it were me, that’s not how I would have framed the discussion from the get-go, because what happens is that many countries dig in their heels because they don’t want to sign a treaty. So they just start off from a position of, “I’m opposed.” They don’t even know what they’re opposed to. They’re just opposed because they don’t want to sign a ban.

I think a better conversation to have would be to say, “Let’s talk about the role of autonomy and machines and humans in lethal decision-making in war going forward. Let’s talk about the technology. Let’s talk about what it can do, what it can’t do. Let’s talk about what humans are good at and what they’re not good at. Let’s think about the role that we want humans to play in these kinds of decisions on the battlefield. Let’s come up with a view of what we think ‘right’ looks like, and then we can figure out what kind of piece of paper we write it down on, whether it’s a piece of paper that’s legally binding or not.”

Ariel: Talking about what the technology actually is and what it can do is incredibly important, and in my next interview with Toby Walsh, we try to do just that.

Toby: I’m Toby Walsh, I’m a Scientia Professor of Artificial Intelligence at the University of New South Wales, which is in Sydney, Australia. I’m a bit of an accidental activist, in the sense that I’ve been drawn in, as a responsible scientist, to the conversation about the challenges, the opportunities, and the risks that artificial intelligence poses in fighting war. There are many good things that AI’s going to do in terms of reducing casualties and saving lives, but equally, I’m very concerned, like many of my colleagues are, about the risks that it poses, especially when we hand over full control to computers and remove humans from the loop.

Ariel: So that will segue nicely into the first question I had for you, and that was what first got you thinking about lethal autonomous weapons? What first gave you reason for concern?

Toby: What gave me concern about the development of lethal autonomous weapons was seeing prototype weapons being developed, and knowing the challenges that AI poses (we’re still a long way away from having machines that are as intelligent as humans), knowing the limitations, and being very concerned that we were handing over control to machines that weren’t technically capable, and certainly weren’t morally capable, of making the right choices. And therefore, too, I felt a responsibility, as any scientist does, to ensure that AI is used for good and not for bad purposes. Unfortunately, like many technologies, it’s completely dual use. Pretty much the same algorithms that are going to go into your autonomous car, to identify, track, and avoid pedestrians and cyclists, are going to go into autonomous drones that identify combatants, track them, and kill them. It’s a very small change to turn one algorithm into the other. And we’re going to want autonomous cars: they’re going to bring great benefits to our lives, save lots of lives, and give mobility to the elderly, to the young, and to the disabled. So there can be great benefits from those algorithms, but equally, the same algorithms can be repurposed and used to make warfare much more terrible and much more terrifying.

Ariel: And with AI, we’ve seen some breakthroughs in recent years, just generally speaking. Do any of those give you reason to worry that lethal autonomous weapons are closer than maybe we thought they might have been five or ten years ago? Or has the trajectory been consistent?

Toby: The recent breakthroughs have to be put into context: they’ve been in things like games, like the game of Go, which are very narrowly focused tasks without uncertainty. The real world doesn’t interfere when you’re playing a game of Go; there are very precise rules and very constrained actions that you need to take and things that you need to think about. So it’s good to see progress in these narrow domains, but we’re still not making much progress overall; there’s still a huge amount to be done to build machines that are as intelligent as us. But it’s not machines as intelligent as us that I’m most worried about, although in 50 or 100 years’ time, when we have them, that will be something we’ll have to think about.

It’s actually stupid AI that worries me: the fact that we’re already thinking about giving responsibility to quite stupid algorithms that really cannot make the right distinctions, either in a technical sense, in terms of being able to distinguish combatants from civilians as required by international humanitarian law, or on moral grounds, in that they really can’t decide things like proportionality and can’t make the moral distinctions that humans make. They don’t have any of the things like empathy and consciousness that allow us to make those difficult decisions that are made on the battlefield.

Ariel: If we do continue on our current path and we aren’t able to get a ban on these weapons, what concerns do you have? What do you fear will happen? Or what do you anticipate? What type of weapons?

Toby: The problem with the debate, I think, is that people try to conflate the concerns that we have into just one concern. And there are different concerns at different points in time and at different stages of development of the technology.

The concerns I have for the next 10 years or so are definitely different from the concerns I would have in 50 years’ time. The concerns I have for the next 10 years or so are largely around incompetence: the machines would not be capable of making the right distinctions. And later on, as the machines become more competent, different concerns come in. They would actually change the speed, the duration, and the accuracy of war, and they would be very terrible weapons, because any ethical safeguards that we could, at that point, build in might be removed by bad actors. Sadly, there are plenty of bad actors out there who would be willing to remove any of the ethical safeguards that we might build in. So there’s not one concern. Unfortunately, when you hear the discussion, people often try to distill it down to just a single concern at a single point in time, but depending on the state of the technology, there are different concerns as the technology gets more sophisticated and more mature. To begin with, I would be very concerned that we will introduce rather stupid algorithms onto the battlefield that couldn’t make the right moral and technical distinctions required under IHL.

Ariel: Have you been keeping track at all of what sorts of developments have been coming out of different countries?

Toby: You can see, if you just go onto YouTube, that there are prototype weapons in pretty much every theater of battle. In the air, there are autonomous drones that have now been under development for a number of years. On the sea, the US Navy launched, more than a year ago now, its first fully autonomous ship. And interestingly, when it was launched, they said it would just be used for defensive measures, hunting for mines, hunting for submarines, and now they’re talking about putting weapons on it. Under the sea, we have an autonomous submarine the size of a bus that’s believed to be able to travel halfway across the Pacific fully autonomously. And on land there are a number of different autonomous weapons: certainly there are prototypes of autonomous tanks, autonomous sentry robots, and the like. So there is a bit of an arms race happening, and it’s certainly very worrying to see that we’re sort of locked into one of these bad equilibria, where everyone is racing to develop these weapons, in part just because the other side is.

China is definitely one of the countries to be worried about. It’s made very clear its ambitions to seek economic and military dominance through the use, in large part, of technologies like artificial intelligence, and it’s investing very heavily to do that. The military and commercial companies there are very tightly tied together, which gives it quite a unique position, perhaps even some technical advantages in the development of AI, especially for the battlefield. So it was quite surprising, and all of us at the UN meeting in April were pretty surprised, when China came out and called for a ban on the deployment of autonomous weapons. It didn’t say anything about the development of autonomous weapons, so that’s probably not as far as I would like countries to go, because if they’re developed, then you still run the risk that they will be used, accidentally or otherwise. The world is still not as safe as if they’re not actually out there with their triggers waiting to go. But it’s interesting to see that they made that call. It’s hard to know whether they’re just being disruptive or whether they really do see the serious concerns we have.

I’ve talked to my colleagues, academic researchers in China, and they’ve been, certainly in private, sympathetic to the cause of regulating autonomous weapons. Of course, unfortunately, China is a country in which it’s not possible, in many respects, to talk freely. And so they’ve made it very clear that it would perhaps be a career-killing move for them to speak publicly like scientists in the West have done about these issues. Nevertheless, we have drawn signatures from Hong Kong, where it is possible to speak a bit more freely, which I think demonstrates that, within the scientific community internationally, across nations, there is actually broad support for these sorts of actions. But the local politics may prevent scientists from speaking out in their home country.

Ariel: A lot of the discussion around lethal autonomous weapons focuses on the humanitarian impact, but I was wondering if you could speak at all to the potential destabilizing effect that they could have for countries?

Toby: One of the aspects of autonomous weapons that I don’t think is discussed enough is quite how destabilizing they will be as a technology. They will be relatively easy, and certainly cheap, to get your hands on. As I said to the Koreans when I was in Korea most recently, the presence of autonomous weapons would make South Korea even less safe than it is today. A country like North Korea has demonstrated it’s willing to go to great lengths to attain atomic weapons, and it would be much easier for them to obtain autonomous weapons. That would put South Korea in a very difficult situation, because if they were attacked by autonomous weapons and they weren’t able to defend themselves adequately, then that would escalate, and we might well find ourselves in a nuclear conflict, one that, of course, none of us would like to see. So they will be rather destabilizing: the weapons will fall into the wrong hands, and they’ll be used not just by the superpowers but by smaller nations, even rogue states. Potentially, they might even be used by terrorist organizations.

And then another final aspect that makes them very destabilizing is one of attribution. If someone attacks you with autonomous weapons, then it’s going to be very hard to know who’s attacked you. It’s not like you can bring one of the weapons down, you can open it up and look inside it. It’s not going to tell you who launched it. There’s not a radio signal you can follow back to a base to find out who’s actually controlling this. So it’s going to be very hard to work out who’s attacking you and the countries will deny, vehemently, that it’s them, even if they went and attacked you. So they will be perfect weapons of terror, perfect weapons for troubling nations to do their troubling with.

One other concern that I have as a scientist is the risk of the field getting a bad reputation from misuse of the technology. We’ve seen this in areas like genetically modified crops. The great benefits that we might have had from that technology (making crops more disease-resistant and more climate-resistant, which we need, in fact, to deal with the pressing problems that climate change and a growing population put on our planet) have been negated by the fact that people were distrustful of the technology. And we run a similar sort of risk, I think, with artificial intelligence: if people see AI being used to fight terrible wars and being used against civilians and other people, the technology will have a stain on it. And all the many good uses and the great potential of the technology might be at risk, because people will turn against all sorts of developments of artificial intelligence. So that’s another risk, and another reason many of my colleagues feel that we have to speak out very vocally to ensure that we get the benefits and that the public doesn’t turn against the whole idea of AI being used to improve the planet.

Ariel: Can you talk about the difference between an AI weapon and an autonomous weapon?

Toby: Sure. There are plenty of good things that the military can use artificial intelligence for. In fact, the U.S. military has historically been one of the greatest funders of AI research. There are lots of good things you can use artificial intelligence for, in the battlefield and elsewhere. No one should risk life or limb clearing a minefield; that’s a perfect job for a robot, because if something goes wrong and the robot gets blown up, you can replace the robot easily. Equally, filtering through all the information coming at you, making sure that you can work out who are combatants and who are civilians, using AI to help you in that situation, once again, is a perfect job that will actually save lives and stop some of the mistakes that inevitably happen in the fog of war. And in lots of other areas, in logistics and so on, there are lots of good things, including in humanitarian aid, that AI will be used for.

So I’m not against the use of AI in militaries; I can see great potential for it to save lives, to make war a little less dangerous. But there is a complete difference when we look at removing humans entirely from the decision loop in a weapon and ending up with a fully autonomous weapon where it is the machine that is making the final decision as to who lives and who dies. As I said before, that raises many technical, moral, and legal questions, and we shouldn’t go down that line. Ultimately, I think there’s a very big moral argument, which is that we shouldn’t hand over those sorts of decisions; that would be taking us into completely new moral territory that we’ve never seen before. Warfare is a terrible thing, and we sanction it in part because we’re risking our own lives; it should be a matter of last resort, not something that we hand over easily to machines.

Ariel: Is there anything else that you think we should talk about?

Toby: I think we’d want to talk about whether regulating autonomous weapons, regulating AI, would hinder the benefits for peaceful or non-military uses. I’m very unconcerned, as are many of my colleagues, that regulating autonomous weapons would hinder, in any way at all, the development of the peaceful and good uses of AI. In fact, as I mentioned earlier, I’m actually much more fearful that if we don’t regulate, there will be a backlash against the technology as a whole, and that will actually hinder the good uses of AI. Just as the bans on chemical weapons have not held back chemistry, the bans on biological weapons have not held back biology, and the bans on nuclear weapons have not held back the development of peaceful uses of nuclear power, I’m completely unconcerned, as many of my colleagues are, that regulating autonomous weapons will hold back the field in any way at all; in fact, quite the opposite.

Ariel: Regulations for lethal autonomous weapons will be more effective if the debate is framed in a more meaningful way, so I’m happy Richard Moyes could talk about how the concept of meaningful human control has helped move the debate in a more focused direction.

Richard: I’m Richard Moyes, and I am Managing Director of Article 36, which is a non-governmental organization which focuses on issues of weapons policy and weapons law internationally.

Ariel: To start, you have done a lot of work, I think you’re credited with coining the phrase “meaningful human control.” So I was hoping you could talk a little bit about first, what are some of the complications around defining whether or not a human is involved and in control, and maybe if you could explain some of the human in the loop and on the loop ideas a little bit.

Richard: We developed and started using the term meaningful human control really as an effort to try and get the debate on autonomous weapons focused on the human element, the form and nature of human engagement that we want to retain as autonomy develops in different aspects of weapons function. First of all, that’s a term that’s designed to try and structure the debate towards thinking about that human element.

I suppose, the most simple question that we raised early on when proposing this term was really a recognition that I think everybody realizes that some form of human control would be needed over new weapon technologies. Nobody is really proposing weapon systems that operate without any human control whatsoever. At the same time, I think people could also recognize that simply having a human being pressing a button when they’re told to do so by a computer screen, without really having any understanding of what the situation is that they’re responding to, having a human simply pressing a button without understanding of the context, also doesn’t really involve human control. So even though in that latter situation, you might have a human in the loop, as that phrase goes, unless that human has some substantial understanding of what the context is and what the implications of their actions are, then simply a pro forma human engagement doesn’t seem sufficient either.

So, in a way, the term meaningful human control was put forward as a way of shifting the debate onto that human element, but also putting on the table this question of, well, what’s the quality of human engagement that we really need to see in these interactions in order to feel that our humanity is being retained in the use of force.

Ariel: Has that been successful in helping to frame the debate?

Richard: With this sort of terminology, of course, different actors use different terms. Some people talk about necessary human control, or sufficient human control, or necessary human judgment. There are different word choices there. I think there are pros and cons to those different choices, but we don’t tend to get too hung up on the specific wording that’s chosen. The key thing is that these are seen, bundled together, as a critical area now for discussion among states and other actors in the multilateral diplomatic conversation about where the limits of autonomy in weapon systems lie.

I think coming out of the Group of Governmental Experts meeting of the Convention on Conventional Weapons that took place earlier this year, I think the conclusion of that meeting was more or less that this human element really does now need to be the focus of discussion and negotiation. So one way or another, I think the debate has shifted quite effectively onto this issue of the human element.

Ariel: What are you hoping for in this upcoming meeting?

Richard: Perhaps what I’m hoping for and what we’re going to get, or what we’re likely to get, might be rather different things. I would say I’d be hoping for states to start to put forward more substantial elaborations of what they consider the necessary human control, human element in the use of force to be. More substance on that policy side would be a helpful start, to give us material where we can start to see the differences and the similarities in states’ positions.

However, I suspect that the meeting in August is going to focus mainly on procedural issues around the adoption of the chair’s report, and the framing of what’s called the mandate for future work of the Group of Governmental Experts. That probably means that, rather than so much focus on the substance, we’re going to hear a lot of procedural talk in the room.

That said, in the margins, I think there’s still a very good opportunity for us to start to build confidence and a sense of partnership amongst states and non-governmental organizations and other actors who are keen to work towards the negotiation of an instrument on autonomous weapon systems. I think building that partnership between sort of progressive states and civil society actors and perhaps others from the corporate sector, building that partnership is going to be critical to developing a political dynamic for the period ahead.

Ariel: I’d like to go back, quickly, to this idea of human control. A while back, I talked with Heather Roff, and she gave this example, I think it was called the empty hangar problem. Essentially, no one expects some military leader to walk down to the airplane hangar and discover that the planes have all gone off to war without anyone saying something.

I think that gets at some of the confusion as to what human control looks like. You’d mentioned briefly the idea that a computer tells a human to push a button, and the human does that, but even in fully autonomous weapon systems, I think there would still be humans somewhere in the picture. So I was wondering if you could elaborate a little bit more on maybe some specifics of what it looks like for a human to have control or maybe where it starts to get fuzzy.

Richard: I think that we recognize that in the development of weapon technologies, already we see significant levels of automation, and a degree of handing over certain functions to sensors and to assistance from algorithms and the like. There are a number of areas that I think are of particular concern to us. I think, in a way, this is to recognize that a commander needs to have a sufficient contextual understanding of where it is that actual applications of force are likely to occur.

Already, we have weapon systems that might be projected over a relatively small area, and within that area, they will identify the heat shape of an armored fighting vehicle for example, and they may direct force against that object. That’s relatively accepted in current practice, but I think it’s accepted so long as we recognize that the area over which any application of force may occur is actually relatively bounded, and it’s occurring relatively shortly after a commander has initiated that mission.

Where I think my concerns, our concerns, lie is that that model of operation could be expanded over a greater area of space on the ground, and over a longer period of time. As that period of time and that area of space on the ground increase, then the ability of a commander to actually make an informed assessment about the likely implications of the specific applications of force that take place within that envelope becomes significantly diluted, to the point of being more or less meaningless.

For us, this is linked also to the concept of an attack as a term in international law. There are legal obligations that bear on human commanders at the unit of the attack, so there are certain legal obligations that a human has to fulfill for an attack. Now, an attack doesn’t mean firing one bullet. An attack could comprise a number of applications of actual force. But it seems to us that if you simply expand the space and the time over which an individual weapon system can identify target objects for itself, ultimately you’re eroding that notion of an attack, which is actually a fundamental building block of the structure of the law. You’re diluting that legal framework to the point of it arguably being meaningless.

We want to see a reasonably constrained period of, let’s call it, independence of operation for a system (it may not be fully independent), where a commander has the ability to sufficiently understand the contextual parameters within which that operation is occurring.

Ariel: Can you speak at all, since you live in the UK, on what the UK stance is on autonomous weapons right now?

Richard: I would say the UK has, so far, been a somewhat reluctant dance partner on the issue of autonomous weapons. I do see some, I think, positive signs of movement in the UK’s policy articulations recently. One of the main problems they’ve had in the past is that they adopted a definition of lethal autonomous weapon systems, which is the terminology used in the CCW. It’s undetermined what this term lethal autonomous weapon systems means. That’s a sort of moving target in the debate, which makes the discussion quite complicated.

But the UK adopted a definition of that term which was somewhat in the realm of science fiction as far as we’re concerned. They describe lethal autonomous weapon systems as having the ability to understand a commander’s intent. I think, in doing so, they were suggesting an almost human-like intelligence within the system, which is a long way away, if even possible. It’s certainly a long way away from where we are now, and where already developments of autonomy in weapon systems are causing legal and practical management problems. By adopting that sort of futuristic definition, they a little bit ruled themselves out of being able to make constructive contributions to the actual debate about how much human control should there be in the use of force.

Now, recently, in certain publications, the UK has slightly opened up some space to recognize that that definition might actually not be so helpful, and that maybe this focus on the human control element that needs to be retained is actually the most productive way forward. How positive the UK will be, from my perspective, in that discussion, and in talking about the level of human control that needs to be retained, remains to be seen, but I think at least they’re engaging with some recognition that that’s the area where there needs to be more policy substance. So fingers crossed.

Ariel: I’d asked Richard about the UK’s stance on autonomous weapons, but this is a global issue. I turned to Mary Wareham and Bonnie Docherty for more in-depth information about international efforts at the United Nations to ban lethal autonomous weapons.

Bonnie: My name’s Bonnie Docherty. I’m a senior researcher at Human Rights Watch, and also the director of Armed Conflict and Civilian Protection at Harvard Law School’s International Human Rights Clinic. I’ve been working on fully autonomous weapons since the beginning of the campaign doing most of the research and writing regarding the issue for Human Rights Watch and Harvard.

Mary: This is Mary Wareham. I’m the advocacy director of the Arms Division at Human Rights Watch. I serve as the global coordinator of the Campaign to Stop Killer Robots. This is the coalition of non-governmental organizations that we co-founded towards the end of 2012 and launched in April 2013.

Ariel: What prompted the formation of the Campaign to Stop Killer Robots?

Bonnie: Well, Human Rights Watch picked up this issue, and we published our first report in 2012. Our concern was the development of this new technology that raised a host of concerns: legal concerns, compliance with international humanitarian law and human rights law, moral concerns, accountability concerns, scientific concerns, and so forth. We launched a report that was an initial foray into the issues, trying to preempt the development of these weapons before they came into existence, because once the genie’s out of the bottle, it’s hard to put it back in; it’s hard to get countries to give up a new technology.

Mary: Maybe I can follow up there on how we established the Campaign to Stop Killer Robots. I did a lot of legwork in 2011 and 2012, talking to a lot of the people that Bonnie was talking to for the preparation of the report. My questions were more about what we should do once we launched the report: do you share the same concerns that we have at Human Rights Watch, and, if so, is there a need for a coordinated international civil society coalition to organize us going forward and to present a united voice and position to the governments who we want to take action on this? For us, working that way, in a coalition with other non-governmental organizations, is what we do. We’ve been doing it for the last two decades on other humanitarian disarmament issues: the International Campaign to Ban Landmines, the Cluster Munition Coalition. We find it’s more effective when we all try to work together and provide a coordinated civil society voice. There was strong interest, and therefore we co-founded the Campaign to Stop Killer Robots.

Ariel: What prompted you to consider a ban versus … I guess I don’t know what other options there might have been.

Bonnie: We felt from the beginning that what was needed to address fully autonomous weapons is a preemptive ban on development, production and use. Some people have argued that existing law is adequate. Some people have argued you only need to regulate it, to limit it to certain circumstances, but in our mind a ban is essential, and that draws on past work on other conventional weapons such as landmines and cluster munitions, and more recently nuclear weapons.

The reason for a ban is that if you allow these weapons to exist, even to come into being, to be in countries’ arsenals, they will inevitably get in the hands of dictators or rogue actors that will use them against the law and against the rules of morality. They will harm combatants as well as civilians. It’s impossible once a weapon exists to restrict it to a certain circumstance. I think those who favor regulation assume the user will follow all the rules, and that’s just not the way it happens. We believe it should be preemptive because once they come into existence it’s too late. They will be harder to control, and so if you prevent them from even happening that will be the most effective solution.

The last point I’d make is that it also increases the stigma against the weapons, which can influence even countries that aren’t party to a treaty banning them. This has been proven in past weapons treaties; there was even a preemptive ban on blinding lasers in the 1990s, and that’s been very effective. There is legal precedent for this, and many arguments for why a ban is the best solution.

Mary: Yeah, there’s two ways of framing that call, which is not just the call of Human Rights Watch, but the call of the Campaign to Stop Killer Robots. We seek a preemptive ban on the development, production and use of fully autonomous weapons. That’s a kind of negative way of framing it. The positive way is that we want to retain meaningful human control over the use of force and over weapons systems going forward. There’s a lot of interest, and I’d say convergence on those two points.

We’re five years on since the launch of the campaign, 26 countries are now supporting the call for a ban and actively trying to get us there, and an even larger number of countries, actually, virtually all of the ones who’ve spoken to-date on this topic, acknowledge the need for some form of human control over the use of force and over weapons systems going forward. It’s been interesting to see in the five diplomatic meetings that governments have held on this topic since May 2014, the discussions keep returning to the notion of human control and the role of the human and how we can retain that going forward because autonomy and artificial intelligence are going to be used by militaries. What we want to do, though, is draw a normative line and provide some guidance and a framework going forward that we can work with.

Ariel: You just referred to them as fully autonomous weapons. At FLI we usually talk about lethal autonomous weapons versus non-lethal fully autonomous weapons, and so that sort of drives me to the question of, to what extent do definitions matter?

Then, this is probably a completely different question, how are lethal autonomous weapons different from conventional weapons? The reason I’m combining these two questions is because I’m guessing definition does play a little bit of a role there, but I’m not sure.

Bonnie: Well, for countries to make international law, it’s important that they have a general, common understanding of what we’re talking about. Generally, in a legal treaty, the last thing to be articulated is the actual definition. It’s premature to get a detailed, technical definition, but we feel that, although a variety of names have been used (lethal autonomous weapon systems, fully autonomous weapons, killer robots), in essence they’re all talking about the same thing. They’re all talking about a system that can select a target and choose to fire on that target without meaningful human control. There’s already convergence around this definition, even if it hasn’t been defined in detail. In terms of conventional munitions, they are, in essence, a conventional munition if they deploy conventional weapons. It depends on what the payload is. If a fully autonomous system were launching nuclear weapons, it would not be a conventional weapon. If it’s launching cluster munitions, it would be a conventional weapon. So it’s not right to say they’re not conventional weapons.

Mary: The talks are being held at the Convention on Conventional Weapons in Geneva. This is where governments decided to house this topic. I think it’s natural for people to want to talk about definitions. From the beginning that’s what you do with a new topic, right? You try and figure out the boundaries of what you’re discussing here. Those talks in Geneva and the reporting that has been done to date and all of the discourse, I think it’s been pretty clear that this campaign and this focus on fully autonomous weapons is about kinetic weapons. It’s not about cyber, per se, it’s about actual things that can kill people physically.

I think the ICRC, the Red Cross, has made an important contribution with its suggestion to focus on the critical functions of weapons systems, which is what we were doing in the campaign; we just weren’t calling it that. That’s the action of identifying and selecting a target, and then firing on it, using force, lethal or otherwise. Those are the two functions that we want to ensure remain under human control, under meaningful human control.

Some other states like to draw what we call a very wide definition of meaningful human control. For some of them it means good programming, nice design, a weapons review, a kind of legal review of whether the weapon system will be legal and whether they can proceed to develop it. You could cast a very wide net when you’re talking about meaningful human control, but for us the crux of the whole thing is this notion of selecting targets and firing on them.

Ariel: What are the concerns that you have about this idea of non-human control? What worries you about that?

Mary: Of autonomy in weapon systems?

Ariel: Yeah, essentially, yes.

Mary: We’ve articulated legal concerns here at Human Rights Watch just because that’s where we always start, and that’s Bonnie’s area of expertise, but there are much broader concerns here that we’re also worried about, too. This notion of crossing a moral line and permitting a machine to take human life on the battlefield or in policing or in border control and other circumstances, that’s abhorrent, and that’s something that the Nobel Peace Laureates, the faith leaders and the others involved in the Campaign to Stop Killer Robots want to prevent. For them that’s a step too far.

They also worry about outsourcing killing to machines. Where’s the ethics in that? Then, what impact is this going to have on the system that we have in place globally? How will it be destabilizing in various regions, and, as a whole, what will happen when dictators and one-party states and military regimes get ahold of fully autonomous weapons? How will they use them? How will non-state armed groups use them?

Bonnie: I would just add, building on what Mary said, another reason human control is so important is that humans bring judgment. They bring legal and ethical judgment based on their innate characteristics, on their understanding of another human being and of the mores of a culture, and that is something a robot cannot bring; certain things cannot be programmed. For example, when weighing whether the military advantage will justify an attack that causes civilian harm, humans apply that judgment, which is both legal and ethical. A robot won’t have that; that’s a human thing. Losing humanity in the use of force could potentially violate the law, as well as raise the serious moral concerns that Mary discussed.

Ariel: I want to go back to the process to get these weapons banned. It’s been going on for quite a few years now. I was curious, is that slow, or is that just sort of the normal speed for banning a weapon?

Mary: Look at nuclear weapons, Ariel.

Ariel: Yeah, that’s a good point. That took a while.

Mary: That took so many years, you know? That’s the example that we’re trying to avoid here. We don’t want to be negotiating a non-proliferation treaty in 20 years time with the small number of countries who’ve got these and the other states who don’t. We’re at a crossroads here. Sorry to interrupt you.

Ariel: No, that was a good point.

Mary: There have been five meetings on this topic to date at the United Nations in Geneva, but each of those meetings has only been up to a week long, so, really, it’s only five weeks of talks that have happened in the last four years. That’s not much time to make a lot of progress to get everybody around the same table understanding, but I think there’s definitely been some progress in those talks to delineate the parameters of this issue, to explore it and begin to pull apart the notion of human control and how you can ensure that that’s retained in weapons systems in the selection of targets and the use of force. There’s a wide range of different levels of knowledge on this issue, not just in civil society and academia and in the public, but also within governments.

There’s a lot of legwork to be done there to increase the awareness, but also the confidence of governments, to feel like they can deal with this. What’s happened, especially I think in the past year, has been increased calls to move from exploring the issue and talking about the parameters of the challenge to, “What are we going to do about it?” That’s going to be the big debate at the next meeting, which is coming up at the end of August: what will the recommendation be for future work? Are the governments going to keep talking about this, which we hope they do, but, more importantly, what are they going to do about it?

We’re seeing, I think, a groundswell of support now for moving towards an outcome. States realize that they do not have the time or the money to waste on inconclusive deliberations, and so they need to be exploring options and pathways forward, but there are really not that many options. As has been mentioned, states can talk about international law and the existing rules, how they can apply them, and having more transparency there, but I think we’ve moved beyond that.

There are a couple of possibilities that will be debated. One is political measures: a political, non-binding declaration. Can we get agreement on some form of principles over human control? That sounds good, but it doesn’t go nearly far enough. Or we could create new international law. How do we do that in this particular treaty, the Convention on Conventional Weapons? You move to a negotiating mandate, and you set the objective of negotiating a new protocol under the Convention on Conventional Weapons. At the moment, there has been no agreement to move to negotiate new international law, but we’re expecting that to be the main topic of debate at the next meeting, because they have to decide now what they’re going to do next year.

For us, the biggest developments are happening outside of the room right now rather than in Geneva itself. There’s a lot of activity now starting to happen in national capitals, with governments trying to figure out what their position and policy on this is, and there’s more prodding and questioning and debate starting to happen in national parliaments, which has to happen in order to determine what the government position is and what’s going to happen on it. Then we have the examples of the open letters, the sign-on letters, the ethical principles; all sorts of new things have come out in recent weeks that I think will be relevant to what the governments are discussing, and we hope they will provide impetus to move forward with focus and purpose here.

We can’t put a timeline on when they might create a new international treaty, but we’re saying you can do this quickly if you put your mind to it and you say that this is what you want to achieve. We believe that if they move to a negotiating mandate at the end of this year, they could negotiate the treaty next year. Negotiating the treaty is not the part that takes a long time; it’s getting everybody into the position where they want to create new international law. The actual process of negotiating that law should be relatively swift. If it takes longer than a year or two, then it runs the risk of turning into another set of inconclusive deliberations that don’t produce anything. For us, it’s absolutely crucial to get the goal in there at the beginning. The process at the moment has gone from informal talks to formal talks, but still with no agreed objective or outcome.

Ariel: What is some of the resistance that you’re facing to moving towards a ban? Are governments worried that they’re going to miss out on a great technology, or is there some other reason that they’re resisting?

Mary: Just to say, 85 countries have spoken out on this topic to date. Most of them not at any great length, but just to say, “This is important. We’re concerned. We support the international talks.” We have a majority of countries now who want to move towards negotiating new international law. Where are the blockages at the moment? At the last round of talks and at the previous ones, it was basically Israel, Russia, and the United States who were saying it’s premature to decide where these talks should lead, that we need to further explore and discuss the issues before we can make any progress. Others are now less patient with that position, and it will be interesting to see if those three countries in particular change their minds here.

At the particular treaty that we’re at, the Convention on Conventional Weapons, the states take their decisions by consensus, which means they can’t vote. There are no voting procedures there. They have to strive for consensus, where everybody in the room agrees, or at least does not object to moving forward. That threat of a kind of blocking of consensus is always there, especially from Russia, but we’ll see. There’s no pro-killer-robot state saying, “We want these things. We need these things,” right now, at least not in the diplomatic talks. The only countries that have wanted to talk about the potential advantages or benefits are Israel and the United States. All of the other countries who speak about this are more concerned about understanding and coming to grips with all of the challenges that are raised, and then figuring out what the regulatory framework should be.

Ariel: Bonnie, was there anything you wanted to add to that?

Bonnie: I think Mary summarized the key points. I was just going to say that there are some people who would argue that we should wait and see what the technology will bring; we don’t know where it’ll go. Our counterargument is something called the precautionary principle: even if there’s scientific uncertainty about where a technology will go, if there’s a significant risk of public harm, which there is in this case, that scientific uncertainty should not stand in the way of action. I think that the growing number of states that have expressed concern about these weapons, and the near consensus emerging around the need for human control, show that there is willingness to act at this point. As Mary said, this is not a situation where people are advocating for these weapons, and I think that in the long run the agreement that there should be human control over the use of force will outweigh any hesitation based on the wait-and-see approach.

Mary: We had a good offer from the United Nations Secretary-General in the big agenda for disarmament framework that he launched a couple of months ago, saying that he stands ready to support the efforts of UN member states to elaborate new measures on lethal autonomous weapon systems, including legally binding arrangements. He wants states to ensure that humans remain at all times in control over the use of force. To have that kind of offer of support from the highest level at the United Nations is, I think, very important.

The other recent pledges and commitments, like the one by 200 technology companies and more than 2,600 scientists, AI experts, and other individuals committing not to develop lethal autonomous weapons systems, send a very powerful message, I think, to the states that these groups and individuals are not going to wait for regulation. They’re committing not to do it, and this is what they expect the governments to do as well. We also saw the ethical principles issued by Google in recent weeks and that company’s pledge not to design or develop artificial intelligence for use in weapons. All of these efforts and initiatives are very relevant to what states need to do going forward. This is why we in the Campaign to Stop Killer Robots welcome and encourage them, and want to ensure that we have as broad-based an appeal as possible to support the government action that we need taken.

Ariel: Can you talk a little bit about what’s happening with China? Because they’ve sort of supported a ban. They’re listed as supporting a ban, but it’s complicated.

Mary: It’s funny because so many other countries that have come forward and endorsed the call for a ban have not elicited the same amount of attention. I guess it’s obviously interesting, though, for China to do this because everybody knows about the investments that China is making into military applications of artificial intelligence and autonomy. We see the weapons systems that are in development at the moment, including swarms of very small miniature drones, and where will that head?

What China thinks about this issue matters. At the last meeting, China basically endorsed the call for a ban, but said (there’s always a but) that its support was limited to prohibiting use only, and did not address development or production. For us that’s a partial ban, but we put them on the list that the campaign maintains, and China is the first state to have an asterisk by its entry saying, “Look, China is on the ban list, but it’s not fully committed here.” We needed to acknowledge that, because it wasn’t really the first time China had hinted it would support creating new international law. It has been hinting at this in previous papers, including one in which China’s review of existing international law raised so many questions and doubts that it does see a need to create international law specific to fully autonomous weapons systems. China gave the example of the blinding lasers protocol at the CCW, which prohibits laser weapons designed to permanently blind human soldiers.

I think the real news on China is that its position now saying that existing law is insufficient and we need to create new international rules, splits the P5, the permanent five members of the United Nations Security Council. You have Russia and the United States arguing that it’s too early to determine what the outcome should be, and the UK — Richard can explain better exactly what the UK wants — but it seems to be satisfied with the status quo. Then France is pursuing a political declaration, but not legally-binding measures. There’s not unity anymore in that group of five permanent members of the Security Council, and those states do matter because they are some of the ones who are best-placed to be developing and investing in increasingly autonomous weapons systems.

Ariel: Okay. I also wanted to ask, on a separate note: right now what you’re trying to do, what we’re trying to do, is get a ban, a preemptive ban on a weapon that doesn’t exist yet. What are some examples in the past of that having succeeded, as opposed to a ban coming only after a weapon has caused some humanitarian disaster?

Bonnie: Well, the main precedent for that is the preemptive ban on blinding lasers, which is a protocol to the Convention on Conventional Weapons. We did some research a few years ago into the motives behind the preemptive ban on blinding lasers, and many of them are the same. They raised concerns about the ethics of permanently blinding someone, whether a combatant or a civilian. They raised concerns about the threat of an arms race. They wanted a ban, but one that would not impede peaceful development in that area. That ban has been very successful. It has not impeded the peaceful use of lasers for many civilian purposes, but it has created a stigma against, and a legally-binding prohibition on, using blinding lasers. We think that’s an excellent model for fully autonomous weapons, and it was adopted under the same treaty at which fully autonomous weapons, or lethal autonomous weapon systems, are being discussed right now. It’s a good model to look at.

Mary: Bonnie, I really like that paper you did on the other precedents for retaining human control over weapons systems: the idea of looking at past weapons that have been prohibited and finding that, in many instances, it was because of the uncontrollable effects the weapons create, from chemical, biological, and toxin weapons to antipersonnel landmines, which, once deployed, you cannot control anymore. It’s this notion of being able to control the weapon system once it’s activated that has driven those previous negotiations, right?

Bonnie: Correct. There’s precedent for a preemptive ban, but there’s also precedent for the desire to maintain human control over weapons. As Mary said, there are several treaties: chemical weapons, biological weapons, and landmines have all been banned, in large part because people in governments were concerned about losing control over the weapons system. In essence, it’s the same model here: by launching fully autonomous weapons you’d be losing control over the use of force. I think there’s a precedent for a ban, and there’s a precedent for a preemptive ban, all of which are applicable in this situation.

Ariel: I talked to Paul Scharre a little bit earlier, and one of the things he talked about was treaties that developed because the powers that be recognized that a weapon would be too big of a risk for them, and so they agreed to ban it. Then the other driving force for treaties has usually been civil society, based on the general public saying, “This is not okay.” What role do you see for both of those situations here?

Bonnie: There’s a multitude of reasons why these weapons should be banned, and I think both of the ones you mentioned are valid in this case. From our point of view, the main concern is a humanitarian one, and that’s civil society’s focus. We’re concerned about the risk to civilians. We’re concerned about moral issues, and matters like those. That builds on past treaties, what they call humanitarian disarmament treaties, designed to protect humanity through legal norms and, traditionally, often through bans: bans of landmines, cluster munitions, and nuclear weapons.

There have been other treaties, and sometimes they overlap, that have been driven more by security reasons: countries are concerned about other nations getting their hands on these weapons, and they feel that in the long run it’s better for no one to have them than for others to have them. Certainly, chemical weapons were an example of that. This does not mean that a treaty can’t be motivated by both reasons. That often happens, and I think both reasons are applicable here; they just come from slightly different trajectories.

Mary: It’s pretty amazing, at some of the diplomatic talks on killer robots that we’ve attended, to hear the governments debating the ethics of whether or not a specific weapon system such as fully autonomous weapons should be permitted, should be allowed. It’s rare that that happens. Normally, we are dealing with the aftermath, the consequences of proliferation and of widespread use, production, and stockpiling. This is an opportunity to do something in advance, and it does lead to a little bit of, I’d say, a North-South divide between the military powers who have the resources at their disposal to invest in increasingly autonomous technology and push the boundaries, and the vast majority of countries who are asking, “What’s the point of all of this? Where is the relevance of the UN Charter, which talks about general and complete disarmament as the ultimate objective?” They ask, “Have we lost that goal here? Is the ultimate objective to create more and better and more sophisticated weapons systems, or is it to end war and deal with the consequences of warfare through disarmament?”

Those are really big-picture questions raised in this debate, and ones that we leave to those governments to answer, but I think it is indicative of why there is so much interest in this particular concern, and that’s demonstrated by the sheer number of governments participating in the international talks. The talks take place in a setting called a Group of Governmental Experts, but this is not a dozen guys sitting around a table in a small room. This is a big plenary meeting with more than 80 countries following, engaging, and avidly trying to figure out what to do.

Ariel: In terms of just helping people understand how the UN works, what role does a group like the Campaign to Stop Killer Robots play in the upcoming meeting? If, ultimately, the decision is made by the states and the nations, what is your role?

Mary: Our role is 24/7, all year round. These international meetings only happen a couple of times a year; this will be the second week this year. Most of our work this year has been happening in capitals and in places outside of the diplomatic meetings, because that’s where you really make progress: through parliamentary initiatives, through reaching the high-level political leadership, through engaging the public, through talking to the media and raising awareness about the challenges here and the need for action. All of those things are what makes things move inside the room with the diplomacy, because the diplomats need instructions from capitals in order to really progress.

At the meeting itself, we seek to provide a diverse delegation that’s not just people from Europe and North America, but from around the world, because this is a multilateral meeting. We need to ensure that we can reach out and engage with all of the delegates in the room, because every country matters on this issue, and every country has questions. Can we answer all those questions? Probably not, but we can talk through them with those states, try to address the concerns, and try to be a valued partner in the deliberations that are happening. The normal way of working for us here at Human Rights Watch is to work alongside other organizations through coordinated civil society initiatives, so that you don’t go to the meeting with 50 statements from different NGOs. You have just a few, or just one, so that you can be absolutely clear about where you want to see the deliberations go and the outcome that you want.

We’ll be holding side events and other efforts to engage with the delegates in different ways, as well as presenting new research and reports. I think you’ve got something coming out, Bonnie, right?

Bonnie: We’ll be releasing a new report on the Martens Clause, which is a provision of international law, found in the Geneva Conventions and other treaties, that brings ethics into law. It basically has two prongs, which we’ll elaborate on in the report: countries must comply with the principles of humanity and with the dictates of public conscience, and, in short, we believe fully autonomous weapons raise concerns on both counts. We believe losing human control would violate basic principles of humanity, and that the growing groundswell of opposition among not only governments but also faith leaders, scientists, tech companies, academics, civil society, et cetera shows that the public conscience is coming out against fully autonomous weapons and for maintaining human control over the use of force.

Ariel: To continue with this idea of the ethical issues surrounding lethal autonomous weapons, we’re joined now by Peter Asaro.

Peter: I’m Peter Asaro. I’m an Associate Professor in the School of Media Studies at the New School University in New York City, and I’m also the co-founder and vice chair of the International Committee for Robot Arms Control, which is part of the leadership steering committee of the Campaign to Stop Killer Robots, which is a coalition of NGOs that’s working at the UN to ban fully autonomous weapons.

Ariel: Could you tell us a little bit about how you got involved with this and what first gave you cause for concern?

Peter: My background is in philosophy and computer science, and I did a lot of work in artificial intelligence and the philosophy of artificial intelligence, as well as the history of science, early computing, the development of neural networks, and the mathematical and computational theories behind all of that in the 1930s, ’40s, ’50s, and ’60s. That was my graduate work, and as part of it, I got really interested in the modern or contemporary applications of both artificial intelligence and robotics, specifically the embodied forms of artificial intelligence, which are robotic in various ways, and got really interested in not just intelligence, but social interaction.

That sort of snowballed into thinking about robot ethics, and what seemed the most pressing issue within robot ethics was the use of violence, the use of force, and whether we would allow robots to kill people, and of course the first place that was gonna happen would be the military. So I’d been thinking a lot about the ethics of military robotics from the perspective of just war theory, but also from a broad range of philosophical and legal perspectives as well.

That got me involved with Noel Sharkey and some other people who were interested in this from a policy perspective and we launched the International Committee for Robot Arms Control back in 2009, and then in 2012, we got together with Human Rights Watch and a number of other NGOs to form the Campaign to Stop Killer Robots.

Ariel: That leads into the next question I have for you, and it’s very broad. Can you talk a little bit about what some of the ethical issues are surrounding robots and more specifically autonomous weapons in warfare?

Peter: I think of course there’s a whole host of ethical issues around robotics in general: privacy and safety are sort of the big ones, but there are all sorts of more complicated ones as well, like job displacement, how we treat robots, and the impacts on society and things like that. Within the military context, I think the issues are clearer in some sense, because it’s mostly about the use of autonomous systems to deliver lethal force.

So the primary question is: should we allow autonomous weapons systems to make lethal decisions independently of human control or human judgment, however you frame that? And then, subsidiary to that, some would argue that the programming within a system constitutes that kind of human control or decision making. From my perspective, pre-programming doesn’t really do that, and that’s because I come from a philosophical background, so we look at just war theory and at ethics, especially Kantian ethics, and the requirements for the morality of killing. Killing is generally speaking immoral, but there are certain exceptions, generally self-defense, or collective self-defense in the case of war. But in order to justify that killing, you need reasons and justifications. And machines and computational reasoning, at least at this stage of development, are not the type of system that has reasons. A machine follows rules: if certain conditions are met, a rule is applied and a result is obtained. But making a reasoned judgment about whether to use lethal force or whether to take a human life depends on a deeper understanding of reason, and I think that’s a kind of moral agency, moral decision making, and moral judgment that requires capacities that automated decision-making systems just don’t have.

Maybe down the road in the future, machines will become conscious, machines will understand the meaning of life, machines will understand what it means to take a life, machines will be able to recognize human beings as humans who deserve rights that need to be respected, and systems may understand what it means to have a duty to respect the rights of others. But simply programming rules into machines doesn’t really do that. So, from a legal perspective as well, there’s no real accountability for these sorts of systems because they’re not legal agents, they’re not moral agents, you cannot sue a computer or a robot. You cannot charge them with crimes and put them in jail and things like that.

So, we have an entire legal system as well as a moral framework that assumes that humans are the responsible agents and the ones making decisions, and as soon as you start replacing that decision making with automated systems, you start to create significant problems for the regulation of these systems and for accountability and for justice. And then that leads directly to problems of safety and control, and what kinds of systems are gonna be fielded, what are gonna be the implications of that for international stability, who’s gonna have access to that, what are the implications for civilians and civilian infrastructures that might be targeted by these systems.

Ariel: I had wanted to go into some of this legality and liability stuff that you’ve brought up, and you’ve given a nice overview of it as it is, but I was hoping you could expand a little bit on how this becomes a liability issue, and also … This is probably sort of an obvious question, but could you touch a little on just how complicated it is to change the laws so that they would apply to autonomous systems as opposed to humans?

Peter: A lot of the work I’ve been doing under a grant from the Future of Life Institute looks at liability in increasingly autonomous systems. Within the civilian domestic sphere, of course, the big application that everybody’s looking at at the moment is the self-driving car, so you can ask this question: who’s responsible when a self-driving car causes an accident? The way that liability law works, somebody somewhere is always going to wind up being responsible; the law will find a way to hold somebody responsible. The question is whether existing precedents and the ways of doing things under current legal frameworks are really just, or really the best way going forward, as we have these kinds of increasingly autonomous systems.

So, in terms of holding persons responsible and liable: under tort law, if you have an accident, then you can sue somebody. This isn’t criminal law, this is the law of torts, and under it you receive monetary compensation for damages done. Ideally, the person, agent, or company, or what have you, that causes the harm is the one that should pay. Of course, that’s not always how it works out: liability law includes things like joint and several liability, in which, even though one party only had a small hand in causing a harm, they may have lots of money, like a government, a state, or a city, and so they may actually wind up paying a far larger share of the damages than they actually contributed to the problem.

You also have situations of strict liability, such that even if your agency in causing a problem was very limited, you can still be held fully responsible for the consequences. There are some interesting parallels here with the keeping of animals, which are kind of autonomous systems in a sense: they have minds of their own, they sort of do things. On the other hand, we expect them to be well behaved and well trained, at least for domestic animals. So generally speaking, you have liability for harms caused by your dog or your horse and so forth as a domesticated animal, but you don’t have strict liability; you actually have to show that maybe the owner trained the dog to attack, or failed to properly train the horse or keep it in a stable or what have you. Whereas if you keep a tiger or something like that and it gets out and causes harm, then you’re strictly liable.

So the question is for a robot, should you be strictly liable for the robots that you create or the robots that you own? Should corporations that manufacture these systems be strictly liable for all of the accidents of self-driving cars? And while that seems like a good policy from the perspective of the public, because all the harms that are caused by these systems will be compensated, that could also stifle innovation. In the car sector, that doesn’t seem to be a problem. As it turns out, the president of Volvo said that they will accept strict liability for all of their self-driving cars. Tesla Motors has released a number of autopilot systems for their cars and more or less accepted the liability for that, although there’s only been a few accidents, so the actual jurisprudence or case law is still really emerging around that.

But those are, I think, a technology where the cars are very expensive, there’s a lot of money to be made in self-driving cars, and so the expectation of the car companies is that there will be very few accidents and that they can really afford to pay the damages for all those accidents. Now, is that gonna be true for personal robots? So, if you have a personal assistant, sort of butler robot who maybe goes on shopping errands and things like that for you, there’s a potential for them to cause significant economic damage. They’re probably not gonna be nearly as expensive as cars, hopefully, and it’s not clear that the market for them is going to be as big, and it’s not clear that companies would be able to absorb the cost of strict liability. So, there’s a question of whether that’s really the best policy for those kinds of systems.

Then there are also questions about the ability of people to modify their systems. If you’re holding companies strictly responsible for their products, then those companies are not going to allow consumers to modify those products in any way, because that would affect their ability to control them. If you want a kind of DIY culture around autonomous systems and robotics, then you’re gonna see a lot of people modifying these systems, reprogramming these systems. So you also want, I think, a kind of strict liability that falls on anybody who makes those kinds of modifications rather than on the manufacturer: once you sort of break the seal, you accept all the responsibility for what happens.

And I think that’s sort of one side of it. Now, on the military side of it, you don’t really have torts in the same way. There are of course a couple of extreme issues around torts in war, but generally speaking, militaries do not pay monetary damages when they make mistakes. If they accidentally blow up the wrong building, they don’t pay to build a new building. That’s just considered a casualty of war, an accident, and it’s not even necessarily a war crime or anything else, because you don’t have these kinds of mechanisms where you can sue an invading army for dropping a bomb in the wrong place.

The idea that liability is going to act as an accountability measure on autonomous systems in warfare is just silly, I think, because you basically can’t sue people in war. There are a few exceptions: governments that purchase weapons systems can sue the manufacturers, and that’s the sense in which there is an ability to do that, but even most of those cases have been largely unsuccessful. Generally, those kinds of lawsuits are based on contracts and not on the actual performance of, or damages caused by, an actual system. So you don’t really have that entire regulatory mechanism. If you have a government that’s concerned about not harming civilians and not bombing the wrong buildings and things like that, then of course they’re incentivized to put pressure on manufacturers to build systems that perform well, and that’s one of the drivers of that technology.

But it’s a much weaker force. Think about what the engineers in a car company are weighing in terms of safety, and the bottom line for their company if they make a product that causes accidents, versus how that’s thought about in a defense company, where certainly they’re trying to protect civilians and ensure that systems work correctly, but they don’t have that enormously powerful economic concern about future lawsuits. The idea that the technology is going to be driven by similar forces just doesn’t really apply. So that’s a big concern, I think, for the development of autonomous systems in the military sphere.

Ariel: Is there a worry or a risk that this sort of — I don’t know if it’s lack of liability, maybe it’s just whether or not we can trust the systems that are being built — but is there an increased risk of war crimes as a result of autonomous weapons, either intentionally or accidentally?

Peter: Yeah, I mean, the idea that there’s an increased risk of war crimes is kind of an interesting question, because the answer is simultaneously yes and no. What these autonomous systems actually do is diminish or remove, or put a distance between accountability of humans and their actions, or the consequences of their actions. So if you think of the autonomous system as a sort of intermediary between humans and the effects of their actions, there’s this sort of accountability gap that gets created. A system could go and do some horrendous act, like devastate a village and all the civilians in the village, and then we say, “Ah, is this a war crime?” And under international law as it stands, you’d have to prove intention, which is usually the most difficult part of war crimes tribunals, being able to actually demonstrate in court that a commander had the intention of committing some genocidal act or some war crime.

And you can build various forms of evidence for that. Now, if you send out an autonomous system, you may not even know what that system is really gonna do, and you don’t need to know exactly what it’s going to do when you give it its orders, so it becomes very easy to distance yourself legally from what that system does in the field. Maybe you suspect it might do something terrible, and that’s what you really want, but it would be very easy then to cover up your true intentions using these kinds of systems.

On the one hand, it would be much easier to commit war crimes. On the other hand, it’ll be much more difficult to prosecute or hold anybody accountable for war crimes that would be committed by autonomous weapons.

Ariel: You’ve also been producing some open letters this summer. There was one for academics calling on Google to stop work on Project Maven and … I’m sorry, you had another one… what was that one about?

Peter: The Amazon face recognition.

Ariel: Right. Right. Yeah. I was hoping you could talk a little bit about what you see as the role of academics and corporations and civil society in general in this debate about lethal autonomous weapons.

Peter: I think civil society has a crucial role to play in the debate on lethal autonomous weapons, as it does in a broad range of humanitarian disarmament issues. In the case of autonomous weapons, it’s a technology that’s moving very quickly, and militaries are still a little bit unsure of exactly how they’re going to use it, but they’re very excited about it and they’re putting lots of research investment into new applications and trying to find new ways of using it. I think that’s exciting from a research perspective, but it’s very concerning from a humanitarian and human rights perspective, because again, it’s not clear what kind of legal accountability will be around these systems. It’s not clear what kind of safety, control, and testing might be imposed on these systems, and it also seems quite clear that these systems are ready-made for arms races and for global and regional military destabilization, where competitors acquiring these systems has the potential to lead to conflict because of that destabilization itself. Then of course there’s the rapid proliferation.

So, in terms of civil society’s role, I think what we’ve been doing primarily is voicing the general concern of the broad public, which, globally and within the specific countries we’ve surveyed, is largely opposed to these systems. Of course, the proponents say that’s just because people have seen too many sci-fi movies and these things are gonna be just fine, but I don’t think that’s really the case. I think there are some genuine fears and concerns that need to be addressed. We’ve also seen the involvement of a number of tech companies that are developing artificial intelligence, machine learning, robotics, and things like that.

And I think their interest and concern in this issue is twofold. We have companies like Clearpath Robotics, which is the largest robotics company in Canada and also the largest supplier of robots to the Canadian military, whose engineers organized together to say that they do not want their systems to be used as autonomous weapons platforms and they will not build them, but that they also want to support the international campaign to ensure that governments don’t acquire their robots and then weaponize them. They’re building search and rescue robots and bomb disposal robots. There’s a similar movement amongst academics in artificial intelligence and robotics who have really spent their life’s work developing these fundamental technologies and are deeply concerned that the first, and perhaps last, application of this is going to be autonomous weapons, that the public will turn against artificial intelligence and robotics because of that, and that these systems are genuinely scary and we shouldn’t really be entrusting human lives, or the decision to take human lives, to these automated systems.

These technologies have all kinds of great practical social applications, and we should be pursuing those while setting aside, and really prohibiting, the use of these systems for autonomous targeting in the military context. And now I think we’re seeing more movement from the big companies, particularly with this open letter that we were a part of around Google and Project Maven. Project Maven is a Pentagon project that aims at analyzing the many thousands of hours of drone footage that US military drones are collecting over Afghanistan and Iraq and the various places where they’re operating, and at using machine learning to automate the identification of objects of interest, to save time for the human sensor analysts who have to pore through these images and try to determine what they’re seeing.

And that in and of itself doesn’t seem too terrible, right? You’re just scanning through imagery. But of course, this is really the first step toward an automated target recognition system for drones. Currently, drones require human operators to interpret the imagery, decide that something should be targeted with a weapon, and then actually target and fire the weapon; that whole process is still controlled by humans. But if you wanted to automate it, the first thing you’d have to do is automate that visual analysis piece. Project Maven is trying to do exactly that, and to do it on a really big scale.

The other kind of issue from the perspective of a labor and research organization is that the Pentagon really has trouble, I think, attracting talent. There’s a really strong demand for artificial intelligence researchers and developers right now, because there’s so many applications and there’s so much business opportunity around it. It actually turns out the military opportunities are not nearly as lucrative as a lot of the other business applications. Google, and Amazon, and Facebook, and Microsoft can offer enormous salaries to people with PhDs in machine learning or even just masters degrees or some experience in systems development. And the Pentagon can’t compete with that on government salaries, and I think they’re even having trouble getting certain contracts with these companies. But when they get a contract with a company like Google, then they’re able to get access to really the top talent in artificial intelligence and their Cloud research groups and engineering, and also the sort of enormous capacity computationally of Google that has these massive data centers and processing capabilities.

And then you’re also getting … in some ways, Google is a company that collects data about people all over the world every day, all the time. Every Google search that you do, and there are millions of Google searches per second or something in the world, so they also have the potential of applying the data that’s collected on the public in all these complicated ways. It’s really kind of a unique company in these respects. I think as a company that collects that kind of private data, they also have a certain obligation to society to ensure that the data isn’t used in detrimental ways, and siding with a single military in the world and using data that might be coming from users in countries where that military is operating, I think that’s deeply problematic.

We as academics kind of lined up with the engineers and researchers at Google who were already protesting Google’s involvement in this project. They were concerned about their involvement in the drone program. They were concerned about how this could be applied to autonomous weapons systems in the future. And they were just generally concerned with Google’s attempts to become a major military contractor and not just selling a simple service, like a word processor or a search, which they do anyway, but actually developing customized systems to do military operations, analyze these systems and apply their engineering skills and resources to that.

So, we really joined together as academics to support those workers. The workers passed around an open letter and then we passed around our letter; the Google employees’ letter received over 4,000 signatures and our letter from academics received almost 1,200, just a few shy. So we really got a lot of mobilization and awareness, and then Google agreed not to renew that contract. They’re not dropping it, they’re gonna continue it until the end of the year, but they have said that they will not renew it in the future.

Ariel: Is there anything else that you think is important to mention?

Peter: I wrote a piece last night for a report on human dignity, so I can just give you a little blurb about that. I think the other kind of interesting ethical question around autonomous systems is this question of the right to human dignity and whether autonomous weapons, or allowing robots to kill people, would violate human dignity. I think some people have a very simplistic notion of human dignity, that it’s just some sort of aura or property that hangs around people and can be violated, but in fact I believe human dignity is a relation between people. This is a more Kantian view: human dignity means that you’re respected by others as a human. Others respect your rights, which doesn’t mean they can never violate them, but they have to have reasons and justifications that are sound in order to override your rights.

And in the case of human dignity, of course you can die in many terrible ways on a battlefield, but the question is whether the decision to kill you is justified, and if it’s not, then it’s sort of an arbitrary killing. That means there are no reasons for it, and if you look at the writings of the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, he’s written some interesting papers on this, arguing essentially that all killing by autonomous weapons would be arbitrary in this kind of legal sense, because these systems don’t have access to reasons for killing you, and so cannot know that it’s actually justified to use lethal force in a given situation.

And that’s because they’re not reasoning in the same way that we are, but it’s also because they’re not human moral agents, and it’s important in a sense that they be human, because human dignity is something that we all lose when it’s violated. So, if you look at slavery or you look at torture, it’s not simply the person who’s being tortured or enslaved who is suffering, though of course they are, but it is in fact all of us who lose a certain value of human life and human dignity by the very existence of slavery or torture, and the acceptance of that.

In a similar way, if we accept the killing of humans by machines, then we’re really diminishing the nature of human dignity and the value of human life, in a broad sense that affects everybody, and I think that’s really true, and I think we really have to think about what it means to have human control over these systems to ensure that we’re not violating the rights and dignity of people when we’re engaged in armed conflict.

Ariel: Excellent. I think that was a nice addition. Thank you so much for taking the time to do this today.

We covered a lot of ground in these interviews, and yet we still only scratched the surface of what’s going on in the debate on lethal autonomous weapons. If you want to learn more, please visit autonomousweapons.org and visit the research and reports page. On the FLI site, we’ve also addressed some of the common arguments we hear in favor of lethal autonomous weapons, and we explain why we don’t find those arguments convincing. And if you want to learn even more, of course there’s the Campaign to Stop Killer Robots website, ICRAC has a lot of useful information on their site, and Article 36 has good information, including their report on meaningful human control. And if you’re also concerned about a future with lethal autonomous weapons, please take a moment to sign the pledge. You can find links to the pledge and everything else we’ve talked about on the FLI page for this podcast.

I want to again thank Paul, Toby, Richard, Mary, Bonnie and Peter for taking the time to talk about their work with LAWS.

If you enjoyed this show, please take a moment to like it, share it and maybe even give it a good review. I’ll be back again at the end of next month discussing global AI policy. And don’t forget that Lucas Perry has a new podcast on AI value alignment, and a new episode from him will go live in the middle of the month.

[end of recorded material]

Machine Reasoning and the Rise of Artificial General Intelligences: An Interview With Bart Selman

From Uber’s advanced computer vision system to Netflix’s innovative recommendation algorithm, machine learning technologies are nearly omnipresent in our society. They filter our emails, personalize our newsfeeds, update our GPS systems, and drive our personal assistants. However, despite the fact that such technologies are leading a revolution in artificial intelligence, some would contend that these machine learning systems aren’t truly intelligent.

The argument, in its most basic sense, centers on the fact that machine learning evolved from theories of pattern recognition and, as such, the capabilities of such systems generally extend to just one task and are centered on making predictions from existing data sets. AI researchers like Rodney Brooks, a former professor of Robotics at MIT, argue that true reasoning, and true intelligence, is several steps beyond these kinds of learning systems.

But if we already have machines that are proficient at learning through pattern recognition, how long will it be until we have machines that are capable of true reasoning, and how will AI evolve once it reaches this point?

Understanding the pace and path that artificial reasoning will follow over the coming decades is an important part of ensuring that AI is safe, and that it does not pose a threat to humanity; however, before it is possible to understand the feasibility of machine reasoning across different categories of cognition, and the path that artificial intelligences will likely follow as they continue their evolution, it is necessary to first define exactly what is meant by the term “reasoning.”

 

Understanding Intellect

Bart Selman is a professor of Computer Science at Cornell University. His research is dedicated to understanding the evolution of machine reasoning. According to his methodology, reasoning is described as taking pieces of information, combining them together, and using the fragments to draw logical conclusions or devise new information.

Sports provide a ready example of what machine reasoning is really all about. When humans see soccer players on a field kicking a ball about, they can, with very little difficulty, ascertain that these individuals are soccer players. Today’s AI can also make this determination. However, humans can also see a person in a soccer outfit riding a bike down a city street and still infer that the person is a soccer player. Today’s AIs probably wouldn’t be able to make this connection.

This process of taking information that is known, uniting it with background knowledge, and making inferences regarding information that is unknown or uncertain is a reasoning process. To this end, Selman notes that machine reasoning is not about making predictions; it’s about using logical techniques (like the abductive process mentioned above) to answer a question or form an inference.
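As a rough illustration of the distinction Selman draws, the sketch below contrasts pattern recognition with a simple rule-based reasoner. It is a minimal, hypothetical example in Python; the facts, the rule, and the forward_chain helper are invented for illustration and are not drawn from Selman’s own work.

    # Known observations about a hypothetical person, "alex".
    facts = {"wears_soccer_uniform(alex)", "rides_bike(alex)"}

    # Background knowledge: anyone wearing a soccer uniform is a soccer player,
    # regardless of where they are observed (on a field, on a bike, etc.).
    rules = [("wears_soccer_uniform", "soccer_player")]

    def forward_chain(facts, rules):
        """Repeatedly apply rules to known facts until nothing new can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                for fact in list(derived):
                    if fact.startswith(premise + "("):
                        # e.g. "wears_soccer_uniform(alex)" -> "soccer_player(alex)"
                        new_fact = conclusion + fact[fact.index("("):]
                        if new_fact not in derived:
                            derived.add(new_fact)
                            changed = True
        return derived

    print(forward_chain(facts, rules))
    # The derived set now also contains 'soccer_player(alex)'.

A classifier trained only on images of players on a field has no comparable way to combine a novel observation with background knowledge; the crude forward-chaining loop above, by contrast, derives the new fact by combining the two.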

Since humans typically reason not through pattern recognition and synthesis but by using logical processes like induction, deduction, and abduction, Selman asserts that machine reasoning is a form of intelligence that is more like human intelligence. He continues by noting that creating machines endowed with more human-like reasoning processes, and breaking away from traditional pattern recognition approaches, is the key to making systems that not only predict outcomes but also understand and explain their solutions. However, Selman notes that reaching human-level AI is also the first step toward attaining super-human levels of cognition.

And due to the existential threat this could pose to humanity, it is necessary to understand exactly how this evolution will unfold.

 

The Making of a (super)Mind

It may seem like truly intelligent AI is a problem for future generations. Yet the consensus among AI experts is that rapid progress is already being made in machine reasoning. In fact, many researchers assert that human-level cognition will be achieved across a number of metrics in the next few decades. Still, questions remain regarding how AI systems will advance once artificial general intelligence is realized. A key question is whether these advances can accelerate further and scale up to super-human intelligence.

This process is something that Selman has devoted his life to studying. Specifically, he researches the pace of AI scalability across different categories of cognition and the feasibility of super-human levels of cognition in machines.

Selman states that attempting to make blanket statements about when and how machines will surpass humans is a difficult task, as machine cognition is disjointed and does not draw a perfect parallel with human cognition. “In some ways, machines are far beyond what humans can do,” Selman explains, “for example, when it comes to certain areas in mathematics, machines can take billions of reasoning steps and see the truth of a statement in a fraction of a second. The human has no ability to do that kind of reasoning.”

However, when it comes to the kind of reasoning mentioned above, where meaning is derived from deductive or inductive processes that are based on the integration of new data, Selman says that computers are somewhat lacking. “In terms of the standard reasoning that humans are good at, they are not there yet,” he explains. Today’s systems are very good at some tasks, sometimes far better than humans, but only in a very narrow range of applications.

Given these variances, how can we determine how AI will evolve in various areas, and how it will accelerate after general human-level AI is achieved?

For his work, Selman relies on computational complexity theory, which has two primary functions. First, it can be used to characterize the efficiency of an algorithm used for solving instances of a problem. As Johns Hopkins’ Leslie Hall notes, “broadly stated, the computational complexity of an algorithm is a measure of how many steps the algorithm will require in the worst case for an instance [of a problem] of a given size.” Second, it is a method of classifying tasks (computational problems) according to their inherent difficulty. These two features provide us with a way of determining how artificial intelligences will likely evolve by offering a formal method of determining the easiest, and therefore most probable, areas of advancement. It also provides key insights into the speed of this scalability.
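To make Hall’s definition concrete, here is a small, purely illustrative Python sketch (the choice of search algorithms is ours, not Selman’s) comparing worst-case step counts for two ways of searching a sorted list of n items:

    import math

    def worst_case_steps(n):
        """Worst-case comparisons needed to find an item in a sorted list of n items."""
        linear = n                          # linear search: may have to check every item
        binary = math.ceil(math.log2(n))    # binary search: halves the remaining range each step
        return linear, binary

    for n in (1_000, 1_000_000):
        linear, binary = worst_case_steps(n)
        print(f"n = {n}: linear search needs up to {linear} steps, binary search up to {binary}")

Both of these tasks are easy in complexity-theoretic terms; the point is only that the worst-case step count, expressed as a function of input size, is the yardstick the theory uses to rank how hard problems and algorithms are.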

Ultimately, this work is important, as the abilities of our machines are fast-changing. As Selman notes, “The way that we measure the capabilities of programs that do reasoning is by looking at the number of facts that they can combine quickly. About 25 years ago, the best reasoning engines could combine approximately 200 or 300 facts and deduce new information from that. The current reasoning engines can combine millions of facts.” This exponential growth has great significance when it comes to the scale-up to human levels of machine reasoning.

As Selman explains, given the present abilities of our AI systems, it may seem like machines with true reasoning capabi