
Podcast: Six Experts Explain the Killer Robots Debate

Published
July 31, 2018

Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated.

In this month's podcast, Ariel spoke with experts from a variety of perspectives on the current status of lethal autonomous weapons systems (LAWS), where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre (3:40), artificial intelligence professor Toby Walsh (40:51), Article 36 founder Richard Moyes (53:30), Campaign to Stop Killer Robots founder Mary Wareham and Bonnie Docherty of Human Rights Watch (1:03:38), and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro (1:32:39).

Topics discussed in this episode include:

  • the history of semi-autonomous weaponry in World War II and the Cold War (including the Tomahawk Anti-Ship Missile)
  • how major military powers like China, Russia, and the US are incorporating AI into their weapons today
  • why it's so difficult to define LAWS and draw a line in the sand
  • the relationship between LAWS proliferation and war crimes
  • FLI's recent pledge, where over 200 organizations and over 2800 individuals pledged not to assist in developing or using LAWS
  • comparing LAWS to blinding lasers and chemical weapons
  • why there is hope for the UN to address this issue

You can listen to the podcast above, and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, Google Play, and Stitcher.

If you work with artificial intelligence in any way, and if you believe that the final decision to take a life should remain a human responsibility rather than falling to a machine, then please consider signing this pledge, either as an individual or on behalf of your organization.

Transcript

Ariel: Hello. I'm Ariel Conn with the Future of Life Institute. As you may have seen, this month we announced a pledge against lethal autonomous weapons. The pledge calls upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. But in the meantime, signatories agree that they will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. At the time of this recording, over 220 AI-related organizations and over 2800 individuals have signed. Signatories include Google DeepMind and its founders, University College London, the XPRIZE Foundation, Clearpath Robotics, Silicon Valley Robotics, the European Association for Artificial Intelligence -- and many other AI societies and organizations from around the world. Additionally, people who signed include Elon Musk, Google's head of research and machine learning Jeff Dean, many other prominent AI researchers, such as Stuart Russell, Toby Walsh, Meredith Whittaker, Anca Dragan, Yoshua Bengio, and even politicians, like British MP Alex Sobel.

But why? We’ve all seen the movies and read the books about AI gone wrong, and yet most of the signatories agree that the last thing they’re worried about is malicious AI. No one thinks the Terminator is in our future. So why are so many people in the world of AI so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated. For the longer answer, we have this podcast.

For this podcast, I spoke with six of the leading experts in autonomous weapons. You'll hear from defense expert Paul Scharre, who recently released the book Army of None: Autonomous Weapons and the Future of War. We discuss the history of autonomous and semi-autonomous weaponry, which dates back to WWII, as well as some of the more nuanced issues today that often come up for debate. AI researcher Toby Walsh looks at lethal autonomous weapons from a more technical perspective, considering the impact of autonomous weapons on society, and also the negative effects they could have for AI researchers if AI technology is used to kill people. Richard Moyes, with Article 36, coined the phrase "meaningful human control," which is what much of the lethal autonomous weapons debate at the United Nations now focuses on. He describes what that means and why it's important. Mary Wareham and Bonnie Docherty joined from Human Rights Watch, and they're also with the Campaign to Stop Killer Robots. They talk about the humanitarian impact of lethal autonomous weapons, and they explain the process going on at the United Nations today as efforts move toward a ban. Finally, my interviews end with Peter Asaro of the International Committee for Robot Arms Control, who is also with the Campaign to Stop Killer Robots. Peter considers the issue of lethal autonomous weapons from an ethical and legal standpoint, looking at the impact killer robots could have on everything from human dignity to war crimes.

But I’ll let each of them introduce themselves better when their interviews begin. And because this podcast is so long, in the description, we’ve included the times that each interview starts, so that you can more easily jump around or listen to sections as you have time.

One quick, final point to mention is that everyone was kind enough to join at the last minute, which means not all of the audio is perfect. Most of it is fine, but please bear with us if you can hear people chattering in the background or any other similar imperfections.

And now for the first interview with Paul Scharre.

Paul: I'm Paul Scharre. I'm a senior fellow and director of the Technology and National Security Program at the Center for a New American Security. We're a Washington, D.C.-based national security think tank that's an independent bipartisan research organization.

Ariel: You have a background in weaponry. You were in the military, correct?

Paul: Yeah. I served about five and a half years in the US Army as a Ranger and a civil affairs team leader. I did multiple tours to Iraq and Afghanistan, and then I worked for several years after that in the Pentagon in the Office of the Secretary of Defense, where I actually worked on policy issues for emerging weapons technologies, including autonomous weapons.

Ariel: Okay. One of the very first questions that I want to start with is, how do you define an autonomous weapon?

Paul: That's sort of the million-dollar question in a lot of ways. I don't want to imply that all of the debate around autonomous weapons is a misunderstanding of semantics. That's not true at all. There are clearly people who have very different views on what to do about the technology, but it is a big complicating factor because I have certainly seen, especially at the United Nations, very heated disagreements where it's clear that people are just talking past each other in terms of what they're envisioning.

When you say the term "autonomous weapon," it conjures all sorts of different ideas in people's minds. Some people envision super-advanced intelligent machines that have human-like or superhuman intelligence, something like a Terminator or a Cylon from science fiction. Other people envision something that might be very simple and doable today, like a Roomba with a gun on it.

Both of those things are probably really bad ideas but for very different kinds of reasons. And I think that that's a complicating factor. So one of the dimensions of autonomy that people tend to get fixated on is how smart the weapon system is. I actually don't think that that's a useful way to define an autonomous weapon. Sometimes I'll hear people say things like, "Well, this is not an autonomous weapon. This is an automated weapon because of the level of sophistication." I don't think that's very helpful.

I think it's much better, actually, to focus on the functions that the weapon is performing on its own. This is similar to the approach that the International Committee of the Red Cross has, which focuses on critical functions in weapons systems. The way that I define it in my book is I basically define an autonomous weapon as one that can complete an entire engagement cycle on its own. That is to say, it has all of the functionality needed to search for targets, to identify them, to make a decision about whether or not to attack them, and then to start the engagement and carry through the engagement all by itself.

So there's no human in this loop, this cognitive loop, of sensing and deciding and acting out on the battlefield all by itself. That defines it in such a way that there are some things -- and this is where it gets into some of the tricky definitional issues -- there are weapons that have been around since World War II that I would call semi-autonomous weapons that have some degree of autonomy, that have some sensors on board. They can detect the enemy, and they can make some rudimentary kinds of actions, like maneuvering towards the enemy.

Militaries generally call these "homing munitions." They're torpedoes or air-to-air missiles or surface-to-air, air-to-ground missiles. They have sensors on them that might use sonar or radar or acoustic signatures. They can sense that the enemy is there, and then they use those sensors to maneuver towards the enemy to strike the target. These are generally launched by people at targets where the human knows there's a target there.

These were originally invented by the Germans in World War II to hit Allied ships in the submarine wars in the Atlantic. You can imagine there's a technical challenge in trying to hit a moving target like a ship. In a submarine, you're trying to fire a torpedo at it and you might miss. So the first versions of these had microphones that could listen to the sound of the propellers from Allied ships and then steer towards where the sound was greatest so they could hit the ship.

In those cases -- and this is still the case in the ones that are used today -- humans see the target or have some indication of the target, maybe from a radar or sonar signature. And humans say, "There's something out there. I want to launch this weapon to go attack it." Those have been around for 70 years or so. I bring them up because there are some people who sometimes say, "Well, look. These autonomous weapons already exist. This is all a bunch of hullaballoo about nothing."

I don't think that's really true. I think that a lot of the weapons systems that you see concern about going forward, would be things that will be quite qualitatively different, things that are going out over a wide area and searching for targets on their own, where humans don't necessarily know where the enemy is. They might have some suspicion that the enemy might be in this area at this point in time, but they don't know, and they launch the weapon to then find the enemy. And then, without radioing back to a human for approval, that weapon is delegated the authority to attack on its own.

By and large, we don't see weapons like this in existence today. There are some exceptions. The Israeli Harpy drone or loitering munition is an exception. There were a couple experimental US systems in the '80s and '90s that are no longer in service. But this isn't something that is in widespread use. So I do think that the debate about where we're going in the future is at least a very valid one, and we are on the cusp of, potentially, things that will be quite different than anything we've seen before in warfare.

Ariel: I want to ask a quick question about the Harpy and any other type of weapon similar to that. Have those actually been used to kill anyone yet, to actually identify a target and kill some enemy? Or are they still just being used for identifying and potentially targeting people, but it's still a human who is making the final decision?

Paul: That's a great question. To the best of my knowledge, the Israeli Harpy has not been used in its fully autonomous mode in combat. So a couple things about how the Harpy functions. First of all, it doesn't target people per se; it targets radars. Now, having said that, if a person is standing next to a radar that it targets, you're probably going to be killed. But it's not looking for individual persons. It's looking for radar signatures and then zeroing in on them.

I mention that as important for two reasons. One, in some of the concerns that people raise about autonomous weapons, it can sometimes be unclear, at least to a listener, whether they are concerned specifically about weapons that would target humans or about any weapon that might target anything on the battlefield. So that's one consideration.

But, also, from sort of a practicality standpoint, it is easier to identify radar signatures more accurately than people who, of course, in many modern conflicts are not wearing uniforms or insignia or the things that might clearly identify them as a combatant. So a lot of the issues around distinction and accurately discriminating between combatants and noncombatants are harder for weapons that would target people.

But the answer to the question is a little bit tricky because there was an incident a couple years ago where a second-generation version of the Harpy called the Harop, or Harpy II, was used in the Nagorno-Karabakh region in the conflict there between Azerbaijan and Armenia. I think it was used by Azerbaijan and used to attack what looked like -- I believe it was a bus full of fighters.

Now, by all accounts, the incident was one of actual militants being targeted -- combatants -- not civilians. But here was a case where it was clearly not a radar. It was a bus that would not have been emitting radar signatures. Based on my understanding of how the technology works, the Harop, the Harpy II, has a human-in-the-loop mode. The first-generation Harpy, as far as I understand, is all autonomous. The second-generation version definitely has a human-in-the-loop mode. It looks like it's not clear whether it also has an autonomous version.

In writing the book, I reached out to the manufacturer for more details on this, and they were not particularly forthcoming. But in that instance, it looks like it was probably directed by a human, that attack, because as far as we know, the weapon does not have the ability to autonomously target something like a bus.

Ariel: Okay.

Paul: That's a really long-winded answer. This is what actually makes this issue super hard sometimes because they depend a lot on the technical specifications of the weapon, which a) are complicated and b) are not always very transparent. Companies are not always very transparent publicly about how their weapons systems function.

One can understand why that is. They don't want adversaries to come up with methods of fooling them and countermeasures. On the other hand, for people who are interested in understanding how companies are pushing the bounds of autonomy, that can be very frustrating.

Ariel: One of the things that I really like about the way you think is that it is very nuanced and takes into account a lot of these different issues. I think it's tempting and easy to just pick a side -- and I don't want to make it sound like I'm being lazy, because I personally support banning lethal autonomous weapons -- but I think it's a really complicated issue, and so I'd like to know more. What are your thoughts on a ban?

Paul: There are two areas on this topic where I think it gets really complicated and really tricky. Say you start with a broad principle that someone might have, something like, "Humans should be making decisions about lethal force," or, "Only humans should be deciding to take human life." There are two areas where, when you try to put that into practice, you really run into some serious challenges.

And I'm not saying that makes it impossible, but to find answers you have to really roll up your sleeves and get into some of the details of the issue. One is, how do you translate a broad concept like that into technical specifications of a weapon? If you start with an idea and say, "Well, only humans should be responsible for taking human life," that seems like a reasonable idea.

How do you translate that into technical guidance that you give weapons developers over what they can and cannot build? That's actually really hard, and I say that as having done this when I worked at the Pentagon and we tried to write guidance that was really designed to be internal to the US Defense Department and to give guidance to defense companies and to military researchers on what they could build.

It was hard to translate some of these abstract concepts like, "Humans should decide the targets," to technical ideas. Well, what does that mean for how long the weapon can loiter over a target area or how big its sensor field should be or how long it can search for? You have to try to figure out how to put those technical characteristics into practice.

Let me give you two examples of a weapon to illustrate how this can be challenging. You might imagine a weapon today where a human says, "Ah, here's an enemy target. I want to take that target out." They launch a missile, and the missile flies towards the target. Let's say it's a tank. The missile uses a millimeter-wave seeker on the tank. It's an active seeker: it sends out millimeter-wave radar signals to see the tank, illuminate it and sort of highlight it from the background, and then zero in on the tank, because the tank's moving and they need the sensor to hit the moving tank.

If the weapon and the sensor can only search for a very limited space in time and geography, then you've constrained the autonomy enough that the human is still in control of what it's targeting. But as you start to open that aperture up, and maybe it's no longer that it's searching for one minute in a one-kilometer area, it's now searching for eight hours over 1,000 kilometers, now you have a completely different kind of weapon system. Now it's one that's much more like ... I make the analogy in the book of the difference between a police dog that might be set loose to go chase down a suspect, where the human says, "There's the suspect. Dog, go get them," versus a mad dog roaming the streets attacking anyone at will.

You have two different paradigms, but where do you draw the line in between? And where do you say, "Well, is 1 minute of loiter time, is it 2 minutes, is it 10 minutes, is it 20 minutes? What's the geographic area?" It's going to depend a lot on the target, the environment, what kind of clutter is in the environment. What might be an appropriate answer for tanks in an urban combat setting might be very different than naval ships on the high seas or submarines underwater or some other target in a different environment.

So that's one challenge, and then the other challenge, of course, which is even more contested, is just sort of, "What's the feasibility of a ban and of getting countries to come together to actually agree to things?" Because, ultimately, countries have militaries because they don't trust each other. They don't trust international law to constrain other countries from aggressive action. So regardless of whether you favor one country or another -- whether you consider yourself American or Russian or Chinese or French or Israeli or Guinean or something else -- countries in general have militaries because they don't trust others.

That makes it hard. Even if you get countries to sign up to a ban, there's a major challenge in getting them to actually adhere to it, because countries are always fearful about others breaking these rules and cheating and getting the upper hand.

Ariel: We have had other bans. We've banned biological weapons, chemical weapons, landmines, space weapons. Do you see this as different somehow?

Paul: Yeah. So one of the things I go through in my book is as comprehensive a list as I could come up with of all of the attempts to regulate and control emerging technologies, dating back to antiquity -- back to ancient Indian prohibitions in the Hindu Laws of Manu or the Mahabharata on poisoned and barbed arrows and fire-tipped weapons.

It's really a mixed bag. I like to say that there are enough examples of both successes and failures for people to pick whichever examples they want for whatever side they're arguing, because there are many examples of successful bans. And I would say they're largely successful; there are some isolated incidents of people not adhering to them. Very few bans are universally adhered to. We certainly have Bashar al-Assad using chemical weapons in Syria today.

But bans that have been largely successful, or that have at least had a major effect in reducing these weapons, include landmines, cluster munitions, blinding lasers, biological weapons, chemical weapons, using the environment as a weapon, placing nuclear weapons on the seabed or in orbit, placing any weapons of any kind on the moon or in Antarctica, various regulations during the Cold War on anti-ballistic missile systems and intermediate-range nuclear ground-launched missiles, and then, of course, limits on the numbers of nuclear weapons.

So there are a lot of successful examples. Now, on the other side of the coin, there are failed attempts to ban, famously, the crossbow, and that's often brought up in these conversations. But in more recent memory, attempts of the 20th century to ban and regulate aircraft and air-delivered weapons, submarine warfare, of course the failure of attempts to ban poison gas in World War I. So there are examples on other sides of the ledger as well.

One of the things that I try to do in my book is get beyond sort of just picking examples that people like, and say, "Well, is there a pattern here? Are there some common conditions that make certain bans more likely to succeed or fail?" There's been great scholarship done by some others before me that I was able to build on. Rebecca Crootof and Sean Welsh have done work on this trying to identify some common patterns.

If you want to look at this analytically, that's a fruitful place to start: asking, "Why do some bans succeed and some fail?" And then, when you're looking at any new technology, whether it's autonomous weapons or something else, where does it fall on this spectrum, and what does that suggest about the feasibility of certain attempts at regulation versus others?

Ariel: Can you expand on that a little bit? What have you found, or what have they found in terms of patterns for success versus failure for a ban?

Paul: I think there's a couple criteria that seem to matter. One is the clarity of a ban is really crucial. Everyone needs to have a clear agreement on what is in and what is out. The simpler and clearer the definition is, the better. In some cases, this principle is actually baked into the way that certain treaties are written. I think the ban on cluster munitions is a great example of this, where the Cluster Munition Convention has a very, very simple principle in the treaty. It says, "Cluster munitions are banned," full stop.

Now, if you go into the definition, there's all sorts of nuance about what constitutes a cluster munition or not. That's where they get into some of the horse trading with countries ahead of time. But the principle is no cluster munitions. The archetype of this importance of clarity comes in the success of restraint among European powers in using chemical weapons against each other in World War II. All sides had them. They didn't use them on the battlefield against each other. Of course, Germany used them in the Holocaust, and there were some other isolated incidents in World War II of use against those who didn't have them.

But the European powers all had tens of thousands of tons of mustard gas stockpiled, and they didn't use it against each other. At the outset of World War II, there were also attempts to restrain aerial bombing of cities. It was widely viewed as reprehensible. It was also illegal under international law at the time, and there were attempts on all sides to refrain from that. At the outset of the war, in fact, they did, and Hitler actually put a directive to the Luftwaffe. I talk about this a little bit in the book, although unfortunately, a lot of the detail on some of this stuff got cut for space, which I was disappointed by.

Hitler issued a directive to the Luftwaffe saying that they were not to engage in bombing of civilian targets in Britain -- terror bombing -- and were only to bomb military targets, not because he was a humanitarian, but because he was concerned about Britain retaliating. This attempt at restraint failed when, in the middle of the night, a German bomber strayed off course and bombed central London by mistake. In retaliation, Churchill ordered the bombing of Berlin. Hitler was incensed and gave a speech the following day announcing the launch of the London Blitz.

So here's an example where there was some slippage in the principle of what was allowed and what was not, and so you had a little bit of accidental crossing of the line in conflict. So the sharper and clearer this line is, the better. You could extrapolate from that and say that if, for example, the World War II powers had agreed that they could only use poison gas against military targets but not against civilian targets, it likely would have quickly escalated to civilian targets as well.

In the context of autonomous weapons, that's one of the reasons why you've seen some advocates of a ban say that they don't support what is sometimes called a partition treaty, which is something that would create a geographic partition that would say you could only use autonomous weapons outside of populated areas. What some advocates of a ban have said is, "Look, that's never going to hold in combat." That sounds good. I've heard some international humanitarian lawyers say, "Oh, well, this is how we solve this problem." But in practice, I agree that's not likely to be very feasible.

So clarity's important. Another factor is the relative military value of a weapon versus its perceived horribleness. I think, again, a good case in point here is the difference between the international community's success in getting most countries to give up chemical weapons and the lack of success on nuclear weapons. Nuclear weapons by any reasonable measure are far more terrible in terms of their immediate and long-lasting effects on human life and the environment, but they have much more military value, at least perceived military value. So countries are much more reluctant to give them up.

So that's another factor, and then there are some other ones that I think are fairly straightforward but also matter, things like the access to the weapon and the number of actors that are needed to get agreement. If only two countries have the technology, it's easier to get them on board than if it's widely available and everyone needs to agree. But I think those are some really important factors that are significant.

One of the things that actually doesn't matter that much is the legality of a weapons treaty. I'm not saying it doesn't matter at all, but you see plenty of examples of legally binding treaties that are violated in wartime, and you see some examples, not a ton, but some examples of mutual restraint among countries when there is no legally binding agreement or sometimes no agreement at all, no written agreement. It's sort of a tacit agreement to refrain from certain types of competition or uses of weapons.

All of those, I think, are really important factors when you think about the likelihood of a ban actually succeeding on any weapons -- not just autonomous weapons, any weapons -- but the likelihood of a ban actually succeeding in wartime.

Ariel: I'm probably going to want to come back to this, but you mentioned something that reminded me of another question that I had for you. And that is, in your book, you mentioned ... I don't remember what the weapon was, but it was essentially an autonomous weapon that the military chose not to use and then ended up giving up because it was so costly, and ultimately they didn't trust it to make the right decisions.

I'm interested in this idea of the extent to which we trust the weapons to do whatever it is that they're tasked with if they're in some sort of autonomous mode, and I guess where we stand today with various weapons and whether military will have increasing trust in their weapons in the future.

Paul: The case study I think you're referring to was an anti-ship missile called the Tomahawk anti-ship missile, or TASM, that was in service by the US Navy in the 1980s. That I would classify as an autonomous weapon. It was designed to go over the horizon to attack Soviet ships, and it could fly a search pattern. I think, actually, in the book I included the graphic of the search pattern that it would fly to look for Soviet ships.

The concept was that the way this would work in anti-surface warfare is the Navy would send out patrol aircraft, because they're much faster and have much longer range than ships, and they would scout for enemy ships. The principle in a wartime environment is that a patrol aircraft would find a Soviet ship and then radio the Soviet ship's location back to a destroyer, and the destroyer would launch a missile.

Now, the problem was, by the time the missile got there, the ship would have moved. So the ship would now have what the military would call an area of uncertainty that the ship might be in. They wouldn't have the ability to continuously track the ship, and so what they basically would do was the missile would fly a search pattern over this area of uncertainty, and when it found the ship, it would attack it.

Now, at the time in the 1980s, the technology was not particularly advanced and it wasn't very good at discriminating between different kinds of ships. So one of the concerns was that if there happened to be another kind of ship in the area that was not an enemy combatant, it still might attack it if it was within this search pattern area. Again, it's originally cued by a human who had some indication of something there, but there was enough uncertainty that it flies this pattern on its own. And for that reason I call it an autonomous weapon: there was a great amount of uncertainty about what it might hit and whether it might do so accurately, and once launched, it would find and attack all on its own.

So it was never used, and there was great hesitance about it being used. I interview a retired US Navy officer who was familiar with it at the time, and he talks about that they didn't trust that its targeting was good enough that once they let it loose, that it might hit the right target. Moreover, there was the secondary problem, which is it might hit the wrong target, sort of a false positive, if you will, but it also might miss the Soviet ship, in which case they would have simply wasted a weapons system.

That's another problem that militaries have, which is that missiles are costly. They don't have very many of them in their inventory. Particularly if it's something like a ship or an aircraft, there's only so many they can physically carry on board. So they don't want to waste them for no good reason, which is another practical, operational consideration. So eventually it was taken out of service for what I understand to be all of these reasons -- and that's a little bit of guesswork, I should say, as to why it was taken out of service. I don't have any official documentation saying that, but it's at least, I think, a reasonable assumption about some of the motivating factors based on talking to people who were familiar with it at the time.

One important dynamic that I talk about in the book is that the problem of wasting the weapon is really acute for missiles, which are not recoverable. You launch it, you're not going to get it back. If the enemy's not there, then you've just wasted this thing. That changes dramatically if you have a drone that can return. Now, all of the concerns about it hitting the wrong target and civilian casualties still exist, and those are very much on the minds of at least Western military professionals who are concerned about civilian casualties, and countries that care about the rule of law more broadly.

But this issue of wasting the weapon is less of an issue when you have something that's recoverable and you can send it out on patrol. So I think it's possible, and this is a hypothesis, but it's possible that as we see more drones and combat drones in particular being put into service and intended to be used in contested areas where they may have jammed communications, that we start to see that dynamic change.

To your question about trust, I guess I'd say that there is a lot of concern about trust in these systems, at least among the military professionals that I talk to in the United States and in other allied countries -- NATO countries or Australia or Japan. And in fact -- I'm going to make a broad generalization here, okay? So forgive me -- in general I would say that I see much more confidence in the technology coming from the engineers who are building these systems at military research labs or at defense companies than from the military professionals in uniform who have to push the button and use them. They're a little bit more skeptical of wanting to actually trust these systems and delegate what they see as their responsibility to a machine.

Ariel: What do you envision, if we go down current trajectories, as the future of weaponry, specifically as it relates to autonomous weaponry and potentially lethal autonomous weaponry? And to what extent do you think that international agreements could change that trajectory? And maybe even, to what extent do you think countries might appreciate having guidelines to work within?

Paul: I'll answer that, but let me first make an observation about most of the dialogue in the space. There's sort of two different questions wrapped up in there. What is the likely outcome of a future of autonomous weapons? Is it a good future or a bad future? And then another one is, what is the feasibility of some kind of international attention to control or regulate or limit these weapons? Is that possible or unlikely to succeed?

What I tend to hear is that people on all sides of this issue tend to cluster into two camps. They tend to either say, "Look, autonomous weapons are horrible and they're going to cause all these terrible effects. But if we just all get together, we can ban them. All we need to do is just ... I don't know what's wrong with countries. We need to sit down. We need to sign a treaty and we'll get rid of these things and our problems will be solved."

Other people in the opposite camp say, "Bans don't work, and anyway, autonomous weapons would be great. Wouldn't they be wonderful? They could make war so great, and humans wouldn't make mistakes anymore, and no innocent people would be killed, and war would be safe and humane and pristine." Those things don't necessarily go together. If you sort of imagine a two-by-two matrix, it's really convenient that everybody's views fit so harmoniously into one of those boxes, but reality may not work out that way.

I suspect that, on the whole, autonomous weapons that have no human control over targeting are not likely to make war better. It's hard for me to say that would be a better thing. I can see why militaries might want them in some instances. I think some of the claims about their military value might be overblown, but there are certainly some situations where you can imagine they'd be valuable. I think it remains to be seen how valuable and in what context, but you can imagine that.

But in general, I think that humans add a lot of value to making decisions about lethal force, and we should be very hesitant to take humans away. I also am somewhat skeptical of the feasibility of actually achieving restraint on these topics. I think it's very unlikely the way the current international dynamics are unfolding, which are largely focused on humanitarian concerns and on berating countries and telling them that they are not going to be able to build weapons that comply with international humanitarian law.

I just don't think that's a winning argument. I don't think that resonates with most of the major military powers. So I think that when you look at, actually, historical attempts to ban weapons, that right now what we're seeing is a continuation of the most recent historical playbook, which is that elements of civil society have kind of put pressure on countries to ban certain weapons for humanitarian reasons. I think it's actually unusual when you look at the broader historical arc. Most attempts to ban weapons were driven by great powers and not by outsiders, and most of them centered on strategic concerns, concerns about someone getting an unfair military advantage, or weapons making war more challenging for militaries themselves or making life more challenging for combatants themselves.

Ariel: When you say that it was driven by powers, do you mean you'd have, say, two powerful countries and they're each worried that the other will get an advantage, and so they agree to just ban something in advance to avoid that?

Paul: Yeah. There's a couple time periods that kind of seem most relevant here. One would be a flurry of attempts to control weapons that came out of the Industrial Revolution around the dawn of the 20th century. These included air balloons, or basically air-delivered weapons from balloons or airplanes, submarines, poison gas, what was called fulminating projectiles. You could think of projectiles or bullets that have fire in them or are burning, or exploding bullets, sawback bayonets. There was some restraint on their use in World War I, although it wasn't ever written down, but there seems to be a historical record of some constraint there.

That was one time period, and at the time, that was all driven by the great powers at the time. So these were generally driven by the major European powers and then Japan as Japan sort of came rising on the international stage and particularly was involved as a naval power in the naval treaties. The Washington Naval Treaty is another example of this that attempts to control a naval arms race.

And then, of course, there were a flurry of arms control treaties during the Cold War driven by the US and the USSR. Some of them were bilateral. Many of them were multilateral but driven principally by those two powers. So that's not to say there's anything wrong with the current models of NGOs in civil society pushing for bans, because it's worked and it's worked in landmines and cluster munitions. I'm not sure that the same conditions apply in this instance, in large part because in those cases, there was real humanitarian harm that was demonstrated.

So you could really, I think, fairly criticize countries for not taking action because people were literally being maimed and killed every day by landmines and cluster munitions, whereas here it's more hypothetical. So you see people extrapolating to all sorts of possible futures, with some people saying, "Well, this is going to be terrible," and other people saying, "Oh, wouldn't it be great? It'd be wonderful."

I'm just not sure that the current playbook that some people are using, which is to sort of generate public pressure, will work when the weapons are still hypothetical. And, frankly, they sound like science fiction. There was this recent open letter that FLI was involved in, and I was sitting in the break room at CNN before doing a short bit on this and talking to someone about this. They said, "Well, what are you going on about?" I said, "Well, some AI scientists wrote a letter saying they weren't going to build killer robots."

I think to many people it just doesn't sound like a near-term problem. That's not to say that it's not a good thing that people are leading into the issue. I think it's great that we're seeing people pay attention to the issue and anticipate it and not wait until it happens. But I'm also just not sure that the public sentiment to put pressure on countries will manifest. Maybe it will. It's hard to say, but I don't think we've seen it yet.

Ariel: In terms of considering this to be near term or farther away, do you think military personnel are also in the camp of thinking that it's still far off, or within militaries is it considered a more feasible technology in the near term?

Paul: I think it depends a little bit on how someone defines the problem. If they define an autonomous weapon as having human-level intelligence, then I think there's wide agreement -- well, at least within military circles; I can't speak for listeners of this podcast, who probably have varying views on how far away that might be.

But in military circles, I think there's a perception that that's just not a problem in the near term at all. If what you mean is something that is relatively simple but can go over a wide area and identify targets and attack them, I think many military professionals would say that the technology is very doable today.

Ariel: Have you seen militaries striving to create that type of weaponry? Are we moving in that direction, or do you see this as something that militaries are still hesitating to move towards?

Paul: That's a tricky question. I'll give you my best shot at understanding the answer to that because I think it's a really important one, and part of it is I just don't know, because there's not great transparency in what a lot of countries are doing. I have a fairly reasonable understanding of what's going on in the United States, but much less so in other places, and certainly in authoritarian regimes like Russia and China, it's very hard to glean from the outside what they're doing or how they're thinking about some of these issues.

I'd say that almost all major military powers are racing forward to invest in more robotics and autonomous artificial intelligence. I think for many of them, they have not yet made a decision whether they will cross the line to weapons that actually choose their own targets, to what I would call an autonomous weapon. I think for a lot of Western countries, they would agree that there's a meaningful line there. They might parse it in different ways.

The only two countries that have really put any public guidance out on this are the United States and the United Kingdom, and they actually define autonomous weapon in quite different ways. So it's not clear from that to interpret sort of how they will treat that going forward. US defense leaders have said publicly on numerous occasions that their intention is to keep a human in the loop, but then they also will often caveat that and say, "Well, look. If other countries don't, we might be forced to follow suit."

So it's sort of in the loop for now, but it's not clear how long "for now" might be. It's not clear to me whether countries like Russia and China even see the issue in the same light, whether they even see a line in the same place. And at least some of the public statements out of Russia -- for example, talk of fully roboticized units, or some Russian defense contractors claiming to have built autonomous weapons that can do targeting on their own -- would suggest that they may not even see the line in the same way.

In fairness, that is a view that I hear among some military professionals and technologists. I don't want to say that's the majority view, but it is at least a significant viewpoint where people will say, "Look, there's no difference between that weapon, an autonomous weapon that can choose its own targets, and a missile today. It's the same thing, and we're already there." Again, I don't totally agree, but that is a viewpoint that's out there.

Ariel: Do you think that the fact that countries have these differing viewpoints is a good reason to put more international pressure on developing some sort of regulations to try to bring countries in line, bring everyone onto the same page?

Paul: Yeah. I'm a huge supporter of the process that's been going on with the United Nations. I'm frustrated, as many are, about the slowness of the progress. Part of this is a function of diplomacy, but part of this is just that they haven't been meeting very often. When you add up all of the times over the last five years, it's maybe five or six weeks of meetings. It's just not very much time they spend together.

Part of it is, of course ... Let's be honest. It's deliberate obstinacy on the part of many nations who want to slow the progress of talks. But I do think it would be beneficial if countries could come to some sort of agreement about rules of the road, about what they would see as appropriate in terms of where to go forward.

My view is that we've gotten the whole conversation off on the wrong foot by focusing on this question of whether or not to have a legally binding treaty, whether or not to have a ban. If it were up to me, that's not how I would have framed the discussion from the get-go, because what happens is that many countries dig in their heels because they don't want to sign a treaty. So they just start off from a position of "I'm opposed." They don't even know what they're opposed to. They're just opposed because they don't want to sign a ban.

I think a better conversation to have would be to say, "Let's talk about the role of autonomy and machines and humans in lethal decision-making in war going forward. Let's talk about the technology. Let's talk about what it can do, what it can't do. Let's talk about what humans are good at and what they're not good at. Let's think about the role that we want humans to play in these kinds of decisions on the battlefield. Let's come up with a view of what we think 'right' looks like, and then we can figure out what kind of piece of paper we write it down on, whether it's a piece of paper that's legally binding or not."

Ariel: Talking about what the technology actually is and what it can do is incredibly important, and in my next interview with Toby Walsh, we try to do just that.

Toby: I'm Toby Walsh, a Scientia Professor of Artificial Intelligence at the University of New South Wales, which is in Sydney, Australia. I'm a bit of an accidental activist, in the sense that I've been drawn in, as a responsible scientist, to the conversation about the challenges, the opportunities, and the risks that artificial intelligence poses in fighting war. There are many good things that AI is going to do in terms of reducing casualties and saving lives, but equally, I'm very concerned, like many of my colleagues, about the risks that it poses, especially when we hand over full control to computers and remove humans from the loop.

Ariel: So that will segue nicely into the first question I had for you, and that was what first got you thinking about lethal autonomous weapons? What first gave you reason for concern?

Toby: What gave me concern about the development of lethal autonomous weapons was seeing prototype weapons being developed, knowing the challenges that AI poses -- we're still a long way away from having machines that are as intelligent as humans -- knowing the limitations, and being very concerned that we were handing over control to machines that weren't technically capable, and certainly weren't morally capable, of making the right choices. And therefore, too, I felt a responsibility, as any scientist does, to make sure AI is used for good and not for bad purposes. Unfortunately, like many technologies, it's completely dual use. Pretty much the same algorithms that are going to go into your autonomous car to identify, track, and avoid pedestrians and cyclists are going to go into autonomous drones to identify combatants, track them, and kill them. It's a very small change to turn one algorithm into the other. And we're going to want autonomous cars; they're going to bring great benefits to our lives, save lots of lives, give mobility to the elderly, to the young, to the disabled. So there can be great benefits from those algorithms, but equally, the same algorithms can be repurposed and used to make warfare much more terrible and much more terrifying.

Ariel: And with AI, we've seen some breakthroughs in recent years, just generally speaking. Do any of those give you reason to worry that lethal autonomous weapons are closer than maybe we thought they might have been five or ten years ago? Or has the trajectory been consistent?

Toby: The recent breakthroughs have to be put into context: they've been in things like games, like the game of Go -- very narrow, focused tasks without uncertainty. The real world doesn't interfere when you're playing a game of Go; there are very precise rules and very constrained actions that you need to take and things that you need to think about. So it's good to see progress in these narrow domains, but we're still not making much progress toward general intelligence; there's still a huge amount to be done to build machines that are as intelligent as us. And it's not machines as intelligent as us that I'm most worried about, although in 50 or 100 years' time, when we have them, that will be something we'll have to think about then.

It's actually stupid AI that worries me: the fact that we're already thinking about giving responsibility to quite stupid algorithms that really cannot make the right distinctions -- either in a technical sense, in terms of being able to distinguish combatants from civilians as required by international humanitarian law, or on moral grounds, in that they really can't decide things like proportionality. They can't make the moral distinctions that humans make. They don't have any of the things like empathy and consciousness that allow us to make those difficult decisions that have to be made on the battlefield.

Ariel: If we do continue on our current path and we aren't able to get a ban on these weapons, what concerns do you have? What do you fear will happen? Or what do you anticipate? What type of weapons?

Toby: The problem with the debate, I think, is that people try to conflate the concerns that we have into just one concern. But there are different concerns at different points in time and at different stages in the development of the technology.

So the concerns I have for the next 10 years or so are different from the concerns I would have in 50 years' time. The concerns I have for the next 10 years or so are largely around incompetence: the machines would not be capable of making the right distinctions. And later on, as the machines become more competent, different concerns come up. They would actually change the speed, the duration, the accuracy of war. And they would be very terrible weapons, and any ethical safeguards that we could, at that point, build in might be removed by bad actors. Sadly, there are plenty of bad actors out there who would be willing to remove any of the ethical safeguards that we might build in. So there's not one concern. Unfortunately, when you hear the discussion, people often try to distill it down to just a single concern at a single point in time, but there are different concerns as the technology gets more sophisticated and more mature. To begin with, I would be very concerned that we will introduce rather stupid algorithms onto the battlefield that can't make the right moral and technical distinctions that are required under international humanitarian law.

Ariel: Have you been keeping track at all of what sorts of developments have been coming out of different countries?

Toby: If you just go onto YouTube, you can see there are prototype weapons in pretty much every theater of battle. In the air, there are autonomous drones; BAE Systems has an autonomous drone that's now been under development for a number of years. On the sea, the US Navy launched, more than a year ago now, its first fully autonomous ship. And interestingly, when it was launched, they said it would just have defensive uses -- hunting for mines, hunting for submarines -- and now they're talking about putting weapons on it. Under the sea, we have an autonomous submarine the size of a bus that's believed to be able to travel halfway across the Pacific fully autonomously. And on land there are a number of different autonomous weapons; certainly there are prototypes of autonomous tanks, autonomous sentry robots, and the like. So there is a bit of an arms race happening, and it's certainly very worrying to see that we're locked into one of these bad equilibria, where everyone is racing to develop these weapons, in part just because the other side is.

China is definitely one of the countries to be worried about. It's made very clear its ambitions to seek economic and military dominance through the use, in large part, of technologies like artificial intelligence, and it's investing very heavily to do that. The military and commercial companies there are very closely tied together, which gives China quite a unique position, perhaps even some technical advantages, in the development of AI, especially on the battlefield. So all of us at the UN meeting in April were pretty surprised when China came out and called for a ban on the deployment of autonomous weapons. It didn't say anything about development of autonomous weapons, so that's probably not as far as I would like countries to go, because if they're developed, then you still run the risk that they will be used, accidentally or otherwise. The world is still not as safe as it would be if they weren't actually out there with their triggers waiting to go. But it's interesting to see that they made that call. It's hard to know whether they're just being disruptive or whether they really do see the serious concern we have.

I've talked to my colleagues, academic researchers in China, and they've been, certainly in private, sympathetic to the cause of regulating autonomous weapons. Of course, unfortunately, China is a country in which it's not possible, in many respects, to talk freely. And so they've made it very clear that it would perhaps be a career-killing move for them to speak publicly the way scientists in the West have done about these issues. Nevertheless, we have drawn signatures from Hong Kong, where it is possible to speak a bit more freely, which I think demonstrates that, within the international scientific community, across nations, there is actually broad support for these sorts of actions. But the local politics may prevent scientists from speaking out in their home country.

Ariel: A lot of the discussion around lethal autonomous weapons focuses on the humanitarian impact, but I was wondering if you could speak at all to the potential destabilizing effect that they could have for countries?

Toby: One of the aspects of autonomous weapons that I don't think is discussed enough is quite how destabilizing they will be as a technology. They will be relatively easy, and certainly cheap, to get your hands on. As I said to the Koreans when I was in Korea most recently, the presence of autonomous weapons would make South Korea even less safe than it is today. A country like North Korea has demonstrated it's willing to go to great lengths to attain atomic weapons, and it would be much easier for them to obtain autonomous weapons. That would put South Korea in a very difficult situation, because if they were attacked by autonomous weapons and weren't able to defend themselves adequately, then that would escalate and we might well find ourselves in a nuclear conflict -- one that, of course, none of us would like to see. So they will be rather destabilizing. They're the sort of weapons that will fall into the wrong hands: they'll be used not just by the superpowers, but by smaller nations, even rogue states. Potentially, they might even be used by terrorist organizations.

And then one final aspect that makes them very destabilizing is attribution. If someone attacks you with autonomous weapons, it's going to be very hard to know who attacked you. It's not as though you can bring one of the weapons down, open it up, and look inside it; it's not going to tell you who launched it. There's no radio signal you can follow back to a base to find out who's actually controlling it. So it's going to be very hard to work out who's attacking you, and countries will deny, vehemently, that it's them, even if they did attack you. So they will be perfect weapons of terror, perfect weapons for troubling nations to do their troubling with.

One other concern that I have as a scientist is the risk of the field getting a bad reputation through the misuse of the technology. We've seen this in areas like genetically modified crops. The great benefits we might have had from that technology -- making crops more disease-resistant and more climate-resistant, which we in fact need to deal with the pressing problems that climate change and a growing population put on our planet -- have been negated by the fact that people were distrustful of the technology. And we run a similar sort of risk, I think, with artificial intelligence. If people see AI being used to fight terrible wars and being used against civilians and other people, the technology will have a stain on it. And all the many good uses and the great potential of the technology might be at risk, because people will turn against all sorts of developments of artificial intelligence. So that's another risk, and another reason many of my colleagues feel that we have to speak out very vocally to ensure that we get the benefits and that the public doesn't turn against the whole idea of AI being used to improve the planet.

Ariel: Can you talk about the difference between an AI weapon and an autonomous weapon?

Toby: Sure. There are plenty of good things that the military can use artificial intelligence for. In fact, the U.S. military has historically been one of the greatest funders of AI research. There are lots of good things you can use artificial intelligence for, in the battlefield and elsewhere. No one should risk life or limb clearing a minefield -- that's a perfect job for a robot, because if something goes wrong and blows up the robot, you can replace the robot easily. Equally, filtering through all the information coming at you, making sure that you can work out who are combatants and who are civilians -- using AI to help you in that situation is, once again, a perfect job that will actually save lives and stop some of the mistakes that inevitably happen in the fog of war. And in lots of other areas -- in logistics, in humanitarian aid, and so on -- there are lots of good things that AI will be used for.

So I'm not against the use of AI in militaries. I can see great potential for it to save lives and to make war a little less dangerous. But there is a complete difference when we look at removing humans entirely from the decision loop in a weapon and ending up with a fully autonomous weapon, where it is the machine that is making the final decision as to who lives and who dies. As I said before, that raises so many technical, moral, and legal questions that we shouldn't go down that line. Ultimately, I think there's a very big moral argument: we shouldn't hand over those sorts of decisions, because that would take us into completely new moral territory that we've never been in before. Warfare is a terrible thing, and we sanction it in part because we're risking our own lives; it should be a matter of last resort, not something that we hand over easily to machines.

Ariel: Is there anything else that you think we should talk about?

Toby: I think we'd want to talk about whether regulating autonomous weapons -- regulating AI -- would hinder the benefits for peaceful or non-military uses. I'm quite unconcerned, as are many of my colleagues, that regulating autonomous weapons will hinder, in any way at all, the development of the peaceful and good uses of AI. In fact, as I mentioned earlier, I'm actually much more fearful that if we don't regulate, there will be a backlash against the technology as a whole, and that will actually hinder the good uses of AI. Just as the bans on chemical weapons have not held back chemistry, the bans on biological weapons have not held back biology, and the bans on nuclear weapons have not held back the development of peaceful uses of nuclear power, I'm completely unconcerned that regulating autonomous weapons will hold back the field in any way at all -- in fact, quite the opposite.

Ariel: Regulations for lethal autonomous weapons will be more effective if the debate is framed in a more meaningful way, so I’m happy Richard Moyes could talk about how the concept of meaningful human control has helped move the debate in a more focused direction.

Richard: I'm Richard Moyes, and I am Managing Director of Article 36, which is a non-governmental organization which focuses on issues of weapons policy and weapons law internationally.

Ariel: To start, you have done a lot of work, I think you're credited with coining the phrase “meaningful human control.” So I was hoping you could talk a little bit about first, what are some of the complications around defining whether or not a human is involved and in control, and maybe if you could explain some of the human in the loop and on the loop ideas a little bit.

Richard: We developed and started using the term meaningful human control really as an effort to try and get the debate on autonomous weapons focused on the human element, the form and nature of human engagement that we want to retain as autonomy develops in different aspects of weapons function. First of all, that's a term that's designed to try and structure the debate towards thinking about that human element.

I suppose the simplest question we raised early on when proposing this term was really a recognition that, I think, everybody realizes some form of human control would be needed over new weapon technologies. Nobody is really proposing weapon systems that operate without any human control whatsoever. At the same time, I think people could also recognize that simply having a human being press a button when they're told to do so by a computer screen -- without really having any understanding of the situation they're responding to, or of the context -- doesn't really involve human control either. So even though in that latter situation you might have a human in the loop, as the phrase goes, unless that human has some substantial understanding of what the context is and what the implications of their actions are, a merely pro forma human engagement doesn't seem sufficient.

So, in a way, the term meaningful human control was put forward as a way of shifting the debate onto that human element, but also putting on the table this question of, well, what's the quality of human engagement that we really need to see in these interactions in order to feel that our humanity is being retained in the use of force.

Ariel: Has that been successful in helping to frame the debate?

Richard: With this sort of terminology, of course, different actors use different terms. Some people talk about necessary human control, or sufficient human control, or necessary human judgment. There are different word choices there, and I think there are pros and cons to the different choices, but we don't tend to get too hung up on the specific wording. The key thing is that these are seen, bundled together, as a critical area now for discussion among states and other actors in the multilateral diplomatic conversation about where the limits of autonomy in weapon systems lie.

I think, coming out of the Group of Governmental Experts meeting of the Convention on Conventional Weapons that took place earlier this year, the conclusion was more or less that this human element really does now need to be the focus of discussion and negotiation. So one way or another, I think the debate has shifted quite effectively onto this issue of the human element.

Ariel: What are you hoping for in this upcoming meeting?

Richard: Perhaps what I'm hoping for and what we're going to get, or what we're likely to get, might be rather different things. I would say I'd be hoping for states to start to put forward more substantial elaborations of what they consider the necessary human control, human element in the use of force to be. More substance on that policy side would be a helpful start, to give us material where we can start to see the differences and the similarities in states' positions.

However, I suspect that the meeting in August is going to focus mainly on procedural issues around the adoption of the chair's report, and the framing of what's called the mandate for future work of the Group of Governmental Experts. That probably means that, rather than so much focus on the substance, we're going to hear a lot of procedural talk in the room.

That said, in the margins, I think there's still a very good opportunity for us to start to build confidence and a sense of partnership amongst states and non-governmental organizations and other actors who are keen to work towards the negotiation of an instrument on autonomous weapon systems. I think building that partnership between sort of progressive states and civil society actors and perhaps others from the corporate sector, building that partnership is going to be critical to developing a political dynamic for the period ahead.

Ariel: I'd like to go back, quickly, to this idea of human control. A while back, I talked with Heather Roff, and she gave this example -- I think it was called the empty hangar problem. Essentially, no one expects some military leader to walk down to the airplane hangar and discover that the planes have all gone off to war without anyone saying something.

I think that gets at some of the confusion as to what human control looks like. You'd mentioned briefly the idea that a computer tells a human to push a button, and the human does that, but even in fully autonomous weapon systems, I think there would still be humans somewhere in the picture. So I was wondering if you could elaborate a little bit more on maybe some specifics of what it looks like for a human to have control or maybe where it starts to get fuzzy.

Richard: I think that we recognize that in the development of weapon technologies, already we see significant levels of automation, and a degree of handing over certain functions to sensors and to assistance from algorithms and the like. There are a number of areas that I think are of particular concern to us. I think, in a way, this is to recognize that a commander needs to have a sufficient contextual understanding of where it is that actual applications of force are likely to occur.

Already, we have weapon systems that might be projected over a relatively small area, and within that area, they will identify the heat shape of an armored fighting vehicle for example, and they may direct force against that object. That's relatively accepted in current practice, but I think it's accepted so long as we recognize that the area over which any application of force may occur is actually relatively bounded, and it's occurring relatively shortly after a commander has initiated that mission.

Where I think my concerns, our concerns, lie is that that model of operation could be expanded over a greater area of space on the ground, and over a longer period of time. As that period of time and that area of space on the ground increase, then the ability of a commander to actually make an informed assessment about the likely implications of the specific applications of force that take place within that envelope becomes significantly diluted, to the point of being more or less meaningless.

For us, this is linked also to the concept of attacks as a term in international law. There are legal obligations that bear on human commanders at the unit of the attack -- certain legal obligations that a human has to fulfill for an attack. Now, an attack doesn't mean firing one bullet. An attack could contain a number of applications of actual force. But it seems to us that if you simply expand the space and the time over which an individual weapon system can identify target objects for itself, ultimately you're eroding that notion of an attack, which is actually a fundamental building block of the structure of the law. You're diluting that legal framework to the point of it arguably being meaningless.

We want to see a reasonably constrained period of -- let's call it independence of operation for a system; it may not be fully independent -- but one where a commander has the ability to sufficiently understand the contextual parameters within which that operation is occurring.

Ariel: Can you speak at all, since you live in the UK, on what the UK stance is on autonomous weapons right now?

Richard: I would say the UK has, so far, been a somewhat reluctant dance partner on the issue of autonomous weapons. I do see some, I think, positive signs of movement in the UK's policy articulations recently. One of the main problems they've had in the past is that they adopted a definition of lethal autonomous weapon systems, which is the terminology used in the CCW. It's undetermined what this term lethal autonomous weapon systems means. That's a sort of moving target in the debate, which makes the discussion quite complicated.

But the UK adopted a definition of that term which was somewhat in the realm of science fiction as far as we're concerned. They described lethal autonomous weapon systems as having the ability to understand a commander's intent. In doing so, I think they were suggesting an almost human-like intelligence within the system, which is a long way away, if even possible. It's certainly a long way from where we are now, where developments of autonomy in weapon systems are already causing legal and practical management problems. By adopting that sort of futuristic definition, they somewhat ruled themselves out of being able to make constructive contributions to the actual debate about how much human control there should be in the use of force.

Now, recently in certain publications, the UK has slightly opened up some space to recognize that that definition might actually not be so helpful, and that maybe this focus on the human control element that needs to be retained is actually the most productive way forward. How positive the UK will be, from my perspective, in that discussion -- in talking about the level of human control that needs to be retained -- remains to be seen, but at least they're engaging with some recognition that that's the area where there needs to be more policy substance. So fingers crossed.

Ariel: I’d asked Richard about the UK’s stance on autonomous weapons, but this is a global issue. I turned to Mary Wareham and Bonnie Docherty for more in-depth information about international efforts at the United Nations to ban lethal autonomous weapons.

Bonnie: My name's Bonnie Docherty. I'm a senior researcher at Human Rights Watch, and also the director of Armed Conflict and Civilian Protection at Harvard Law School's International Human Rights Clinic. I've been working on fully autonomous weapons since the beginning of the campaign doing most of the research and writing regarding the issue for Human Rights Watch and Harvard.

Mary: This is Mary Wareham. I'm the advocacy director of the Arms Division at Human Rights Watch. I serve as the global coordinator of the Campaign to Stop Killer Robots. This is the coalition of non-governmental organizations that we co-founded towards the end of 2012 and launched in April 2013.

Ariel: What prompted the formation of the Campaign to Stop Killer Robots?

Bonnie: Well, Human Rights Watch picked up this issue, and we published our first report in 2012. Our concern was the development of this new technology, which raised a host of concerns: legal concerns about compliance with international humanitarian law and human rights law, moral concerns, accountability concerns, scientific concerns, and so forth. We launched a report that was an initial foray into the issues, trying to preempt the development of these weapons before they came into existence, because once the genie's out of the bottle it's hard to put it back in -- hard to get countries to give up a new technology.

Mary: Maybe I can follow up there on how we established the Campaign to Stop Killer Robots. I did a lot of legwork in 2011 and 2012, talking to a lot of the people that Bonnie was talking to for the preparation of the report. My questions were more about what we should do once we launched this report: Do you share the same concerns that we have at Human Rights Watch, and, if so, is there a need for a coordinated international civil society coalition to organize us going forward and to present a united voice and position to the governments who we want to take action on this? For us, working that way, in a coalition with other non-governmental organizations, is what we do. We've been doing it for the last two decades on other humanitarian disarmament issues: the International Campaign to Ban Landmines, the Cluster Munition Coalition. We find it's more effective when we all try to work together and provide a coordinated civil society voice. There was strong interest, and therefore we co-founded the Campaign to Stop Killer Robots.

Ariel: What prompted you to consider a ban versus trying to ... I guess I don't know what other options there might have been.

Bonnie: We felt from the beginning that what was needed to address fully autonomous weapons is a preemptive ban on development, production and use. Some people have argued that existing law is adequate. Some people have argued you only need to regulate it, to limit it to certain circumstances, but in our mind a ban is essential, and that draws on past work on other conventional weapons such as landmines and cluster munitions, and more recently nuclear weapons.

The reason for a ban is that if you allow these weapons to exist, even to come into being, to be in countries' arsenals, they will inevitably get into the hands of dictators or rogue actors who will use them against the law and against the rules of morality. They will harm combatants as well as civilians. It's impossible, once a weapon exists, to restrict it to a certain circumstance. I think those who favor regulation assume the user will follow all the rules, and that's just not the way it happens. We believe it should be preemptive because once these weapons come into existence it's too late; they will be harder to control. If you prevent them from even happening, that will be the most effective solution.

The last point I'd make is that it also increases the stigma against the weapons, which can influence even countries that aren't party to a treaty banning them. This has been proven in past weapons treaties; there was even a preemptive ban on blinding lasers in the 1990s, and that's been very effective. So there is legal precedent for this, and there are many arguments for why a ban is the best solution.

Mary: Yeah, there's two ways of framing that call, which is not just the call of Human Rights Watch, but the call of the Campaign to Stop Killer Robots. We seek a preemptive ban on the development, production and use of fully autonomous weapons. That's a kind of negative way of framing it. The positive way is that we want to retain meaningful human control over the use of force and over weapons systems going forward. There's a lot of interest, and I'd say convergence on those two points.

We're five years on since the launch of the campaign; 26 countries are now supporting the call for a ban and actively trying to get us there, and an even larger number of countries -- actually, virtually all of the ones who've spoken to date on this topic -- acknowledge the need for some form of human control over the use of force and over weapons systems going forward. It's been interesting to see, in the five diplomatic meetings that governments have held on this topic since May 2014, that the discussions keep returning to the notion of human control and the role of the human and how we can retain that going forward, because autonomy and artificial intelligence are going to be used by militaries. What we want to do, though, is draw a normative line and provide some guidance and a framework going forward that we can work with.

Ariel: You just referred to them as fully autonomous weapons. At FLI we usually talk about lethal autonomous weapons versus non-lethal fully autonomous weapons, and so that sort of drives me to the question of, to what extent do definitions matter?

Then, this is probably a completely different question, how are lethal autonomous weapons different from conventional weapons? The reason I'm combining these two questions is because I'm guessing definition does play a little bit of a role there, but I'm not sure.

Bonnie: Well, it's important -- for countries to make international law, they have to have a general, common understanding of what we're talking about. Generally, in a legal treaty the last thing to be articulated is the actual definition, and it's premature to get a detailed, technical definition. But we feel that, although a variety of names have been used -- lethal autonomous weapon systems, fully autonomous weapons, killer robots -- in essence they're all talking about the same thing: a system that can select a target and choose to fire on that target without meaningful human control. There's already convergence around this definition, even if it hasn't been defined in detail. In terms of conventional munitions, these are, in essence, conventional weapons if they deploy conventional weapons; it depends on what the payload is. If a fully autonomous system were launching nuclear weapons, it would not be a conventional weapon. If it's launching cluster munitions, it would be a conventional weapon. So it's not right to say they're not conventional weapons.

Mary: The talks are being held at the Convention on Conventional Weapons in Geneva. This is where governments decided to house this topic. I think it's natural for people to want to talk about definitions. From the beginning that's what you do with a new topic, right? You try and figure out the boundaries of what you're discussing here. Those talks in Geneva and the reporting that has been done to date and all of the discourse, I think it's been pretty clear that this campaign and this focus on fully autonomous weapons is about kinetic weapons. It's not about cyber, per se, it's about actual things that can kill people physically.

I think the ICRC, the Red Cross, has made an important contribution with its suggestion to focus on the critical functions of weapons systems, which is what we were doing in the campaign -- we just weren't calling it that. That's the action of identifying and selecting a target, and then firing on it, using force, lethal or otherwise. Those are the two functions that we want to ensure remain under human control, under meaningful human control.

Some other states like to draw what we call a very wide definition of meaningful human control. For some of them it means good programming, nice design, a weapons review -- a kind of legal review of whether the weapon system will be legal and whether they can proceed to develop it. You can cast a very wide loop when you're talking about meaningful human control, but for us the crux of the whole thing is this notion of selecting targets and firing on them.

Ariel: What are the concerns that you have about this idea of non-human control? What worries you about that?

Mary: Of autonomy in weapon systems?

Ariel: Yeah, essentially, yes.

Mary: We've articulated legal concerns here at Human Rights Watch just because that's where we always start, and that's Bonnie's area of expertise, but there are much broader concerns here that we're also worried about, too. This notion of crossing a moral line and permitting a machine to take human life on the battlefield or in policing or in border control and other circumstances, that's abhorrent, and that's something that the Nobel Peace Laureates, the faith leaders and the others involved in the Campaign to Stop Killer Robots want to prevent. For them that's a step too far.

They also worry about outsourcing killing to machines. Where's the ethics in that? Then, what impact is this going to have on the system that we have in place globally? How will it be destabilizing in various regions, and, as a whole, what will happen when dictators and one-party states and military regimes get ahold of fully autonomous weapons? How will they use them? How will non-state armed groups use them?

Bonnie: I would just add, building on what Mary said, that another reason human control is so important is that humans bring judgment. They bring legal and ethical judgment based on their innate characteristics, on their understanding of another human being, of the mores of a culture -- things that a robot cannot bring, things that cannot be programmed. For example, when humans are weighing whether the military advantage will justify an attack that causes civilian harm, they apply that judgment, which is both legal and ethical. A robot won't have that; that's a human thing. Losing humanity in the use of force could potentially violate the law, as well as raise the serious moral concerns that Mary discussed.

Ariel: I want to go back to the process to get these weapons banned. It's been going on for quite a few years now. I was curious, is that slow, or is that just sort of the normal speed for banning a weapon?

Mary: Look at nuclear weapons, Ariel.

Ariel: Yeah, that's a good point. That took a while.

Mary: That took so many years, you know? That's the example that we're trying to avoid here. We don't want to be negotiating a non-proliferation treaty in 20 years time with the small number of countries who've got these and the other states who don't. We're at a crossroads here. Sorry to interrupt you.

Ariel: No, that was a good point.

Mary: There have been five meetings on this topic to date at the United Nations in Geneva, but each of those meetings has only been up to a week long, so, really, it's only five weeks of talks that have happened in the last four years. That's not much time to make a lot of progress and get everybody around the same table with a shared understanding, but I think there's definitely been some progress in those talks to delineate the parameters of this issue, to explore it, and to begin to pull apart the notion of human control and how you can ensure that it's retained in weapons systems in the selection of targets and the use of force. There's a wide range of different levels of knowledge on this issue, not just in civil society and academia and the public, but also within governments.

There's a lot of legwork to be done there to increase the awareness, but also the confidence of governments to feel like they can deal with this. What's happened, especially in the past year, I think, has been increased calls to move from exploring the issue and talking about the parameters of the challenge to asking, "What are we going to do about it?" That's going to be the big debate at the next meeting, which is coming up at the end of August: what will the recommendation be for future work? Are the governments going to keep talking about this, which we hope they do, but, more importantly, what are they going to do about it?

We're seeing, I think, a groundswell of support now for moving towards an outcome. States realize that they do not have the time or the money to waste on inconclusive deliberations, and so they need to be exploring options and pathways forward -- but there really aren't that many options. As has been mentioned, states can talk about international law and the existing rules and how they can apply them, and have more transparency there, but I think we've moved beyond that.

There are a couple of possibilities which will be debated. One is political measures -- a non-binding political declaration. Can we get agreement on some form of principles over human control? That sounds good, but it doesn't go nearly far enough. Or we could create new international law. How do we do that in this particular treaty, the Convention on Conventional Weapons? You move to a negotiating mandate, and you set the objective of negotiating a new protocol under the Convention on Conventional Weapons. At the moment, there has been no agreement to move to negotiate new international law, but we're expecting that to be the main topic of debate at the next meeting, because they have to decide now what they're going to do next year.

For us, the biggest developments, I think, are happening outside of the room right now rather than in Geneva itself. There's a lot of activity now starting to happen in national capitals, with governments trying to figure out what their position and policy on this is, and there's more prodding and questioning and debate starting to happen in national parliaments, which has to happen in order to determine what a government's position is and what's going to happen on it. Then we have the examples of the open letters, the sign-on letters, the ethical principles -- there are all sorts of new things coming out in recent weeks that I think will be relevant to what the governments are discussing, and that we hope will provide them with impetus to move forward with focus and purpose here.

We can't put a timeline on when they might create a new international treaty, but we're saying you can do this quickly if you put your mind to it and decide that this is what you want to try and achieve. We believe that if they move to a negotiating mandate at the end of this year, they could negotiate the treaty next year. Negotiating the treaty is not the part that takes a long time; it's getting everybody into the position where they want to create new international law. The actual process of negotiating that law should be relatively swift. If it takes longer than a year or two, then it runs the risk of turning into another set of inconclusive deliberations that don't produce anything. For us, it's absolutely crucial to get the goal in there at the beginning. At the moment, the process has gone from informal talks to formal talks, but still with no agreed objective or outcome.

Ariel: What is some of the resistance that you're facing to moving towards a ban? Are governments worried that they're going to miss out on a great technology, or is there some other reason that they're resisting?

Mary: Just to say, 85 countries have spoken out on this topic to date -- most of them not at any great length, but just to say, "This is important. We're concerned. We support the international talks." We have a majority of countries now who want to move towards negotiating new international law. So where are the blockages at the moment? At the last round of talks and at the previous ones, it was basically Israel, Russia, and the United States who were saying it's premature to decide where these talks should lead -- that we need to further explore and discuss the issues before we can make any progress. Others are now less patient with that position, and it will be interesting to see if those three countries in particular change their minds here.

In the particular treaty that we're at, the Convention on Conventional Weapons, the states take their decisions by consensus, which means they can't vote -- there are no voting procedures there. They have to strive for consensus, where everybody in the room agrees, or at least does not object to moving forward. That threat of a blocking of consensus is always there, especially from Russia, but we'll see. There's no kind of pro-killer-robot state saying, "We want these things. We need these things," right now, at least not in the diplomatic talks. The only countries who have wanted to talk about the potential advantages or benefits are Israel and the United States. All of the other countries who speak about this are more concerned about understanding and coming to grips with all of the challenges that are raised, and then figuring out what the regulatory framework should be.

Ariel: Bonnie, was there anything you wanted to add to that?

Bonnie: I think Mary summarized the key points. I was just going to say that there are some people who would argue that we should wait and see what the technology will bring -- we don't know where it'll go. Our counterargument is something called the precautionary principle: even if there's scientific uncertainty about where a technology will go, if there's a significant risk of public harm, which there is in this case, that scientific uncertainty should not stand in the way of action. I think that the growing number of states that have expressed concern about these weapons, and the near consensus -- the convergence around the need for human control -- show that there is willingness to act at this point. As Mary said, this is not a situation where states are advocating for these weapons, and I think that in the long run the agreement that there should be human control over the use of force will outweigh any hesitation based on the wait-and-see approach.

Mary: We had a good offer -- not a proposal exactly, but an offer -- from the United Nations Secretary-General in the big agenda for disarmament framework that he launched a couple of months ago, saying that he stands ready to support the efforts of UN member states to elaborate new measures on lethal autonomous weapon systems, including legally-binding arrangements. For him, he wants states to ensure that humans remain at all times in control over the use of force. To have that kind of offer of support from the highest level at the United Nations is, I think, very important.

The other recent pledges and commitments -- the one by the 200 technology companies and more than 2600 scientists, AI experts, and other individuals committing not to develop lethal autonomous weapons systems -- send a very powerful message, I think, to the states that these groups and individuals are not going to wait for the regulation. They're committing not to do it, and this is what they expect the governments to do as well. We also saw the ethical principles issued by Google in recent weeks and the pledge by the company not to design or develop artificial intelligence for use in weapons. All of these efforts and initiatives are very relevant to what states need to do going forward. This is why we in the Campaign to Stop Killer Robots welcome them and encourage them, and want to ensure that we have as broad-based an appeal as possible to support the government action that we need taken.

Ariel: Can you talk a little bit about what's happening with China? Because they've sort of supported a ban. They're listed as supporting a ban, but it's complicated.

Mary: It's funny because so many other countries that have come forward and endorsed the call for a ban have not elicited the same amount of attention. I guess it's obviously interesting, though, for China to do this because everybody knows about the investments that China is making into military applications of artificial intelligence and autonomy. We see the weapons systems that are in development at the moment, including swarms of very small miniature drones, and where will that head?

What China thinks about this issue matters. At the last meeting, China basically endorsed the call for a ban, but said -- there's always a but -- that its support was limited to prohibiting use only, and did not address development or production. For us it's a partial ban, but we put China on the list that the campaign maintains, and it's the first state to have an asterisk by its entry saying, "Look, China is on the ban list, but it's not fully committed here." We needed to acknowledge that, because it wasn't really the first time China had hinted it would support creating new international law. It had been hinting at this in previous papers, including one reporting that China's review of existing international law raised so many questions and doubts that it does see a need to create international law specific to fully autonomous weapons systems. China gave the example of the blinding lasers protocol at the CCW, which prohibits laser weapons designed to permanently blind human soldiers.

I think the real news on China is that its position -- saying that existing law is insufficient and we need to create new international rules -- splits the P5, the permanent five members of the United Nations Security Council. You have Russia and the United States arguing that it's too early to determine what the outcome should be, and the UK -- Richard can explain better exactly what the UK wants -- but it seems to be satisfied with the status quo. Then France is pursuing a political declaration, but not legally-binding measures. There's no longer unity in that group of five permanent members of the Security Council, and those states do matter, because they are some of the ones best placed to be developing and investing in increasingly autonomous weapons systems.

Ariel: Okay. I wanted to also ask, unrelated: right now what you're trying to do, what we're trying to do, is get a ban, a preemptive ban, on a weapon that doesn't exist. What are some examples in the past of that having succeeded, as opposed to having to prove some humanitarian disaster resulted from a weapon?

Bonnie: Well, the main precedent for that is the preemptive ban on blinding lasers, which is a protocol to the Convention on Conventional Weapons. We did some research a few years ago into the motives behind the preemptive ban on blinding lasers, and many of them are the same. They raised concerns about the ethics of permanently blinding someone, whether a combatant or a civilian. They raised concerns about the threat of an arms race. They raised concerns that there be a ban, but that it not impede peaceful development in that area. That ban has been very successful. It has not impeded the peaceful use of lasers for many civilian purposes, but it has created a stigma against, and a legally-binding rule against, using blinding lasers. We think that's an excellent model for fully autonomous weapons, and it also sits in the same treaty under which fully autonomous weapons, or lethal autonomous weapon systems, are being discussed right now. It's a good model to look at.

Mary: Bonnie, I really like that paper you did on the other precedents for retaining human control over weapons systems -- the idea of looking at past weapons that have been prohibited and finding that, in many instances, it was because of the uncontrollable effects that the weapons create, from chemical, biological, and toxin weapons to antipersonnel landmines which, once deployed, you cannot control anymore. It's this notion of being able to control the weapon system once it's activated that has driven those previous negotiations, right?

Bonnie: Correct. There's precedent for a preemptive ban, and there's also precedent for a desire to maintain human control over weapons. As Mary said, several treaties -- chemical weapons, biological weapons, and landmines -- have banned weapons in large part because people in governments were concerned about losing control over the weapon system. In essence, it's the same model here: by launching fully autonomous weapons, you'd be losing control over the use of force. So there's a precedent for a ban, and there's a precedent for a preemptive ban, all of which are applicable in this situation.

Ariel: I talked to Paul Scharre a little bit earlier, and one of the things that he talked about were treaties that were developed as a result of the powers that be recognizing that a weapon would be too big of a risk for them, and so they agreed to ban it. Then, the other driving force for treaties was usually civil society, based on the general public saying, "This is not okay." What role do you see for both of those situations here?

Bonnie: There's a multitude of reasons why these weapons should be banned, and I think both the ones you mentioned are valid in this case. From our point of view, the main concern is a humanitarian one, and that's civil society's focus. We're concerned about the risk to civilians. We're concerned about moral issues and matters like those. That builds on past, what they call, humanitarian disarmament treaties -- treaties designed to protect humanity through legal norms and, traditionally, often through bans: bans of landmines, cluster munitions, and nuclear weapons.

There have been other treaties, sometimes they overlap, that have been driven more for security reasons. Countries that are concerned about other nations getting their hands on these weapons, and that they feel in the long run it's better for no one to have them than for others to have them. Certainly, chemical weapons was an example of that. This does not mean that a treaty can't be motivated for both reasons. That often happens, and I think both reasons are applicable here, but they just have come from slightly different trajectories.

Mary: It's pretty amazing -- in some of the diplomatic talks on killer robots that we've been in, we hear the governments debating the ethics of whether or not a specific weapon system, such as fully autonomous weapons, should be permitted, should be allowed. It's rare that that happens. Normally, we are dealing with the aftermath, the consequences of proliferation and of widespread use, production, and stockpiling. This is an opportunity to do something in advance, and it does lead to a little bit of, I'd say, a North-South divide between the military powers who have the resources at their disposal to invest in increasingly autonomous technology and try and push the boundaries, and the vast majority of countries who are asking, "What's the point of all of this? Where is the relevance of the UN Charter, which talks about general and complete disarmament as being the ultimate objective?" They ask, "Have we lost that goal here? Is the ultimate objective to create more and better and more sophisticated weapons systems, or is it to end war and to deal with the consequences of warfare through disarmament?"

Those are really big-picture questions that are raised in this debate, and ones that we leave to those governments to answer, but I think it is indicative of why there is so much interest in this particular concern, and that's demonstrated by the sheer number of governments participating in the international talks. The talks are in a setting called a Group of Governmental Experts, but this is not about a dozen guys sitting around the table in a small room. This is a big plenary meeting with more than 80 countries following, engaging, and avidly trying to figure out what to do.

Ariel: In terms of just helping people understand how the UN works, what role does a group like the Campaign to Stop Killer Robots play in the upcoming meeting? If, ultimately, the decision is made by the states and the nations, what is your role?

Mary: Our role is 24/7, all year round. These international meetings only happen a couple of times a year -- this will be the second week this year. Most of our work this year has been happening in capitals and in places outside of the diplomatic meetings, because that's where you really make progress: through the parliamentary initiatives, through reaching the high-level political leadership, through engaging the public, through talking to the media and raising awareness about the challenges here and the need for action. All of those things are what makes things move inside the room with the diplomacy, because the diplomats need instructions from capitals in order to really progress.

At the meeting itself, we seek to provide a diverse delegation -- not just people from Europe and North America, but from around the world -- because this is a multilateral meeting. We need to ensure that we can reach out and engage with all of the delegates in the room, because every country matters on this issue, and every country has questions. Can we answer all those questions? Probably not, but we can talk through them with those states, try to address the concerns, and try to be a valued partner in the deliberations that are happening. It's the normal way of working for us here at Human Rights Watch to work alongside other organizations through coordinated civil society initiatives, so that you don't go to the meeting with, like, 50 statements from different NGOs. You have just a few, or just one, so that you can be absolutely clear in guiding where you want the deliberations to go and the outcome that you want.

We'll be holding side events and other efforts to engage with the delegates in different ways, as well as presenting new research and reports. I think you've got something coming out, Bonnie, right?

Bonnie: We'll be releasing a new report on the Martens Clause, which is a provision of international law, found in the Geneva Conventions and other treaties, that brings ethics into law. It basically has two prongs, which we'll elaborate on in the report: countries must comply with the principles of humanity and the dictates of public conscience. In short, we believe fully autonomous weapons raise concerns over both of those. We believe losing human control will violate basic principles of humanity, and that the groundswell of opposition growing among not only governments but also faith leaders, scientists, tech companies, academics, civil society, et cetera, all shows that the public conscience is coming out against fully autonomous weapons and for maintaining human control over the use of force.

Ariel: To continue with this idea of the ethical issues surrounding lethal autonomous weapons, we’re joined now by Peter Asaro.

Peter: I'm Peter Asaro. I'm an Associate Professor in the School of Media Studies at the New School University in New York City, and I'm also the co-founder and vice chair of the International Committee for Robot Arms Control, which is part of the leadership steering committee of the Campaign to Stop Killer Robots, which is a coalition of NGOs that's working at the UN to ban fully autonomous weapons.

Ariel: Could you tell us a little bit about how you got involved with this and what first gave you cause for concern?

Peter: My background is in philosophy and computer science, and I did a lot of work in artificial intelligence and in the philosophy of artificial intelligence, as well as the history of science, early computing, and the development of neural networks and the mathematical and computational theories behind all of that -- the 1930s, '40s, '50s, and '60s were the focus of my graduate work. As part of that, I got really interested in the modern or contemporary applications of both artificial intelligence and robotics, and specifically the embodied forms of artificial intelligence, which are robotic in various ways, and got really interested in not just intelligence, but social interaction.

That sort of snowballed into thinking about robot ethics, and what seemed the most pressing issue within robot ethics was the use of violence, the use of force, and whether we would allow robots to kill people -- and of course the first place that that was gonna happen would be the military. So I'd been thinking a lot about the ethics of military robotics from the perspective of just war theory, but also from a broad range of philosophical and legal perspectives as well.

That got me involved with Noel Sharkey and some other people who were interested in this from a policy perspective and we launched the International Committee for Robot Arms Control back in 2009, and then in 2012, we got together with Human Rights Watch and a number of other NGOs to form the Campaign to Stop Killer Robots.

Ariel: That leads into the next question I have for you, and it's very broad. Can you talk a little bit about what some of the ethical issues are surrounding robots and more specifically autonomous weapons in warfare?

Peter: Of course there's a whole host of ethical issues around robotics in general -- privacy and safety are sort of the big ones, but there are all sorts of more complicated ones as well: job displacement, how we treat robots, the impacts on society, and things like that. Within the military context, I think the issues are clearer in some sense, because they mostly revolve around the use of autonomous systems and lethal force.

So the primary question is: should we allow autonomous weapons systems to make lethal decisions independently of human control or human judgment, however you frame that? And then, subsidiary to that, some would argue that the programming within a system constitutes that kind of human control or decision making. From my perspective, pre-programming doesn't really do that, and that's because I come from a philosophical background, so we look at just war theory and at ethics, especially Kantian ethics, and the requirements for the morality of killing. Killing is, generally speaking, immoral, but there are certain exceptions -- generally self-defense, or collective self-defense in the case of war -- and in order to justify that killing, you need reasons and justifications. Machines and computational reasoning, at least at this stage of development, are not the type of system that has reasons. They follow rules: if certain conditions are met, a rule is applied and a result is obtained. But making a reasoned judgment about whether to use lethal force or whether to take a human life depends on a deeper understanding of reason, and I think that's a sort of moral agency -- a kind of moral decision making and moral judgment that requires capacities that automated decision-making systems just don't have.

Maybe down the road in the future, machines will become conscious, machines will understand the meaning of life, machines will understand what it means to take a life, machines will be able to recognize human beings as humans who deserve rights that need to be respected, and systems may understand what it means to have a duty to respect the rights of others. But simply programming rules into machines doesn't really do that. So, from a legal perspective as well, there's no real accountability for these sorts of systems because they're not legal agents, they're not moral agents, you cannot sue a computer or a robot. You cannot charge them with crimes and put them in jail and things like that.

So, we have an entire legal system as well as a moral framework that assumes that humans are the responsible agents and the ones making decisions, and as soon as you start replacing that decision making with automated systems, you start to create significant problems for the regulation of these systems and for accountability and for justice. And then that leads directly to problems of safety and control, and what kinds of systems are gonna be fielded, what are gonna be the implications of that for international stability, who's gonna have access to that, what are the implications for civilians and civilian infrastructures that might be targeted by these systems.

Ariel: I had wanted to go into some of the legality and liability issues that you've brought up, and you've given a nice overview of them, but I was hoping you could expand a little more on how this becomes a liability issue, and also -- this is probably sort of an obvious question -- if you could touch a little on just how complicated it is to change the laws so that they would apply to autonomous systems as opposed to humans.

Peter: A lot of the work I've been doing under a grant from the Future of Life Institute looks at liability in increasingly autonomous systems. Within civilian domestic applications, of course, the big application that everybody's looking at at the moment is the self-driving car, so you can ask the question: who's responsible when a self-driving car causes an accident? The way that liability law works, somebody somewhere is always going to wind up being responsible -- the law will find a way to hold somebody responsible. The question is whether existing precedents and the ways of doing things under current legal frameworks are really just, or really the best way going forward, as we have these kinds of increasingly autonomous systems.

So, in terms of holding persons responsible and liable: under tort law, if you have an accident, then you can sue somebody. This isn't criminal law, this is the law of torts, and under it you receive monetary compensation for damages done. Ideally, the person, or agent, or company, or what have you that causes the harm is the one that should pay. Of course, that's not always true. Liability law does things like joint and several liability, in which, even though one party only had a small hand in causing a harm, they may have lots of money -- like a government, or a state, or a city, or something like that -- and so they may actually wind up paying far more as a share of damages than they actually contributed to the problem.

You also have situations of strict liability, such that even if your agency in causing a problem was very limited, you can still be held fully responsible for the implications. There are some interesting parallels here with the keeping of animals, which are kind of autonomous systems in a sense: they have minds of their own, they sort of do things. On the other hand, we expect them to be well behaved and well trained, at least for domestic animals. So, generally speaking, you have liability for harms caused by your dog or your horse and so forth as a domesticated animal, but you don't have strict liability -- you actually have to show that maybe you've trained your dog to attack, or you've failed to properly train your horse or keep it in a stable or what have you. Whereas if you keep a tiger or something like that and it gets out and causes harm, then you're strictly liable.

So the question is for a robot, should you be strictly liable for the robots that you create or the robots that you own? Should corporations that manufacture these systems be strictly liable for all of the accidents of self-driving cars? And while that seems like a good policy from the perspective of the public, because all the harms that are caused by these systems will be compensated, that could also stifle innovation. In the car sector, that doesn't seem to be a problem. As it turns out, the president of Volvo said that they will accept strict liability for all of their self-driving cars. Tesla Motors has released a number of autopilot systems for their cars and more or less accepted the liability for that, although there's only been a few accidents, so the actual jurisprudence or case law is still really emerging around that.

But those are, I think, a technology where the cars are very expensive and there's a lot of money to be made in self-driving cars, so the expectation of the car companies is that there will be very few accidents and that they can really afford to pay the damages for all those accidents. Now, is that gonna be true for personal robots? If you have a personal assistant, a sort of butler robot that maybe goes on shopping errands and things like that for you, there's a potential for it to cause significant economic damage. They're probably not gonna be nearly as expensive as cars, hopefully, and it's not clear that the market for them is going to be as big, or that companies would be able to absorb the cost of strict liability. So, there's a question of whether that's really the best policy for those kinds of systems.

Then there are also questions about the ability of people to modify their systems. If you're holding companies strictly responsible for their products, then those companies are not going to allow consumers to modify those products in any way, because that would affect their ability to control them. If you want a kind of DIY culture around autonomous systems and robotics, then you're gonna see a lot of people modifying these systems, reprogramming these systems. So you also want, I think, a kind of strict liability around anybody who does those kinds of modifications, rather than the manufacturer -- the idea being that once you break the seal, you accept all the responsibility for what happens.

And I think that's one side of it. Now, on the military side, you don't really have torts in the same way. There are, of course, a few exceptions around torts in war, but generally speaking, militaries do not pay monetary damages when they make mistakes. If they accidentally blow up the wrong building, they don't pay to build a new building. That's just considered a casualty of war and an accident, and it's not even necessarily a war crime or anything else, because you don't have these kinds of mechanisms where you can sue an invading army for dropping a bomb in the wrong place.

The idea that liability is going to act as an accountability measure on autonomous systems in warfare is just silly, I think, because you basically can't sue people in war. There are a few exceptions: the governments that purchase weapons systems can sue the manufacturers, so that's a sense in which there is an ability to do that, but even most of those cases have been largely unsuccessful, and generally those kinds of lawsuits are based on contracts rather than the actual performance of, or damages caused by, a system. So, you don't really have that entire regulatory mechanism. Of course, if you have a government that's concerned about not harming civilians and not bombing the wrong buildings and things like that, then they're incentivized to put pressure on manufacturers to build systems that perform well, and that's one of the drivers of that technology.

But it's a much weaker force. Think about what the engineers in a car company are thinking about in terms of safety, and the bottom line for their company if they make a product that causes accidents, versus how that's thought about in a defense company. Defense companies are certainly trying to protect civilians and ensure that systems work correctly, but they don't have that enormously powerful economic concern about future lawsuits. The idea that the technology is going to be driven by similar forces doesn't really apply. So that's a big concern, I think, for the development of autonomous systems in the military sphere.

Ariel: Is there a worry or a risk that this sort of -- I don't know if it's lack of liability, maybe it's just whether or not we can trust the systems that are being built -- but is there an increased risk of war crimes as a result of autonomous weapons, either intentionally or accidentally?

Peter: Yeah, I mean, the idea that there's an increased risk of war crimes is kind of an interesting question, because the answer is simultaneously yes and no. What these autonomous systems actually do is diminish, remove, or put a distance between humans and accountability for the consequences of their actions. If you think of the autonomous system as a sort of intermediary between humans and the effects of their actions, there's an accountability gap that gets created. A system could go and do some horrendous act, like devastate a village and all the civilians in the village, and then we say, "Ah, is this a war crime?" And under international law as it stands, you'd have to prove intention, which is usually the most difficult part of war crimes tribunals: being able to actually demonstrate in court that a commander had the intention of committing some genocidal act or some war crime.

And you can build various forms of evidence for that. Now, if you send out an autonomous system, you may not even know what that system is really gonna do, and you don't need to know exactly what it's going to do when you give it its orders, so it becomes very easy to distance yourself legally from what that system does in the field. Maybe you suspect it might do something terrible, and that's what you really want, but it would be very easy to cover up your true intentions using these kinds of systems.

On the one hand, it would be much easier to commit war crimes. On the other hand, it'll be much more difficult to prosecute or hold anybody accountable for war crimes that would be committed by autonomous weapons.

Ariel: You've also been producing some open letters this summer. There was one for academics calling on Google to stop work on Project Maven and ... I'm sorry, you had another one... what was that one about?

Peter: The Amazon face recognition.

Ariel: Right. Right. Yeah. I was hoping you could talk a little bit about what you see as the role of academics and corporations and civil society in general in this debate about lethal autonomous weapons.

Peter: I think in terms of the debate over lethal autonomous weapons, civil society has a crucial role to play, as it does in a broad range of humanitarian disarmament issues. In the case of autonomous weapons, it's a technology that's moving very quickly, and militaries are still a little bit unsure of exactly how they're going to use it, but they're very excited about it, and they're putting lots of research investment into new applications and trying to find new ways of using it. I think that's exciting from a research perspective, but it's very concerning from a humanitarian and human rights perspective, because again, it's not clear what kind of legal accountability will be around these systems. It's not clear what kind of safety, control, and testing might be imposed on them, and it also seems quite clear that these systems are ready-made for arms races and for global and regional military destabilization, where competitors acquiring these systems has the potential to lead to conflict because of that destabilization itself. Then of course, there's the rapid proliferation.

So, in terms of civil society's role, I think what we've been doing primarily is voicing the general concern of the broad public. Globally, and within the specific countries that we've surveyed, people are largely opposed to these systems. Of course, the proponents say that's just because they've seen too many sci-fi movies and these things are gonna be just fine, but I don't think that's really the case. I think there are some genuine fears and concerns that need to be addressed. We've also seen the involvement of a number of tech companies that are developing artificial intelligence, machine learning, robotics, and things like that.

And I think their interest and concern in this issue is twofold. We have companies like Clearpath Robotics, which is the largest robotics company in Canada and also the largest supplier of robots to the Canadian military, whose engineers organized together to say that they do not want their systems to be used as autonomous weapons platforms and they will not build them, but also that they want to support the international campaign to ensure that governments don't acquire their robots and then weaponize them. They're doing search and rescue robots and bomb disposal robots. There's a similar movement amongst academics in artificial intelligence and robotics who have spent their life's work developing these fundamental technologies and who are deeply concerned that the first, and perhaps last, application of this work is going to be autonomous weapons, that the public will turn against artificial intelligence and robotics because of that, and that these systems are genuinely scary and we shouldn't be entrusting human lives, or the decision to take human lives, to these automated systems.

These technologies have all kinds of great practical social applications, and we should be pursuing those while prohibiting the use of these systems for autonomous targeting in the military context. And now I think we're seeing more movement from the big companies, particularly with this open letter that we were a part of regarding Google and Project Maven. Project Maven is a Pentagon project that aims at analyzing the many thousands of hours of drone footage that US military drones are collecting over Afghanistan, Iraq, and the various other places where they're operating, and at using machine learning to automate the identification of objects of interest, to save time for the human sensor analysts who have to pore through these images and try to determine what they're looking at.

And that in and of itself doesn't seem too terrible, right? You're just scanning through this imagery. But of course, this is really the first step toward an automated target recognition system for drones. Currently, human operators interpret the imagery, decide that something should be targeted with a weapon, and then actually target and fire the weapon; that whole process is controlled by humans. If you wanted to fully automate drones, the first thing you'd have to do is automate that visual analysis piece. Project Maven is trying to do exactly that, and to do it on a really big scale.

The other kind of issue, from a labor and research perspective, is that the Pentagon really has trouble, I think, attracting talent. There's a really strong demand for artificial intelligence researchers and developers right now, because there are so many applications and so much business opportunity around it. It actually turns out the military opportunities are not nearly as lucrative as a lot of the other business applications. Google, and Amazon, and Facebook, and Microsoft can offer enormous salaries to people with PhDs in machine learning, or even just master's degrees or some experience in systems development. The Pentagon can't compete with that on government salaries, and I think they're even having trouble getting certain contracts with these companies. But when they get a contract with a company like Google, then they're able to get access to really the top talent in artificial intelligence, in Google's cloud research and engineering groups, and also to the enormous computational capacity of Google, which has these massive data centers and processing capabilities.

And then, in some ways, Google is a company that collects data about people all over the world every day, all the time. Every Google search that you do, and there are millions of Google searches per second or something in the world, gives them the potential of applying the data that's collected on the public in all these complicated ways. It's really kind of a unique company in these respects. I think as a company that collects that kind of private data, they also have a certain obligation to society to ensure that that data isn't used in detrimental ways, and siding with a single military and using data that might be coming from users in countries where that military is operating, I think that's deeply problematic.

We as academics kind of lined up with the engineers and researchers at Google who were already protesting Google's involvement in this project. They were concerned about their involvement in the drone program. They were concerned about how this could be applied to autonomous weapons systems in the future. And they were generally concerned about Google's attempts to become a major military contractor: not just selling a simple service, like a word processor or search, which they do anyway, but actually developing customized systems to do military operations and analysis, and applying their engineering skills and resources to that.

So, we really joined together as academics to support those workers. The workers passed around an open letter and then we passed around our letter; the Google employees' letter received over 4000 signatures and our letter from academics received almost 1200, a few shy. So, we really got a lot of mobilization and awareness, and then Google agreed not to renew that contract. They're not dropping it, they're gonna continue it till the end of the year, but they have said that they will not renew it in the future.

Ariel: Is there anything else that you think is important to mention?

Peter: I wrote a piece last night for a report on human dignity, so I can give you a little blurb about that. I think the other interesting ethical question around autonomous systems is this question of the right to human dignity, and whether autonomous weapons, or allowing robots to kill people, would violate human dignity. Some people have a very simplistic notion of human dignity, that it's just some sort of aura or property that hangs around people and can be violated, but in fact I believe human dignity is a relation between people. This is a more Kantian view: human dignity means that you're respected by others as a human. Others respect your rights, which doesn't mean they can never violate them, but they have to have reasons and justifications that are sound in order to override your rights.

And in the case of human dignity, of course you can die in many terrible ways on a battlefield, but the question is whether the decision to kill you is justified, and if it's not, then it's an arbitrary killing. That means there are no reasons for it. If you look at the writings of the Special Rapporteur on extrajudicial, summary or arbitrary executions, he's written some interesting papers on this, the essence of which is that all killing by autonomous weapons would be arbitrary in this legal sense, because these systems don't have access to reasons for killing you, so they can't know that it's actually justified to use lethal force in a given situation.

And that's because they're not reasoning in the same way that we are, but it's also because they're not human moral agents, and it's important in a sense that they be human, because human dignity is something that we all lose when it's violated. So, if you look at slavery or you look at torture, it's not simply the person who's being tortured or enslaved who is suffering, though of course they are, but it is in fact all of us who lose a certain value of human life and human dignity by the very existence of slavery or torture, and the acceptance of that.

In a similar way, if we accept the killing of humans by machines, then we're really diminishing the nature of human dignity and the value of human life, in a broad sense that affects everybody, and I think that's really true, and I think we really have to think about what it means to have human control over these systems to ensure that we're not violating the rights and dignity of people when we're engaged in armed conflict.

Ariel: Excellent. I think that was a nice addition. Thank you so much for taking the time to do this today.

We covered a lot of ground in these interviews, and yet we still only scratched the surface of what’s going on in the debate on lethal autonomous weapons. If you want to learn more, please visit autonomousweapons.org and check out the research and reports page. On the FLI site, we’ve also addressed some of the common arguments we hear in favor of lethal autonomous weapons, and we explain why we don’t find those arguments convincing. And if you want to learn even more, of course there’s the Campaign to Stop Killer Robots website, ICRAC has a lot of useful information on their site, and Article 36 has good information, including their report on meaningful human control. And if you’re also concerned about a future with lethal autonomous weapons, please take a moment to sign the pledge. You can find links to the pledge and everything else we’ve talked about on the FLI page for this podcast.

I want to again thank Paul, Toby, Richard, Mary, Bonnie and Peter for taking the time to talk about their work with LAWS.

If you enjoyed this show, please take a moment to like it, share it and maybe even give it a good review. I’ll be back again at the end of next month discussing global AI policy. And don’t forget that Lucas Perry has a new podcast on AI value alignment, and a new episode from him will go live in the middle of the month.
