
Autonomous Weapons: An Interview With the Experts – Heather Roff and Peter Asaro

Published
November 30, 2016

FLI’s Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capabilities. Asaro is a philosopher of science, technology, and media at The New School in New York City. He looks at fundamental questions of responsibility and liability with all autonomous systems, but he’s also the Co-Founder and Vice-Chair of the International Committee for Robot Arms Control and a Spokesperson for the Campaign to Stop Killer Robots.

The following interview has been edited for brevity, but you can read it in its entirety here or listen to it above.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Transcript

ARIEL: I’m Ariel Conn with the Future of Life Institute. Today, I have the privilege of talking with Drs. Heather Roff and Peter Asaro, two experts in the field of autonomous weapons. Dr. Roff is a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford. She recently compiled a database of weapons systems that exhibit some level of autonomous capabilities to provide the international community a better understanding of the current state of autonomous weapons. Dr. Asaro is a philosopher of science, technology, and media at The New School in New York City. He looks at fundamental questions of responsibility and liability with all autonomous systems, but he’s also the Co-Founder and Vice-Chair of the International Committee for Robot Arms Control and a Spokesperson for the Campaign to Stop Killer Robots.

Dr. Roff, I’d like to start with you first. With regard to the database, what prompted you to create it, what information does it provide, how can we use it?

HEATHER: The main impetus behind the creation of the database was a feeling that the same autonomous – or at least the same semi-autonomous or automated – weapons systems were brought out in discussions over and over and over again. It was the same two or three systems that everybody would talk about, and it basically made it seem like there wasn’t anything else to worry about. So in 2015, at the United Nations Convention on Conventional Weapons Informal Meeting of Experts on Lethal Autonomous Weapons Systems, the International Committee of the Red Cross, the ICRC, came out and basically said, “we need more data. We need to know what we have, what we are already working with, and we need help understanding the state of play.” And so it was their clarion call to say, “ok, you want the data, let’s sit down and try to look at what we can through publicly available information.” That was really the impetus: to provide policy makers, NGOs, and anybody who had a desire to know what we currently field and the state of automation on those systems.

So I created a database of about 250 currently deployed systems. There are some weapons systems in there that are developmental, but they’re noted as such, and they’re included primarily to show where trajectories are going. So you can see what is currently fielded and deployed, and then you can see developmental systems and the difference between the two. The dataset looks at the top five weapons-exporting countries – Russia, China, the United States, France and Germany – as noted by the Stockholm International Peace Research Institute. SIPRI does a lot of work on arms control and the arms trade, and so I basically have taken their work on who’s sending what to whom, and according to them these are the top five exporters, as well as the weapons manufacturers within each country. I’m looking at major sales and major defense industry manufacturers from each country. And then I look at all the systems that are presently deployed by those countries and manufactured by those top manufacturers, and I code them along a series of about 20 different variables. Everything from: do they have automatic target recognition, do they have the ability to fly to a position or to navigate with GPS, do they have acquisition capabilities, are they in an air domain, are they in a sea domain, what is their date of operation, their initial date of deployment. And then I have some contextual information about what the system really is, and who makes it and where it is.

And then once I have all of those data points put together, I combine them in these different indices. One index is about self-mobility: how well the system can move around in its environment on its own. I have another index of the different capabilities as they relate to what I call self-direction. This is about how well it can target on its own – once it’s given an order to do x, y, or z, or a set of locations or coordinates, how well it can get there by itself, and find that location, find that target. And then there’s another index related to what I call self-determination, and this is about more robust cognitive capacities: changing one’s goals, or being able to update one’s goals or mission objectives, if you want to call them that, and planning. So I code along all of these, and then I weight them in different ways to normalize them. And then you see where existing systems lie on any one of these indices.
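To make the index construction concrete, here is a minimal sketch of how scores like these might be computed, assuming hypothetical variable names, groupings, and equal weights; the actual coding scheme and weighting used in the database are not spelled out in this interview.

```python
# Illustrative sketch only: hypothetical variables, groupings, and equal weights,
# not the actual coding scheme used in Roff's database.

# Each system is coded along a set of variables (1 = capability present, 0 = absent).
system = {
    "automatic_target_recognition": 1,
    "gps_navigation": 1,
    "target_acquisition": 0,
    "self_mobility_air": 1,
    "goal_updating": 0,
    "mission_planning": 0,
}

# Hypothetical groupings of coded variables into the three indices described above.
indices = {
    "self_mobility": ["gps_navigation", "self_mobility_air"],
    "self_direction": ["automatic_target_recognition", "target_acquisition"],
    "self_determination": ["goal_updating", "mission_planning"],
}

def index_score(coded_system, variables):
    """Average the coded values for an index, normalizing to the 0-1 range."""
    return sum(coded_system.get(v, 0) for v in variables) / len(variables)

for name, variables in indices.items():
    print(f"{name}: {index_score(system, variables):.2f}")
```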

That’s kind of what that’s doing, and it’s allowing everyone to understand that autonomy isn’t just binary. It’s not a yes or a no. It’s actually, in my view, something that is an emergent property, but also something that’s deeply rooted in automation. And so we have to understand what kinds of things we’re automating, and what kinds of tasks we’re automating. Whether it’s GPS, navigation, mobility, target ID, or fire control – there are all sorts of these individualized capabilities that we have to slice and dice to have a better, finer-grained understanding of what we mean when we’re talking about an autonomous system.

Because I’m looking at presently deployed systems, a lot of these are legacy systems that have been in use for decades. What they’ve done over the years is to upgrade various components on the system, so these aren’t brand new weapons. At least in the US case we call them block systems – there’s block 1, block 2, block 3, and the block 3 systems are the most advanced, with the most upgrades on them. In that sense, this is what this project is able to do; it’s able to say, ‘look, what did we start with in 1967, and what do we have now? And then where does autonomy come into play? And how is autonomy different, if it is at all, from what we’re currently doing?’ And so that’s kind of what this data project is attempting to do. It’s attempting to say, ‘look, not many people in the world have a good understanding of what modern militaries fight with, and how they fight, and how much of that fighting is done over the horizon with smart munitions that have automatic capabilities already.’ So then the question is, if we have these in play now, then what is different with autonomous weapons?

ARIEL: Excellent. And Dr. Asaro, I want to move to you quickly as well. Your research is about liability of general autonomous systems, not necessarily about autonomous weapons, but you do work with autonomous weapons, and I’m interested to know how liability plays a role in this. How is it different for autonomous weapons versus a human overseeing a drone that just accidentally fires on the wrong target?

PETER: Right. So my work looks at a number of different aspects of both autonomous weapons and other kinds of autonomous systems, really looking at the interface of the ethical and legal aspects. Specifically, when you have autonomous weapons, questions about the ethics of killing and the legal requirements under international law for killing in armed conflict are really central to that. So I have a number of articles over the last decade or so looking at different aspects of this problem. I think the strongest ethical issue, at the root of it, is that these kinds of autonomous systems are not really legal and moral agents in the same way that humans are, and so delegating the authority to kill to them is unjustifiable. So then we can really look at different systems in terms of why we have systems of accountability and responsibility and liability, and legal frameworks and ethical frameworks, and try to understand what it means to have autonomous agents who can do these sorts of material acts or make decisions about what to target and fire a weapon at. And our notions of human ethics and legal requirements don’t really apply in the same way to these new kinds of autonomous agents, whether those are AIs or self-driving cars or autonomous weapons. So if you start to really get into it, there are different senses of responsibility and accountability.

So one aspect of accountability is, if a mistake is made, holding people to account for that mistake. And there’s a certain sort of feedback mechanism to try to prevent that error occurring in the future. But there’s also a justice element, which could be attributive justice, in which you try to make up for loss. And other forms of accountability really look at punishment in and of itself. And then when you have autonomous systems, you can’t really punish the system. If you want to reform the system, then you really need to look at the people who are designing that system, and make sure that the feedback is getting to them. But if you’re trying to look at punishment, it might be the commander of a system. More importantly, if nobody really intended the effect that the system brought about, then it becomes very difficult to hold anybody accountable for the actions of the system. If there’s no intention, you can’t convict somebody of a war crime, for instance. And most of our legal frameworks for criminal activity are about punishment and really depend on intention.

So I guess, if you take a broader view of it, the debate that’s going on at the United Nations under the Convention on Conventional Weapons is really framed around this question of the accountability gap. If you allow these kinds of autonomous weapons to be fielded, it’s not clear who’s going to be accountable when they do make mistakes. You can impose strict liability or something like that and just say, well, we’re just going to treat it as if whoever deployed a system is fully responsible for what happens with that system, but in practice, we don’t actually do that. We don’t hold people responsible for unanticipated events. So whether that will actually play out legally or not is an open question, until we start to see different acts of these systems and what the legal frameworks are for trying to hold people accountable.

ARIEL: And so, one of the things that we hear a lot in the news is this idea of always keeping a human in the loop. And I’m curious, one, how does that play into the idea of liability, and two, is it even physically possible? And this question is for both of you: the idea of autonomous weapons working as a team with a human – what does that mean, realistically?

HEATHER: So, I think the human on the loop, the human in the loop, the human out of the loop, and all of that, is actually just a really unhelpful heuristic. I think it’s hindering our ability to think about what’s wrong with autonomous systems – or not necessarily what’s wrong, but what’s potentially risky or dangerous or might produce unintended consequences. Here’s an example: the UK’s Ministry of Defence calls this the empty hangar problem. What they mean is that it’s very unlikely that they’re going to walk down to an airplane hangar, look inside, and go, “hey! Where’s the airplane? Oh, it’s decided to go to war today.” That’s just not going to happen.

These systems are always going to be used by humans, and humans are going to decide to use them. So then the question is, well, how big is the loop? When we talk about human-in-the-loop systems, one person can assume that what you mean is that the loop is very small and very tight: the decision to employ force, the firing of the trigger, and then the actual effect of that force are very tightly bounded and constrained in time and space. That’s one way to think about that loop.

But another way to think about that loop is as a targeting process, which is a much broader loop. In the targeting process, at least within the United States and within NATO, you have what’s called a targeting doctrine, and this involves lots of people, the formation of various goals, the vetting of various targets, and the decision by human commanders to go over here, to do this, to do that. The use of a particular weapons system is part of the process that commanders undertake. And so if you say something like, “well, I’m talking about that loop,” you get yourself into a kind of conceptual worry when you say, “well, the system is in the loop.” Which loop are you talking about? So I actually think that the loop, in the loop, on the loop, out of the loop… it’s a heuristic that we’ve used to help us visualize what’s happening, but the consequence of that heuristic is that it’s shutting down our ability to see some of the nuance of autonomous technologies.

I would submit that a better way to think about this is in terms of task allocation. If we’re thinking that all autonomous systems are able to undertake a task by themselves and execute that task by themselves, then the question becomes: what is the content of the task, what is the scope of the task, and how much information and control does the human have before deploying that system to execute? If there is a lot of time and space and distance between the time the decision is made to field it and the application of force, there’s more time for things to change on the ground, there’s more time for the human to claim that they didn’t know what was going to happen or that they didn’t intend for this to happen, and then you get into this problem where you might not have anyone actually responsible for the outcome. And this is what Peter was talking about. You could call it strict liability, but under all kinds of legal regimes where we talk about liability, there usually has to be some sort of intent to cause harm. The other part of this is to say, “well, ok, but we do have some regimes, legal and moral regimes, where intent doesn’t matter,” and that would be strict liability. But we don’t really like strict liability in war, and we don’t have a doctrine of negligence in war. We don’t have tort law in the law of armed conflict, for example. Recklessness and negligence are sometimes not sufficient to actually hold anybody responsible. So I think that’s where Peter’s work is really interesting. And if we think about autonomy as related to a task, then we can say, “well, we’re ok with some tasks, but we’re not ok with others.” But I think it’s much more coherent to talk about the task that you’re giving or handing over or authorizing the machine or the system to do, than to ask whether this system fits into one of these categories of in the loop or out of the loop and where the human is in relation to that. Because it just begs too many questions.

ARIEL: Dr. Asaro, can you weigh in on some of that as well?

PETER: [laughing] I think Heather stated that very well. I don’t know how much to add to that. Again, there’s this sort of after-the-fact liability, which is what a lot of legal structures are looking at, and, as she said, there’s no tort law in war. This is different for autonomous vehicles. If self-driving cars start running people over, people will sue the company, the manufacturer, and over time the safety of those kinds of systems will improve. But there are no mechanisms in international law for the victims of bombs and missiles and potentially autonomous weapons to sue the manufacturers of those systems. That just doesn’t happen. So there’s no incentive for companies who manufacture those weapons to improve the safety and performance of the systems based on those kinds of mistakes – only if the militaries who deploy them put pressure on the developers of those systems, and then you have to ask, “well, hold on, where does that pressure come from?” So it is very different in the private sphere.

ARIEL: So, I guess I’m a little bit curious, now that you mention that, where would an accident most likely happen? I mean, would it happen at the development stages with the company itself, or would it happen when the military has deployed it and it just didn’t get deployed properly or there was some sort of misunderstanding? What are some of the processes that could happen there?

PETER: That’s a huge scope of possibilities, right? These are incredibly complex systems, and so they’re liable to failures and breakdowns at many different levels of development and deployment. Even traditional munitions fail in all kinds of ways, and vehicles break down. The question is when those failures become severe and have severe impacts on civilians and civilian infrastructure. So there are all sorts of scenarios we can think of, and there have already been testing incidents with actual weapons systems. The US tested a Harpoon missile in the 1980s that tracked onto a merchant vessel in the area and killed some people aboard. There was an automatic weapon system test in South Africa in the early 2000s that went awry and actually shot a bunch of people in the observation stands and killed quite a few people. So, you know, that can happen very early in testing. But I think the bigger worry is when you have a large deployed fleet of these kinds of weapons, and they could commit major atrocities, and then who would be accountable for that? It could provide a way for people to commit atrocities without being legally liable for them. I think that’s the real worry, ultimately – that you could use it as a way to get out of any legal responsibility by saying, “well, gee, the robots just did that that day. Sorry,” and there not being a mechanism to hold them accountable.

HEATHER: Yeah. I would add onto Peter’s response that there are also ways to start to think about how the systems would interact with one another. If you view an autonomous weapons system as a single platform that’s not connected to any other systems, that has a limited payload, and that can’t reload, then the opportunity for serious harm goes down. But if you think of them as networked systems in a larger system of systems, or systems of systems of systems, then things start to look a little bit different. And you could think about various areas of abuse: at the design phase, you could think about how these systems are tested, verified and validated, and whether or not the way that’s done is reflective of reality – whether there’s high fidelity between the simulation and the conditions in which they’re going to be deployed, and whether or not the simulation is going to account for the interactions between networked systems. So that’s one way to think about how you might see some failures. The other kind of failure I would talk about is negligent entrustment, or overtrusting the capabilities of the system. So the commander fields the system thinking it can do things it can’t actually do, and thus the system tracks onto something that is a civilian object, or goes awry. You could also think about people not being trained on the system correctly, or the operators of the system not understanding it. And I think this gets more at your question of human-robot teaming, because when you start to see humans and robots team together in these various exercises, I think the operator/handler side of that is going to be increasingly important, and the commander is going to have to evaluate the human and the robot as a unit – and then you can have failings of the robot or failings of the human or both together. So there are various locations or possibilities for failure when we’re talking about networked systems and when we’re talking about humans and machines working together as a unit. Divorcing ourselves from the idea that an autonomous weapons system is a unitary platform acting in isolation will, again, help us see where those points of failure may exist.

ARIEL: Dr. Asaro, going back to you, if we’ve got issues even just defining this, how does the liability play in there?

PETER: I think one of the crucial things to keep in mind and I think Heather touched on a little bit when she was talking about how operators might overly rely on the capabilities of a system, is this sort of forward psychology of how people make decisions. I think the law of international armed conflict is pretty clear that humans are the ones who make the decisions, especially about a targeting decision or the taking of a human life in armed conflict. And there’s again questions of how exactly do you scale a decision, what’s the extent of a decision in some sense, but it’s pretty clear that the humans are the ones who make the decisions and thus also have to take the responsibility to ensure that civilians are being protected in those decisions and that’s for each individual attack as it’s framed in international law.

So this question of having a system that could range over many miles and many days and select targets on its own is where I think things are clearly problematic. So I think part of the definition is how you figure out exactly what constitutes a targeting decision and how you ensure that a human is making that decision. And I think that’s really the direction the discussion at the UN is going as well: instead of trying to define what’s an autonomous system or what’s the boundary of the system, the focus is really on the targeting and firing decisions of weapons for individual attacks, and then saying that what we have, and what we want to require, is meaningful human control over those decisions. So you have to positively demonstrate, however your system works, however it utilizes autonomous functions, that humans are meaningfully controlling the targeting and firing of individual attacks. I think that cuts through a lot of the confusion really quickly, without having to settle some of these much bigger issues about what is autonomy or what is the limit of a system, when all of these systems are embedded in larger systems, and even human decisions are embedded in larger human institutions of decision-making and chains of command. But those chains of command have been built to make very explicit who has what responsibility for which decisions at what time. It really is a social solution to this problem, historically. Now we have technologies that are capable of replacing humans, and I think this is what’s short-circuiting some of our traditional ethical and legal thinking, because we never believed that machines were going to be able to do that, and now that they could, it plays with a lot of our other assumptions about human responsibility and leadership.

ARIEL: So I’m really interested in this idea of meaningful control, because I spoke with Dr. Roff many months ago and one of the things that stood out, and I still remember from the conversation, was the idea of, what do we mean by control and what do we mean by meaningful control specifically? And so Dr. Roff, like I said it’s been quite a few months since I spoke with you about it, I don’t know if you’ve got any better answers about how you think of this, or if you think it’s still not as well defined as we would like?

HEATHER: Yeah. So, I’ve done some work with Article 36, the NGO that coined the phrase “meaningful human control,” and one of the outputs of the project was a framing paper, a concept paper, of what meaningful human control really looks like. And the way we’ve approached it is through kind of a nested doll approach, if you will.

So if you think of three concentric circles, you could have one set of circles that are antebellum measures of control – so this would be all the things that we would do before a state or a party engages in hostilities. This can be everything from how you train your soldiers to the weapons review processes that you undertake to make sure that you have systems that comply with IHL (International Humanitarian Law). You could have command and control structures, communication links, you could do drills and practices – all of these things that attempt to control the means and methods of violence, as well as the weapons and the personnel, when you deploy force. So that’s one way to think about it.

Then there are in bello processes, during the state of hostilities: the laws of armed conflict, as well as rules of engagement, as well as the use of particular weapons – when it’s appropriate to use certain weapons rather than others – so principles of precaution and necessity would reign here. And also things like command responsibility: if something goes wrong during an attack, or if a commander orders an illegal attack, there is also some sort of mechanism by which one can find loci of accountability or responsibility for the use of violence during armed conflict. And then there’s another wider set of concerns: if, after the end of hostilities, we find that there need to be postbellum accountability measures, those measures would be put in place – something like the ICC, something like military tribunals, like Nuremberg, where we want to hold states as well as leaders and individuals accountable for the crimes that they commit during war. That’s the total notion of meaningful control. And then you think about, well, where do autonomous weapons fit into that? That’s at the level of attack, as Peter alluded to, so it’s meaningful human control over direct attacks in the in bello instance. A commander is obligated under IHL to undertake proportionality and precaution. She needs to make that proportionality calculation. She needs to have good foresight and good information in order to uphold her obligation under IHL. If she fields a weapon that can go from attack to attack without checking back with her, then the situation has changed, and the weapon is de facto making the proportionality calculation, and she has de facto delegated her authority and her obligation to a machine. That is prohibited under IHL, and I would say is also morally prohibited, and I would say is frankly conceptually impossible too: you can’t offload your moral obligation onto a non-moral agent.

So that’s where I think our work on meaningful human control is: a human commander has a moral obligation to undertake precaution and proportionality in each attack. Now that attack doesn’t have to be every single time a bullet is fired. It can be a broader notion of attack. But that is up for states to figure out what they mean by the concept of attack. But there are certain thresholds that are put on a system and put on the ability of a system to function when you say that there is meaningful human control over the attack, in time and space particularly.

ARIEL: So we’re running a little bit short on time, and I’ve got two questions that I still wanted to ask you both. One is, obviously, this is an international issue, and I’m curious how the international community is responding to issues of autonomous weapons, and how that coincides with what the US is doing – or how maybe it doesn’t coincide with what the US is doing.

PETER: So, in terms of the international discussion, we’ve had three meetings of experts at the CCW at the UN. The Human Rights Council has also brought up the issue a number of times through the Special Rapporteur on extrajudicial, summary or arbitrary executions. There have also been statements made at the UN General Assembly, but small ones, and at previous meetings. So there is discussion going on, and it’s hopefully going to move to a more formal stage this year. We’ll find out in a few weeks, when the CCW has a review conference in December, whether they’ll hold a three-week set of governmental experts meetings, which would be the prelude to treaty negotiation meetings – we hope. But we’re still waiting to see whether that meeting will happen and what the mandate coming out of that meeting will turn out to be. And I think a lot of the discussion in that venue has been, and will continue to be, focused on this question of meaningful human control and what that constitutes.

My own view is that each of those three words is crucially important. Control means you actually have to have a human who has positive control over a system and could call off the system if necessary, and the system is not out of control in the sense that nobody can do anything about it. And the meaningful part is really the crucial one, which means people, as Heather mentioned, have to be able to think through the implications of their action and take responsibility for that action. So they have to have enough situational awareness of where that system is and what’s happening. If a system is designating something as a potential target, they have to be able to evaluate that against some criteria, beyond just the fact that it was nominated by a system whose workings they don’t understand, or where there’s too much opacity. You don’t want someone sitting in a room somewhere who, every time a light comes on, presses a button to authorize a machine to fire, because they don’t really have any information about that situation and the appropriateness of using force. And I think that’s why we have humans involved in these decisions – because humans actually have a lot of capability for thinking through meaning and the implications of actions and whether they’re justified in a legal and moral sense.

ARIEL: In just the last minute or two, is there anything else that you think is important to add?

HEATHER: Well, I think from FLI’s perspective, on the notion of artificial intelligence in these systems, it’s important to realize that we have limitations in AI. We have really great applications of AI, and we have blind spots in applications of AI, and I think it would be really incumbent on the AI community to be vocal about where they think there are capacities and capabilities that could be reliably and predictably deployed on such systems. And if they don’t think that those technologies or those applications could be reliably and predictably deployed, then they need to stand up and say as much.

PETER: Yeah, and I would just add that we’re not trying to prohibit autonomous operations of different kinds of systems, or the development and application of artificial intelligence for a wide range of civilian and military applications. But there are certain applications, specifically the lethal ones – the ones that involve the use of force and violence – that have higher standards of moral and legal requirements that need to be met. And we don’t want to just automate those blindly without really thinking through the best ways to regulate them, and to ensure going forward that we don’t end up with the worst possible applications of these technologies being realized, and instead use them to maximal effect in conjunction with humans who are able to make proper decisions and retain control over the use of violence and force in international conflict.

HEATHER: Ditto

ARIEL: Dr. Asaro and Dr. Roff, thank you very much for joining us today.

PETER: Thank you.

HEATHER: Sure.
