Transcript: Autonomous Weapons: an Interview With the Experts

ARIEL

I’m Ariel Conn with the Future of Life Institute. Today, I have the privilege of talking with Drs. Heather Roff and Peter Asaro, two experts in the field of autonomous weapons. Dr. Roff is a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford. She recently compiled a database of weapons systems that exhibit some level of autonomous capabilities to give the international community a better understanding of the current state of autonomous weapons. Dr. Asaro is a philosopher of science, technology, and media at The New School in New York City. He looks at fundamental questions of responsibility and liability with all autonomous systems, but he’s also the Co-Founder and Vice-Chair of the International Committee for Robot Arms Control and a spokesperson for the Campaign to Stop Killer Robots.

 

Dr. Roff, I’d like to start with you first. With regard to the database, what prompted you to create it, what information does it provide, and how can we use it?

 

HEATHER

The main impetus behind the creation of the database was a feeling that the same autonomous – or at least the same semi-autonomous or automated – weapons systems were brought out in discussions over and over and over again. It was the same two or three systems that everybody would talk about, consistently trotted out as examples, and it basically made it seem like there wasn’t anything else to worry about. So in 2015 at the United Nations Convention on Conventional Weapons Informal Meeting of Experts on Lethal Autonomous Weapons Systems, the International Committee of the Red Cross, the ICRC, came out and basically said, “we need more data. We need to know what we have, what we are already working with, and we need help understanding the state of play.” And so it was their kind of clarion call to say, “ok, you want the data, let’s sit down and try to look at what we can through publicly available information.” That was really the impetus: to provide policy makers, NGOs, and anybody who had a desire to know what we currently field and the state of automation on those systems.

 

So I created a database of about 250 systems that are currently deployed. There are some weapons systems in there that are developmental, but they’re noted as such, and they’re included primarily to show where trajectories are going. So you can see what’s currently fielded and deployed, and then you can see developmental systems and the difference between the two. The dataset looks at the top five weapons-exporting countries, so that’s Russia, China, the United States, France and Germany, as noted by the Stockholm International Peace Research Institute. SIPRI does a lot of work on arms control and the arms trade, and so I basically have taken their work on who’s sending what to whom – according to them, these are the top five exporters – as well as the weapons manufacturers within each country. I’m looking at major sales and major defense industry manufacturers from each country. And then I look at all the systems that are presently deployed by those countries that are manufactured by those top manufacturers, and I code them along a series of about 20 different variables. Everything from, do they have automatic target recognition, do they have the ability to fly to a position or to navigate with GPS, do they have acquisition capabilities, are they in an air domain, are they in a sea domain, what is their date of operation, their initial date of deployment. And then I have some contextual information about what the system really is, and who makes it and where it is.

 

And then once I have all of those data points put together, I combine them in these different indices. One index is about self-mobility, so how well the system can move around in its environment on its own. I have another index of the different capabilities as they relate to what I call self-direction. This is about how well it can target on its own, so once it’s given an order to do x, y, or z, or a set of locations or coordinates, how well it can get there by itself, and find that location, find that target. And then there’s another index related to what I call self-determination, and this is about more cognitive capacities: changing one’s goals or being able to update one’s goals, or mission objectives if you want to call it that, planning, and these more robust cognitive capacities. So I code along all of these, and then I weight them in different ways to normalize them. And then you see where existing systems lie on any one of these indices.

 

That’s kind of what that’s doing, and it’s allowing everyone to understand that autonomy isn’t just binary. It’s not a yes or a no. It’s actually, in my view, something that is an emergent property, but also something that’s deeply rooted in automation. And so we have to understand what kinds of things we’re automating, and what kinds of tasks we’re automating. Whether it’s GPS, navigation, mobility, target ID, fire control – there are all sorts of these individualized capabilities that we have to slice and dice to have a better, finer-grained understanding of what we mean when we’re talking about an autonomous system.

 

Because I’m looking at presently deployed systems, a lot of these are legacy systems, so these are systems that have been in use for decades. What they’ve done over the years is to actually upgrade various components on the system. So these aren’t brand new weapons. At least in the US case we call them block systems, so there’s block 1, block 2, block 3, and the block 3 systems are the most advanced, and they have the most upgrades on them. In that sense, this is what this project is able to do; it’s able to say ‘look, what did we start with in 1967, and what do we have now? And then where does autonomy come into play? And how is autonomy different, if it is at all, from what we’re currently doing?’ And so that’s kind of what this data project is attempting to do. It’s attempting to say ‘look, not many people in the world have a good understanding of what modern militaries fight with, and how they fight, and how much of that fighting is done over the horizon with smart munitions that have automatic capabilities already.’ So then the question is, if we have these in play now, then what is different with autonomous weapons?

 

ARIEL

Excellent. And Dr. Asaro, I want to move to you quickly as well. Your research is about liability of general autonomous systems, not necessarily about autonomous weapons, but you do work with autonomous weapons, and I’m interested to know how liability plays a role in this. How is it different for autonomous weapons versus a human overseeing a drone that just accidentally fires on the wrong target?

 

PETER

Right. So my work looks at a number of different aspects of both autonomous weapons and other kinds of autonomous systems, really looking at the interface of the ethical and legal aspects – specifically, when you have autonomous weapons, questions about the ethics of killing and the legal requirements under international law for killing in armed conflict are really central to that. So I have a number of articles over the last decade or so looking at different aspects of this problem. I think the strongest ethical issue, at the root of it, is that these kinds of autonomous systems are not really legal and moral agents in the same way that humans are, and so delegating the authority to kill to them is unjustifiable. So then we can really look at different systems in terms of why we have systems of accountability and responsibility and liability, and legal frameworks and ethical frameworks, and try to understand what it means to have autonomous agents who can do these sorts of material acts or make decisions about what to target and fire a weapon at. And our notions of human ethics and legal requirements don’t really apply in the same way to these new kinds of autonomous agents, whether those are AIs or self-driving cars or autonomous weapons. So if you start to really get into it, there are different senses of responsibility and accountability.

 

So one aspect of accountability is, if a mistake is made, holding people to account for that mistake. And there’s a certain sort of feedback mechanism to try to prevent that error from occurring in the future. But there’s also a justice element, which could be attributive justice, in which you try to make up for loss. And other forms of accountability really look at punishment in and of itself. And then when you have autonomous systems, you can’t really punish the system. If you want to reform the system, then you really need to look at the people who are designing that system, and making sure that the feedback is getting to them. But if you’re trying to look at punishment, it might be the commander of a system. But more importantly, if nobody really intended the effect that the system brought about, then it becomes very difficult to hold anybody accountable for the actions of the system. If there’s no intention, you can’t convict somebody of a war crime, for instance. And most of our legal frameworks for criminal activity are built around punishment and really depend on intention.

 

So I guess, if you take a broader view of it, the debate that’s going on at the United Nations under the Convention on Conventional Weapons is really framed around this question of the accountability gap. If you allow these kinds of autonomous weapons to be fielded, it’s not clear who’s going to be accountable when they do make mistakes. You can impose strict liability or something like that and just say, well, we’re just going to treat it as if whoever deployed the system is fully responsible for what happens with that system, but in practice, we don’t actually do that. We don’t hold people responsible for unanticipated events. So whether that will actually play out legally or not is an open question, until we start to see different acts of these systems and what the legal frameworks are for trying to hold people accountable.

 

ARIEL

And so, one of the things that we hear a lot in the news is this idea of always keeping a human in the loop. And I’m sort of curious, one, how does that play into the idea of liability, and two, is it even physically possible? And this question is for both of you – the idea of autonomous weapons working as a team with a human – what does that mean, realistically?

 

HEATHER

So, I think the human on the loop, the human in the loop, the human out of the loop and all of that stuff – I actually think that this is just a really unhelpful heuristic. I think it’s actually hindering our ability to think about what’s wrong with autonomous systems, or not necessarily what’s wrong but what’s potentially risky or dangerous or might produce unintended consequences. Here’s an example: the UK’s Ministry of Defense calls this the Empty Hangar Problem. And what they mean is that it’s very unlikely that they’re going to walk down to an airplane hangar, look inside, and be like “hey! Where’s the airplane? Oh, it’s decided to go to war today.” That’s just not going to happen.

 

These systems are always going to be used by humans, and humans are going to decide to use them. And so then the question is, well, how big is the loop? When we’re talking about human-in-the-loop systems, one person can hear “human in the loop” and assume that the loop is very small and very tight: the decision to employ force, the pulling of the trigger, and then the actual effect of that force are all tightly bounded and constrained in time and space. That’s one way to think about that loop.

 

But another way to think about that loop is in a targeting process, which is a much broader loop. In the targeting process, at least within the United States and within NATO, you have what’s called a targeting doctrine, and this involves lots of people and the formation of various goals, and the vetting of various targets, and the decision by human commanders to go over here, to do this, to do that. The use of a particular weapons system is part of the process that commanders undertake. And so if you say something like, “well, I’m talking about that loop,” you get yourself into a kind of conceptual worry when you say, “well, the system is in the loop.” Which loop are you talking about? So I actually think that the loop – in the loop, on the loop, out of the loop – is a heuristic that we’ve used to help us visualize what’s happening, but the consequence of that heuristic is that it’s shutting down our ability to see some of the nuance of autonomous technologies.

 

I would submit that a better way to think about this is in terms of task allocation. If we’re thinking that all autonomous systems are able to undertake a task by themselves and execute that task by themselves, then the question becomes, what is the content of the task, what is the scope of the task, and how much information and control does the human have before deploying that system to execute? If there is a lot of time and space and distance between the time the decision is made to field it and the application of force, there’s more time for things to change on the ground, and there’s more time for the human to basically claim that they didn’t know what was going to happen or didn’t intend for this to happen, and then you get into this problem where you might not have anyone actually responsible for the outcome. And this is what Peter was talking about. You could call it strict liability, but under all kinds of legal regimes where we talk about liability, there usually has to be some sort of intent to cause harm. The other part of this is to say, “well, ok, but we do have some regimes, legal and moral regimes, where intent doesn’t matter,” and that would be strict liability. But we don’t really like strict liability in war, and we don’t have a doctrine of negligence in war. We don’t have tort law in the law of armed conflict, for example. Recklessness and negligence are sometimes not sufficient to actually hold anybody responsible. So I think that’s where Peter’s work is really interesting. And if we think about autonomy as related to a task, then we can say, “well, we’re ok with some tasks, but we’re not ok with others.” But I think it’s much more coherent to talk about the task that you’re giving or handing over or authorizing the machine or the system to do, than to say whether or not the system fits into one of these categories, in the loop or out of the loop, and where the human is in relation to that. Because it just begs too many questions.

 

ARIEL

Dr. Asaro, can you weigh in on some of that as well?

 

PETER

[laughing] I think Heather stated that very well. I don’t know how much to add to that. Again, there’s this sort of after-the-fact liability, which is what a lot of legal structures are looking at, and, you know, the fact that there’s no tort law in war. So this is different for autonomous vehicles. If self-driving cars start running people over, people will sue the company, the manufacturer. And they’ll improve, over time, the safety of those kinds of systems. But there are no mechanisms in international law for the victims of bombs and missiles and potentially autonomous weapons to sue the manufacturers of those systems. That just doesn’t happen. So there’s no incentive either for companies who manufacture those to necessarily improve the safety and performance of the systems based on those kinds of mistakes – only if the militaries who deploy them put pressure on the developers of those systems, and then you have to ask, well, where does that pressure come from? So it is very different in the private sphere.

 

ARIEL

So, I guess I’m a little bit curious, now that you mention that, where would an accident most likely happen? I mean, would it happen at the development stages with the company itself, or would it happen when the military has deployed it and it just didn’t get deployed properly or there was some sort of misunderstanding? What are some of the processes that could happen there?

 

PETER

That’s a huge scope of possibilities, right? These are incredibly complex systems, and so they’re liable to failures and breakdowns at many different levels of development and deployment. Even traditional munitions fail in all kinds of ways, and vehicles break down. The question is when those failures become severe and have severe impacts on civilians and civilian infrastructure. There are all sorts of scenarios we can think of, and there have already been testing incidents with actual weapons systems. The US tested a Harpoon missile in the 1980s that tracked onto a merchant vessel in the area and killed some people aboard. There was an automatic weapon system test in South Africa in the early 2000s that went awry and actually shot a bunch of people in the observation stands and killed quite a few people. So, you know, that can happen very early in testing. But I think the bigger worry is when you have a large deployed fleet of these kinds of weapons, and then they could commit major atrocities, and then who would be accountable for that? It could then provide a sort of way for people to commit atrocities without being legally liable for them. I think that’s the real worry, ultimately: that you could use it as a way to get out of any legal responsibility by saying, “well, gee, the robots just did that that day. Sorry.” And then there not being a mechanism to hold them accountable.

 

HEATHER

Yeah. I would add onto Peter’s response that there are ways to also start to think about how the systems would interact with one another. So, if you view an autonomous weapons system as a single platform that’s not connected to any other systems, that has a limited payload, and that can’t reload, then the opportunity for serious harm would go down. But if you think of them as networked systems in a larger system of systems, then things start to look a little bit different. And you could think about various areas of abuse: at the design phase, you could think about how these systems are tested, verified and validated, whether the way in which that’s done is reflective of reality, whether the simulation has high fidelity to where they’re going to be deployed, and whether or not the simulation is going to account for the interactions between networked systems. So that’s one way to think about how you might see some failures. The other kind of failure I would talk about is negligent entrustment, or overtrusting the capabilities of the system. So the commander fields the system thinking it can do things it can’t actually do, and thus the system tracks onto something that is a civilian object, or goes awry. You could also think about people not being trained on the system correctly, or the operators of the system not understanding it. And I think this gets more at your question of human-robot teaming, because when you start to see humans and robots team together in these various exercises, the operator/handler side of that is going to be increasingly important, and the commander is going to have to evaluate the human and the robot as a unit. Then you can have failings of the robot or failings of the human or both together. So I think there are various locations or possibilities for failure when we’re talking about networked systems and when we’re talking about humans and machines working together as a unit. Divorcing ourselves from the idea that an autonomous weapons system is a unitary platform acting in isolation will again help us to see where those points of failure may exist.

 

ARIEL

Dr. Asaro, going back to you, if we’ve got issues even just defining this, how does the liability play in there?

 

PETER

I think one of the crucial things to keep in mind – and I think Heather touched on this a little bit when she was talking about how operators might overly rely on the capabilities of a system – is this sort of forward psychology of how people make decisions. I think the law of international armed conflict is pretty clear that humans are the ones who make the decisions, especially about a targeting decision or the taking of a human life in armed conflict. And there are again questions of how exactly you scale a decision, what the extent of a decision is in some sense, but it’s pretty clear that the humans are the ones who make the decisions and thus also have to take the responsibility to ensure that civilians are being protected in those decisions, and that’s for each individual attack as it’s framed in international law.

 

So this question of having a system that could range over many miles and many days and select targets on its own is where I think things are clearly problematic. So I think part of the definition is how do you figure out exactly what constitutes a targeting decision, and how do you ensure that a human is making that decision. And I think that’s really the direction that the discussion at the UN is going as well: instead of trying to define what’s an autonomous system or what’s the boundary of the system, what we focus on is really the targeting and firing decisions of weapons for individual attacks. And then saying, what we have and what we want to acquire is meaningful human control over those decisions. And so you have to positively demonstrate, however your system works, however it utilizes autonomous functions, that humans are meaningfully controlling the targeting and firing of individual attacks. I think that cuts through a lot of the confusion really quickly, without having to settle some of these much bigger issues about what is autonomy or what is the limit of a system, when all of these systems are embedded in larger systems, and even human decisions are embedded in larger human institutions of decision-making and chains of command. But those chains of command have been built to make very explicit who has what responsibility for what decisions at what time. It really is a social solution to this problem historically. Now we have technologies that are capable of replacing humans, so I think this is what’s short-circuiting some of our traditional ethical and legal thinking, because we never believed that machines would be able to do that, and now that they could, it plays with a lot of our other assumptions about human responsibility and leadership.

 

ARIEL

So I’m really interested in this idea of meaningful control, because I spoke with Dr. Roff many months ago and one of the things that stood out, and I still remember from the conversation, was the idea of, what do we mean by control and what do we mean by meaningful control specifically? And so Dr. Roff, like I said it’s been quite a few months since I spoke with you about it, I don’t know if you’ve got any better answers about how you think of this, or if you think it’s still not as well defined as we would like?

 

HEATHER

Yeah. So, I’ve done some work with Article 36, the NGO that coined the phrase “meaningful human control,” and one of the outputs of the project was a framing paper, a concept paper, of what meaningful human control really looks like. And the way in which we’ve decided, or at least the way we’ve approached it, is through kind of a nested doll approach, if you will.

 

So think of three concentric circles. One set of circles is antebellum measures of control – so this would be all the things that we would do before a state or a party engages in hostilities. And this can be everything from how you train your soldiers to the weapons review processes that you undertake to make sure that you have systems that comply with IHL (International Humanitarian Law). You could have command and control structures, communication links, you could do drills and practices – all of these things that attempt to control the means and methods of violence, as well as the weapons and the personnel, when you deploy force. So that’s one way to think about it.

 

Then there are in bello processes, during the state of hostilities, and this would be something like the laws of armed conflict, as well as rules of engagement, as well as the use of particular weapons – when it’s appropriate to use some weapons rather than others – so principles of precaution and necessity would reign here. And also things like command responsibility: if something goes wrong during an attack, or if a commander orders an illegal attack, there is also some sort of mechanism by which one can find loci of accountability or responsibility for the use of violence during armed conflict. And then there’s another wider set of concerns: if, after the end of hostilities, we find that there needs to be postbellum accountability measures, those measures would be put in place – something like the ICC, something like military tribunals, like Nuremberg, where we want to hold states as well as leaders and individuals accountable for the crimes that they commit during war. That’s the total notion of meaningful control. And then you think about, well, where do autonomous weapons fit into that? That’s at the level of attack, as Peter alluded to, so it’s meaningful human control over direct attacks in the in bello instance. A commander is obligated under IHL to undertake proportionality and precaution. She needs to make that proportionality calculation. She needs to have good foresight and good information in order to uphold her obligation under IHL. If she fields a weapon that can go from attack to attack without checking back with her, then the situation has changed and the weapon is de facto making the proportionality calculation, and she has de facto delegated her authority and delegated her obligation to a machine. That is prohibited under IHL, and I would say is also morally prohibited, and I would say is frankly conceptually impossible too: you can’t offload your moral obligation to a non-moral agent.

 

So that’s where I think our work on meaningful human control is: a human commander has a moral obligation to undertake precaution and proportionality in each attack. Now that attack doesn’t have to be every single time a bullet is fired. It can be a broader notion of attack. But that is up for states to figure out what they mean by the concept of attack. But there are certain thresholds that are put on a system and put on the ability of a system to function when you say that there is meaningful human control over the attack, in time and space particularly.

 

ARIEL

So we’re running a little bit short on time, and I’ve got two questions that I still wanted to ask you both. One is, obviously, this is an international issue, and I’m curious how the international community is responding to issues of autonomous weapons and how that coincides with what the US is doing, or how maybe it doesn’t coincide with what the US is doing.

 

PETER

So, I guess in terms of the international discussion, we’ve had three meetings of experts at the CCW at the UN. The Human Rights Council has also brought up the issue a number of times through the Special Rapporteur on extrajudicial, summary or arbitrary executions. There have also been statements made at the UN General Assembly and at previous meetings. So there is discussion going on. It’s hopefully going to move to a more formal stage this year. We’ll find out in a few weeks, when the CCW has a review conference in December, whether they’ll hold a three-week set of governmental experts meetings, which would be the prelude to treaty negotiation meetings – we hope. But we’re still waiting to see whether that meeting will happen and what the mandate to come out of that meeting will turn out to be. And I think a lot of the discussion in that venue has been and will continue to be focused on this question of meaningful human control and what that constitutes.

 

My own view: I look at each of those three words as being crucially important. Control means you actually have to have a human who has positive control over a system and could call off the system if necessary, and the system is not out of control in the sense that nobody can do anything about it. And the meaningful part is really the crucial one, which means people, as Heather mentioned, have to be able to think through the implications of their action and take responsibility for that action. So they have to have enough situational awareness of where that system is and what’s happening. If a system is designating something as a potential target, they need to be able to evaluate that against some criteria beyond just the fact that it was nominated by a system whose workings they don’t understand, or where there’s too much opacity. You don’t want someone sitting in a room somewhere who, every time a light comes on, presses a button to authorize a machine to fire, because they don’t really have any information about that situation and the appropriateness of using force. And I think that’s why we have humans involved in these decisions: because humans actually have a lot of capability for thinking through meaning and the implications of actions and whether they’re justified in a legal and moral sense.

 

ARIEL

In just the last minute or two, is there anything else that you think is important to add?

 

HEATHER

Well, I think from FLI’s perspective, on the notion of artificial intelligence in these systems, it’s important to realize that we have limitations in AI. We have really great applications of AI, and we have blind spots in applications of AI, and I think it would be really incumbent on the AI community to be vocal about where they think there are capacities and capabilities that could be reliably and predictably deployed on such systems. And if they don’t think that those technologies or those applications could be reliably and predictably deployed, then they need to stand up and say as much.

 

PETER

Yeah, and I would just add that we’re not trying to prohibit autonomous operations of different kinds of systems, or the development and application of artificial intelligence for a wide range of civilian and military applications. But there are certain applications, specifically the lethal ones, the ones that involve the use of force and violence, that actually have higher standards of moral and legal requirements that need to be met. And we don’t want to just automate those blindly without really thinking through the best ways to regulate that, and ensure going forward that we don’t end up with the worst possible applications of these technologies being realized, and instead use them to maximal effect in conjunction with humans who are able to make proper decisions and retain control over the use of violence and force in international conflict.

 

HEATHER

Ditto

 

ARIEL

Dr. Asaro and Dr. Roff, thank you very much for joining us today.

 

PETER

Thank you.

 

HEATHER

Sure.

Autonomous Weapons: an Interview With the Experts

FLI’s Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capabilities. Asaro is a philosopher of science, technology, and media at The New School in New York City. He looks at fundamental questions of responsibility and liability with all autonomous systems, but he’s also the Co-Founder and Vice-Chair of the International Committee for Robot Arms Control and a spokesperson for the Campaign to Stop Killer Robots.

The following interview has been edited for brevity, but you can read it in its entirety here or listen to it above.

ARIEL: Dr. Roff, I’d like to start with you. With regard to the database, what prompted you to create it, what information does it provide, how can we use it?

ROFF: The main impetus behind the creation of the database [was] a feeling that the same autonomous or automated weapons systems were brought out in discussions over and over and over again. It made it seem like there wasn’t anything else to worry about. So I created a database of about 250 autonomous systems that are currently deployed [from] Russia, China, the United States, France, and Germany. I code them along a series of about 20 different variables: from automatic target recognition [to] the ability to navigate [to] acquisition capabilities [etc.].

It’s allowing everyone to understand that autonomy isn’t just binary. It’s not a yes or a no. Not many people in the world have a good understanding of what modern militaries fight with, and how they fight.

ARIEL: And Dr. Asaro, your research is about liability. How is it different for autonomous weapons versus a human overseeing a drone that just accidentally fires on the wrong target?

ASARO: My work looks at autonomous weapons and other kinds of autonomous systems and the interface of the ethical and legal aspects. Specifically, questions about the ethics of killing, and the legal requirements under international law for killing in armed conflict. These kind of autonomous systems are not really legal and moral agents in the way that humans are, and so delegating the authority to kill to them is unjustifiable.

One aspect of accountability is, if a mistake is made, holding people to account for that mistake. There’s a feedback mechanism to prevent that error occurring in the future. There’s also a justice element, which could be attributive justice, in which you try to make up for loss. Other forms of accountability look at punishment itself. When you have autonomous systems, you can’t really punish the system. More importantly, if nobody really intended the effect that the system brought about then it becomes very difficult to hold anybody accountable for the actions of the system. The debate — it’s really kind of framed around this question of the accountability gap.

ARIEL: One of the things we hear a lot in the news is about always keeping a human in the loop. How does that play into the idea of liability? And realistically, what does it mean?

ROFF: I actually think this is just a really unhelpful heuristic. It’s hindering our ability to think about what’s potentially risky or dangerous or might produce unintended consequences. So here’s an example: the UK’s Ministry of Defense calls this the Empty Hangar Problem. It’s very unlikely that they’re going to walk down to an airplane hangar, look in, and be like, “Hey! Where’s the airplane? Oh, it’s decided to go to war today.” That’s just not going to happen.

These systems are always going to be used by humans, and humans are going to decide to use them. A better way to think about this is in terms of task allocation. What is the scope of the task, and how much information and control does the human have before deploying that system to execute? If there is a lot of time, space, and distance between the time the decision is made to field it and then the application of force, there’s more time for things to change on the ground, and there’s more time for the human to basically [say] they didn’t intend for this to happen.

ASARO: If self-driving cars start running people over, people will sue the manufacturer. But there are no mechanisms in international law for the victims of bombs and missiles and potentially autonomous weapons to sue the manufacturers of those systems. That just doesn’t happen. So there are no incentives for companies that manufacture those [weapons] to improve safety and performance.

ARIEL: Dr. Asaro, we’ve briefly mentioned definitional problems of autonomous weapons — how does the liability play in there?

ASARO: The law of international armed conflict is pretty clear that humans are the ones that make the decisions, especially about a targeting decision or the taking of a human life in armed conflict. This question of having a system that could range over many miles and many days and select targets on its own is where things are problematic. Part of the definition is: how do you figure out exactly what constitutes a targeting decision, and how do you ensure that a human is making that decision? That’s the direction the discussion at the UN is going as well. Instead of trying to define what’s an autonomous system, what we focus on is the targeting decision and firing decisions of weapons for individual attacks. What we want to acquire is meaningful human control over those decisions.

ARIEL: Dr. Roff, you were working on the idea of meaningful human control, as well. Can you talk about that?

ROFF: If [a commander] fields a weapon that can go from attack to attack without checking back with her, then the weapon is making the proportionality calculation, and she [has] delegated her authority and her obligation to a machine. That is prohibited under IHL, and I would say is also morally prohibited. You can’t offload your moral obligation to a nonmoral agent. So that’s where our work on meaningful human control is: a human commander has a moral obligation to undertake precaution and proportionality in each attack.

ARIEL: Is there anything else you think is important to add?

ROFF: We still have limitations in AI. We have really great applications of AI, and we have blind spots. It would be really incumbent on the AI community to be vocal about where they think there are capacities and capabilities that could be reliably and predictably deployed on such systems. If they don’t think that those technologies or applications could be reliably and predictably deployed, then they need to stand up and say as much.

ASARO: We’re not trying to prohibit autonomous operations of different kinds of systems or the development and application of artificial intelligence for a wide range of civilian and military applications. But there are certain applications, specifically the lethal ones, that have higher standards of moral and legal requirements that need to be met.

 

The Problem of Defining Autonomous Weapons

What, exactly, is an autonomous weapon? For the general public, the phrase is often used synonymously with killer robots and triggers images of the Terminator. But for the military, the definition of an autonomous weapons system, or AWS, is deceptively simple.

The United States Department of Defense defines an AWS as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.  This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”

Basically, it is a weapon that can be used in any domain — land, air, sea, space, cyber, or any combination thereof — and encompasses significantly more than just the platform that fires the munition. This means that there are various capabilities the system possesses, such as identifying targets, tracking, and firing, all of which may have varying levels of human interaction and input.

Heather Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, suggests that even the basic terminology of the DoD’s definition is unclear.

“This definition is problematic because we don’t really know what ‘select’ means here.  Is it ‘detect’ or ‘select’?” she asks. Roff also notes another definitional problem arises because, in many instances, the difference between an autonomous weapon (acting independently) and an automated weapon (pre-programmed to act automatically) is not clear.

 

A Database of Weapons Systems

State parties to the UN’s Convention on Conventional Weapons (CCW) also grapple with what constitutes an autonomous weapon, as opposed to a currently fielded automated one. During the last three years of discussion at Informal Meetings of Experts at the CCW, participants typically only referred to two or three presently deployed weapons systems that appear to be AWS, such as the Israeli Harpy or the United States’ Counter Rocket and Mortar system.

To address this, the International Committee of the Red Cross requested more data on presently deployed systems. It wanted to know what weapons systems states currently use and what projects are under development. Roff took up the call to action. She pored over publicly available data from a variety of sources and compiled a database of 284 weapons systems. She wanted to know what capacities already existed on presently deployed systems and whether these were or were not “autonomous.”

“The dataset looks at the top five weapons exporting countries, so that’s Russia, China, the United States, France and Germany,” says Roff. “I’m looking at major sales and major defense industry manufacturers from each country. And then I look at all the systems that are presently deployed by those countries that are manufactured by those top manufacturers, and I code them along a series of about 20 different variables.”

These variables include capabilities like navigation, homing, target identification, firing, etc., and for each variable, Roff coded a weapon as either having the capacity or not. Roff then created a series of three indices to bundle the various capabilities: self-mobility, self-direction, and self-determination. Self-mobility capabilities allow a system to move by itself, self-direction relates to target identification, and self-determination indexes the abilities that a system may possess in relation to goal setting, planning, and communication. Most “smart” weapons have high self-direction and self-mobility, but few, if any, have self-determination capabilities.
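
To make that coding-and-bundling approach concrete, here is a minimal sketch of how binary capability variables could be rolled up into normalized indices. The variable names, index groupings, and example systems below are illustrative assumptions for demonstration only, not Roff's actual schema or data, and each capability within an index is simply weighted equally.

```python
# Illustrative sketch only: variable names, groupings, and example systems are
# invented for demonstration; they are not Roff's actual schema or data.

# Each system is coded along binary capability variables (1 = capability present).
SYSTEMS = {
    "Hypothetical loitering munition": {
        "gps_navigation": 1, "waypoint_following": 1, "obstacle_avoidance": 0,
        "automatic_target_recognition": 1, "homing": 1, "fire_control": 1,
        "goal_updating": 0, "mission_replanning": 0, "learning": 0,
    },
    "Hypothetical point-defense system": {
        "gps_navigation": 0, "waypoint_following": 0, "obstacle_avoidance": 0,
        "automatic_target_recognition": 1, "homing": 0, "fire_control": 1,
        "goal_updating": 0, "mission_replanning": 0, "learning": 0,
    },
}

# The variables are bundled into the three indices described above.
INDICES = {
    "self_mobility": ["gps_navigation", "waypoint_following", "obstacle_avoidance"],
    "self_direction": ["automatic_target_recognition", "homing", "fire_control"],
    "self_determination": ["goal_updating", "mission_replanning", "learning"],
}

def index_scores(capabilities):
    """Return each index normalized to [0, 1]: the fraction of its capabilities present,
    with every capability in an index weighted equally."""
    return {
        name: sum(capabilities[v] for v in variables) / len(variables)
        for name, variables in INDICES.items()
    }

for system, caps in SYSTEMS.items():
    print(system, index_scores(caps))
```

In this toy example, the hypothetical point-defense entry scores high on self-direction but zero on self-mobility and self-determination, mirroring the pattern described above for most currently fielded “smart” systems.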

As Roff explains in a recent Foreign Policy post, the data shows that “the emerging trend in autonomy has less to do with the hardware and more on the areas of communications and target identification. What we see is a push for better target identification capabilities, identification friend or foe (IFF), as well as learning.  Systems need to be able to adapt, to learn, and to change or update plans while deployed. In short, the systems need to be tasked with more things and vaguer tasks.” Thus newer systems will need greater self-determination capabilities.

 

The Human in the Loop

But understanding what the weapons systems can do is only one part of the equation. In most systems, humans still maintain varying levels of control, and the military often claims that a human will always be “in the loop.” That is, a human will always have some element of meaningful control over the system. But this leads to another definitional problem: just what is meaningful human control?

Roff argues that this idea of keeping a human “in the loop” isn’t just “unhelpful,” but that it may be “hindering our ability to think about what’s wrong with autonomous systems.” She references what the UK Ministry of Defense calls the Empty Hangar Problem: no one expects to walk into a military airplane hangar and discover that the autonomous plane spontaneously decided, on its own, to go to war.

“That’s just not going to happen,” Roff says. “These systems are always going to be used by humans, and humans are going to decide to use them.” But thinking about humans in some loop, she contends, means that any difficulties with autonomy get pushed aside.

Earlier this year, Roff worked with Article 36, which coined the phrase “meaningful human control,” to establish a more clear-cut definition of the term. They published a concept paper, Meaningful Human Control, Artificial Intelligence and Autonomous Weapons, which offered guidelines for delegates at the 2016 CCW Meeting of Experts on Lethal Autonomous Weapons Systems.

In the paper, Roff and Richard Moyes outlined key elements – such as predictable, reliable and transparent technology, accurate user information, a capacity for timely human action and intervention, human control during attacks, etc. – for determining whether an AWS allows for meaningful human control.

“You can’t offload your moral obligation to a non-moral agent,” says Roff. “So that’s where I think our work on meaningful human control is: a human commander has a moral obligation to undertake precaution and proportionality in each attack.” The weapon system cannot do it for the human.

Researchers and the international community are only beginning to tackle the ethical issues that arise from AWSs. Clearly defining the weapons systems and the role humans will continue to play is one small part of a very big problem. Roff will continue to work with the international community to establish better-defined goals and guidelines.

“I’m hoping that the doctrine and the discussions that are developing internationally and through like-minded states will actually guide normative generation of how to use or not use such systems,” she says.

Heather Roff also spoke about this work on an FLI podcast.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

2300 Scientists from All Fifty States Pen Open Letter to Incoming Trump Administration

The following press release comes from the Union of Concerned Scientists.

Unfettered Science Essential to Decision Making; the Science Community Will Be Watching

WASHINGTON (November 30, 2016)—More than 2300 scientists from all fifty states, including 22 Nobel Prize recipients, released an open letter urging the Trump administration and Congress to set a high bar for integrity, transparency and independence in using science to inform federal policies. Some notable signers have advised Republican and Democratic presidents, from Richard Nixon to Barack Obama.

“Americans recognize that science is critical to improving our quality of life, and when science is ignored or politically corrupted, it’s the American people who suffer,” said physicist Lewis Branscomb, professor at the University of California, San Diego School of Global Policy and Strategy, who served as vice president and chief scientist at IBM and as director of the National Bureau of Standards under President Nixon. “Respect for science in policymaking should be a prerequisite for any cabinet position.”

The letter lays out several expectations from the science community for the Trump administration, including that it appoint a cabinet with a track record of supporting independent science and diversity, maintain independence for federal science advisors, and provide sufficient funding for scientific data collection. It also outlines basic standards to ensure that federal policy is fully informed by the best available science.

For example, federal scientists should be able to: conduct their work without political or private-sector interference; freely communicate their findings to Congress, the public and their scientific peers; and expose and challenge misrepresentation, censorship or other abuses of science without fear of retaliation.

“A thriving federal scientific enterprise has enormous benefits to the public,” said Nobel Laureate Carol Greider, director of molecular biology and genetics at Johns Hopkins University. “Experts at federal agencies prevent the spread of diseases, ensure the safety of our food and water, protect consumers from harmful medical devices, and so much more. The new administration must ensure that federal agencies can continue to use science to serve the public interest.”

The letter also calls on the Trump administration and Congress to resist attempts to weaken the scientific foundation of laws such as the Clean Air Act and Endangered Species Act. Congress is expected to reintroduce several harmful legislative proposals—such as the REINS Act and the Secret Science Reform Act—that would increase political control over the ability of federal agency experts to use science to protect public health and the environment.

The signers encouraged their fellow scientists to engage with the executive and legislative branches, but also to monitor the activities of the White House and Congress closely. “Scientists will pay close attention to how the Trump administration governs, and are prepared to fight any attempts to undermine the role of science in protecting public health and the environment,” said James McCarthy, professor of biological oceanography at Harvard University and former president of the American Association for the Advancement of Science. “We will hold them to a high standard from day one.”

Complex AI Systems Explain Their Actions


In the future, service robots equipped with artificial intelligence (AI) are bound to be a common sight. These bots will help people navigate crowded airports, serve meals, or even schedule meetings.

As these AI systems become more integrated into daily life, it is vital to find an efficient way to communicate with them. It is obviously more natural for a human to speak in plain language rather than a string of code. Further, as the relationship between humans and robots grows, it will be necessary to engage in conversations, rather than just give orders.

This human-robot interaction is what Manuela M. Veloso’s research is all about. Veloso, a professor at Carnegie Mellon University, has focused her research on CoBots, autonomous indoor mobile service robots that transport items, guide visitors to building locations, and traverse the halls and elevators. The CoBot robots have been successfully navigating autonomously for several years now, and have traveled more than 1,000 km. These accomplishments have enabled the research team to pursue a new direction, focusing now on novel human-robot interaction.

“If you really want these autonomous robots to be in the presence of humans and interacting with humans, and being capable of benefiting humans, they need to be able to talk with humans,” Veloso says.

 

Communicating With CoBots

Veloso’s CoBots are capable of autonomous localization and navigation in the Gates-Hillman Center using WiFi, LIDAR, and/or a Kinect sensor (yes, the same type used for video games).

The robots navigate by detecting walls as planes, which they match to the known maps of the building. Other objects, including people, are detected as obstacles, so navigation is safe and robust. Overall, the CoBots are good navigators and are quite consistent in their motion. In fact, the team noticed the robots could wear down the carpet as they traveled the same path numerous times.
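
As a rough illustration of that wall-versus-obstacle distinction – not the CoBots’ actual code – the following toy 2D sketch classifies sensed points by their distance to known wall segments on a map; anything that does not match a mapped wall is treated as an obstacle to steer around. The map, the threshold, and the scan points are invented values.

```python
import math

# Toy 2D sketch (assumed values, not CoBot's real map or parameters):
# points near a known wall segment are treated as wall returns for localization;
# everything else is treated as an obstacle.
KNOWN_WALLS = [((0.0, 0.0), (10.0, 0.0)),   # each wall is a 2D line segment
               ((0.0, 0.0), (0.0, 8.0))]

WALL_THRESHOLD_M = 0.15  # points within 15 cm of a mapped wall count as wall returns


def point_to_segment_distance(p, a, b):
    """Distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)


def classify_scan(points):
    """Split sensed 2D points into wall returns and obstacle returns."""
    walls, obstacles = [], []
    for p in points:
        d = min(point_to_segment_distance(p, a, b) for a, b in KNOWN_WALLS)
        (walls if d <= WALL_THRESHOLD_M else obstacles).append(p)
    return walls, obstacles


if __name__ == "__main__":
    scan = [(3.0, 0.05), (0.1, 4.0), (2.5, 1.8)]  # last point is a person or object
    print(classify_scan(scan))
```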

Because the robots are autonomous, and therefore capable of making their own decisions, they are out of sight for large amounts of time while they navigate the multi-floor buildings.

The research team began to wonder about this unaccounted-for time. How were the robots perceiving the environment and reaching their goals? How was the trip? What did they plan to do next?

“In the future, I think that incrementally we may want to query these systems on why they made some choices or why they are making some recommendations,” explains Veloso.

The research team is currently working on the question of why the CoBots took the route they did while autonomous. The team wanted to give the robots the ability to record their experiences and then transform the data about their routes into natural language. In this way, the bots could communicate with humans and reveal their choices and hopefully the rationale behind their decisions.

 

Levels of Explanation

The “internals” underlying the functions of any autonomous robot are based on numerical computations, not natural language. For example, the CoBot robots compute the distance to walls and assign velocities to their motors to drive to specific map coordinates.

Asking an autonomous robot for a non-numerical explanation is complex, says Veloso. Furthermore, the answer can be provided in many potential levels of detail.

“We define what we call the ‘verbalization space’ in which this translation into language can happen with different levels of detail, with different levels of locality, with different levels of specificity.”

For example, if a developer is asking a robot to detail its journey, they might expect a lengthy retelling, with details that include battery levels. But a random visitor might just want to know how long it takes to get from one office to another.

Therefore, the research is not just about the translation from data to language, but also the acknowledgment that the robots need to explain things with more or less detail. If a human were to ask for more detail, the request triggers CoBot “to move” into a more detailed point in the verbalization space.
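
As a loose sketch of what moving to a more or less detailed point in such a verbalization space could look like – the route-log format and phrasing here are invented for illustration, not CoBot’s actual representation – the same logged route can be rendered as a visitor-level summary or a developer-level breakdown:

```python
# Illustrative sketch only: the route log and wording are invented for this example.
ROUTE_LOG = [
    {"action": "corridor", "from": "office 7002", "to": "elevator", "meters": 42, "seconds": 65},
    {"action": "elevator", "from": "floor 7", "to": "floor 4", "meters": 0, "seconds": 40},
    {"action": "corridor", "from": "elevator", "to": "office 4002", "meters": 18, "seconds": 30},
]


def verbalize(log, detail="summary"):
    """Render the same route at different points in a toy verbalization space."""
    total_m = sum(step["meters"] for step in log)
    total_s = sum(step["seconds"] for step in log)
    if detail == "summary":   # roughly what a visitor might want
        return f"I went from {log[0]['from']} to {log[-1]['to']} in about {total_s} seconds."
    if detail == "detailed":  # roughly what a developer might want
        lines = [f"- {s['action']}: {s['from']} -> {s['to']} ({s['meters']} m, {s['seconds']} s)"
                 for s in log]
        return "\n".join(lines + [f"Total: {total_m} m in {total_s} s."])
    raise ValueError(f"unknown detail level: {detail}")


if __name__ == "__main__":
    print(verbalize(ROUTE_LOG, "summary"))
    print(verbalize(ROUTE_LOG, "detailed"))
```

A request for more detail would then correspond to re-rendering the same underlying route data at a finer-grained point in this space, rather than storing a separate explanation for each audience.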

“We are trying to understand how to empower the robots to be more trustable through these explanations, as they attend to what the humans want to know,” says Veloso. The ability to generate explanations, in particular at multiple levels of detail, will be especially important in the future, as the AI systems will work with more complex decisions. Humans could have a more difficult time inferring the AI’s reasoning. Therefore, the bot will need to be more transparent.

For example, if you go to a doctor’s office and the AI there makes a recommendation about your health, you may want to know why it came to this decision, or why it recommended one medication over another.

Currently, Veloso’s research focuses on getting the robots to generate these explanations in plain language. The next step will be to have the robots incorporate natural language when humans provide them with feedback. “[The CoBot] could say, ‘I came from that way,’ and you could say, ‘well next time, please come through the other way,’” explains Veloso.

These sorts of corrections could be programmed into the code, but Veloso believes that “trustability” in AI systems will benefit from our ability to dialogue, query, and correct their autonomy. She and her team aim to contribute to a multi-robot, multi-human symbiotic relationship, in which robots and humans coordinate and cooperate as a function of their limitations and strengths.

“What we’re working on is to really empower people – a random person who meets a robot – to still be able to ask things about the robot in natural language,” she says.

In the future, when we have more and more AI systems that are able to perceive the world, make decisions, and support human decision-making, the ability to engage in these types of conversations will be essential.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Who is Responsible for Autonomous Weapons?

Consider the following wartime scenario: Hoping to spare the lives of soldiers, a country deploys an autonomous weapon to wipe out an enemy force. This robot has demonstrated military capabilities that far exceed even the best soldiers, but when it hits the ground, it gets confused. It can’t distinguish the civilians from the enemy soldiers and begins taking innocent lives. The military generals desperately try to stop the robot, but by the time they succeed it has already killed dozens.

Who is responsible for this atrocity? Is it the commanders who deployed the robot, the designers and manufacturers of the robot, or the robot itself?

 

Liability: Autonomous Systems

As artificial intelligence improves, governments may turn to autonomous weapons — like military robots — in order to gain the upper hand in armed conflict. These weapons can navigate environments on their own and make their own decisions about who to kill and who to spare. While the example above may never occur, unintended harm is inevitable. Considering these scenarios helps formulate important questions that governments and researchers must jointly consider, namely:

How do we hold human beings accountable for the actions of autonomous systems? And how is justice served when the killer is essentially a computer?

As it turns out, there is no straightforward answer to this dilemma. When a human soldier commits an atrocity and kills innocent civilians, that soldier is held accountable. But when autonomous weapons do the killing, it’s difficult to blame them for their mistakes.

An autonomous weapon’s “decision” to murder innocent civilians is like a computer’s “decision” to freeze the screen and delete your unsaved project. Frustrating as a frozen computer may be, people rarely think the computer intended to complicate their lives.

Intention must be demonstrated to prosecute someone for a war crime, and while autonomous weapons may demonstrate outward signs of decision-making and intention, they still run on a code that’s just as impersonal as the code that glitches and freezes a computer screen. Like computers, these systems are not legal or moral agents, and it’s not clear how to hold them accountable — or if they can be held accountable — for their mistakes.

So who assumes the blame when autonomous weapons take innocent lives? Should they even be allowed to kill at all?

 

Liability: from Self-Driving Cars to Autonomous Weapons

Peter Asaro, a philosopher of science, technology, and media at The New School in New York City, has been working on addressing these fundamental questions of responsibility and liability with all autonomous systems, not just weapons. By exploring fundamental concepts of autonomy, agency, and liability, he intends to develop legal approaches for regulating the use of autonomous systems and the harm they cause.

At a recent conference on the Ethics of Artificial Intelligence, Asaro discussed the liability issues surrounding the application of AI to weapons systems. He explained, “AI poses threats to international law itself — to the norms and standards that we rely on to hold people accountable for [decisions, and to] hold states accountable for military interventions — as [people are] able to blame systems for malfunctioning instead of taking responsibility for their decisions.”

The legal system will need to reconsider who is held liable to ensure that justice is served when an accident happens. Asaro argues that the moral and legal issues surrounding autonomous weapons are much different than the issues surrounding other autonomous machines, such as self-driving cars.

Though researchers still expect the occasional fatal accident to occur with self-driving cars, these autonomous vehicles are designed with safety in mind. One of the goals of self-driving cars is to save lives. “The fundamental difference is that with any kind of weapon, you’re intending to do harm, so that carries a special legal and moral burden,” Asaro explains. “There is a moral responsibility to ensure that [the weapon is] only used in legitimate and appropriate circumstances.”

Furthermore, liability with autonomous weapons is much more ambiguous than it is with self-driving cars and other domestic robots.

With self-driving cars, for example, bigger companies like Volvo intend to embrace strict liability – where the manufacturers assume full responsibility for accidental harm. Although it is not clear how all manufacturers will be held accountable for autonomous systems, strict liability and threats of class-action lawsuits incentivize manufacturers to make their product as safe as possible.

Warfare, on the other hand, is a much messier situation.

“You don’t really have liability in war,” says Asaro. “The US military could sue a supplier for a bad product, but as a victim who was wrongly targeted by a system, you have no real legal recourse.”

Autonomous weapons only complicate this. “These systems become more unpredictable as they become more sophisticated, so psychologically commanders feel less responsible for what those systems do. They don’t internalize responsibility in the same way,” Asaro explained at the Ethics of AI conference.

To ensure that commanders internalize responsibility, Asaro suggests that “the system has to allow humans to actually exercise their moral agency.”

That is, commanders must demonstrate that they can fully control the system before they use it in warfare. Once they demonstrate control, it can become clearer who can be held accountable for the system’s actions.

 

Preparing for the Unknown

Behind these concerns about liability lies the overarching concern that autonomous machines might act in ways that humans never intended. Asaro asks: “When these systems become more autonomous, can the owners really know what they’re going to do?”

Even the programmers and manufacturers may not know what their machines will do. The purpose of developing autonomous machines is to let them make decisions themselves – without human input. And as the programming inside an autonomous system becomes more complex, people will increasingly struggle to predict the machine’s actions.

Companies and governments must be prepared to handle the legal complexities of a domestic or military robot or system causing unintended harm. Ensuring justice for those who are harmed may not be possible without a clear framework for liability.

Asaro explains, “We need to develop policies to ensure that useful technologies continue to be developed, while ensuring that we manage the harms in a just way. A good start would be to prohibit automating decisions over the use of violent and lethal force, and to focus on managing the safety risks in beneficial autonomous systems.”

Peter Asaro also spoke about this work on an FLI podcast. You can learn more about his work at http://www.peterasaro.org.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

MIRI’S November 2016 Newsletter

Post-fundraiser update: Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we’ll need to make up the remaining $160k gap over the next month if we’re going to move forward on our 2017 plans. We’re in a good position to expand our research staff and trial a number of potential hires, but only if we feel confident about our funding prospects over the next few years.

Since we don’t have an official end-of-the-year fundraiser planned this time around, we’ll be relying more on word-of-mouth to reach new donors. To help us with our expansion plans, donate at https://intelligence.org/donate/ — and spread the word!

Research updates

General updates

News and links

Cybersecurity and Machine Learning

When it comes to cybersecurity, no nation can afford to slack off. If a nation’s defense systems cannot anticipate how an attacker will try to fool them, then an especially clever attack could expose military secrets or use disguised malware to cause major networks to crash.

A nation’s defense systems must keep up with the constant threat of attack, but this is a difficult and never-ending process. It seems that the defense is always playing catch-up.

Ben Rubinstein, a professor at the University of Melbourne in Australia, asks: “Wouldn’t it be good if we knew what the malware writers are going to do next, and to know what type of malware is likely to get through the filters?”

In other words, what if defense systems could learn to anticipate how attackers will try to fool them?

 

Adversarial Machine Learning

In order to address this question, Rubinstein studies how to prepare machine-learning systems to catch adversarial attacks. In the game of national cybersecurity, these adversaries are often individual hackers or governments who want to trick machine-learning systems for profit or political gain.

Nations have become increasingly dependent on machine-learning systems to protect against such adversaries. Unaided by humans, machine-learning systems in anti-malware and facial recognition software have the ability to learn and improve their function as they encounter new data. As they learn, they become better at catching adversarial attacks.

Machine-learning systems are generally good at catching adversaries, but they are not completely immune to threats, and adversaries are constantly looking for new ways to fool them. Rubinstein says, “Machine learning works well if you give it data like it’s seen before, but if you give it data that it’s never seen before, there’s no guarantee that it’s going to work.”

With adversarial machine learning, security agencies address this weakness by presenting the system with different types of malicious data to test the system’s filters. The system then digests this new information and learns how to identify and capture malware from clever attackers.
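As a toy illustration of that loop (not Rubinstein's software, and deliberately simplified), the sketch below trains a filter on clean data, probes it with shifted "evasive" samples, and then retrains on the samples that slipped through. The features, the attacker's shift, and the classifier are all invented for the example.

```python
# Toy sketch of the adversarial-evaluation loop: probe a trained filter with
# perturbed malicious samples, then fold the evasive ones back into training.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented "malware features": benign samples near 0, malicious near 1.
X_benign = rng.normal(0.0, 0.3, size=(200, 2))
X_malicious = rng.normal(1.0, 0.3, size=(200, 2))
X = np.vstack([X_benign, X_malicious])
y = np.concatenate([np.zeros(200), np.ones(200)])

clf = LogisticRegression().fit(X, y)

# "Attacker": shift malicious samples toward the benign region to evade.
evasive = X_malicious - 0.8          # crude stand-in for a real evasion strategy
missed = evasive[clf.predict(evasive) == 0]
print(f"evasive samples missed by the original filter: {len(missed)}")

# Defender digests the new information: retrain with the evasive samples labeled.
X2 = np.vstack([X, missed])
y2 = np.concatenate([y, np.ones(len(missed))])
clf2 = LogisticRegression().fit(X2, y2)
print(f"missed after retraining: {(clf2.predict(evasive) == 0).sum()}")
```

The retrained filter catches far more of the evasive samples, which is the essence of presenting a system with malicious data it has never seen before.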

 

Security Evaluation of Machine-Learning Systems

Rubinstein’s project is called “Security Evaluation of Machine-Learning Systems”, and his ultimate goal is to develop a software tool that companies and government agencies can use to test their defenses. Any company or agency that uses machine-learning systems could run his software against their system. Rubinstein’s tool would attack and try to fool the system in order to expose the system’s vulnerabilities. In doing so, his tool anticipates how an attacker could slip by the system’s defenses.

The software would evaluate existing machine-learning systems and find weak spots that adversaries might try to exploit – similar to how one might defend a castle.

“We’re not giving you a new castle,” Rubinstein says, “we’re just going to walk around the perimeter and look for holes in the walls and weak parts of the castle, or see where the moat is too shallow.”

By analyzing many different machine-learning systems, his software program will pick up on trends and be able to advise security agencies to either use a different system or bolster the security of their existing system. In this sense, his program acts as a consultant for every machine-learning system.

Consider a program that does facial recognition. This program would use machine learning to identify faces and catch adversaries that pretend to look like someone else.

Rubinstein explains: “Our software aims to automate this security evaluation so that it takes an image of a person and a program that does facial recognition, and it will tell you how to change its appearance so that it will evade detection or change the outcome of machine learning in some way.”

This is called a mimicry attack – when an adversary makes one instance (one face) look like another, and thereby fools a system.

To make this example easier to visualize, Rubinstein’s group built a program that demonstrates how to change a face’s appearance to fool a machine-learning system into thinking that it is another face.

In the image below, the two faces don’t look alike, but the left image has been modified so that the machine-learning system thinks it is the same as the image on the right. This example provides insight into how adversaries can fool machine-learning systems by exploiting quirks.

[Image: mimicry-attack demonstration from Ben Rubinstein’s group, showing two visibly different faces that the machine-learning system treats as the same person]

When Rubinstein’s software fools a system with a mimicry attack, security personnel can then take that information and retrain their program to establish more effective security when the stakes are higher.
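The sketch below gives a toy, numpy-only version of the idea behind a mimicry attack: the pixels of a source image are nudged by gradient descent until a stand-in embedding model places them closer to a target identity than to their own. The linear "embedding model", dimensions, and step size are assumptions made for illustration; real facial-recognition models, and Rubinstein's evaluation software, are far more involved.

```python
# Toy mimicry attack against an invented linear face-embedding model.
# (Illustrative only; not the project's actual software.)

import numpy as np

rng = np.random.default_rng(1)
D_PIXELS, D_EMBED = 64, 8

W = rng.normal(size=(D_EMBED, D_PIXELS))  # stand-in for a face-embedding model

def embed(img):
    return W @ img

source = rng.uniform(0, 1, D_PIXELS)      # the attacker's own face
target = rng.uniform(0, 1, D_PIXELS)      # the identity to mimic
target_emb = embed(target)

adv = source.copy()
for _ in range(200):
    # gradient of ||W adv - target_emb||^2 with respect to the pixels
    grad = 2 * W.T @ (embed(adv) - target_emb)
    adv = np.clip(adv - 0.001 * grad, 0, 1)  # small step, keep valid pixel range

def nearest_identity(img):
    d_src = np.linalg.norm(embed(img) - embed(source))
    d_tgt = np.linalg.norm(embed(img) - target_emb)
    return "source" if d_src < d_tgt else "target"

print(nearest_identity(source))  # "source"
print(nearest_identity(adv))     # ideally "target": the modified face mimics it
```

The modified image still looks essentially like the source to a human, but the embedding model now matches it to the target identity, which is exactly the kind of quirk a security evaluation tool needs to surface.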

 

Minimizing the Attacker’s Advantage

While Rubinstein’s software will help to secure machine-learning systems against adversarial attacks, he has no illusions about the natural advantages that attackers enjoy. It will always be easier to attack a castle than to defend it, and the same holds true for a machine-learning system. This is called the ‘asymmetry of cyberwarfare.’

“The attacker can come in from any angle. It only needs to succeed at one point, but the defender needs to succeed at all points,” says Rubinstein.

In general, Rubinstein worries that the tools available to test machine-learning systems are theoretical in nature, and put too much responsibility on the security personnel to understand the complex math involved. A researcher might redo the mathematical analysis for every new learning system, but security personnel are unlikely to have the time or resources to keep up.

Rubinstein aims to “bring what’s out there in theory and make it more applied and more practical and easy for anyone who’s using machine learning in a system to evaluate the security of their system.”

With his software, Rubinstein intends to help level the playing field between attackers and defenders. By giving security agencies better tools to test and adapt their machine-learning systems, he hopes to improve the ability of security personnel to anticipate and guard against cyberattacks.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Insight From the Dalai Lama Applied to AI Ethics

One of the primary objectives — if not the primary objective — of artificial intelligence is to improve life for all people. But an equally powerful motivator to create AI is to improve profits. These two goals can occasionally be at odds with each other.

Currently, with AI becoming smarter and automation becoming more efficient, many in AI and government are worried about mass unemployment. But the results of mass unemployment may be even worse than most people suspect. A study released last year found that 1 in 5 people who committed suicide were unemployed. Another study found significant increases in suicide rates during recessions and the Great Depression.

A common solution that’s often suggested to address mass unemployment is that of a universal basic income (UBI). A UBI would ensure everyone has at least some amount of income. However, this would not address non-financial downsides of unemployment.

A recent op-ed, co-authored by the Dalai Lama for the New York Times, suggests he doesn’t believe money alone would cheer up the unemployed.

He explains, “Americans who prioritize doing good for others are almost twice as likely to say they are very happy about their lives. In Germany, people who seek to serve society are five times likelier to say they are very happy than those who do not view service as important. … The more we are one with the rest of humanity, the better we feel.”

But, he continues, “In one shocking experiment, researchers found that senior citizens who didn’t feel useful to others were nearly three times as likely to die prematurely as those who did feel useful. This speaks to a broader human truth: We all need to be needed.”

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

“Leaders need to recognize that a compassionate society must create a wealth of opportunities for meaningful work, so that everyone who is capable of contributing can do so,” says the Dalai Lama.

Yet, presumably, the senior citizens mentioned above were retired, and some of them still felt needed. Perhaps those who thrived in retirement volunteered their time, or perhaps they focused on relationships and social interactions. Maybe they achieved that feeling of being needed through some other means altogether.

More research is necessary, but understanding how people without jobs find meaning in their lives will likely be necessary in order to successfully move toward beneficial AI.

And the Dalai Lama also remains hopeful, suggesting that recognizing and addressing the need to be needed could have great benefits for society:

“[Society’s] refusal to be content with physical and material security actually reveals something beautiful: a universal human hunger to be needed. Let us work together to build a society that feeds this hunger.”