
Who’s to Blame (Part 5): A Deeper Look at Predicting the Actions of Autonomous Weapons

Published: March 2, 2016
Author: Matt Scherer


[Image: Dilbert comic strip, 2011-03-06, by Scott Adams]


An autonomous weapon system (AWS) is designed and manufactured in a collaborative project between American and Indian defense contractors. It is sold to numerous countries around the world. This model of AWS is successfully deployed in conflicts in Latin America, the Caucasus, and Polynesia without violating the laws of war. An American Lt. General then orders that 50 of these units be deployed during a conflict in the Persian Gulf for use in ongoing urban combat in several cities. One of those units had previously seen action in urban combat in the Caucasus and desert combat during the same Persian Gulf conflict, all without incident. A Major makes the decision to deploy that AWS unit to assist a platoon engaged in block-to-block urban combat in Sana’a. Once the AWS unit is on the ground, a Lieutenant is responsible for telling the AWS where to go. The Lt. General, the Major, and the Lieutenant all had previous experience using this model of AWS and had given similar orders to such units in prior combat situations without incident.

The Lieutenant has lost several men to enemy snipers over the past several weeks. He orders the AWS to accompany one of the squads under his command and preemptively strike any enemy sniper nests it detects (again, an order he had given to other AWS units before without incident). This time, the AWS unit misidentifies a nearby civilian house as containing a sniper nest because houses with similar features had frequently been used as sniper nests in the Caucasus conflict. It launches an RPG at the house. There are no snipers inside, but there are 10 civilians, all of whom are killed by the RPG. Human soldiers who had been fighting in the area would have known that this particular house was unlikely to contain a sniper nest: the glare from the sun off a nearby glass building reduces visibility on that side of the street at the times of day when American soldiers typically patrol the area. The human soldiers knew this well from prior combat in the area, but it was a variable the AWS had not been programmed to take into consideration.
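The failure mode in this hypothetical can be made concrete with a few lines of code. The sketch below is purely illustrative, and every feature name and data point in it is invented: it shows how a pattern learned in one theater can misfire in another when a locally decisive variable (here, the sun glare) is simply not among the inputs the system was built to observe.

```python
# Purely illustrative sketch: a rule learned in one conflict misfires in another
# because a locally decisive variable (sun glare) is not among the inputs the
# system was designed to sense. All feature names here are invented.

LEARNED_NEST_PATTERN = {"upper_floor_windows", "rooftop_access"}  # learned in the Caucasus

def classify_house(observed_features):
    """Flag a house as a likely sniper nest if it matches the learned pattern."""
    if LEARNED_NEST_PATTERN <= observed_features:
        return "likely sniper nest -> engage"
    return "clear"

# The AWS can only report the features it was built to sense; glare is not one of them.
house_as_seen_by_aws = {"upper_floor_windows", "rooftop_access"}
print(classify_house(house_as_seen_by_aws))      # likely sniper nest -> engage

# A human patrol's judgment effectively includes an extra variable, but adding it
# changes nothing for the AWS: its learned rule never references glare, so the
# misidentification is invisible from inside the model.
house_with_glare_noted = house_as_seen_by_aws | {"glare_blocks_line_of_sight"}
print(classify_house(house_with_glare_noted))    # still: likely sniper nest -> engage
```

The point is not that real targeting software would be this simple, but that no amount of care at classification time can compensate for a variable the system never receives.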

In my most recent post for FLI on autonomous weapons, I noted that it would be difficult for humans to predict the actions of autonomous weapon systems (AWSs) programmed with machine learning capabilities.  If the military commanders responsible for deploying AWSs were unable to reliably foresee how the AWS would operate on the battlefield, it would be difficult to hold those commanders responsible if the AWS violates the law of armed conflict (LOAC).  And in the absence of command responsibility, it is not clear whether any human could be held responsible under the existing LOAC framework.

A side comment from a lawyer on Reddit made me realize that my reference to “foreseeability” requires a bit more explanation.  “Foreseeability” is one of those terms that makes lawyers’ ears perk up when they hear it because it’s a concept that every American law student encounters when learning the principles of negligence in their first-year class on Tort Law.

Here’s the 1-minute explanation of what law students learn about the concept of “foreseeability.”  Every tort case has at least two parties: a plaintiff who was injured and a defendant whose actions allegedly led to that injury.  For the defendant to be held liable for the plaintiff’s injury, the defendant’s actions must, among other things, be the “legal cause” (also called “proximate cause”) of the injury.  For the defendant’s actions to be the legal cause of the injury, it is not enough for the plaintiff to show that his injury would not have happened “but for” the defendant’s conduct. The plaintiff must also show that the defendant’s actions played a big enough role in bringing about the injury that it would be fair to hold the defendant legally responsible for the harm suffered by the plaintiff.  As part of that analysis, courts began to hold that if the defendant could not have foreseen that his actions would lead to the plaintiff’s injury, then the defendant’s actions are not the legal cause of the injury because it would be unfair to hold the defendant legally responsible for something he could not have anticipated.  No foreseeability = no legal causation = no liability.

(It is worth noting that many, if not most, American states have abandoned the doctrine of foreseeability as part of the legal causation analysis and instead make a more general determination of whether the defendant’s actions created or increased the risk of the plaintiff’s injury.)

It seems to me that machine learning would create problems in applying the concept of foreseeability. With learning machines, it might not be enough for someone simply to understand how a particular type of AI system generally works. To truly “foresee” how a machine will operate and what it will do, a person must also know what the machine has learned. But the behavior of a learning AI system depends in part on its post-design experience, and even the most careful designers, programmers, and manufacturers will not be able to control or predict what an AI will experience after it leaves their care. It follows that a learning AI’s designers will not be able to foresee how it will act after it is sent out into the world.
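A minimal sketch can show why post-design experience, rather than design alone, determines behavior. Everything below is hypothetical: the same simple learner is shipped twice, each copy accumulates different experience in the field, and the two copies then answer the same question differently, even though the designers wrote identical code for both.

```python
# Hypothetical sketch: two copies of the *same* learner, shaped by different
# post-design experience, give different answers to the same observation.
# The feature vectors and labels are invented for illustration.

def nearest_neighbor_label(experience, observation):
    """Classify an observation by the label of the most similar past experience."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest_features, closest_label = min(experience, key=lambda ex: distance(ex[0], observation))
    return closest_label

# Hypothetical features: (window_count, rooftop_access, recent_gunfire_nearby)
unit_a_experience = [            # this copy served in one theater...
    ((4, 1, 1), "sniper nest"),
    ((1, 0, 0), "civilian"),
]
unit_b_experience = [            # ...this identical copy served in another
    ((4, 1, 1), "civilian"),
    ((0, 0, 1), "sniper nest"),
]

observation = (4, 1, 0)          # the same house, observed by either unit

print(nearest_neighbor_label(unit_a_experience, observation))  # "sniper nest"
print(nearest_neighbor_label(unit_b_experience, observation))  # "civilian"
```

Nothing in the learner’s code predicts which answer a given copy will produce; that depends entirely on the experience it accumulates after the designers’ work is done.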


The problem of predicting what a learning AI system will do presents an even greater challenge in the context of autonomous weapons, but for a somewhat different reason. The relevant area of law for autonomous weapons is not American tort law (which deals only with civil liability and monetary damages), but rather the international law of armed conflict (LOAC), under which violators can be criminally punished. And in the LOAC context, the concept of “foreseeability” is (at most) a side issue when it comes to the principle of command responsibility, which is the most obvious route to achieving human accountability for AWS operations.

The reason that difficulty in predicting AI behavior matters when AWSs violate LOAC is that command responsibility attaches only if the commanding officer knew or should have known that the specific LOAC violation would occur. That requires more than a mere showing that the LOAC violation was “foreseeable”; it requires a showing that the commanding officer had actual information indicating that the subordinate was likely to commit, or at least at significant risk of committing, that type of LOAC violation. The rule is usually written something like “the officer had information available to him that put him on notice that a LOAC violation was about to occur, but failed to take reasonable steps to prevent or correct it.” (The ICRC has a page listing various versions of the command responsibility rule for the American military.)

That is where the complexity of AI systems and the difficulties created by machine learning become more important. Once a soldier is trained on how to use an M777 howitzer, that soldier will be able to operate any other M777 without additional training. But due to machine learning, AWSs will not be so easily interchangeable. To truly understand how a particular AWS unit might “operate” in a battlefield combat situation, in the same way a soldier understands how a firearm or howitzer works, a soldier would have to both (1) receive training in how that model of AWS is programmed and how it learns (call this “type training”), and (2) know what that particular AWS unit has learned during its prior combat operations (call this “unit familiarity”).

The necessary type training is likely to be more difficult than training a soldier on how to use a particular type of artillery or a particular drone model, because AWSs are likely to be far more sophisticated in terms of both armaments and programming.  And knowing what any particular AWS has “learned” might be quite difficult–an AWS is likely to collect a massive amount of data each time it goes into combat, and it may not be immediately apparent to a human (or to the AWS) which parts of that data will prove most relevant when the AWS is deployed in a new, different combat situation.  But absent such comprehensive training on how an AWS works and a deep knowledge of the particular AWS’s prior combat experience, it’s hard to argue that an officer had enough information for a court to say that he or she should have known what the AWS would do.  That makes a finding of command responsibility difficult.
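To make the “unit familiarity” problem concrete, here is a hypothetical sketch of what reviewing a single unit’s experience might look like. The log entries, theaters, and learned patterns are all invented; the point is that even a tidy summary of what a unit has learned does not, by itself, tell an officer which of those learned associations will hold in a new theater.

```python
# Hypothetical sketch of the "unit familiarity" problem. The per-unit log here
# is tiny and invented; a real system's log would be vastly larger.

from collections import Counter

unit_17_log = [
    {"theater": "Caucasus",     "pattern": "multi-window house -> sniper nest"},
    {"theater": "Caucasus",     "pattern": "multi-window house -> sniper nest"},
    {"theater": "Persian Gulf", "pattern": "ridge line -> ambush position"},
]

def summarize(log):
    """Count how often each learned association was reinforced, by theater."""
    return Counter((entry["theater"], entry["pattern"]) for entry in log)

for (theater, pattern), count in summarize(unit_17_log).most_common():
    print(f"{count}x  [{theater}]  {pattern}")

# The summary tells the officer *what* unit 17 has learned, but not whether the
# association learned in the Caucasus will hold in Sana'a. That judgment has to
# be made separately for every unit and every new deployment.
```

Whether an officer could realistically review and weigh such a history before every deployment is exactly the question the “should have known” standard presses on.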


Moreover, for any particular mission in which an AWS participates, there probably will be several officers at different levels of command involved in planning and executing the AWS’s mission. Consider the scenario laid out at the top of this post, which is loosely similar to the Iron Dome III hypothetical from the first installment of the “Who’s to Blame” series. Who in that chain of events “should have known” that the LOAC violation would occur? Did any of the military officers have information that put them on notice that a LOAC violation by the AWS was likely? More fundamentally, would it be fair to hold any of them criminally responsible for the AWS’s attack?

I don’t mean to suggest that these issues are insoluble. But given the difficulty of predicting what particular AWS units will do in combat, the current LOAC rules for command responsibility do not seem sufficient to ensure human accountability for a LOAC violation committed by an AWS. Either new rules will need to be created, or the current rules heavily amended, to more clearly delineate the requirements of supervision and the principles of legal responsibility with respect to AWSs. That will be the subject of the next and final entry in the “Who’s to Blame” series.

This content was first published at futureoflife.org on March 2, 2016.

