
Who’s to Blame (Part 4): Who’s to Blame if an Autonomous Weapon Breaks the Law?

Published: February 24, 2016
Author: Matt Scherer



The previous entry in this series examined why it would be very difficult to ensure that autonomous weapon systems (AWSs) consistently comply with the laws of war.  So what would happen if an attack by an AWS resulted in the needless death of civilians or otherwise constituted a violation of the laws of war?  Who would be held legally responsible?

In that regard, AWSs’ ability to operate free of human direction, monitoring, and control would raise legal concerns not shared by drones and other earlier generations of military technology.  It is not clear who, if anyone, could be held accountable if and when AWS attacks result in illegal harm to civilians and their property.  This “accountability gap” was the focus of a 2015 Human Rights Watch report.  The HRW report ultimately concluded that there was no plausible way to resolve the accountability issue and therefore called for a complete ban on fully autonomous weapons.

Although some commentators have taken issue with this prescription, the diagnosis seems to be correct: it simply is not obvious who would be responsible if an AWS commits an illegal act.  This accountability gap exists because AWSs incorporate AI technology that can collect information and determine courses of action based on the conditions in which they operate.  It is unlikely that even the most careful human programmers could predict the nearly infinite range of on-the-ground circumstances that an AWS could face.  It would therefore be difficult for an AWS designer, to say nothing of its military operators, to foresee how the AWS would react in the fluid, fast-changing world of combat operations.  That inability to foresee an AWS’s actions would complicate the assignment of legal responsibility.

These foreseeability and accountability problems are not unique to AWSs.  They arise with AI systems more generally, especially those that incorporate some form of machine learning technology, as discussed in my soon-to-be-published article on AI regulation:

The development of more versatile AI systems combined with advances in machine learning make it all but certain that issues pertaining to unforeseeable AI behavior will crop up with increasing frequency, and that the unexpectedness of AI behavior will rise significantly. The experiences of a learning AI system could be viewed as a superseding cause—that is, “an intervening force or act that is deemed sufficient to prevent liability for an actor whose . . . conduct was a factual cause of harm”—of any harm that such systems cause. This is because the behavior of a learning AI system depends in part on its post-design experience, and even the most careful designers, programmers, and manufacturers will not be able to control or predict what an AI will experience after it leaves their care. Thus, a learning AI’s designers will not be able to foresee how it will act after it is sent out into the world.

For similar reasons, it is unlikely that the designers of an AWS would be able to fully anticipate how the weapon system would react when placed in the unpredictable environment of the battlefield.  And a human commander who deploys an AWS may likewise have difficulty predicting how the system will react if unforeseen events occur during combat.

If an AWS operates in a manner not foreseen by its human designers and commanders, it is not clear that anyone could be held legally responsible under present international law if civilians suffer harm as a result of the AWS’s unforeseeable acts.  When a human soldier commits a war crime, that soldier can be prosecuted and punished.  But human society’s sense of justice would not likely be satisfied by the ‘punishment’ of a machine.

So if we can’t punish the machine, who can we punish?  The obvious answer might be the AWS’s immediate human superior.  But it would be difficult to hold the AWS’s human commander legally responsible for the system’s conduct.  The Geneva Conventions hold a commander accountable for the crimes of a subordinate only if the commander knowingly or negligently allowed the crime to occur, that is, if the commander knew or should have known that the subordinate would commit that crime but did nothing to stop it:

The fact that a breach of the Conventions or of this Protocol was committed by a subordinate does not absolve his superiors from penal or disciplinary responsibility, as the case may be, if they knew, or had information which should have enabled them to conclude in the circumstances at the time, that he was committing or was going to commit such a breach and if they did not take all feasible measures within their power to prevent or repress the breach.

Such a negligence standard would be difficult to apply in cases involving autonomous, AI-driven machines.  The freer an AWS is from human direction, supervision, and control, the harder it will be to prove that a human commander “knew or should have known” that the AWS would commit an illegal act.  Without a viable method for bringing AWS-caused harm within the existing international legal framework, and without laws specifically governing the development and operation of AWSs, it is quite possible that no human would be held legally responsible for even a grave illegal act committed by an AWS.

This article was originally posted on Scherer’s blog, Law and AI.

