
Who’s to Blame (Part 1): The Legal Vacuum Surrounding Autonomous Weapons

Published: February 4, 2016
Author: Matt Scherer


The year is 2020 and intense fighting has once again broken out between Israel and Hamas militants based in Gaza.  In response to a series of rocket attacks, Israel rolls out a new version of its Iron Dome air defense system.  Designed in a major collaboration among defense companies headquartered in the United States, Israel, and India, this third generation of the Iron Dome can act with unprecedented autonomy, using cutting-edge artificial intelligence to analyze a tactical situation by drawing on information from an array of onboard sensors and a variety of external data sources.  Unlike prior generations of the system, the Iron Dome 3.0 is designed not only to intercept and destroy incoming missiles, but also to identify the site from which an incoming missile was launched and automatically launch a precise, guided-missile counterattack against it.  The day after the new system is deployed, a missile launched by the system strikes a Gaza hospital far removed from any militant activity, killing scores of Palestinian civilians.  Outrage swells within the international community, which demands that whoever is responsible for the atrocity be held accountable.  Unfortunately, no one can agree on who that is…

Much has been made in recent months and years about the risks associated with the emergence of artificial intelligence (AI) technologies and, with it, the automation of tasks that once were the exclusive province of humans.  But legal systems have not yet developed regulations governing the safe development and deployment of AI systems or clear rules governing the assignment of legal responsibility when autonomous AI systems cause harm.  Consequently, it is quite possible that many harms caused by autonomous machines will fall into a legal and regulatory vacuum.  The prospect of autonomous weapons systems (AWSs) throws these issues into especially sharp relief.  AWSs, like all military weapons, are specifically designed to cause harm to human beings—and lethal harm, at that.  But applying the laws of armed conflict to attacks initiated by machines is no simple matter.

The core principles of the laws of armed conflict are straightforward enough.  Those most important to the AWS debate are: attackers must distinguish between civilians and combatants; they must strike only when it is actually necessary to a legitimate military purpose; and they must refrain from an attack if the likely harm to civilians outweighs the military advantage that would be gained.  But what if the attacker is a machine?  How can a machine make the seemingly subjective determination of whether an attack is militarily necessary?  Can an AWS be programmed to quantify whether the anticipated harm to civilians would be “proportionate”?  Does the law permit anyone other than a human being to make that kind of determination?  Should it?

But the issue goes even deeper than simply determining whether the laws of war can be encoded into the AI components of an AWS.  Even if everyone agreed that a particular AWS attack constituted a war crime, would our sense of justice be satisfied by “punishing” that machine?  I suspect that most people would answer that question with a resounding “no.”  Human laws demand human accountability.  Unfortunately, as of right now, there are no laws at the national or international level that specifically address whether, when, or how AWSs can be deployed, much less who (if anyone) can be held legally responsible if an AWS commits an act that violates the laws of armed conflict.  This makes it difficult for those laws to have the deterrent effect they are designed to have: if no one will be held accountable for violating the law, no one will feel any particular need to comply with it.  On the other hand, if specific humans bear a clear legal responsibility to ensure that an AWS’s operations comply with the laws of war, then horrors such as the hospital bombing described in the introduction to this essay would be far less likely to occur.

So how should the legal voids surrounding autonomous weapons (and, for that matter, AI in general) be filled?  Over the coming weeks and months, that question, along with the other questions raised in this essay, will be examined in greater detail on the FLI website and on the Law and AI blog.  Stay tuned.

The next segment of this series is scheduled for February 10.

The original post can be found at Law and AI.

This content was first published at futureoflife.org on February 4, 2016.

