
Who’s to Blame (Part 3): Could Autonomous Weapon Systems Navigate the Law of Armed Conflict?

Published: February 17, 2016
Author: Matt Scherer

“Robots won’t commit war crimes. We just have to program them to follow the laws of war.” This is a rather common response to the concerns surrounding autonomous weapons, and it has even been advanced as a reason that robot soldiers might be less prone to war crimes than human soldiers. But designing such autonomous weapon systems (AWSs) is far easier said than done. True, if we could design and program AWSs that always obeyed the international law of armed conflict (LOAC), then the concerns raised in the previous segment of this series, which suggested the need for human direction, monitoring, and control of AWSs, would be unfounded. But even if such programming prowess is possible, it seems unlikely to be achieved anytime soon. Instead, we need to be prepared for powerful AWSs that may not recognize where the line blurs between what is legal and reasonable in combat and what is not.

While the basic LOAC principles seem straightforward at first glance, their application in any given military situation depends heavily on the specific circumstances in which combat takes place, and the difference between legal and illegal acts can be blurry and subjective. It therefore would be difficult to reduce the laws and principles of armed conflict to a definite, programmable form that could be encoded into an AWS and from which the AWS could consistently make battlefield decisions that comply with the laws of war.

Four core principles guide LOAC: distinction, military necessity, unnecessary suffering, and proportionality. Distinction means that participants in an armed conflict must distinguish between military and civilian personnel (and between military and civilian objects) and limit their attacks to military targets. It follows that an attack must be justified by military necessity; that is, the attack, if successful, must give the attacker some military advantage. The next principle, as explained by the International Committee of the Red Cross, is that combatants must not “employ weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering.” Unlike the other core principles, the principle of unnecessary suffering generally protects combatants to the same extent as civilians. Finally, proportionality dictates that the harm done to civilians and civilian property must not be excessive in light of the military advantage expected to be gained by an attack.
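
To make the difficulty concrete, here is a minimal, purely illustrative sketch of what a rule-style encoding of these four principles might look like. Everything in it is hypothetical: the `Engagement` fields, the numeric “advantage” and “harm” scores, and the proportionality ratio are invented for illustration, and it is precisely these inputs that LOAC leaves to case-by-case human judgment rather than defining numerically.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    """Hypothetical inputs an AWS would need before attacking.

    Every field below stands for a quantity that LOAC does NOT define
    in measurable terms; the numbers are placeholders, not doctrine.
    """
    target_is_military: bool            # distinction: is this a military objective?
    expected_military_advantage: float  # necessity: advantage if the attack succeeds
    expected_civilian_harm: float       # proportionality: anticipated incidental harm
    causes_superfluous_injury: bool     # unnecessary suffering: nature of the weapon

def naive_loac_check(e: Engagement, proportionality_ratio: float = 1.0) -> bool:
    """A toy, rule-style rendering of the four core principles.

    The structure is easy to write down; the hard part is that the
    inputs (combatant or civilian? how much advantage? how much harm
    is 'excessive'?) are themselves subjective judgments.
    """
    if not e.target_is_military:            # distinction
        return False
    if e.expected_military_advantage <= 0:  # military necessity
        return False
    if e.causes_superfluous_injury:         # unnecessary suffering
        return False
    # proportionality: "excessive" collapsed into an arbitrary ratio
    return e.expected_civilian_harm <= proportionality_ratio * e.expected_military_advantage

# The check runs, but every input value is a judgment call.
print(naive_loac_check(Engagement(True, 5.0, 2.0, False)))  # True
print(naive_loac_check(Engagement(True, 1.0, 4.0, False)))  # False
```

The point of the sketch is that the control flow is trivial; the real work, and the real controversy, is hidden in how the inputs and thresholds would be determined for any given engagement.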

For a number of reasons, it would be exceedingly difficult to ensure that an AWS would consistently comply with these requirements if it were permitted to select and engage targets without human input. One reason is that it would be difficult for an AWS to gather all the objective information relevant to applying the core LOAC principles. For example, intuition and experience might allow a human soldier to infer from minute details of his surroundings, such as a well-maintained children’s bicycle or the scent of recently cooked food, that civilians may be nearby. It might be difficult to program an AWS to pick up on such subtle, seemingly insignificant clues, even though those clues might be critical to assessing whether a targeted structure contains civilians (relevant to distinction and necessity) or whether engaging nearby combatants might result in civilian casualties (relevant to proportionality).

But there is an even more fundamental and vexing challenge in ensuring that AWSs comply with LOAC: even if an AWS were somehow able to obtain all the objective information relevant to the LOAC implications of a potential military engagement, all of the core LOAC principles are subjective to some degree. For example, the operations manual of the US Air Force Judge Advocate General’s Office states that “[p]roportionality in attack is an inherently subjective determination that will be resolved on a case-by-case basis.” This suggests that proportionality cannot simply be reduced to a formula or otherwise neatly encoded so that an AWS would never launch disproportionate attacks. It would be even more difficult to formalize the concept of “military necessity,” which is fiendishly difficult to articulate without becoming tautological or somehow incorporating the other LOAC principles.

The principle of distinction might seem fairly objective: soldiers are fair game, civilians are not. But it can be difficult, sometimes exceptionally so, to determine whether a particular individual is a combatant or a civilian. The Geneva Conventions state that civilians are protected from attack “unless and for such time as they take a direct part in hostilities.” But how “direct” must participation in hostilities be before a civilian loses his or her LOAC protection? A civilian in an urban combat area who picks up a gun and aims it at an enemy soldier clearly has forfeited his civilian status. But what about a civilian in the same combat zone who is acting as a spotter? Who is transporting ammunition from a depot to the combatants’ posts? Who is repairing an enemy Jeep? Do these answers change if the combat zone is in a desert instead of a city? Given that humans frequently disagree on where the boundary between civilians and combatants should lie, it would be difficult to agree on an objective framework that would allow an AWS to accurately distinguish between civilians and combatants in the myriad scenarios it might face on the battlefield.
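
A toy sketch makes the same point in code. The roles and labels below are hypothetical, not any real targeting doctrine: only the clearest case can be decided by a rule, while the borderline roles from the examples above would force a rule-based AWS to commit to an answer that humans themselves cannot agree on.

```python
from enum import Enum

class Status(Enum):
    COMBATANT = "lawful target"
    CIVILIAN = "protected"
    INDETERMINATE = "no settled answer under LOAC"

def classify(role: str) -> Status:
    """Toy classifier for 'direct participation in hostilities'.

    A real AWS would have to return COMBATANT or CIVILIAN for every
    case; the middle category exists here only to mark where the
    legal line is genuinely contested.
    """
    if role == "aiming a weapon at enemy soldiers":
        return Status.COMBATANT
    if role in ("acting as a spotter",
                "transporting ammunition to combatants",
                "repairing an enemy Jeep"):
        # Humans disagree here; a rule-based system would have to pick a side.
        return Status.INDETERMINATE
    return Status.CIVILIAN

for role in ("aiming a weapon at enemy soldiers", "acting as a spotter",
             "transporting ammunition to combatants", "repairing an enemy Jeep"):
    print(f"{role}: {classify(role).value}")
```
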

Of course, humans can also have great difficulty making such determinations, and humans have been known to intentionally violate LOAC’s core principles, a rather significant failing to which AWSs might be more resistant. But when a human commits a LOAC violation, that human being can be brought to justice and punished. Who would be held responsible if an AWS attack violated those same laws? As of now, that is far from clear. That accountability problem will be the subject of the next entry in this series.

This content was first published at futureoflife.org on February 17, 2016.

