
Who’s to Blame (Part 2): What is an “autonomous” weapon?

Published:
February 10, 2016
Author:
Matt Scherer


The following is the second in a series about the limited legal oversight of autonomous weapons. The first segment can be found here.

[Comic: Peanuts by Charles Schulz, January 31, 2016, via @GoComics]

Before turning in greater detail to the legal challenges that autonomous weapon systems (AWSs) will present, it is essential to define what “autonomous” means in the weapons context.  It is, after all, the presence of “autonomy” that will distinguish AWSs from earlier weapon technologies.

Most dictionary definitions of “autonomy” focus on the presence of free will or freedom of action.  These are affirmative definitions, stating what autonomy is.  Some dictionary definitions approach autonomy from a different angle, defining it not by the presence of freedom of action, but rather by the absence of external constraints on that freedom (e.g., “the state of existing or acting separately from others”).  This latter approach is more useful in the context of weapon systems, since the existing literature on AWSs seems to use the term “autonomous” to refer to a weapon system’s ability to operate free from human influence and involvement.

Existing AWS commentaries seem to focus on three general methods by which humans can govern an AWS’s actions.  This essay will refer to those methods as direction, monitoring, and control.  A weapon system’s “autonomy” therefore refers to the degree to which the weapon system operates free from human direction, monitoring, and/or control.

Human direction, in this context, refers to the extent to which humans specify the parameters of a weapon system’s operation, from the initial design and programming of the system all the way to battlefield orders regarding the selection of targets and the timing and method of attack.  Monitoring refers to the degree to which humans actively observe and collect information on a weapon system’s operations, whether through a live source such as a video feed or through regular reviews of data regarding a weapon system’s operations.  And control is the degree to which humans can intervene in real time to change what a weapon system is currently doing, such as by actively controlling the system’s physical movement and combat functions or by shutting it down completely if the system malfunctions.  Existing commentaries on “autonomy” in weapon systems all seem to invoke at least one of these three concepts, though they may use different words to refer to those concepts.
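To make the three concepts concrete, the sketch below models them as independent dimensions of human governance over a weapon system. It is purely illustrative and not drawn from any of the commentaries discussed here; the class, field names, and scoring scheme are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class HumanGovernance:
    """Illustrative model of the three ways humans can govern a weapon system.

    Each field is a rough 0.0-1.0 score; all names and scales are hypothetical.
    """
    direction: float   # how fully humans specify targets, timing, and method
    monitoring: float  # how closely humans observe the system's operations
    control: float     # how readily humans can intervene in real time

    def autonomy(self) -> float:
        """Autonomy as the absence of human governance: 1.0 means the system
        operates entirely free of direction, monitoring, and control."""
        return 1.0 - max(self.direction, self.monitoring, self.control)

# A remotely piloted drone: heavily directed, monitored, and controlled.
predator = HumanGovernance(direction=1.0, monitoring=1.0, control=1.0)
print(predator.autonomy())  # 0.0 -- effectively a remote-controlled plane
```

On this toy model, using max() reflects the trade-off discussed below: a system strongly governed along any one dimension scores low on autonomy even if the other checks are weak.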

The operation of modern military drones such as the MQ-1 Predator and MQ-9 Reaper illustrates how these concepts work in practice.  A Predator or Reaper will not take off, select a target, or launch a missile without direct human input.  Such drones thus are completely dependent on human direction.  While a drone, like a commercial airliner on auto-pilot, may steer itself during non-mission-critical phases of flight, human operators closely monitor the drone throughout each mission, both through live video feeds from cameras mounted on the drone and through flight data transmitted by the drone in real time.  And, of course, humans directly (though remotely) control the drone during all mission-critical phases.  Indeed, if the communications link that allows the human operator to control the drone fails, “the drone is programmed to fly autonomously in circles, or return to base, until the link can be reconnected.”  The dominating presence of human direction, monitoring, and control means that a drone is, in effect, “little more than a super-fancy remote-controlled plane.”  The human-dependent nature of drones makes the task of piloting a drone highly stressful and labor-intensive, so much so that recruitment and retention of drone pilots have proven to be a major challenge for the U.S. Air Force.  That, of course, is part of why militaries might be tempted to design and deploy weapon systems that can direct themselves and/or that do not require constant human monitoring or control.
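The lost-link behavior quoted above is, in effect, a small failsafe state machine. The sketch below is a loose illustration of that logic, not actual drone flight software; the function, state names, and loiter threshold are all invented for this example.

```python
from enum import Enum, auto

class FlightMode(Enum):
    REMOTE_CONTROL = auto()  # human operator is flying the drone
    LOITER = auto()          # fly autonomously in circles, await reconnection
    RETURN_TO_BASE = auto()  # stop waiting and head back to base

# Hypothetical threshold: how long to loiter before returning to base.
LOITER_LIMIT_SECONDS = 600

def next_mode(link_up: bool, seconds_since_link_loss: float) -> FlightMode:
    """Choose the flight mode from the comms-link status (illustrative only)."""
    if link_up:
        return FlightMode.REMOTE_CONTROL
    if seconds_since_link_loss < LOITER_LIMIT_SECONDS:
        return FlightMode.LOITER
    return FlightMode.RETURN_TO_BASE

print(next_mode(link_up=False, seconds_since_link_loss=30))   # FlightMode.LOITER
print(next_mode(link_up=False, seconds_since_link_loss=900))  # FlightMode.RETURN_TO_BASE
```

Even this autonomous fallback is a form of human direction: the circling-or-return behavior was specified in advance by the drone’s programmers.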

Direction, monitoring, and control are very much interrelated, with monitoring and control being especially intertwined.  During an active combat mission, human monitoring must be accompanied by human control (and vice versa) to act as an effective check on a weapon system’s operations.  (For that reason, commentators often seem to combine monitoring and control into a single broader concept, such as “oversight” or, my preferred term, “supervision.”)  Likewise, direction is closely related to control; an AWS could not be given new orders (i.e., direction) by a human commander if the AWS were not equipped with mechanisms allowing for human control of its operations.  Such an AWS would be human-directed only in terms of its initial programming.

Particularly strong human direction can also reduce the need for monitoring and control, and vice versa.  A weapon system that is subject to complete human direction in terms of the target, timing, and method of attack (and that has no ability to alter those parameters) has no more autonomy than fire-and-forget guided missiles, a technology that has been available for decades.  And a weapon system subject to constant real-time human monitoring and control may have no more practical autonomy than the remotely piloted drones that are already in widespread military use.

Consequently, the strongest concerns relate to weapon systems that are “fully autonomous”–that is, weapon systems that can select and engage targets without specific orders from a human commander and operate without real-time human supervision.  A 2015 Human Rights Watch (HRW) report, for instance, defines “fully autonomous weapons” as systems that lack meaningful human direction regarding the selection of targets and delivery of force and whose human supervision is so limited that humans are effectively “out-of-the-loop.”  A directive issued by the United States Department of Defense (DoD) in 2012 similarly defines an AWS as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.”

These sources also recognize the existence of weapon systems with lower levels of autonomy.  The DoD directive covers “semi-autonomous weapons systems” that are “intended to only engage individual targets or specific target groups that have been selected by a human operator.”  Such systems must be human-directed in terms of target selection, but could be largely free from human supervision and can even be self-directed with respect to the means and timing of attack.  The same directive discusses “human-supervised” AWSs that, while capable of fully autonomous operation, are “designed to provide human operators with the ability to intervene and terminate engagements.”  HRW similarly distinguishes fully autonomous weapons from those with a human “on the loop,” meaning AWSs that “can select targets and deliver force under the oversight of a human operator who can override the robots’ actions.”
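Read together, the DoD and HRW categories amount to a rough decision tree over the same three dimensions of human governance. The sketch below is one possible encoding of that reading; the yes/no questions, their ordering, and the function name are my own illustrative choices, since neither source defines the categories this way.

```python
def classify(human_selects_targets: bool,
             human_can_override: bool,
             human_supervises_live: bool) -> str:
    """Rough mapping of the DoD/HRW categories onto yes/no questions.

    An illustrative reading of the sources, not an official scheme.
    """
    if human_selects_targets:
        # Humans pick each target; the system may still choose means and timing.
        return "semi-autonomous ('human in the loop')"
    if human_can_override and human_supervises_live:
        # The system selects targets itself, but an operator can intervene.
        return "human-supervised autonomous ('human on the loop')"
    # No meaningful direction, monitoring, or control once activated.
    return "fully autonomous ('human out of the loop')"

print(classify(human_selects_targets=False,
               human_can_override=True,
               human_supervises_live=True))
# -> human-supervised autonomous ('human on the loop')
```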


In sum, “autonomy” in weapon systems refers to the degree to which the weapon system operates free from meaningful human direction, monitoring, and control.  Weapon systems that operate without those human checks on their autonomy would raise unique legal issues if those systems’ operations lead to violations of international law.  Those legal challenges will be the subject of the next post in this series.

This segment was originally posted on the blog Law and AI.

This content was first published at futureoflife.org on February 10, 2016.

