
AI Researcher Heather Roff

Published: September 30, 2016
Author: Revathi Kumar


AI Safety Research




Heather Roff

Senior Research Fellow

Department of Politics and International Relations

University of Oxford

heather.roff@colorado.edu

Project: Lethal Autonomous Weapons, Artificial Intelligence and Meaningful Human Control

Amount Recommended: $136,918




Project Summary

There is growing concern over the deployment of autonomous weapons systems and over how the partnering of artificial intelligence (AI) and weapons will change the future of conflict. The United Nations recently took up the subject of autonomous weapons, and many governments and key international organizations argue that such systems require meaningful human control to be acceptable. But what is human control, and how do we ensure that it is meaningful? This project helps the international community, scholars and practitioners by providing answers to those questions and by helping to protect the essential elements of human control over the application of force. Bringing together computer scientists, roboticists, ethicists, lawyers and diplomats, the project will produce a conceptual framework that can shape new research and international policy for the future. Moreover, it will create a freely downloadable dataset on existing and emerging semi-autonomous weapons. Through this data, we can gain clarity on how and where autonomous functions are already deployed and on how such functions are kept under human control. A focus on current and emerging technologies makes it clear that the relationship between AI and weapons is not a problem for the distant future, but a pressing issue now.

Technical Abstract

The project addresses the relationships between artificial intelligence (AI), weapons systems and society. In particular, the project provides a framework for meaningful human control (MHC) of autonomous weapons systems. In international discussions, a number of governments and organizations adopted MHC as a tool for approaching the problems and potential solutions raised by autonomous weapons. However, the content of MHC was left open. While that openness was useful for policy reasons, the international community, academics and practitioners are calling for further work on the issue. This project responds to that call by bringing together a multidisciplinary and multi-stakeholder team to address key questions: for example, what values are associated with MHC, what rules should inform the design of these systems, both in software and hardware, and how existing and currently developing weapons systems shape the possible relationships between human control, autonomy and AI. To achieve impact across academic, industry and policy arenas, we will produce academic publications, policy briefs and an open-access database on ‘semi-autonomous’ weapons, and will sponsor multi-sector stakeholder discussions on how human values can be maintained as these systems develop. Furthermore, the organization Article 36 will channel outputs directly into the international diplomatic community to achieve impact in international legal and policy forums.


The Problem of Defining Autonomous Weapons

What, exactly, is an autonomous weapon? For the general public, the phrase is often used synonymously with killer robots and triggers images of the Terminator. But for the military, the definition of an autonomous weapons system, or AWS, is deceptively simple.

The United States Department of Defense defines an AWS as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.  This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”

Basically, it is a weapon that can be used in any domain — land, air, sea, space, cyber, or any combination thereof — and encompasses significantly more than just the platform that fires the munition. The system may possess a variety of capabilities, such as identifying targets, tracking, and firing, each of which may involve varying levels of human interaction and input.

Heather Roff, a research scientist at the Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, suggests that even the basic terminology of the DoD’s definition is unclear.

“This definition is problematic because we don’t really know what ‘select’ means here. Is it ‘detect’ or ‘select’?” she asks. Roff also notes that another definitional problem arises because, in many instances, the difference between an autonomous weapon (acting independently) and an automated weapon (pre-programmed to act automatically) is not clear.

A Database of Weapons Systems

State parties to the UN’s Convention on Conventional Weapons (CCW) also grapple with what constitutes an autonomous weapon, as opposed to a merely automated one. During the last three years of discussion at the CCW’s Informal Meetings of Experts, participants typically referred to only two or three presently deployed weapons systems that appear to be AWS, such as the Israeli Harpy or the United States’ Counter Rocket, Artillery, and Mortar system.

To address this, the International Committee of the Red Cross requested more data on presently deployed systems. It wanted to know which weapons systems states currently use and which projects are under development. Roff took up the call to action. She pored over publicly available data from a variety of sources and compiled a database of 284 weapons systems. She wanted to know what capacities already existed on presently deployed systems and whether these were or were not “autonomous.”

“The dataset looks at the top five weapons exporting countries, so that’s Russia, China, the United States, France and Germany,” says Roff. “I’m looking at major sales and major defense industry manufacturers from each country. And then I look at all the systems that are presently deployed by those countries that are manufactured by those top manufacturers, and I code them along a series of about 20 different variables.”

These variables include capabilities like navigation, homing, target identification, firing, etc., and for each variable, Roff coded a weapon as either having the capacity or not. Roff then created a series of three indices to bundle the various capabilities: self-mobility, self-direction, and self-determination. Self-mobility capabilities allow a system to move by itself, self-direction relates to target identification, and self-determination indexes the abilities that a system may possess in relation to goal setting, planning, and communication. Most “smart” weapons have high self-direction and self-mobility, but few, if any, have self-determination capabilities.
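The article does not describe the coding scheme in detail, but a rough way to picture it is as binary capability codings rolled up into per-index scores. The sketch below is purely illustrative: the variable names, the bundles, and the fraction-based scoring are assumptions for the sake of example, not the actual schema of Roff’s dataset.

```python
# Illustrative only: hypothetical capability bundles standing in for the
# roughly 20 variables described above. Not Roff's actual coding scheme.
INDEX_BUNDLES = {
    "self_mobility":      ["navigation", "homing", "loitering"],
    "self_direction":     ["target_identification", "target_tracking", "firing"],
    "self_determination": ["goal_setting", "planning", "communication"],
}

def score_indices(capabilities):
    """Return each index as the fraction of its bundled capabilities present."""
    scores = {}
    for index, variables in INDEX_BUNDLES.items():
        present = sum(capabilities.get(v, False) for v in variables)
        scores[index] = present / len(variables)
    return scores

# A notional "smart" munition: mobile and able to find targets, but with
# little capacity for setting its own goals or plans.
example_weapon = {
    "navigation": True, "homing": True, "loitering": False,
    "target_identification": True, "target_tracking": True, "firing": True,
    "goal_setting": False, "planning": False, "communication": True,
}

print(score_indices(example_weapon))
# {'self_mobility': 0.67, 'self_direction': 1.0, 'self_determination': 0.33} (approx.)
```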

As Roff explains in a recent Foreign Policy post, the data shows that “the emerging trend in autonomy has less to do with the hardware and more on the areas of communications and target identification. What we see is a push for better target identification capabilities, identification friend or foe (IFF), as well as learning.  Systems need to be able to adapt, to learn, and to change or update plans while deployed. In short, the systems need to be tasked with more things and vaguer tasks.” Thus newer systems will need greater self-determination capabilities.

The Human in the Loop

But understanding what the weapons systems can do is only one part of the equation. In most systems, humans still maintain varying levels of control, and the military often claims that a human will always be “in the loop.” That is, a human will always have some element of meaningful control over the system. But this leads to another definitional problem: just what is meaningful human control?

Roff argues that this idea of keeping a human “in the loop” isn’t just “unhelpful,” but that it may be “hindering our ability to think about what’s wrong with autonomous systems.” She references what the UK Ministry of Defence calls the Empty Hangar Problem: no one expects to walk into a military airplane hangar and discover that the autonomous plane spontaneously decided, on its own, to go to war.

“That’s just not going to happen,” Roff says. “These systems are always going to be used by humans, and humans are going to decide to use them.” But thinking about humans in some loop, she contends, means that any difficulties with autonomy get pushed aside.

Earlier this year, Roff worked with Article 36, which coined the phrase “meaningful human control,” to establish a more clear-cut definition of the term. They published a concept paper, Meaningful Human Control, Artificial Intelligence and Autonomous Weapons, which offered guidelines for delegates at the 2016 CCW Meeting of Experts on Lethal Autonomous Weapons Systems.

In the paper, Roff and Richard Moyes outlined key elements – such as predictable, reliable and transparent technology, accurate user information, a capacity for timely human action and intervention, and human control during attacks – for determining whether an AWS allows for meaningful human control.

“You can’t offload your moral obligation to a non-moral agent,” says Roff. “So that’s where I think our work on meaningful human control is: a human commander has a moral obligation to undertake precaution and proportionality in each attack.” The weapon system cannot do it for the human.

Researchers and the international community are only beginning to tackle the ethical issues that arise from AWSs. Clearly defining the weapons systems and the role humans will continue to play is one small part of a very big problem. Roff will continue to work with the international community to establish more clearly defined goals and guidelines.

“I’m hoping that the doctrine and the discussions that are developing internationally and through like-minded states will actually guide normative generation of how to use or not use such systems,” she says.

Heather Roff also spoke about this work on an FLI podcast.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

This content was first published at futureoflife.org on September 30, 2016.

