
Lethal Autonomous Weapons Systems – FAQs

Published:
July 18, 2018
Author:
Ariel Conn

Lethal Autonomous Weapons Systems – Frequently Asked Questions

The pledge provides a good summary of why FLI is concerned about lethal autonomous weapons systems (LAWS). The following FAQ conveys FLI’s own position on autonomous weapons in more detail, though these more detailed views should not be attributed to any signatory of the pledge; signatories have indicated their agreement only with what is written in the pledge itself, and in particular with the commitment to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”

What sort of weapons systems do “LAWS” refer to?

The weapons at issue here are those for which the decision as to whether to target and attack a particular person is ultimately made by a machine rather than by a human. The term does not cover semi-autonomous or remote-piloted systems like current drones, defensive systems that attack other weapons, nonlethal weapons, or cyberweapons. For these reasons we sometimes refer to “KLAOWS” (Kinetic Lethal Autonomous Offensive Weapons Systems), but this term is not widely used. That said, there are disagreements over the details of these definitions. In our view the most problematic aspects of (K)LA(O)WS are that they (a) move moral responsibility from a person to a machine (or nowhere), (b) dramatically lower the usual barriers against the killing and assassination of individuals, and (c) allow killing to scale in a way qualitatively different from semi-autonomous or non-autonomous weapons systems.

Does FLI oppose weapons in general or AI use in the military?

Like almost anyone else, we’d like to see a peaceful world where large militaries are unnecessary and military conflicts are absent. But we’re far from such a world, and we regard a role for AI in the military as inevitable and potentially positive if it can help reduce conflict or make it more consistent with international law and universally recognized human rights. Our hope is to avoid an arms race involving weapons that seem likely to lead to unstable dynamics, a lowering of the threshold for armed conflict, violations of current international moral norms, and other negative outcomes. There are other military applications of AI (e.g. in nuclear or other command-and-control systems) that we also believe would be dangerous. This makes it vital for AI experts to help military planners attain a clear understanding not only of the strengths but also of the limitations of AI and machine learning (ML) systems, particularly with regard to robustness, security, reliability, and predictability.

Won’t militaries without LAWS be at a disadvantage against adversaries who develop them?

Possibly, but it’s exactly this line of reasoning that’s most likely to lead to an arms race. This is why an international agreement is vital: countries can avoid the negative outcomes of LAWS without feeling they are putting themselves at a military disadvantage. Note that the same considerations apply to torture, indiscriminate killing of civilians, etc.: we avoid them because it is morally right to do so, but we also forbid them by international agreement partly so that each country can eschew these practices without fearing that adversaries will gain an advantage by not doing so. Moreover, we are unconvinced that there are realistic scenarios in which a major military power is threatened by LAWS in a way that requires LAWS (rather than defensive systems, nuclear deterrent, etc.) to counter.

Aren’t LAWS already disallowed by policies of many countries?

A number of countries, including the US, already have policies requiring humans to make kill decisions. However, there are ambiguities in definitions, including in the separation of autonomous from semi-autonomous weapons, what “target” means, and so on; moreover, these policies could be changed by fiat at any time. If the intention is to keep the prohibitions in place indefinitely, it would seem in such countries’ strategic interest to seek an international agreement that would bind potential adversaries to the same types of restrictions using a consistent set of definitions.

Won’t countries just clandestinely develop LAWS anyway even if they are banned?

We anticipate countries will respond to a ban on LAWS as they did to bans on chemical, biological, nuclear, and space weapons. Yes, some countries may still develop these weapons, but there is a huge difference between the open development of a military capability and the open development of defensive measures against that capability, with only a much smaller, clandestine effort on the offensive side. Even if LAWS are banned, we believe it would absolutely make sense for militaries to develop countermeasures to them, just as we do, for example, with bioweapons.

Won’t LAWS save lives by having robots die rather than soldiers, and minimizing collateral damage?

This is a laudable goal, and it might be true if, after the advent of LAWS, the specific battles (numbers, times, locations, circumstances, victims) were exactly those that would have occurred with human soldiers had autonomous weapons been banned. But this is rather obviously not the case. Even if only one side in a conflict had LAWS, it would almost certainly use them in a different and probably expanded variety of ways. Moreover, unlike nuclear weapons, LAWS don’t require hard-to-obtain ingredients or dangerous processes; unlike remote-piloted aircraft and the like, they don’t require sophisticated satellite or communications systems. It therefore seems likely that these weapons will, after some period, proliferate widely. At the same time, because of their ease of use, LAWS are likely to dramatically lower the threshold of military conflict, leading to instability and potentially far more loss of life on both sides. Finally, it seems unlikely to us that conflicts will ever be fully confined to robots and other unmanned military systems fighting each other. While this might hold for a time, a determined military is unlikely to surrender merely because its robots have been defeated; it will do so only when the cost in human life becomes too high or it can no longer wage the conflict.

Won’t LAWS prevent wars by allowing us to just “remove” the leadership of an adversary?

See above, and remember that both sides consider themselves “us.”

Won’t LAWS save lives in some cases, for example with a sniper holed up and shooting civilians, or a ticking-bomb scenario?

Unlikely. A sniper or suicide bomber without hostages can be handled just as well by conventional means, or possibly by a semi-autonomous weapon with a human in the loop (as in the Texas sniper case). A terrorist or group with hostages would be a terrible situation into which to send a fully autonomous weapon: the technology simply doesn’t exist to reliably differentiate bad actors from bystanders and hostages, and such situations have in the past been peacefully resolved only through enormous skill and subtlety in negotiation.

LAWS are sometimes termed (by FLI and others) “weapons of mass destruction.” Why?

There is no real consensus definition of WMDs, and many of the existing definitions are historical. LAWS would have very different characteristics from these other WMDs in that they would not be indiscriminate in the way that nuclear, chemical, or (present-day) biological weapons are. We use the term because LAWS could put the unimpeded ability to kill a very large number of people into a small number of hands, and at a very low cost (financial and otherwise). The cost per kill of LAWS could be as little as hundreds of dollars, which is comparable to nuclear weapons and far below many other weapons systems. The term “scalable weapons of mass killing” would also be appropriate but is not widely used.
