The Risks Posed By Lethal Autonomous Weapons

Published: September 4, 2018
Author: Ariel Conn

The following article was originally posted on Metro.

Killer robots. It’s a phrase that’s both terrifying and, for most people, still the stuff of science fiction. Yet weapons built with artificial intelligence (AI) – weapons that could identify, target, and kill a person all on their own – are quickly moving from sci-fi to reality.

To date, no weapons exist that can specifically target people. But weapons already exist that can track incoming missiles or locate enemy radar signals, and they can autonomously strike those non-human threats without any person involved in the final decision. Experts predict that in just a few years, if not sooner, this technology will be advanced enough to use against people.

Over the last few years, delegates at the United Nations have debated whether to consider banning killer robots, more formally known as lethal autonomous weapons systems (LAWS). This week delegates met again to consider whether more meetings next year could lead to something more tangible – a political declaration or an outright ban.

Meanwhile, those who would actually be responsible for designing LAWS — the AI and robotics researchers and developers — have spent these years calling on the UN to negotiate a treaty banning LAWS. More specifically, nearly 4,000 AI and robotics researchers called for a ban on LAWS in 2015; in 2017, 137 CEOs of AI companies asked the UN to ban LAWS; and in 2018, 240 AI-related organizations and nearly 3,100 individuals took that call a step further and pledged not to be involved in LAWS development.

And AI researchers have plenty of reasons for their consensus that the world should seek a ban on lethal autonomous weapons. Principal among these is that AI experts tend to recognize how dangerous and destabilizing these weapons could be.

The weapons could be hacked. The weapons could fall into the hands of “bad actors.” The weapons may not be as “smart” as we think and could mistakenly target innocent civilians. Because the materials necessary to build the weapons are cheap and easy to obtain, military powers could mass-produce them, increasing the likelihood of proliferation and mass killings. The weapons could enable assassinations, or they could become weapons of oppression, allowing dictators and warlords to subdue their people.

But perhaps the greatest risk posed by LAWS is their potential to ignite a global AI arms race.

For now, governments insist they will ensure that testing, validation, and verification of these weapons are mandatory. However, these weapons are not only technologically novel, but also transformative; they have been described as the third revolution in warfare, following gunpowder and nuclear weapons. LAWS have the potential to become among the most powerful weapons the world has seen.

Varying degrees of autonomy already exist in weapon systems around the world, and levels of autonomy and advanced AI capabilities in weapons are increasing rapidly. If one country were to begin substantial development of a LAWS program — or even if other countries simply perceived its program as substantial — an AI arms race would likely be imminent.

During an arms race, countries and AI labs will feel increasing pressure to find shortcuts around safety precautions. Once that happens, every threat mentioned above becomes even more likely, if not inevitable.

As stated in the Open Letter Against Lethal Autonomous Weapons:

The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

Most countries at these meetings have expressed a strong desire to move from talking about this topic to reaching an outcome. Many countries and groups of countries have called for negotiating a new treaty to prohibit LAWS or, at minimum, to affirm meaningful human control over such weapons. Some countries have suggested other measures, such as a political declaration. But a few countries – especially Russia, the United States, South Korea, Israel, and Australia – are obstructing the process, which could lead us closer to an arms race.

This is a threat we must prevent.
