
An introduction to the issue of Lethal Autonomous Weapons

Published: November 30, 2021
Author: Taylor Jones


In the last few years, there has been a new development in the field of weapons technology.

Some of the most advanced national military programs are beginning to implement artificial intelligence (AI) into their weapons, essentially making them ‘smart’. This means these weapons will soon be making critical decisions by themselves – perhaps even deciding who lives and who dies.

If you’re safe at home, far from the front lines, you may think this does not concern you – but it should.

What are lethal autonomous weapons?

Slaughterbots, also called “lethal autonomous weapons systems” or “killer robots”, are weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without human intervention.

Whereas in the case of unmanned military drones the decision to take life is made remotely by a human operator, in the case of lethal autonomous weapons the decision is made by algorithms alone.

Slaughterbots are pre-programmed to kill anyone matching a specific “target profile.” The weapon is then deployed into an environment where its AI searches for that “target profile” in incoming sensor data, using techniques such as facial recognition.

When the weapon encounters someone the algorithm perceives to match its target profile, it fires and kills.
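To see concretely what it means for this decision to be made “by algorithms alone”, consider the following minimal sketch in Python. It is purely illustrative: the names, the similarity measure, and the threshold are all assumptions, and it describes no real system. What it shows is that the entire life-or-death “decision” reduces to a statistical score crossing a fixed cut-off.

    import numpy as np

    # Purely illustrative sketch of algorithmic target matching.
    # All names and values are hypothetical; this mirrors no real system.

    MATCH_THRESHOLD = 0.85  # arbitrary confidence cut-off (an assumption)

    def match_score(face_embedding: np.ndarray, target_profile: np.ndarray) -> float:
        """Cosine similarity between a detected face and the stored target profile."""
        return float(np.dot(face_embedding, target_profile) /
                     (np.linalg.norm(face_embedding) * np.linalg.norm(target_profile)))

    def matches_target(face_embedding: np.ndarray, target_profile: np.ndarray) -> bool:
        # The whole "decision" is a single comparison of a noisy statistical
        # score against a fixed threshold. Nothing here represents human
        # judgement, context, or the possibility of misidentification.
        return match_score(face_embedding, target_profile) >= MATCH_THRESHOLD

Note what the sketch leaves out: any representation of doubt, context, or a human check. To the algorithm, a misidentification that happens to score above the threshold is indistinguishable from a correct match.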

What’s the problem?

Weapons that use algorithms, rather than human judgement, to kill are immoral and a grave threat to national and global security.

  1. Immoral: Algorithms are incapable of comprehending the value of human life, and so should never be empowered to decide who lives and who dies. Indeed, United Nations Secretary-General António Guterres has said that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”
  2. Threat to Security: Algorithmic decision-making allows weapons to follow the trajectory of software: faster, cheaper, and at greater scale. This will be highly destabilising on both national and international levels because it introduces the threats of proliferation, rapid escalation, unpredictability, and even the potential for weapons of mass destruction.

How soon will they be developed?

Terms like “slaughterbots” and “killer robots” remind people of science fiction movies like The Terminator, which features a self-aware, human-like robot assassin. This fuels the assumption that lethal autonomous weapons belong to the far future.

But that is incorrect.

In reality, weapons which can autonomously select, target, and kill humans are already here.

A 2021 report by the U.N. Panel of Experts on Libya documented a lethal autonomous weapons system hunting down retreating forces. Since then, there have been numerous reports of drone swarms and other autonomous weapons systems being used on battlefields around the world.

The accelerating pace of these uses is a clear warning that the time to act is quickly running out.

  • March 2021 – First documented use of a lethal autonomous weapon
  • June 2021 – First documented use of a drone swarm in combat


