
Special Newsletter: Slaughterbots Sequel

Published: December 13, 2021
Author: Will Jones


We’ve produced a sequel to the viral film Slaughterbots.

The Future of Life Institute (FLI) has today released Slaughterbots – if human: kill(), a short film that warns anew of humanity’s accelerating path towards the widespread proliferation of slaughterbots – weapons that use artificial intelligence (AI) to identify, select, and kill people without human intervention. Watch it here.

if human: kill() follows up on FLI’s award-winning short film, Slaughterbots, to depict a dystopian future in which these weapons have been allowed to become the tool of choice not just for militaries, but also for any group seeking to achieve scalable violence against a specific group, individual, or population.

When FLI first released Slaughterbots in 2017, some criticized the scenario as unrealistic and technically unfeasible. Since then, however, slaughterbots have been used on the battlefield, and similar, easy-to-make weapons are currently in development, marking the start of a global arms race that currently faces no legal restrictions.

if human: kill() conveys a concrete path to avoid the outcome it warns of. The vision for action is based on the real-world policy prescription of the International Committee of the Red Cross (ICRC), an independent, neutral organisation that plays a leading role in the development and promotion of laws regulating the use of weapons. A central tenet of the ICRC’s position is the need to adopt a new, legally binding prohibition on autonomous weapons that target people. FLI agrees with the ICRC’s most recent recommendation that the time has come to adopt legally binding rules on lethal autonomous weapons through a new international treaty.

The United Nations will soon meet in Geneva to debate this proposal. Demand that your country choose humanity, not slaughterbots, by signing Amnesty International’s new petition.

Press inquiries and requests for film distribution may be directed to Georgiana Gilgallon, Director of Communications, at georgiana@futureoflife.org.

COMING LATER THIS WEEK…

Panel Discussion on ‘Slaughterbots – if human: kill()’ – Perspectives on the State of Lethal Autonomous Weapons

We will also be releasing a panel discussion featuring Richard Moyes, Managing Director of the Campaign to Stop Killer Robots (CSKR); Max Tegmark; Stuart Russell; and Maya Brehm of the International Committee of the Red Cross (ICRC). Tegmark and Russell will speak to the technical issues of lethal autonomous weapons from their perspective as AI professors; Moyes will offer the civil society perspective as coordinator of the CSKR; and Brehm will present the ICRC’s recommendation that states adopt new, legally binding rules to regulate lethal autonomous weapons.

The Ideas Behind ‘Slaughterbots – if human: kill()’ – A Deep Dive Interview

In support of Slaughterbots – if human: kill(), we’ve produced an in-depth interview exploring the ideas and purpose behind the new film. We interviewed Emilia Javorsky, who leads FLI’s lethal autonomous weapons policy and advocacy efforts; Max Tegmark, Co-founder and President of FLI; and Stuart Russell, Professor of Computer Science at the University of California, Berkeley, Director of the Center for Intelligent Systems, and a world-leading AI researcher.

Javorsky sheds light on the intentions behind the film, its associated policy goals, and broader implications. Russell elucidates the problems with lethal autonomous weapons, their technical capabilities, and why their use by good actors is ultimately counterproductive. He argues that a ban on these weapons is the only sensible option given the risk they pose. Tegmark ties the issue of lethal autonomous weapons into a grander vision of the potential for both positive and negative uses of technology in the future.

To learn more, visit autonomousweapons.org

