
FLI December 2021 Newsletter

Published
21 December, 2021
Author
Will Jones


FLI Launches a New Film:
Slaughterbots – if human: kill()

At the beginning of the month, the Future of Life Institute (FLI) released Slaughterbots – if human: kill(), a short film that warns anew of humanity’s accelerating path towards the widespread proliferation of slaughterbots – weapons that use artificial intelligence (AI) to identify, select, and kill people without human intervention. Give it a watch.

Slaughterbots – if human: kill() follows up on FLI’s award-winning short film, Slaughterbots, which went viral back in 2017. The new film depicts a dystopian future in which these weapons have been allowed to become the tool of choice not just for militaries, but for any group seeking to achieve scalable violence against a specific group, individual, or population.

When FLI first released Slaughterbots in 2017, some criticized the scenario as unrealistic and technically unfeasible. Since then, however, slaughterbots have been used on the battlefield, and similar, easy-to-make weapons are currently in development, marking the start of a global arms race that currently faces no legal restrictions.

if human: kill() conveys a concrete path to avoid the outcome of which it warns. The vision for action is based on the real-world policy prescription of the International Committee of the Red Cross (ICRC), an independent, neutral organisation that plays a leading role in the development and promotion of laws regulating the use of weapons. A central tenet of the ICRC’s position is the need to adopt a new, legally binding prohibition on autonomous weapons that target people. FLI agrees with the ICRC’s most recent recommendation that the time has come to adopt legally binding rules on lethal autonomous weapons through a new international treaty.

FLI’s new film has been watched by over 10 million people across YouTube, Facebook and Twitter, and has received substantial media coverage from outlets such as Axios, Forbes, BBC World Service’s Digital Planet and BBC Click, Popular Mechanics, and Prolific North. Politico later recommended the film as the best way for readers to clarify for themselves the dangers we face from slaughterbots, and what can be done to prevent them. This places the film well to influence United Nations delegates as they meet this week at the Convention on Certain Conventional Weapons (CCW) review conference. In the meantime, you can still impress upon your national delegates the importance of choosing humanity over slaughterbots by signing Amnesty International’s new petition.
 
To learn more about lethal autonomous weapons and what can be done to prevent their proliferation, visit autonomousweapons.org or watch our panel discussion.

Other Policy & Outreach Efforts

Policy Advocacy for Banning Slaughterbots

Several members of the FLI policy team have been featured in high-profile media outlets as part of their advocacy on autonomous weapons systems. FLI Director of European Policy, Mark Brakel, was quoted in an article in the NRC Handelsblad, a daily evening newspaper in the Netherlands, as the Dutch government changed its position on autonomous weapons. Brakel is currently attending the Review Conference of the Convention on Certain Conventional Weapons in Geneva on behalf of FLI.
 
Elsewhere, Emilia Javorsky MD, MPH – a physician-scientist who leads FLI’s lethal autonomous weapons policy and advocacy efforts – appeared in numerous BBC programmes in conjunction with Stuart Russell’s BBC Reith Lecture on ‘AI in Warfare’ (see below). She promoted Slaughterbots – if human: kill() and explained FLI’s arguments against allowing lethal autonomous weapons to proliferate in public and private hands. As well as featuring in the aforementioned BBC Digital Planet episode, Javorsky was interviewed, along with Stuart Russell, on this BBC Click segment, which featured both of our films in an explanation of the risks posed by autonomous weapons.

Help us find our next unsung hero

Nominations for next year’s Future of Life Award are officially open. For the unfamiliar, this award is a $50,000-per-person prize given to individuals who, without having received much recognition at the time, helped make today dramatically better than it might otherwise have been. The award is funded by Skype co-founder Jaan Tallinn and presented by us, the Future of Life Institute. Past winners include Stanislav Petrov, who helped prevent an all-out US-Russian nuclear war; William Foege and Viktor Zhdanov, who played key roles in the eradication of smallpox; and, most recently, Joseph Farman, Susan Solomon and Stephen Andersen for their work in saving the earth’s ozone layer. To nominate an unsung hero, please follow the link here. If we decide to give the award to your nominee, you will receive a $3,000 prize from us!

New Podcast Episodes

The ideas behind ‘Slaughterbots – if human: kill()’ | A deep dive interview

In support of Slaughterbots – if human: kill(), we’ve produced an in-depth interview that explores the ideas and purpose behind the new film. We interviewed Emilia Javorsky, a physician-scientist who leads FLI’s lethal autonomous weapons policy and advocacy efforts, Max Tegmark, FLI President and Professor of Physics at MIT, and Stuart Russell, Professor of Computer Science at Berkeley, Director of the Center for Intelligent Systems, and world-leading AI researcher. They share their perspectives on particular scenes from the film and how they fit into wider issues surrounding lethal autonomous weapons. We hope this deep-dive into the content of Slaughterbots – if human: kill() helps to share the message of hope in the film, and the policy solution we see as crucial for allowing AI to benefit the world’s future, rather than oppress and harm it.

News & Reading

Recent Developments in Autonomous Weapons

In October, military hardware company Ghost Robotics debuted a robodog with a sniper rifle attached. Ghost Robotics later denied that the robot was fully autonomous, but it remains unclear just how far its autonomous capabilities extend. Leading AI researcher Toby Walsh said that he hoped the public outcry at the robodog would add ‘urgency to the ongoing discussions at the UN to regulate this space.’

Only weeks later, the Australian Army put in an order for the “Jaeger-C”, a bulletproof attack robot vehicle with anti-tank and anti-personnel capabilities. Forbes reported that “autonomous operation… means the Jaeger-C will work even if jammed”. It also means that the vehicle can operate fully autonomously, with no human in control of its actions. The field of autonomous weaponry is advancing faster than legislation can hold it to account.

At the beginning of December, the Group of Governmental Experts (GGE) met at the UN in Geneva to discuss appropriate legislation to regulate autonomous weapons systems. FLI hoped that the International Committee of the Red Cross (ICRC) position (namely, among other stipulations, to ban autonomous weapons that target humans) would be adopted, but instead no consensus of any kind was reached, leaving the issue wide open for the Convention on Certain Conventional Weapons (CCW) review conference proceeding at the UN this week. Mark Brakel is attending on our behalf.

In the meantime, AI’s ability to target accurately in situations resembling war remains dubious. Defense One writes of an Air Force targeting algorithm thought to have a 90% success rate that turned out to be closer to 25% accurate. A ‘subtle tweak’ in conditions sent this AI’s performance into a ‘dramatic nosedive’. And while the algorithm was only right 25% of the time, ‘it was confident that it was right’ 90% of the time. FLI takes this as yet more evidence that humans must remain in control: not only do AIs get things wrong; they also never stop to consider whether they are wrong – a disastrous shortcoming.

The Gathering Swarm

Following the Indian military’s drone swarm demonstration in January, and the Israel Defense Forces’ (IDF) use of a swarm to find, select and attack Hamas militants in Gaza over the summer, Russian defence firm Kronshtadt has now announced that it will soon debut its own drone swarm. Kronshtadt CEO Sergei Bogatikov said, ‘This is a new stage in drone control, which will make them more autonomous’ – in other words, uncontrollable. Swarms represent a particularly large-scale threat, due to the number of drones involved and thus the potentially high number of human fatalities; in addition, Zachary Kallenborn pointed out on the FLI Podcast the danger of one drone’s error being communicated to, and magnified in impact by, the entire swarm. That so many military powers are investing in swarms represents a frightening next step in the evolution of autonomous weapon systems.

New Uses of Drones: A Sign of Things to Come

The above developments are confined to high-tech, world-leading military firms. But further afield, we see uses of piloted drones that give a sense of the decentralised damage soon to be done by autonomous quadcopters and the like. At the beginning of November, an attack by a “small explosive-laden drone” failed to kill Iraqi PM Mustafa al-Kadhimi. However, like the Maduro drone attack in 2018, it demonstrated just how easy it is for powerful new technology to fall into criminal hands. With added autonomy, such a drone would have been harder to hold anyone accountable for, and much harder to defend against. Al-Kadhimi would likely have died, Baghdad’s tense situation would have erupted, and Iraq’s latest chance at democratic stability might have been over. The Washington Post, quite rightly, later identified the attack as emblematic of a ‘growing threat’ of drone terrorism.

We learn that Mexican cartels have also embraced aerial drones for reconnaissance, and indeed for attacking rival gangs and security forces. And, in the same week as the Iraqi assassination attempt, WIRED ran this piece on the growing threat of drone attacks disrupting the power grid in the U.S. and elsewhere. We are seeing the emergence of a truly destabilising force here, one which the addition of autonomy could make devastating, and, as Brian Barrett writes in the article, ‘not enough is being done to stop it’.

Keep an Ear Out for Stuart Russell’s Reith Lecture Series

As mentioned in our November newsletter, Professor Stuart Russell, Founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, a world-leading AI researcher and long-time ally of FLI, is giving this year’s BBC Reith Lectures in the United Kingdom. The Reith Lectures, founded in 1948 by Sir John (later Lord) Reith, first Director-General of the BBC, are regarded in Britain as the most significant public lecture platform in any given year. The 2021 series is entitled Living with Artificial Intelligence; in the build-up to the lectures, Russell gave this interview with The Guardian, in which he stated that AI experts were “spooked” by recent developments in the field, comparing these developments to those of the atom bomb.
 
Russell’s first two lectures were broadcast on December 1st and 8th, on BBC Radio 4 and the BBC World Service; the third and fourth follow on the 15th and 22nd, respectively. Post-broadcast, the lectures are available internationally online through BBC Sounds. Of particular note to FLI readers at this time is the second lecture, on ‘AI in Warfare’ (listen here). In the Guardian interview, Russell singled out military uses of AI as a particularly concerning area of development. He emphasised the threat from anti-personnel weapons: “Those are the ones that are very easily scalable, meaning you could put a million of them in a single truck and you could open the back and off they go and wipe out a whole city”. He hopes that the Reith Lectures will help get the public “involved in those choices” about the direction we take going forward, explaining, “it’s the public who will benefit or not”. In their follow-up opinion column, The Guardian made it clear that they endorse Russell’s message, declaring, “AI needs regulating before it’s too late”. We hope that the lectures, and the press around them in the British Isles and beyond, will continue to shape the way that academics, researchers, journalists and the broader public think about AI safety risks.

Learning the lessons of our times

Lena Sun writes in The Washington Post that “two years into this pandemic, the world is dangerously unprepared for the next one”. A new global security index ranking 195 countries by their preparedness for future biological risks reveals great complacency, which one might have expected COVID-19 to shake up somewhat. Sun quotes epidemiologist Dr. Jennifer Nuzzo on the strange disconnect between scientific risk assessments and national reactions: if “the alarms go off and your political leaders tell you, ‘Pay no attention to that alarm…’ that doesn’t mean that the fire alarms don’t work”.

FLI is a 501(c)(3) non-profit organisation, meaning donations are tax-deductible in the United States.
If you need our organisation number (EIN) for your tax return, it’s 47-1052538.

FLI is registered in the EU Transparency Register. Our ID number is 787064543128-10.
