
Artificial Escalation

Our new fictional film depicts a world where artificial intelligence ('AI') is integrated into nuclear command, control and communications ('NC3') systems, with terrifying results. When disaster strikes, American, Chinese and Taiwanese military commanders quickly discover that with their new operating system in place, everything has sped up. They have little time to work out what is going on, and even less time to prevent the situation from escalating into a major catastrophe.

See the related Op-Ed in the Bulletin of the Atomic Scientists.

What is the danger of AI in NC3?

The safety of our world is already at risk from accidental or intentional nuclear war. AI integration into the critical functions of NC3 systems could further destabilize this delicate dynamic, with calamitous consequences.

Here are just a few of the reasons why, each depicted in the film:

  1. The Nature of AI. AI can be unpredictable and unreliable, and it is vulnerable to cyberattacks – not ideal qualities for the systems controlling the world’s most dangerous weapons. With no data from real nuclear wars to learn from, these systems would also be trained primarily on simulations, meaning they may respond erratically to unexpected events. Nuclear escalations are unlikely to unfold by the book, and AI systems often react (or fail) in ways quite different from humans.
  2. Losing Control at Breakneck Speed. AI can accelerate the pace of warfare, leaving less time for understanding, communication and clear-headed decision-making. With only moments to think, commanders are more likely to trust computer readouts or judgements, and less likely to interrogate or reject them.
  3. Geopolitical Instability. A world of arms races and nuclear tensions often prioritizes speed over safety, conflict over diplomacy, action over understanding. At such times, novel technology could be adopted before it has been properly tested.

Now is the time for countries to draw clear lines prohibiting certain uses of AI in NC3, to develop robust mitigation measures, and to identify stabilizing policies that ensure humans always remain in control of nuclear decisions.

Backstory

In Artificial Escalation, the US and China both rapidly adopt AI into their command, control and communication systems. Read more about this fictional (but all too plausible) world.
Read the Backstory

Policy Primer

Learn more about the risks of AI in NC3 and potential policy solutions in our Policy Primer:
Read the Policy Primer

How Bad Can It Be?

Along with explosions, radioactive fallout and electromagnetic pulse, a nuclear war could cause black smoke to block sunlight across the northern hemisphere, destroying agriculture for several years. This is called nuclear winter, and it could kill 2 in 3 people on Earth. Watch the video below to see what this would look like.

Further resources

If you would like to learn more about these scenarios, we recommend the following papers:

Xia L, Robock A, Scherrer K, et al. Global food insecurity and famine from reduced crop, marine fishery and livestock production due to climate disruption from nuclear war soot injection. Nature Food. August 2022.

Boulanin V, Saalman L, Topychkanov P, Su F, Peldán Carlsson M. Artificial Intelligence, Strategic Stability and Nuclear Risk. Stockholm International Peace Research Institute. 2020.

Hruby J, Miller MN. Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems. Nuclear Threat Initiative. 2021.

Wehsener A, Walker L, Beck R, Philips L, Leader A. Forecasting the AI and Nuclear Landscape. Institute for Security and Technology. September 2022.

Bajema N, Gower J. A Handbook for Nuclear Decision-making and Risk Reduction in an Era of Technological Complexity. Council on Strategic Risks. December 2022.

Our work

Other projects in this area

We work on a range of projects across a few key areas. Here are some of our other projects in this area:

Imagine A World Podcast

Can you imagine a world in 2045 in which we manage to avoid the climate crisis, major wars, and the potential harms of artificial intelligence? Our new podcast series explores ways we could build a more positive future, and offers thought-provoking ideas for how we might get there.

Worldbuilding Competition

The Future of Life Institute accepted entries from teams across the globe, competing for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.
