
Mitigating the Risks of AI Integration in Nuclear Launch

Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control, and communications.

FLI seeks to reduce the risk of nuclear war by raising awareness of just how catastrophic such a war would be, above all because of nuclear winter, and by supporting specific measures that pull us back from the brink of nuclear destruction. We also educate the public about the inspiring individuals who prevented nuclear war in the past, and celebrate the scientists who reduced nuclear risk by discovering nuclear winter. Our current policy work is focused on ensuring that nuclear stability is not undermined by efforts to incorporate AI systems into nuclear weapons command, control, and communications (NC3).

Related project

Artificial Escalation

This fictional film depicts a world where artificial intelligence (AI) is integrated into nuclear command, control and communications (NC3) systems, with terrifying results.
View the short film

AI in nuclear weapons launch

The Stockholm International Peace Research Institute (SIPRI) has outlined three layers of risk around integrating AI systems into NC3. First, AI systems have inherent limitations: they are often unpredictable, unreliable, and highly vulnerable to cyberattacks and spoofing. Second, when incorporated into the military domain, AI-powered technologies accelerate the speed of warfare, leaving states less time to signal their own capabilities and intentions, or to understand their opponents'. Third, these risks become even more profound in highly networked NC3 systems: reliance on AI could undermine states' confidence in their retaliatory strike capabilities, or be exploited to weaken nuclear cybersecurity. All of these risks are magnified by the lack of historical data on nuclear exchanges with which to train algorithms, and by a geopolitical context of arms races and nuclear tensions that prioritizes speed over safety.

Some applications of AI in nuclear systems can, on balance, be stabilizing. Nuclear communications, for example, might benefit from the integration of AI systems. According to analysis by the Nuclear Threat Initiative, however, the vast majority of AI applications in NC3 have an uncertain or net destabilizing effect on nuclear stability.

Figure: Risks and challenges posed by the use of artificial intelligence in nuclear weapons (source, page 124)

The FLI policy team advocates for the responsible integration of AI systems in line with the final report of the U.S. National Security Commission on AI. Our priority is to ensure that nuclear powers implement the Commission's recommendation that 'only human beings can authorize employment of nuclear weapons' (page 10).

Our broader approach to nuclear risk

FLI supports measures that reduce the risk of global nuclear escalation and advocates for solutions laid out by the Union of Concerned Scientists. These measures include getting the nine nuclear weapon states to commit to a "No First Use" policy, under which each pledges never to be the first to use nuclear weapons in a conflict.

We believe in taking land-based nuclear weapons off hair-trigger alert, which would greatly reduce the risk of an accidental launch caused by a malfunctioning warning system. Likewise, we support the extension of the New Strategic Arms Reduction Treaty (New START) between the US and Russia until 2026, among other arms reduction measures.

FLI further backs ending 'sole authority' over the use of nuclear weapons, to avoid any future scenario where the fate of humanity lies in the hands of a single individual. In the past, we have survived at least two such scenarios largely due to luck, most famously the close calls averted by Vasili Arkhipov in 1962 and Stanislav Petrov in 1983.

Our work

Other projects in this area

We work on a range of projects across a few key areas. See some of our other projects in this area of work:

Strengthening the European AI Act

Our key recommendations include broadening the Act's scope to regulate general-purpose AI systems, and extending the definition of prohibited manipulation to cover any type of manipulative technique as well as manipulation that causes societal harm.

Educating about Lethal Autonomous Weapons

Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.
