
Framework for Responsible Use of AI in the Nuclear Domain

This policy brief outlines the need for an international framework addressing the convergence of artificial intelligence (AI) and nuclear command, control, and communications (NC3) systems.

Since the outbreak of the Ukraine war in 2022, communication channels between the major powers have broken down. During the same period, the AI race has escalated. At a time of such a deep deficit of trust, an interface between AI and the nuclear systems of the five largest nuclear powers poses catastrophic risk. A global nuclear war triggered by AI, whether by intent, incident, or accident, cannot be ruled out, even though AI is not currently used in the actual command functions.

Such a war could result from the malfunctioning or manipulation of early warning systems, from the poisoning or inadequacy of synthetic data, or from other problems in threat detection and targeting functions. As decision-making time becomes highly compressed by the rapid evolution of AI, the risk of mistakes and miscalculations grows by the day.

The brief draws from a dialogue process, conducted from 2022 to 2024, involving experts from the five permanent members of the UN Security Council (P5): China, France, Russia, the UK, and the US.

The basis of such a framework could be joint political declarations by the P5 countries to set the strategic direction for AI-nuclear governance and establish a foundation for broader international cooperation, eventually leading to a global multilateral agreement. But declarations are not enough. Signatory countries must also accept the principles of transparency and explainability, adherence to international humanitarian law, and human control.

To operationalise these principles, concrete steps are needed to prohibit offensive AI capabilities, prevent data manipulation, and maintain robust cybersecurity.

At the national level, voluntary measures can include regular audits of AI systems in NC3, the development of fail-safes, and improvements in crisis communication channels. International collaboration should focus on shared guidelines for AI in the nuclear domain, including safety, ethics, and performance standards. This could involve joint research, simulation exercises, and the establishment of international governance mechanisms for AI in NC3. To facilitate such cooperation, the P5 countries should hold periodic meetings at the technical level and a meeting of senior leadership at least once every two years.

The proposed framework is a first step toward the long-term vision of a world free from nuclear weapons. By fostering responsible AI use in the nuclear domain, it aims to reduce the risks posed by the convergence of nuclear weapons and AI, contributing to global peace and stability.

Author(s)
Future of Life Institute, Strategic Foresight Group
Date published
5 February, 2025
