
AI Convergence: Risks at the Intersection of AI and Nuclear, Biological and Cyber Threats

The dual-use nature of AI systems can amplify the dual-use nature of other technologies—this is known as AI convergence. We provide policy expertise to policymakers in the United States in three key convergence areas: biological, nuclear, and cyber.
A frame from our short film Artificial Escalation on the dangers of integrating AI into Nuclear Command and Control.

The dual-use nature of AI systems can amplify the dual-use nature of other technologies, including biological, chemical, nuclear, and cyber. This phenomenon has come to be known as AI convergence. Policy thought leaders have traditionally examined the risks and benefits of distinct technologies in isolation, assuming limited interaction between threat areas. Artificial intelligence, however, is uniquely capable of being integrated with other technologies and amplifying their risks. This demands a reevaluation of the standard policy approach and the creation of a typology of convergence risks, which broadly fall into one of two categories: convergence by technology or convergence by security environment.

The Future of Life Institute, which has a decade of experience in grantmaking and education on emerging technology issues, provides policy expertise on AI convergence to policymakers in the United States. In each area, our work summarizes the main threats arising from these intersections and offers concise policy recommendations to mitigate them. Our work in this space currently focuses on three key areas of AI convergence:

Biological and Chemical Weapons

AI could reverse the progress made over the last fifty years to abolish chemical weapons and to build strong norms against their use. Recent research has shown that AI systems can generate thousands of novel chemical weapons; most of these new compounds, as well as their key precursors, appear on no government watch-lists because of their novelty. On the biological weapons front, cutting-edge biosecurity research, such as gain-of-function research, qualifies as dual-use research of concern: while such research offers significant potential benefits, it also creates significant hazards.

Accompanying these rapid developments are even faster advances in AI tools used in tandem with biotechnology. For instance, advanced AI systems have enabled novel practices such as AI-assisted identification of virulence factors and the in silico design of novel pathogens. More general-purpose AI systems, such as large language models, have also given a much larger set of individuals access to potentially hazardous information about procuring and weaponizing dangerous pathogens, lowering the level of biological expertise needed to carry out such malicious acts.

Cybersecurity

AI systems can make it easier for malevolent actors to develop more virulent and disruptive malware. They can also help adversaries automate cyberattacks, increasing their efficiency, creativity, and impact: uncovering novel zero-day exploits (previously unidentified vulnerabilities), targeting critical infrastructure, and enhancing techniques such as phishing and ransomware. As powerful AI systems are increasingly empowered to devise the tasks and subtasks needed to accomplish their objectives, autonomously initiated hacking is also expected to emerge in the near term.

Nuclear Weapons

Developments in AI can destabilize nuclear deterrence, increasing the probability of nuclear weapons use and imperiling international security. Advanced AI systems could heighten nuclear risks by being further integrated into nuclear command and control procedures, by reducing the deterrence value of nuclear stockpiles through augmented Intelligence, Surveillance, and Reconnaissance (ISR), by leaving nuclear arsenals vulnerable to cyberattacks and manipulation, and by driving nuclear escalation with AI-generated disinformation.

Related project

Artificial Escalation

This fictional film depicts a world where artificial intelligence ('AI') is integrated into nuclear command, control and communications systems ('NC3') with terrifying results.

For questions regarding our work in this space, invitations and opportunities for collaboration, please reach out to policy@futureoflife.org.

Related content

Catastrophic AI Scenarios

Concrete examples of how AI could go wrong
1 February, 2024

Other projects in this area

We work on a range of projects across a few key areas. See some of our other projects in this area of work:

Combatting Deepfakes

2024 is rapidly turning into the Year of Fake. As part of a growing coalition of concerned organizations, FLI is calling on lawmakers to take meaningful steps to disrupt the AI-driven deepfake supply chain.

AI Safety Summits

Governments are increasingly cooperating to ensure AI safety. FLI supports and encourages these efforts.
