
Panda vs. Eagle

FLI's Director of Policy on why the U.S. national interest is much better served by a cooperative strategy towards China than by an adversarial one.
Published: September 27, 2024
Author: Mark Brakel
Vice Minister Wu of China and Secretary Raimondo of the US speaking about AI safety and governance at the UK AI Safety Summit. @matthewclifford, X.com, 1 November 2023.


Crosspost: This is a crosspost from Mark Brakel’s Substack

On Wednesday, Ivanka Trump reshared Leopold Aschenbrenner’s influential Situational Awareness essay.

Aschenbrenner’s essay has taken the AI policy bubble by storm. He argues that artificial general intelligence (AGI) will be built soon, that we can expect the US government to take control of AGI development by 2028, and that the U.S. should step up its efforts to beat China. According to Aschenbrenner, the stakes are high: “The torch of liberty will not survive Xi getting AGI first.” In my view, the U.S. national interest is much better served by a cooperative strategy towards China than by an adversarial one.

AGI may not be controllable 

Aschenbrenner’s recommendation that the U.S. engage in an AGI arms race with China only makes sense if this is a race that can actually be won. Aschenbrenner himself notes that “reliably controlling AI systems much smarter than we are is an unsolved technical problem” and that “failure could easily be catastrophic.” The CEOs of the major corporations currently developing AGI (Sam Altman at OpenAI, Demis Hassabis at Google DeepMind, and Dario Amodei at Anthropic) all believe that their technology poses an existential threat to humanity, not just to China. Leading AI researchers such as Yoshua Bengio, Geoffrey Hinton and Stuart Russell have likewise expressed deep scepticism about our ability to reliably control AGI systems. If there is some probability that a U.S. race towards AGI wipes out all of humanity, Americans included, then it might be more sensible for the U.S. government to pursue global cooperation around limits to AI development.

China will likely understand its national interest

You may, however, believe that the existential risk is small enough to be outweighed by the risk of permanent Chinese (technological) dominance, or, like Aschenbrenner, you may be bullish on a breakthrough in our understanding of what it would take to control superhuman AI systems. Even so, I don’t think this justifies an AI arms race.

In Aschenbrenner’s words: “superintelligence will be the most powerful technology—and most powerful weapon—mankind has ever developed. It will give a decisive military advantage, perhaps comparable only with nuclear weapons.” Clearly, if any of the existing superpowers believe that a rival is about to gain a “decisive military advantage” over them, this will be hugely destabilising to the international system. To stave off subjugation, China and Russia would likely take preemptive military action to prevent the U.S. from becoming the forever hegemon. An AGI arms race could thus push us to the brink of nuclear war, which is a strong argument for global cooperation over frenzied competition.

The view from Beijing

It takes two to tango, and pursuing cooperation on AI would be foolish if China raced ahead regardless. China certainly has its own equivalents of Marc Andreessen and Yann LeCun – the West’s loud and financially motivated evangelists of unbounded AI development. The Economist recently identified Zhu Songchun, the director of a state-backed programme to develop AGI, and science and technology minister Yin Hejun as two leading voices pushing back against any restraint.

Nevertheless, more safety-minded voices seem to be winning out for now. The summer saw the official launch of a Chinese AI Safety Network, with support from major universities in Beijing and Shanghai. Andrew Yao, the only Chinese person ever to have won the Turing Award for advances in computer science, Xue Lan, the president of the state’s expert committee on AI governance, and a former president of Chinese tech company Baidu have all warned that reckless AI development could threaten humanity. In June, China’s President Xi sent Andrew Yao a letter praising his work, and in July the President put AI risk front and centre at a meeting of the party’s Central Committee.

Cold shoulder? 

November last year was particularly promising for US-China cooperation on AI. On the first day of the month, US and Chinese representatives quite literally shared a stage at the Bletchley Park AI Safety Summit in the UK. Two weeks later, Presidents Biden and Xi met at a summit in San Francisco and agreed to open a bilateral channel on AI issues specifically. This nascent but fragile coordination was further demonstrated at a subsequent AI Safety conference in South Korea in May.

Clearly, China and the US are at odds over many issues, including the future of Taiwan, industrial policy and export controls. Some issues, however, such as climate change, nuclear security and AI safety, cannot be solved within geopolitical blocs; they demand a global response. The moves that nations make over the coming months may well determine the global AI trajectory: towards an AI arms race with a deeply uncertain outcome, or towards some form of shared risk management.

The West (including the US) has two opportunities to keep China at the table and to empower the safety-minded voices in Beijing: the November meeting of AI Safety Institutes in San Francisco and the Paris AI Action Summit in February. A substantial part of both summits will deal with safety benchmarks, evaluations, and company obligations. Whilst some of these issues are undoubtedly political, others are not. Ensuring that AI systems remain under human control is as much a Chinese concern as a Western one, and a meeting of the safety institutes in particular can offer a neutral space where technical experts meet across the geopolitical divide.

This content was first published at futureoflife.org on September 27, 2024.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


Related content


If you enjoyed this content, you might also be interested in:

US Federal Agencies: Mapping AI Activities

This guide outlines AI activities across the US Executive Branch, focusing on regulatory authorities, budgets, and programs.
9 September 2024

Paris AI Safety Breakfast #1: Stuart Russell

The first of our 'AI Safety Breakfasts' event series, featuring Stuart Russell on significant developments in AI, AI research priorities, and the AI Safety Summits.
5 August 2024

Poll Shows Broad Popularity of CA SB1047 to Regulate AI

A new poll from the AI Policy Institute shows broad and overwhelming support for SB1047, a bill to evaluate the risk of catastrophic harm posed by AI models.
23 July 2024

FLI Praises AI Whistleblowers While Calling for Stronger Protections and Regulation 

We need to strengthen current whistleblower protections. Lawmakers should act immediately to pass legal measures that provide the protection these individuals deserve.
16 July 2024

Some of our projects

See some of the projects we are working on in this area:

Combatting Deepfakes

2024 is rapidly turning into the Year of Fake. As part of a growing coalition of concerned organizations, FLI is calling on lawmakers to take meaningful steps to disrupt the AI-driven deepfake supply chain.

AI Safety Summits

Governments are increasingly cooperating to ensure AI Safety. FLI supports and encourages these efforts.
