
The Pause Letter: One year later

It has been one year since our 'Pause AI' open letter sparked a global debate on whether we should temporarily halt giant AI experiments.
Published: March 22, 2024
Author: Anthony Aguirre

One year ago today, the Future of Life Institute put out an open letter that called for a pause of at least six months on “giant AI experiments” – systems more powerful than GPT-4. It was signed by more than 30,000 individuals, including pre-eminent AI experts and industry executives, and made headlines around the world. The letter represented the widespread and rapidly growing concern about the massive risks presented by the out-of-control and unregulated race to develop and deploy increasingly powerful systems.

These risks include an explosion of misinformation and digital impersonation, widespread automation condemning millions to economic disempowerment, terrorists enabled to build biological and chemical weapons, the extreme concentration of power in the hands of a few unelected individuals, and many more. These risks have since been acknowledged by the leaders of the AI corporations themselves in newspaper interviews, industry conferences, joint statements, and U.S. Senate hearings.

Despite admitting the danger, these AI corporations have not paused. If anything, they have sped up, with vast investments in infrastructure to train ever-more-giant AI systems. At the same time, the last 12 months have seen growing global alarm and calls for lawmakers to take action. There has been a flurry of regulatory activity. President Biden signed a sweeping Executive Order directing model developers to share their safety test results with the government and calling for rigorous standards and tools for evaluating systems. The UK held the first global AI Safety Summit, where 28 countries signed the “Bletchley Declaration”, committing to cooperate on the safe and responsible development of AI. Perhaps most significantly, the European Parliament passed the world’s first comprehensive legal framework in the space – the EU AI Act.

These developments should be applauded. However, the creation and deployment of the most powerful AI systems remains largely ungoverned, and rushes ahead without meaningful oversight. There is still little-to-no legal liability for corporations when their AI systems are misused to harm people, for example in the production of deepfake pornography. Despite conceding the risks, and in the face of widespread concern, Big Tech continues to spend billions on increasingly powerful and dangerous models, while aggressively lobbying against regulation. They place profit above people, reportedly treating safety as an afterthought.

The letter’s proposed measures are more urgent than ever. We must establish and implement shared safety protocols for advanced AI systems, which must in turn be audited by independent outside experts. Regulatory authorities must be empowered. Legislation must establish legal liability for AI-caused harm. We need public funding for technical safety research, and well-resourced institutions to cope with the disruptions to come. We must demand robust cybersecurity standards to help prevent the misuse of these systems by bad actors.

AI promises remarkable benefits – advances in healthcare, new avenues for scientific discovery, increased productivity, and more. However, there is no reason to believe that vastly more complex, powerful, opaque, and uncontrollable systems are necessary to achieve these benefits. We should instead identify and invest in narrow, controllable AI systems that solve specific global challenges.

Innovation needs regulation and oversight. We know this from experience. The establishment of the Federal Aviation Administration made convenient air travel possible while ensuring that airplanes are safe and reliable. On the flip side, the 1979 meltdown at the Three Mile Island nuclear reactor effectively shuttered the American nuclear energy industry, in large part due to insufficient training, safety standards, and operating procedures. A similar disaster would do the same for AI. We should not let the haste and competitiveness of a handful of companies deny us the incredible benefits AI can bring.

Regulatory progress has been made, but the technology has advanced faster. Humanity can still enjoy a flourishing future with AI, and we can realize a world in which its benefits are shared by all. But first we must make it safe. The open letter referred to giant AI experiments because that’s what they are: the researchers and engineers creating them do not know what capabilities, or risks, the next generation of AI will have. They only know they will be greater, and perhaps much greater, than today’s. Even AI companies that take safety seriously have adopted the approach of aggressively experimenting until their experiments become manifestly dangerous, and only then considering a pause. But the time to hit the brakes is not when the front wheels are already over the cliff edge. Over the last 12 months, developers of the most advanced systems have revealed beyond all doubt that their primary commitment is to speed and their own competitive advantage. Safety and responsibility will have to be imposed from the outside. It is now our lawmakers who must have the courage to deliver – before it is too late.

This content was first published at futureoflife.org on March 22, 2024.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


Related content


If you enjoyed this content, you might also be interested in:

Max Tegmark on AGI Manhattan Project

A new report for Congress recommends that the US start a "Manhattan Project" to build Artificial General Intelligence. To do so would be a suicide race.
20 November, 2024

FLI Statement on White House National Security Memorandum

Last week the White House released a National Security Memorandum concerning AI governance and risk management. The NSM issues guidance […]
28 October, 2024

Paris AI Safety Breakfast #3: Yoshua Bengio

The third of our 'AI Safety Breakfasts' event series, featuring Yoshua Bengio on the evolution of AI capabilities, loss-of-control scenarios, and proactive vs reactive defense.
16 October, 2024
