
Safety Standards Delivering Controllable and Beneficial AI Tools

We present a concrete proposal for how humanity can maintain control over highly advanced AI systems.

The past decade has seen the extraordinary development of artificial intelligence from a niche academic pursuit into a transformative technology. AI tools promise to unlock incredible benefits for people and society, from Nobel Prize-winning breakthroughs in drug discovery to autonomous vehicles and personalized education. Unfortunately, two core dynamics threaten to derail this promise:

  1. First, the speed and manner in which AI is being developed, as a chaotic, nearly unregulated race between companies and countries, incentivizes a race to the bottom, cutting corners on security, safety, and controllability. We are now closer to figuring out how to build general-purpose smarter-than-human machines (AGI) than to figuring out how to keep them under control.
  2. Second, the main direction of AI development is not toward trustworthy, controllable tools that empower people, but toward potentially uncontrollable AGI that threatens to replace them, jeopardizing our livelihoods and lives as individuals, and our future as a civilization.

With many leading AI scientists and CEOs predicting AGI to be merely 1-5 years away, it is urgent to correct the course of AI development. Fortunately, there is an easy and well-tested way to do this: start treating the AI industry like all other high-impact industries, with legally binding safety standards that incentivize companies to innovate to meet them in a race to the top. Here we make a concrete proposal for such standards.

Date published
6 February, 2025
