
UK AI Safety Summit

The Future of Life Institute recommendations

Widespread unemployment, hacking of critical infrastructure, bioterrorism, nuclear war, human extinction – these are all real risks of unchecked, out-of-control AI development. The companies building these systems are banking on us feeling powerless. But we are not – at least not yet.


On November 1-2, at the world’s first-ever global government meeting focussed on AI Safety, leaders have an opportunity to change the dangerous trajectory of AI development. By acting now, they can ensure a safety-first approach that helps mitigate the risks of this incredibly powerful technology, while safeguarding the enormous benefits it could bring.

We call on global leaders to:

  1. Regulate AI through hard law so companies must prove the safety of their experiments before they develop advanced AI systems. Experience has shown that we cannot trust them to self-regulate.
  2. Establish a global agency that will create and oversee rigorous safety standards for AI development.
  3. Reconvene in six months, and every six months thereafter, to accelerate the creation of a robust AI governance regime.

Image: Bletchley Park, where the UK AI Safety Summit will take place from 1-2 November 2023.

What is the UK AI Safety Summit?

On November 1-2, the United Kingdom will convene the first-ever global government meeting focussed on AI Safety. Convened by British Prime Minister Rishi Sunak, the meeting will take place at Bletchley Park. The Summit seeks to achieve five objectives, and the UK government recently published an outline of what the conference will focus on.

Why does this meeting matter?

Major companies are locked in an arms race to develop increasingly advanced AI systems, which could be misused by terrorist groups to carry out cyberattacks on critical digital infrastructure or to create new biochemical weapons. These technological advances also risk consolidating power in the hands of a small number of private companies. Government intervention and international coordination are essential to mitigate these risks.

What recommendations does FLI have for the UK AI Safety Summit?

In the run-up to the United Kingdom’s AI Safety Summit, being held at Bletchley Park on November 1st and 2nd, FLI produced and published a document outlining key recommendations. These include:

  • A proposed Declaration on AI Safety, for attendees to sign.
  • Key recommendations for specific governments in advance of the Summit.
  • A proposed Summit agenda, with crucial topics to include.
  • A post-Summit roadmap outlining necessary actions to be taken after the event.

FLI Recommendations for the UK AI Safety Summit

September 2023
View recommendations

We must take control of AI before it controls us

The ongoing, unchecked, out-of-control race to develop increasingly powerful AI systems puts humanity at risk. The threats are potentially catastrophic: rampant unemployment, bioterrorism, widespread disinformation, nuclear war, and many more.

We urgently need lawmakers to step in and ensure a safety-first approach with proper oversight, standards and enforcement. These measures are not only critical to protecting human lives and wellbeing; they are essential to safeguarding innovation and ensuring that everyone can access the incredible potential benefits of AI going forward. We must not let a handful of tech corporations jeopardise humanity’s shared future.
