
AI Safety Summits

Governments are increasingly cooperating to ensure AI safety. FLI supports and encourages these efforts.
Bletchley Park, the venue of the UK AI Safety Summit in 2023.

What are the AI Safety Summits?

AI Safety Summits are international meetings, held roughly twice a year, hosted by states to discuss the safety and regulation of artificial intelligence, particularly advanced AI systems.

The first AI Safety Summit was convened by the United Kingdom at Bletchley Park in November 2023.

Following the second AI Safety Summit, held in Seoul on 21-22 May 2024, France has been designated to host the third in February 2025.

Why do the Summits matter?

Major companies are locked in a race to develop increasingly advanced AI systems. These systems can be misused by malicious groups to carry out cyberattacks on digital infrastructure or to create new biochemical weapons, and their development also risks concentrating power in the hands of a small number of private companies. Government intervention and international coordination are essential to mitigate these risks.

Safety Summits are key opportunities for AI governance: they enable international cooperation on concrete outcomes. For example, one of the objectives of the first AI Safety Summit was to develop a global shared understanding of the risks posed by advanced AI systems. As a result, an international expert panel drafted the International AI Safety Report: an up-to-date scientific report on the safety of advanced AI systems.

What is FLI's engagement with the Summits?

In the run-up to the Summits, FLI supports the hosts and other participating countries with key insights on AI progress and recommendations on how to improve cooperation around AI safety.

FLI Recommendations for the UK AI Safety Summit

September 2023

In the run-up to the United Kingdom's AI Safety Summit, held at Bletchley Park on 1-2 November 2023, FLI produced and published a document outlining key recommendations. These include:

  • A proposed Declaration on AI Safety, for attendees to sign.
  • Key recommendations for specific governments in advance of the Summit.
  • A proposed Summit agenda, with crucial topics to include.
  • A post-Summit roadmap, which outlined necessary actions to be taken after the event.

View recommendations

FLI Scorecard and Safety Standards Policy

October 2023

FLI also produced a scorecard to map the governance landscape heading into the UK AI Safety Summit, and a Safety Standards Policy (SSP), which provides a regulatory framework for robust safety standards, measures, and oversight.

View scorecard and policy

Paris AI Safety Breakfasts

September 2024

The AI Action Summit will be held in February 2025. This event series aims to stimulate discussion relevant to the Summit among English- and French-speaking audiences, and to bring together experts and enthusiasts in the field to exchange ideas and perspectives.

Learn more or sign up to be notified about upcoming AI Safety Breakfasts

Ima (Imane Bello) leads work on the AI Safety Summits for the Future of Life Institute (FLI).


Related content

If you enjoyed this, you might also like:

Paris AI Safety Breakfast #2: Dr. Charlotte Stix

The second of our 'AI Safety Breakfasts' event series, featuring Dr. Charlotte Stix on model evaluations, deceptive AI behaviour, and the AI Safety and Action Summits.
14 October, 2024

Paris AI Safety Breakfast #1: Stuart Russell

The first of our 'AI Safety Breakfasts' event series, featuring Stuart Russell on significant developments in AI, AI research priorities, and the AI Safety Summits.
5 August, 2024

Statement in the run-up to the Seoul AI Safety Summit

We provide some recommendations for the upcoming AI Safety Summit in Seoul, most notably the appointment of a coordinator for collaborations between the AI Safety Institutes.
20 May, 2024

Other projects in this area

We work on a range of projects across a few key areas. See some of our other projects in this area of work:

Combatting Deepfakes

2024 is rapidly turning into the Year of Fake. As part of a growing coalition of concerned organizations, FLI is calling on lawmakers to take meaningful steps to disrupt the AI-driven deepfake supply chain.

Developing possible AI rules for the US

Our US policy team advises policymakers in Congress and Statehouses on how to ensure that AI systems are safe and beneficial.
