
UK AI Safety Summit

On 1-2 November 2023, the United Kingdom convened the first ever global government meeting focussed on AI Safety. In the run-up to the summit, FLI produced and published a document outlining key recommendations.

What is the UK AI Safety Summit?

On 1-2 November 2023, the United Kingdom convened the first ever global government meeting focussed on AI Safety. The meeting was convened by British Prime Minister Rishi Sunak and took place at Bletchley Park. The Summit sought to achieve five objectives, and the UK government provided an outline of what the conference would focus on.

Why does this meeting matter?

Major companies are locked in a race to develop increasingly advanced AI systems, which could be misused by terrorist groups to carry out cyberattacks on digital infrastructure or to create new biochemical weapons. These technological advances also risk consolidating power in the hands of a small number of private companies. Government intervention and international coordination are essential to mitigate these risks.

What recommendations did FLI provide for the UK AI Safety Summit?

In the run-up to the United Kingdom’s AI Safety Summit, held at Bletchley Park on 1-2 November 2023, FLI produced and published a document outlining key recommendations. These include:

  • A proposed Declaration on AI Safety, for attendees to sign.
  • Key recommendations for specific governments in advance of the Summit.
  • A proposed Summit agenda, with crucial topics to include.
  • A post-Summit roadmap, outlining necessary actions to be taken after the event.

FLI Recommendations for the UK AI Safety Summit

September 2023

View recommendations

FLI also produced a scorecard mapping the governance landscape heading into the UK AI Safety Summit, and a Safety Standards Policy (SSP), which provides a regulatory framework for robust safety standards, measures and oversight.

FLI Scorecard and Safety Standards Policy

October 2023

View scorecard and policy

Our work

Other projects in this area

We work on a range of projects across a few key areas. See some of our other projects in this area of work:

Mitigating the Risks of AI Integration in Nuclear Launch

Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control and communication.

Strengthening the European AI Act

Our key recommendations include broadening the Act’s scope to regulate general-purpose systems and extending the definition of prohibited manipulation to include any type of manipulative technique, as well as manipulation that causes societal harm.
