Miles Apart: Comparing key AI Act proposals

Our analysis shows that the recent non-paper drafted by Italy, France, and Germany provides almost no provisions on foundation models or general-purpose AI systems, and offers far less oversight and enforcement than the existing alternatives.
Published:
November 21, 2023
Author:
Future of Life Institute

The table below provides an analysis of several transatlantic policy proposals on how to regulate the most advanced AI systems. The analysis shows that the recent non-paper circulated by Italy, France, and Germany (as reported by Euractiv) includes the fewest provisions with regard to foundation models or general-purpose AI systems, even falling below the minimal standard set by the recent U.S. White House Executive Order.

While the non-paper proposes a voluntary code of conduct, it includes none of the safety obligations required by previous proposals, including the Council's own adopted position. Moreover, the non-paper envisions a much lower level of oversight and enforcement than the Spanish Presidency's compromise proposal and the adopted positions of both the Parliament and the Council.

This content was first published at futureoflife.org on November 21, 2023.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
