
Michael Kleinman reacts to breakthrough AI safety legislation

FLI celebrates a landmark moment for the AI safety movement and highlights its growing momentum
Published:
October 3, 2025
Author:
Michael Kleinman
Image: View of the California State Capitol from 10th Street, Sacramento (photo: Andre m, CC BY-SA 3.0).


Michael Kleinman, Head of US Policy at the Future of Life Institute, released the following statement regarding the signing of California's SB 53:

We applaud Governor Newsom for signing this vital legislation. Across America, demand for stronger AI legislation continues to grow, with large majorities of both Republicans and Democrats calling for common-sense AI safeguards: 82% of Republicans agree there should be limits on what AI is allowed to do, and more than 70% of voters support government action to set safety standards.

While more work remains, this is a landmark moment: lawmakers have finally begun establishing basic protections around advanced AI systems, the same safeguards that apply to every other industry, whether pharmaceuticals, aircraft manufacturing, or your local sandwich shop.

This summer, the Senate resoundingly rejected, by a 99-to-1 vote, an attempt to prevent states from taking action. Now, states are stepping up to enact the AI safeguards the American people are demanding. Unless and until there are strong federal AI safety standards to protect our children, our communities and our jobs, both blue and red states will have no choice but to continue filling the void.

This content was first published at futureoflife.org on October 3, 2025.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global think tank with a team of 20+ full-time staff operating across the US and Europe. Since its founding in 2014, FLI has worked to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks. Find out more about our mission or explore our work.


Related content


If you enjoyed this content, you might also be interested in:

Context and Agenda for the 2025 AI Action Summit

The AI Action Summit will take place in Paris from 10-11 February 2025. Here we list the agenda and key deliverables.
31 January, 2025

Paris AI Safety Breakfast #4: Rumman Chowdhury

The fourth event in our 'AI Safety Breakfasts' series, featuring Dr. Rumman Chowdhury on algorithmic auditing, a "right to repair" for AI systems, and the AI Safety and Action Summits.
19 December, 2024

AI Safety Index Released

The Future of Life Institute has released its first safety scorecard of leading AI companies, finding that many are not addressing safety concerns, while some have taken small initial steps in the right direction.
11 December, 2024

Some of our projects

See some of the projects we are working on in this area:

Creative Contest: Keep The Future Human

$100,000+ in prizes for creative digital media that engages with the essay's key ideas, helps them reach a wider audience, and motivates action in the real world.

AI Existential Safety Community

A community of faculty and AI researchers dedicated to ensuring AI is developed safely. Members are invited to attend meetings, participate in an online community, and apply for travel support.
