
Statement on the veto of California bill SB 1047

Published: September 30, 2024
Author: Anthony Aguirre


CAMPBELL, CA – Future of Life Institute (FLI) Executive Director Anthony Aguirre today released the following statement after Governor Newsom’s veto of California’s landmark AI regulatory bill, SB 1047:

“The governor’s veto of Senate Bill 1047 is incredibly disappointing. SB 1047 simply aimed to hold the largest AI companies accountable to the voluntary commitments they have already made: to perform basic safety testing on these massively powerful systems in order to prevent catastrophic accidents or misuse. This veto leaves Californians vulnerable to the considerable risks posed by the rapid, unregulated development of advanced AI systems, including cyberattacks, autonomous crimes, and biological weapons. Furthermore, it imperils the incredible benefits that AI innovation can bring, by massively increasing the likelihood of an AI disaster that could shutter the industry.

“The furious lobbying against the bill can only be reasonably interpreted in one way: these companies believe they should play by their own rules and be accountable to no one. This veto only reinforces that belief. Now is the time for legislation at the state, federal, and global levels to hold Big Tech to their commitments and ensure AI systems are developed safely and responsibly.”

Despite overwhelming bipartisan support from the general public, civil society groups, lawmakers, and AI experts, the bill met intense pushback from Big Tech and its representatives in academia. Even after substantive amendments to accommodate reasonable concerns, the industry continued to push out-of-date and easily debunked claims, often with clear financial and personal incentives.

We have created an SB 1047 Explainer to address FAQs and debunk some common myths.

This content was first published at futureoflife.org on September 30, 2024.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global think tank with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


Related content


If you enjoyed this content, you might also be interested in:

Context and Agenda for the 2025 AI Action Summit

The AI Action Summit will take place in Paris on 10-11 February 2025. Here we list the agenda and key deliverables.
31 January 2025

Paris AI Safety Breakfast #4: Rumman Chowdhury

The fourth of our 'AI Safety Breakfasts' event series, featuring Dr. Rumman Chowdhury on algorithmic auditing, "right to repair" AI systems, and the AI Safety and Action Summits.
19 December 2024

AI Safety Index Released

The Future of Life Institute has released its first safety scorecard of leading AI companies, finding many are not addressing safety concerns while some have taken small initial steps in the right direction.
11 December 2024

Some of our projects

See some of the projects we are working on in this area:

Perspectives of Traditional Religions on Positive AI Futures

Most of the global population participates in a traditional religion. Yet the perspectives of these religions are largely absent from strategic AI discussions. This initiative aims to support religious groups in voicing their faith-specific concerns and hopes for a world with AI, and to work with them to resist the harms and realise the benefits.

AI’s Role in Reshaping Power Distribution

Advanced AI systems are set to reshape the economy and power structures in society. They offer enormous potential for progress and innovation, but also pose risks of concentrated control, unprecedented inequality, and disempowerment. To ensure AI serves the public good, we must build resilient institutions, competitive markets, and systems that widely share the benefits.
