Poll Shows Broad Popularity of CA SB1047 to Regulate AI

A new poll from the AI Policy Institute shows broad and overwhelming support for SB1047, a bill to evaluate the risk of catastrophic harm posed by AI models.
Published:
July 23, 2024
Author:
Future of Life Institute
Results from the recent AIPI poll on California bill SB1047.

We are releasing a new poll from the AI Policy Institute (view the executive summary and full survey results) showing broad and overwhelming support for SB1047, Sen. Scott Wiener’s bill to evaluate whether the largest new AI models create a risk of catastrophic harm, which is currently moving through the California State Legislature. The poll shows 59% of California voters support SB1047, while only 20% oppose it. Notably, 64% of respondents who work in the tech industry support the policy, compared to just 17% who oppose it.

Recently, Sen. Wiener sent an open letter to Andreessen Horowitz and Y Combinator dispelling misinformation that has been spread about SB1047, including claims that it would send model developers to jail for failing to anticipate misuse and that it would stifle innovation. The letter points out that the “bill protects and encourages innovation by reducing the risk of critical harms to society that would also place in jeopardy public trust in emerging technology.” Read Sen. Wiener’s letter in full here.

Anthony Aguirre, Executive Director of the Future of Life Institute:

“This poll is yet another example of what we’ve long known: the vast majority of the public support commonsense regulations to ensure safe AI development and strong accountability measures for the corporations and billionaires developing this technology. It is abundantly clear that there is a massive, ongoing disinformation effort to undermine public support and block this critical legislation being led by individuals and companies with a strong financial interest in ensuring there is no regulation of AI technology. However, today’s data confirms, once again, how little impact their efforts to discredit extremely popular measures have been, and how united voters–including tech workers–and policymakers are in supporting SB1047 and in fighting to ensure AI technology is developed to benefit humanity.”

This content was first published at futureoflife.org on July 23, 2024.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.

Related content

If you enjoyed this content, you might also be interested in:

AGI Manhattan Project Proposal is Scientific Fraud

A new report for Congress recommends that the US start a "Manhattan Project" to build Artificial General Intelligence. To do so would be a suicide race.
20 November, 2024

FLI Statement on White House National Security Memorandum

Last week the White House released a National Security Memorandum concerning AI governance and risk management. The NSM issues guidance […]
28 October, 2024

Paris AI Safety Breakfast #3: Yoshua Bengio

The third of our 'AI Safety Breakfasts' event series, featuring Yoshua Bengio on the evolution of AI capabilities, loss-of-control scenarios, and proactive vs reactive defense.
16 October, 2024

Paris AI Safety Breakfast #2: Dr. Charlotte Stix

The second of our 'AI Safety Breakfasts' event series, featuring Dr. Charlotte Stix on model evaluations, deceptive AI behaviour, and the AI Safety and Action Summits.
14 October, 2024
Some of our projects

See some of the projects we are working on in this area:

Combatting Deepfakes

2024 is rapidly turning into the Year of Fake. As part of a growing coalition of concerned organizations, FLI is calling on lawmakers to take meaningful steps to disrupt the AI-driven deepfake supply chain.

AI Safety Summits

Governments are increasingly cooperating to ensure AI Safety. FLI supports and encourages these efforts.
Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and cause areas.