Statement: Head of US Policy on the White House AI legislative recommendations

Published:
March 22, 2026
Author:
Taylor Jones

The White House published its long-awaited AI legislative recommendations on Friday, and they still include a call for Congress to preempt states from regulating AI. Our full statement is below. First, here’s the key context:

  • Preemption is wildly unpopular: poll after poll has shown that virtually no one outside of Big Tech thinks it’s a good idea.
  • It’s not as if Congress doesn’t know that: the Senate rejected preemption by a vote of 99-to-1 last summer.
  • And yet, congressional leadership tried to sneak it into the NDAA in November (but that failed too).
  • David Sacks isn’t letting preemption’s toxicity stop him: the White House issued an Executive Order in December essentially trying to do an end-run around Congress on preemption. But without Congress, the executive order is fairly weak, which is why the legislative proposal is now passing the ball back to Congress and asking it to codify the moratorium.
  • Fun fact: The White House, clearly understanding how unpopular preemption is, opted to not mention the provision in their press release at all.
  • Meanwhile, broken promises abound: The White House has repeatedly promised it would not stand in the way of child safety legislation in the states, but David Sacks has been directly involved in efforts to kill child safety legislation, most notably in Utah.

To sum it all up…

Michael Kleinman, the Head of US Policy at the Future of Life Institute, issued the following statement in response to the White House’s legislative proposal:

Big Tech and their allies in the administration are desperate to stop states from regulating AI, even as it ravages families, eliminates jobs, and threatens to replace humans wholesale. Huge majorities of Americans, Republican and Democrat alike, are demanding guardrails on this increasingly powerful technology. Instead of listening, David Sacks is working to block states from protecting their own citizens. He’s even tried to block commonsense child safety legislation in conservative states like Utah. It’s as outrageous as it is dangerous.

Any legislative framework that includes federal preemption without meaningful guardrails isn’t serious about protecting Americans. It’s just another handout to Big Tech at the expense of our kids, our communities, and our jobs. David Sacks can rebrand his Big Tech wish list however he likes: Americans aren’t buying it.

This content was first published at futureoflife.org on March 22, 2026.

About the Future of Life Institute

The Future of Life Institute (FLI) is the world’s oldest and largest AI think tank, with a team of 35+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefiting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.

Related content

If you enjoyed this content, you might also be interested in:

Statement from Max Tegmark on the Department of War’s ultimatum

"Our safety and basic rights must not be at the mercy of a company's internal policy; lawmakers must work to codify these overwhelmingly popular red lines into law."
27 February, 2026

Michael Kleinman reacts to breakthrough AI safety legislation

FLI celebrates a landmark moment for the AI safety movement and highlights its growing momentum
3 October, 2025

Max Tegmark on AGI Manhattan Project

A new report for Congress recommends that the US start a "Manhattan Project" to build Artificial General Intelligence. To do so would be a suicide race.
20 November, 2024

FLI Statement on White House National Security Memorandum

Last week the White House released a National Security Memorandum concerning AI governance and risk management. The NSM issues guidance […]
28 October, 2024

Some of our projects

See some of the projects we are working on in this area:

Control Inversion

Why the superintelligent AI agents we are racing to create would absorb power, not grant it | The latest study from Anthony Aguirre.

Statement on Superintelligence

A stunningly broad coalition has come out against unsafe superintelligence: AI researchers, faith leaders, business pioneers, policymakers, National Security staff, and actors stand together.
