
FLI’s President and CEO on Trump’s support for an AI ‘kill switch’

Published:
April 16, 2026
Author:
Future of Life Institute


President Trump said “there should be” when asked, in an interview aired yesterday on Fox Business, whether AI needs safeguards or a ‘kill switch.’ A full statement from FLI President and CEO Anthony Aguirre is below.

Key Context:

  • Anthropic’s Mythos model has rocked the AI and cybersecurity world, demonstrating just how vulnerable the internet, our economy, and our society are to increasingly powerful AI systems.
  • The UK AI Security Institute’s report on its testing of Mythos described “a step up over previous frontier models in a landscape where cyber performance was already rapidly improving.”
  • The report also noted that Mythos was able to complete a “32-step corporate network attack simulation spanning initial reconnaissance through to full network takeover, which we estimate to require humans 20 hours to complete.”
  • Meanwhile, an AI arms race is breaking out, with countries developing ever more sophisticated AI weapons and AI-integrated weapons platforms.

Anthony Aguirre, President and CEO of the Future of Life Institute, issued the following statement in response to President Trump’s support of AI safeguards and a ‘kill switch’:

President Trump is exactly right: advanced AI systems need a robust off-switch to ensure humans never lose control over them. Claude Mythos demonstrated just how vulnerable our economy and our financial system are to the offensive cyber capabilities of the latest AI systems, to say nothing of more capable systems to come. In pre-release testing, Mythos autonomously discovered zero-day vulnerabilities across every major operating system and web browser, including flaws that had survived decades of human review.

Models like Mythos are nearly superhuman in their ability to find and exploit vulnerabilities in critical systems. If we go forward building highly superhuman AI – which FLI believes we should not – for any hope of control we cannot rely on software-based safety measures alone. We need the ability to manage these systems at the hardware level. This is entirely feasible. The same sort of hardware security measures that keep your face private and allow you to remotely shut down your iPhone exist on AI chips, and can be used to contain AI models and discontinue their operation if needed. FLI has prototyped these capabilities itself, and if we can do it then NVIDIA AI developers surely can as well, at scale, and with increasing robustness in new generations of hardware.
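As a purely illustrative sketch (not a description of FLI’s actual prototype, nor of any shipping NVIDIA or Apple mechanism), the Python snippet below shows the general shape of the hardware-enforced off-switch the statement describes: a chip-level runtime keeps executing AI workloads only while it holds a fresh, cryptographically signed authorization token from an external authority, and halts otherwise. All names, keys, and parameters here are hypothetical.

```python
# Conceptual sketch of a hardware-level "kill switch": each compute step is
# gated on a fresh, signed authorization token. Names and values are hypothetical.
import hmac
import hashlib
import time

AUTHORITY_KEY = b"shared-secret-held-by-signing-authority"  # illustrative only
MAX_TOKEN_AGE_S = 3600  # workload halts if no fresh token arrives within this window


def sign_token(issued_at: float) -> bytes:
    """What the external authority would periodically send to the chip."""
    msg = str(issued_at).encode()
    return hmac.new(AUTHORITY_KEY, msg, hashlib.sha256).digest()


def token_is_valid(issued_at: float, signature: bytes, now: float) -> bool:
    """Check the chip-level runtime performs before allowing another step."""
    expected = hmac.new(AUTHORITY_KEY, str(issued_at).encode(), hashlib.sha256).digest()
    fresh = (now - issued_at) <= MAX_TOKEN_AGE_S
    return fresh and hmac.compare_digest(signature, expected)


def run_workload(step, latest_token):
    """Gate each unit of AI computation on a valid, recent authorization token."""
    issued_at, signature = latest_token
    if not token_is_valid(issued_at, signature, time.time()):
        raise RuntimeError("No valid authorization: halting AI workload")
    return step()


if __name__ == "__main__":
    issued_at = time.time()
    token = (issued_at, sign_token(issued_at))
    print(run_workload(lambda: "one training step completed", token))
```

In a real deployment such a check would live in the accelerator’s firmware or secure enclave and rely on public-key attestation rather than a shared secret, but the control flow is the same: no valid, recent authorization, no further computation.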

This content was first published at futureoflife.org on April 16, 2026.

About the Future of Life Institute

The Future of Life Institute (FLI) is the world’s oldest and largest AI think tank, with a team of 35+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


Related content


If you enjoyed this content, you might also be interested in:

FLI CEO’s statement on the attack against Sam Altman’s home

Anthony Aguirre, President and CEO of the Future of Life Institute, issued the following statement in response to the attack […]
10 April, 2026

Statement: Head of US Policy on the White House AI legislative recommendations

The White House published its long-awaited AI legislative recommendations on Friday, and it still includes a call for Congress to […]
22 March, 2026

Statement from Max Tegmark on the Department of War’s ultimatum

"Our safety and basic rights must not be at the mercy of a company's internal policy; lawmakers must work to codify these overwhelmingly popular red lines into law."
27 February, 2026

Michael Kleinman reacts to breakthrough AI safety legislation

FLI celebrates a landmark moment for the AI safety movement and highlights its growing momentum
3 October, 2025

Some of our projects

See some of the projects we are working on in this area:

Control Inversion

Why the superintelligent AI agents we are racing to create would absorb power, not grant it | The latest study from Anthony Aguirre.

Statement on Superintelligence

A stunningly broad coalition has come out against unsafe superintelligence: AI researchers, faith leaders, business pioneers, policymakers, National Security staff, and actors stand together.
