FLI Statement on Senate AI Roadmap

President and Co-Founder Max Tegmark today released the following statement after Senate Majority Leader Chuck Schumer released the long-awaited Senate AI Roadmap.
Published:
May 16, 2024
Author:
Max Tegmark

CAMBRIDGE, MA – Future of Life Institute (FLI) President and Co-Founder Max Tegmark today released the following statement after Senate Majority Leader Chuck Schumer released the long-awaited Senate AI Roadmap:

“I applaud Senators Schumer, Rounds, Young, and Heinrich for this important step toward tangible legislation to rein in the AI arms race that is driven by corporate profits, not what’s best for people around the world. It is good that this roadmap recognizes the risks from AGI and other powerful AI systems. However, we need more action as soon as possible.

“The reality is that the United States is already far behind Europe in developing and implementing policies that can make technological innovation sustainable by reducing the threats and harms presented by out-of-control, unchecked AI development. While this report is a good step in the right direction, more steps are urgently needed, including commonsense regulation to ensure that AI remains safe, ethical, reliable, and beneficial. As we have seen this week with OpenAI’s and Google’s release of their latest models, these companies remain locked in an accelerating race to create increasingly powerful and risky systems, without meaningful guardrails or oversight, even as the leaders of these corporations have stated that future more advanced AI could potentially cause human extinction.

“In order to harness the massive benefits of AI and minimize its considerable risks, policymakers and elected officials must be vigilant in the face of Big Tech recklessness and make sure that technological advancement is in the best interests of all – not just a handful of private corporations and billionaires.”

Tegmark participated in the Senate’s bipartisan AI Insight Forum in October. He made headlines last year when he led an open letter calling for a six-month pause on giant AI experiments.

See Max Tegmark’s full written testimony for the Senate AI Insight Forum.

Max Tegmark is a professor doing AI research at MIT, with more than three hundred technical papers and two bestselling books. He recently made headlines around the world by leading FLI’s open letter calling for a six-month pause on the training of advanced AI systems. It was signed by more than 30,000 experts, researchers, industry figures, and other leaders, and sounded the alarm on ongoing and unchecked AI development.

The Future of Life Institute is a global non-profit organization working to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.

This content was first published at futureoflife.org on May 16, 2024.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.

Related content

If you enjoyed this content, you might also be interested in:

Max Tegmark on AGI Manhattan Project

A new report for Congress recommends that the US start a "Manhattan Project" to build Artificial General Intelligence. To do so would be a suicide race.
20 November, 2024

FLI Statement on White House National Security Memorandum

Last week the White House released a National Security Memorandum concerning AI governance and risk management. The NSM issues guidance […]
28 October, 2024

Paris AI Safety Breakfast #3: Yoshua Bengio

The third of our 'AI Safety Breakfasts' event series, featuring Yoshua Bengio on the evolution of AI capabilities, loss-of-control scenarios, and proactive vs reactive defense.
16 October, 2024

Paris AI Safety Breakfast #2: Dr. Charlotte Stix

The second of our 'AI Safety Breakfasts' event series, featuring Dr. Charlotte Stix on model evaluations, deceptive AI behaviour, and the AI Safety and Action Summits.
14 October, 2024