
FLI Statement on Senate AI Roadmap

Published: May 16, 2024
Author: Max Tegmark
Max Tegmark and colleagues meeting with US Congressmembers in Washington, April 2024.


CAMBRIDGE, MA – Future of Life Institute (FLI) President and Co-Founder Max Tegmark today released the following statement after Senate Majority Leader Chuck Schumer released the long-awaited Senate AI Roadmap:

“I applaud Senators Schumer, Rounds, Young, and Heinrich for this important step toward tangible legislation to rein in the AI arms race that is driven by corporate profits, not what’s best for people around the world. It is good that this roadmap recognizes the risks from AGI and other powerful AI systems. However, we need more action as soon as possible.

“The reality is that the United States is already far behind Europe in developing and implementing policies that can make technological innovation sustainable by reducing the threats and harms presented by out-of-control, unchecked AI development. While this report is a good step in the right direction, more steps are urgently needed, including commonsense regulation to ensure that AI remains safe, ethical, reliable, and beneficial. As we have seen this week with OpenAI’s and Google’s release of their latest models, these companies remain locked in an accelerating race to create increasingly powerful and risky systems, without meaningful guardrails or oversight, even as the leaders of these corporations have stated that future more advanced AI could potentially cause human extinction.

“In order to harness the massive benefits of AI and minimize its considerable risks, policymakers and elected officials must be vigilant in the face of Big Tech recklessness and make sure that technological advancement is in the best interests of all – not just a handful of private corporations and billionaires.”

Tegmark participated in the Senate’s bipartisan AI Insight Forum in October. He made headlines last year when he led an open letter calling for a six-month pause on giant AI experiments.

See Max Tegmark’s full written testimony for the Senate AI Insight Forum.

Max Tegmark is a professor doing AI research at MIT, with more than three hundred technical papers and two bestselling books. He recently made headlines around the world by leading FLI’s open letter calling for a six-month pause on the training of advanced AI systems. It was signed by more than 30,000 experts, researchers, industry figures, and other leaders, and sounded the alarm on ongoing and unchecked AI development.

The Future of Life Institute is a global non-profit organization working to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.

This content was first published at futureoflife.org on May 16, 2024.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefiting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


Related content


If you enjoyed this content, you might also be interested in:

Future of Life Institute Announces 16 Grants for Problem-Solving AI

Announcing the 16 recipients of our newest grants program supporting research on how AI can be safely harnessed to solve specific, intractable problems facing humanity around the world.
11 July, 2024

Evaluation of Deepfakes Proposals in Congress

How do the leading US legislative proposals on the issue of deepfakes compare?
31 May, 2024

Statement in the run-up to the Seoul AI Safety Summit

We provide some recommendations for the upcoming AI Safety Summit in Seoul, most notably the appointment of a coordinator for collaborations between the AI Safety Institutes.
20 May, 2024