FLI Statement on White House National Security Memorandum

Published:
October 28, 2024
Author:
Future of Life Institute
South Facade of the White House, by Matt H. Wade on Wikimedia Commons.

Last week the White House released a National Security Memorandum concerning AI governance and risk management. The NSM issues guidance to national security agencies regarding:

  • AI development, procurement, and use;
  • Protecting the security of AI infrastructure from foreign interference;
  • Evaluating AI models that pose security and safety risks to the public; and
  • Building AI talent and capacity within the US government. 

The memo also formally designates the AI Safety Institute at the U.S. Department of Commerce as the federal government’s primary point of contact for AI evaluations, and emphasizes the need for the National AI Research Resource to share AI development resources with academics, civil society, and communities across the US. 

Please see below for a statement from Future of Life Institute US Policy Specialist Hamza Chaudhry on the new NSM and its implications for safe AI development: 

“The National Security Memo released this week is a critical step toward acknowledging and addressing the risks inherent in unchecked AI development — especially in the areas of defense, national security, and weapons of war.

“This memo contains many commendable actions, efforts, and recommendations which, if implemented, will make the United States and the world safer from the threats and uncertainties of AI — including empowering the U.S. Department of Commerce’s AI Safety Institute and increasing AI expertise across key government agencies and departments. 

“At the same time, we caution against taking a purely competitive approach to AI development. Working actively with strategic competitors, in addition to close allies and partners, is critical to guaranteeing responsible AI development and advancing US national security interests. A lack of cooperation will make it harder to cultivate a stable and responsible framework for international AI governance that fosters safe, secure, and trustworthy AI development and use.

“While the release of this memo represents progress in safeguarding AI deployment in national security and beyond, it represents just the start of the urgent action needed. It is vital that the government move from voluntary commitments toward compulsory requirements for safe AI. And it’s essential that the memo’s prescriptions and recommendations are turned into policy — no matter what the results of the upcoming election.”

For more information, please email press@futureoflife.org

This content was first published at futureoflife.org on October 28, 2024.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
