
Future of Life Institute Statement on the Pope’s G7 AI Speech

Max Tegmark provides a response to the Pope's remarks on autonomous weapons to G7 leaders.
Published: June 18, 2024
Author: Max Tegmark
Pope Francis addressing G7 leaders in Puglia, Italy. Source: Vatican News

CAMBRIDGE, MA – Future of Life Institute (FLI) President and Co-Founder Max Tegmark today released the following statement after the Pope gave a speech at the G7 in Italy, raising the alarm about the risks of out-of-control AI development.

“The Future of Life Institute strongly supports the Pope’s call at the G7 for urgent political action to ensure artificial intelligence acts in service of humanity. This includes banning lethal autonomous weapons and ensuring that future AI systems stay under human control. I urge the leaders of the G7 nations to set an example for the rest of the world, enacting standards that keep future powerful AI systems safe, ethical, reliable, and beneficial.”

This content was first published at futureoflife.org on June 18, 2024.

About the Future of Life Institute

The Future of Life Institute (FLI) is the world’s oldest and largest AI think tank, with a team of 35+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.

Our content

Related content


If you enjoyed this content, you might also be interested in:

FLI’s President and CEO on Trump’s support for an AI ‘kill switch’

President Trump said during an interview aired yesterday by Fox Business that “there should be” when asked if AI needs […]
16 April, 2026

FLI CEO’s statement on the attack against Sam Altman’s home

Anthony Aguirre, President and CEO of the Future of Life Institute, issued the following statement in response to the attack […]
10 April, 2026

Statement: Head of US Policy on the White House AI legislative recommendations

The White House published its long-awaited AI legislative recommendations on Friday, and it still includes a call for Congress to […]
22 March, 2026

Statement from Max Tegmark on the Department of War’s ultimatum

"Our safety and basic rights must not be at the mercy of a company's internal policy; lawmakers must work to codify these overwhelmingly popular red lines into law."
27 February, 2026

Some of our projects

See some of the projects we are working on in this area:

Control Inversion

Why the superintelligent AI agents we are racing to create would absorb power, not grant it | The latest study from Anthony Aguirre.

Statement on Superintelligence

A stunningly broad coalition has come out against unsafe superintelligence: AI researchers, faith leaders, business pioneers, policymakers, national security staff, and actors stand together.
Our work

Sign up for the Future of Life Institute newsletter

Join 70,000+ others receiving periodic updates on our work and focus areas.