
Our content

The central hub for all of the content we have produced. Here you can browse our most popular content and find our most recent publications.
Essentials

Essential reading

We have written a few articles that we believe everyone interested in our cause areas should read. They provide a more thorough exploration than you will find on our cause area pages.

Benefits & Risks of Biotechnology

Over the past decade, progress in biotechnology has accelerated rapidly. We are poised to enter a period of dramatic change, in which the genetic modification of existing organisms -- or the creation of new ones -- will become effective, inexpensive, and pervasive.
November 14, 2018

The Risk of Nuclear Weapons

Despite the end of the Cold War over two decades ago, humanity still has ~13,000 nuclear weapons on hair-trigger alert. If detonated, they may cause a decades-long nuclear winter that could kill most people on Earth. Yet the superpowers plan to invest trillions upgrading their nuclear arsenals.
November 16, 2015

Benefits & Risks of Artificial Intelligence

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.
November 14, 2015
Archives

Explore our library of content

Looking for something specific?

You can search our site for any content that contains your search term, including pages, posts, projects, people, and more.
Most popular

Our most popular content

Posts

Here are some of the most popular posts we have written:

Benefits & Risks of Artificial Intelligence

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.
November 14, 2015

Benefits & Risks of Biotechnology

Over the past decade, progress in biotechnology has accelerated rapidly. We are poised to enter a period of dramatic change, in which the genetic modification of existing organisms -- or the creation of new ones -- will become effective, inexpensive, and pervasive.
November 14, 2018

90% of All the Scientists That Ever Lived Are Alive Today

The following paper was written and submitted by Eric Gastfriend. […]
November 5, 2015

Artificial Photosynthesis: Can We Harness the Energy of the Sun as Well as Plants?

In the early 1900s, the Italian chemist Giacomo Ciamician recognized that […]
September 30, 2016

Exploration of secure hardware solutions for safe AI deployment

This collaboration between the Future of Life Institute and Mithril Security explores hardware-backed AI governance tools for transparency, traceability, and confidentiality.
November 30, 2023

As Six-Month Pause Letter Expires, Experts Call for Regulation on Advanced AI Development

This week will mark six months since the open letter calling for a six month pause on giant AI experiments. Since then, a lot has happened. Our signatories reflect on what needs to happen next.
September 21, 2023

Existential Risk

An existential risk is any risk that […]
November 16, 2015

The Risk of Nuclear Weapons

Despite the end of the Cold War over two decades ago, humanity still has ~13,000 nuclear weapons on hair-trigger alert. If detonated, they may cause a decades-long nuclear winter that could kill most people on Earth. Yet the superpowers plan to invest trillions upgrading their nuclear arsenals.
November 16, 2015

Resources

Here are some of the most popular resources we have produced:

1100 Declassified U.S. Nuclear Targets

The National Security Archive recently published a declassified list of U.S. nuclear targets from 1956, spanning 1,100 locations across Eastern Europe, Russia, China, and North Korea. The map below shows all 1,100 targets from that list, and we’ve partnered with NukeMap to demonstrate how catastrophic a nuclear exchange between the United States and Russia could be.
May 12, 2016

Global AI Policy

How countries and organizations around the world are approaching the benefits and risks of AI. Artificial intelligence (AI) holds great […]
December 16, 2022

Accidental Nuclear War: a Timeline of Close Calls

The most devastating military threat arguably comes from a nuclear war started not intentionally but by accident or miscalculation. Accidental […]
February 23, 2016

Responsible Nuclear Divestment

Only 30 companies worldwide are involved in the creation of nuclear weapons, cluster munitions and/or landmines. Yet a significant number […]
June 21, 2017

Trillion Dollar Nukes

Would you spend $1.2 trillion in tax dollars on nuclear weapons? How much are nuclear weapons really worth? Is upgrading the […]
October 24, 2016

The Top Myths About Advanced AI

Common myths about advanced AI distract from fascinating true controversies where even the experts disagree.
August 7, 2016

AI Policy Challenges

This page is intended as an introduction to the major challenges that society faces when attempting to govern Artificial Intelligence […]
July 17, 2018

Life 3.0

This New York Times bestseller tackles some of the biggest questions raised by the advent of artificial intelligence. Tegmark posits a future in which artificial intelligence has surpassed our own — an era he terms “life 3.0” — and explores what this might mean for humankind.
November 22, 2021
Recently added

Our most recent content

Here is our most recently published content:
Gradual AI Disempowerment
February 1, 2024

Protect the EU AI Act
November 22, 2023
View all latest content

Latest documents

Here are our most recent policy papers:

Competition in Generative AI: Future of Life Institute’s Feedback to the European Commission’s Consultation

March 2024

European Commission Manifesto

March 2024

Chemical & Biological Weapons and Artificial Intelligence: Problem Analysis and US Policy Recommendations

February 2024

FLI Response to OMB: Request for Comments on AI Governance, Innovation, and Risk Management

February 2024
View all policy papers

Future of Life Institute Podcast

Conversations with far-sighted thinkers.

Our namesake podcast series features the FLI team in conversation with prominent researchers, policy experts, philosophers, and a range of other influential thinkers.

Newsletter

Regular updates about the technologies shaping our world

Every month, we bring 41,000+ subscribers the latest news on how emerging technologies are transforming our world. Each issue includes a summary of major developments in our cause areas and key updates on our work. Subscribe to our newsletter to receive these highlights at the end of each month.

Future of Life Institute Newsletter: A pause didn’t happen. So what did?

Reflections on the one-year Pause Letter anniversary, the EU AI Act passes in EU Parliament, updates from our policy team, and more.
April 2, 2024

Future of Life Institute Newsletter: FLI x The Elders, and #BanDeepfakes

Former world leaders call for action on pressing global threats, launching the campaign to #BanDeepfakes, new funding opportunities from our Futures program, and more.
March 4, 2024

Future of Life Institute Newsletter: The Year of Fake

Deepfakes are dominating headlines, with much more disruption expected; the Doomsday Clock has been set for 2024; AI governance updates; and more.
February 2, 2024
Read previous editions
Open letters

Add your name to the list of concerned citizens

We have written a number of open letters calling for action to be taken on our cause areas, some of which have gathered hundreds of prominent signatures. Most of these letters are still open today. Add your signature to include your name on the list of concerned citizens.
Signatories: 2,672

Open letter calling on world leaders to show long-view leadership on existential threats

The Elders, Future of Life Institute and a diverse range of co-signatories call on decision-makers to urgently address the ongoing impact and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.
February 14, 2024
Signatories: Closed

AI Licensing for a Better Future: On Addressing Both Present Harms and Emerging Threats

This joint open letter by Encode Justice and the Future of Life Institute calls for the implementation of three concrete US policies in order to address current and future harms of AI.
October 25, 2023
Signatories: 31,810

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
March 22, 2023
Signatories: 998

Open Letter Against Reckless Nuclear Escalation and Use

The abhorrent Ukraine war has the potential to escalate into an all-out NATO-Russia nuclear conflict that would be the greatest catastrophe in human history. More must be done to prevent such escalation.
October 18, 2022
All open letters

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and cause areas.