
Our content

The central hub for all of the content we have produced. Here you can browse our most popular content, as well as our most recent publications.
Essentials

Essential reading

We have written a few articles that we believe anyone interested in our cause areas should read. They provide a more thorough exploration of each topic than you will find on our cause area pages.

Benefits & Risks of Biotechnology

Over the past decade, progress in biotechnology has accelerated rapidly. We are poised to enter a period of dramatic change, in which the genetic modification of existing organisms, or the creation of new ones, will become effective, inexpensive, and pervasive.
14 November, 2018

The Risk of Nuclear Weapons

Despite the end of the Cold War over two decades ago, humanity still has ~13,000 nuclear weapons on hair-trigger alert. If detonated, they may cause a decades-long nuclear winter that could kill most people on Earth. Yet the superpowers plan to invest trillions in upgrading their nuclear arsenals.
16 November, 2015

Benefits & Risks of Artificial Intelligence

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.
14 November, 2015
Most popular

Our most popular content

Posts

Our most popular posts:

Benefits & Risks of Artificial Intelligence

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.
14 November, 2015

Exploration of secure hardware solutions for safe AI deployment

This collaboration between the Future of Life Institute and Mithril Security explores hardware-backed AI governance tools for transparency, traceability, and confidentiality.
30 November, 2023

Benefits & Risks of Biotechnology

Over the past decade, progress in biotechnology has accelerated rapidly. We are poised to enter a period of dramatic change, in which the genetic modification of existing organisms, or the creation of new ones, will become effective, inexpensive, and pervasive.
14 November, 2018

90% of All the Scientists That Ever Lived Are Alive Today

The following paper was written and submitted by Eric Gastfriend. […]
5 November, 2015

Artificial Photosynthesis: Can We Harness the Energy of the Sun as Well as Plants?

In the early 1900s, the Italian chemist Giacomo Ciamician recognized that […]
30 September, 2016

Existential Risk

An existential risk is any risk that […]
16 November, 2015

The Risk of Nuclear Weapons

Despite the end of the Cold War over two decades ago, humanity still has ~13,000 nuclear weapons on hair-trigger alert. If detonated, they may cause a decades-long nuclear winter that could kill most people on Earth. Yet the superpowers plan to invest trillions in upgrading their nuclear arsenals.
16 November, 2015

As Six-Month Pause Letter Expires, Experts Call for Regulation on Advanced AI Development

This week marks six months since the open letter calling for a six-month pause on giant AI experiments. Since then, a lot has happened. Our signatories reflect on what needs to happen next.
21 September, 2023

Resources

Our most popular resources:

1100 Declassified U.S. Nuclear Targets

The National Security Archive recently published a declassified list of U.S. nuclear targets from 1956, which spanned 1,100 locations across Eastern Europe, Russia, China, and North Korea. The map below shows all 1,100 nuclear targets from that list, and we’ve partnered with NukeMap to demonstrate how catastrophic a nuclear exchange between the United States and Russia could be.
12 May, 2016

Responsible Nuclear Divestment

Only 30 companies worldwide are involved in the creation of nuclear weapons, cluster munitions and/or landmines. Yet a significant number […]
21 June, 2017

Global AI Policy

How countries and organizations around the world are approaching the benefits and risks of AI. Artificial intelligence (AI) holds great […]
16 December, 2022

Accidental Nuclear War: a Timeline of Close Calls

The most devastating military threat arguably comes from a nuclear war started not intentionally but by accident or miscalculation. Accidental […]
23 February, 2016

Trillion Dollar Nukes

Would you spend $1.2 trillion in tax dollars on nuclear weapons? How much are nuclear weapons really worth? Is upgrading the […]
24 October, 2016

The Top Myths About Advanced AI

Common myths about advanced AI distract from fascinating true controversies where even the experts disagree.
7 August, 2016

Life 3.0

This New York Times bestseller tackles some of the biggest questions raised by the advent of artificial intelligence. Tegmark posits a future in which artificial intelligence has surpassed our own, an era he terms “Life 3.0”, and explores what this might mean for humankind.
22 November, 2021

AI Policy Challenges

This page is intended as an introduction to the major challenges that society faces when attempting to govern artificial intelligence […]
17 July, 2018
Recently added

Our most recent content

Here are the most recent items we have published:

AI Safety Index Released
11 December, 2024

Future of Life Award 2024
9 December, 2024
View all latest content

Latest documents

Here are our most recent policy papers:

FLI AI Safety Index 2024

December 2024

FLI Interim Recommendations for the AI Action Summit

November 2024

EU Scientific Panel Feedback

November 2024

US AI Safety Institute codification (FAIIA vs. AIARA)

November 2024
View all policy papers

Future of Life Institute Podcast

Conversations with far-sighted thinkers.

Our namesake podcast series features the FLI team in conversation with prominent researchers, policy experts, philosophers, and a range of other influential thinkers.

Newsletter

Regular updates about the technologies shaping our world

Every month, we bring 41,000+ subscribers the latest news on how emerging technologies are transforming our world. Each edition includes a summary of major developments in our cause areas and key updates on the work we do. Subscribe to our newsletter to receive these highlights at the end of each month.

Future of Life Institute Newsletter: Tool AI > Uncontrollable AGI

Max Tegmark on AGI vs. Tool AI; magazine covers from a future with superintelligence; join our new digital experience as a beta tester; and more.
2 December, 2024

Future of Life Institute Newsletter: Illustrating Superintelligence

Need a break from US election news? Explore the results of our $70K creative contest; new national security AI guidance from the White House; polling teens on AI; and much more.
1 November, 2024

Future of Life Institute Newsletter: On SB 1047, Gov. Newsom Caves to Big Tech

A disappointing outcome for the AI safety bill, updates from UNGA, our $1.5 million grant for global risk convergence research, and more.
1 October, 2024
Read previous editions
Open letters

Add your name to the list of concerned citizens

We have written a number of open letters calling for action on our cause areas, some of which have gathered hundreds of prominent signatures. Most of these letters are still open today; add your signature to join the list of concerned citizens.
Signatories: 2,672

Open letter calling on world leaders to show long-view leadership on existential threats

The Elders, the Future of Life Institute, and a diverse range of co-signatories call on decision-makers to urgently address the ongoing impact and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.
14 February, 2024
Signatories: Closed

AI Licensing for a Better Future: On Addressing Both Present Harms and Emerging Threats

This joint open letter by Encode Justice and the Future of Life Institute calls for the implementation of three concrete US policies in order to address current and future harms of AI.
25 October, 2023
Signatories: 31,810

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
22 March, 2023
Signatories: 998

Open Letter Against Reckless Nuclear Escalation and Use

The abhorrent Ukraine war has the potential to escalate into an all-out NATO-Russia nuclear conflict that would be the greatest catastrophe in human history. More must be done to prevent such escalation.
18 October, 2022
All open letters

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and cause areas.