Our content
Over the years we have created a large library of content relating to our cause areas. Here you can browse our archives by topic, search term, content type, and more.
Looking for something in particular?
You can search our site for any content items that contain your search term, including pages, posts, projects, people, and more.
Suggested searches
Here are some common searches that you might like to use:
Overview of all our content
April 2, 2024
Future of Life Institute Newsletter: A pause didn’t happen. So what did?
newsletter
March 20, 2024
Competition in Generative AI: Future of Life Institute’s Feedback to the European Commission’s Consultation
document
March 16, 2024
video
March 4, 2024
Future of Life Institute Newsletter: FLI x The Elders, and #BanDeepfakes
newsletter
February 27, 2024
Chemical & Biological Weapons and Artificial Intelligence: Problem Analysis and US Policy Recommendations
document
February 21, 2024
FLI Response to OMB: Request for Comments on AI Governance, Innovation, and Risk Management
document
February 21, 2024
FLI Response to NIST: Request for Information on NIST’s Assignments under the AI Executive Order
document
February 21, 2024
FLI Response to Bureau of Industry and Security (BIS): Request for Comments on Implementation of Additional Export Controls
document
February 19, 2024
Response to CISA Request for Information on Secure by Design AI Software
document
February 14, 2024
Open letter calling on world leaders to show long-view leadership on existential threats (Portuguese)
open-letter
February 14, 2024
Open letter calling on world leaders to show long-view leadership on existential threats (German)
open-letter
February 14, 2024
Open letter calling on world leaders to show long-view leadership on existential threats (Arabic)
open-letter
February 14, 2024
Open letter calling on world leaders to show long-view leadership on existential threats (French)
open-letter
February 14, 2024
Open letter calling on world leaders to show long-view leadership on existential threats (Spanish)
open-letter
February 14, 2024
Open letter calling on world leaders to show long-view leadership on existential threats
open-letter
February 14, 2024
Call for proposed designs for global institutions governing AI
grant-program
February 14, 2024
Call for proposals evaluating the impact of AI on Poverty, Health, Energy and Climate SDGs
grant-program
February 14, 2024
Realising Aspirational Futures – New FLI Grants Opportunities
project
February 14, 2024
Realising Aspirational Futures – New FLI Grants Opportunities
post
February 2, 2024
Future of Life Institute Newsletter: The Year of Fake
newsletter
January 22, 2024
Why Nearly All Deepfakes Should Be Illegal: Max Tegmark Speaks To Forbes At Davos About AI
video
January 6, 2024
Mark Brakel on the UK AI Summit and the Future of AI Policy
podcast
December 22, 2023
Future of Life Institute Newsletter: Wrapping Up Our Biggest Year Yet
newsletter
December 4, 2023
Future of Life Institute Newsletter: Save the EU AI Act 🇪🇺
newsletter
November 30, 2023
Exploration of secure hardware solutions for safe AI deployment
post
November 14, 2023
Artificial Intelligence and Nuclear Weapons: Problem Analysis and US Policy Recommendations
document
November 1, 2023
Future of Life Institute Newsletter: Everyone’s (Finally) Talking About AI Safety
newsletter
October 30, 2023
FLI Governance Scorecard and Safety Standards Policy (SSP)
document
October 25, 2023
AI Licensing for a Better Future: On Addressing Both Present Harms and Emerging Threats
open-letter
October 24, 2023
Written Statement of Dr. Max Tegmark to the AI Insight Forum
post
October 17, 2023
Imagine A World: What if AI advisors helped us make better decisions?
podcast
October 10, 2023
Cybersecurity and AI: Problem Analysis and US Policy Recommendations
document
October 10, 2023
Imagine A World: What if narrow AI fractured our shared reality?
podcast
October 3, 2023
Imagine A World: What if AI enabled us to communicate with animals?
podcast
October 1, 2023
Future of Life Institute Newsletter: Our Pause Letter, Six Months Later
newsletter
September 29, 2023
Mitigating the Risks of AI Integration in Nuclear Launch
project
September 26, 2023
Imagine A World: What if some people could live forever?
podcast
September 21, 2023
As Six-Month Pause Letter Expires, Experts Call for Regulation on Advanced AI Development
post
September 19, 2023
Imagine A World: What if we had digital nations untethered to geography?
podcast
September 12, 2023
Imagine A World: What if global challenges led to more centralization?
podcast
September 10, 2023
FLI recommendations for the UK Global AI Safety Summit
document
September 8, 2023
Tom Davidson on How Quickly AI Could Automate the Economy
podcast
September 5, 2023
Imagine A World: What if we designed and built AI in an inclusive way?
podcast
September 5, 2023
Imagine A World: What if new governance mechanisms helped us coordinate?
podcast
September 5, 2023
Future of Life Institute Newsletter: ‘Imagine A World’ is out today!
newsletter
August 20, 2023
Robert Trager on International AI Governance and Cybersecurity at AI Companies
podcast
August 19, 2023
Pause Giant AI Experiments: An Open Letter (Arabic)
open-letter
August 2, 2023
Future of Life Institute Newsletter: Hollywood Talks AI
newsletter
July 25, 2023
US Senate Hearing ‘Oversight of AI: Principles for Regulation’: Statement from the Future of Life Institute
post
July 5, 2023
Future of Life Institute Newsletter: Our Most Realistic Nuclear War Simulation Yet
newsletter
June 29, 2023
How would a nuclear war between Russia and the US affect you personally?
video
May 31, 2023
Future of Life Institute Newsletter: Progress on the EU AI Act!
newsletter
May 4, 2023
Nathan Labenz on the Cognitive Revolution, Red Teaming GPT-4, and Potential Dangers of AI
podcast
April 27, 2023
Maryanna Saenko on Venture Capital, Philanthropy, and Ethical Technology
podcast
March 31, 2023
FAQs about FLI’s Open Letter Calling for a Pause on Giant AI Experiments
post
March 31, 2023
Future of Life Institute Newsletter: Pause Giant AI Experiments!
newsletter
March 30, 2023
Lennart Heim on the AI Triad: Compute, Data, and Algorithms
podcast
March 22, 2023
Pause Giant AI Experiments: An Open Letter (French)
open-letter
March 16, 2023
Liv Boeree on Moloch, Beauty Filters, Game Theory, Institutions, and AI
podcast
March 9, 2023
Tobias Baumann on Space Colonization and Cooperative Artificial Intelligence
podcast
March 2, 2023
Tobias Baumann on Artificial Sentience and Reducing the Risk of Astronomical Suffering
podcast
March 1, 2023
Future of Life Institute February 2023 Newsletter: Progress on Autonomous Weapons!
newsletter
February 23, 2023
Neel Nanda on Math, Tech Progress, Aging, Living up to Our Values, and Generative AI
podcast
February 16, 2023
Neel Nanda on Avoiding an AI Catastrophe with Mechanistic Interpretability
podcast
February 2, 2023
Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education
podcast
January 26, 2023
Connor Leahy on AI Safety and Why the World is Fragile
podcast
January 19, 2023
Connor Leahy on AI Progress, Chimps, Memes, and Markets
podcast
December 22, 2022
Anders Sandberg on Grand Futures and the Limits of Physics
podcast
December 16, 2022
Characterizing AI Policy using Natural Language Processing
post
December 9, 2022
FLI November 2022 Newsletter: AI Liability Directive
newsletter
December 8, 2022
Vincent Boulanin on Military Use of Artificial Intelligence
podcast
December 1, 2022
Vincent Boulanin on the Dangers of AI in Nuclear Weapons Systems
podcast
November 24, 2022
Robin Hanson on Predicting the Future of Artificial Intelligence
podcast
November 17, 2022
Robin Hanson on Grabby Aliens and When Humanity Will Meet Them
podcast
November 11, 2022
FLI October 2022 Newsletter: Against Reckless Nuclear Escalation
newsletter
November 10, 2022
Ajeya Cotra on Thinking Clearly in a Rapidly Changing World
podcast
November 3, 2022
Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe
podcast
November 1, 2022
Emerging Non-European Monopolies in the Global AI Market
document
October 27, 2022
Ajeya Cotra on Forecasting Transformative Artificial Intelligence
podcast
October 20, 2022
Alan Robock on Nuclear Winter, Famine, and Geoengineering
podcast
October 18, 2022
Open Letter Against Reckless Nuclear Escalation and Use
open-letter
October 17, 2022
FLI September 2022 Newsletter: $3M Impacts of Nuclear War Grants Program!
newsletter
October 13, 2022
Brian Toon on Nuclear Winter, Asteroids, Volcanoes, and the Future of Humanity
podcast
October 9, 2022
A Proposal for a Definition of General Purpose Artificial Intelligence Systems
document
October 6, 2022
Philip Reiner on Nuclear Command, Control, and Communications
podcast
September 21, 2022
The ideas behind ‘Slaughterbots – if human: kill()’ | A deep dive interview
video
April 1, 2022
Response to the RFI: Artificial Intelligence Risk Management Framework
document
March 24, 2022
Max Tegmark speaks at the European Parliament hearing on the EU AI Act on General Purpose AI
resource
February 9, 2022
Anthony Aguirre and Anna Yelizarova on FLI’s Worldbuilding Contest
podcast
January 26, 2022
David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy
podcast
December 21, 2021
Panel discussion of ‘Slaughterbots – if human: kill()’ | Perspectives on lethal autonomous weapons
video
November 22, 2021
Real-Life Technologies that Prove Autonomous Weapons are Already Here
post
November 2, 2021
Rohin Shah on the State of AGI Safety Research in 2021
podcast
September 28, 2021
Special Newsletter: 2021 Future of Life Award
newsletter
September 16, 2021
Susan Solomon and Stephen Andersen on Saving the Ozone Layer
podcast
September 7, 2021
James Manyika on Global Economic and Technological Trends
podcast
July 30, 2021
Michael Klare on the Pentagon’s view of Climate Change and the Risks of State Collapse
podcast
July 9, 2021
Avi Loeb on ‘Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures
podcast
June 3, 2021
The Future of Life Institute announces grants program for existential risk reduction
post
June 1, 2021
podcast
May 20, 2021
Bart Selman on the Promises and Perils of Artificial Intelligence
podcast
April 21, 2021
Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century
podcast
April 1, 2021
Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures
podcast
March 20, 2021
Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI
podcast
February 25, 2021
Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Lethal Autonomous Weapons
podcast
February 9, 2021
John Prendergast on Non-dual Awareness and Wisdom for the 21st Century
podcast
January 22, 2021
Beatrice Fihn on the Total Elimination of Nuclear Weapons
podcast
January 8, 2021
Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year
podcast
December 14, 2020
2020 Future of Life Award for saving 200 million lives from smallpox
video
December 11, 2020
Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox
podcast
December 2, 2020
Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress
podcast
November 17, 2020
Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat on Academic Integrity
podcast
October 15, 2020
Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism
podcast
September 30, 2020
Kelly Wanser on Climate Change as a Possible Existential Threat
podcast
September 16, 2020
Andrew Critch on AI Research Considerations for Human Existential Safety
podcast
September 3, 2020
Iason Gabriel on Foundational Philosophical Questions in AI Alignment
podcast
August 18, 2020
Peter Railton on Moral Learning and Metaethics in AI Systems
podcast
July 1, 2020
Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI
podcast
June 24, 2020
Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)
podcast
June 15, 2020
Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI
podcast
June 1, 2020
Sam Harris on Global Priorities, Existential Risk, and What Matters Most
podcast
May 15, 2020
FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church
podcast
April 16, 2020
AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah
podcast
April 9, 2020
FLI Podcast: Lessons from COVID-19 with Emilia Javorsky and Anthony Aguirre
podcast
April 1, 2020
FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord
podcast
March 16, 2020
AI Alignment Podcast: On Lethal Autonomous Weapons with Paul Scharre
podcast
February 28, 2020
FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O’Keefe
podcast
February 18, 2020
AI Alignment Podcast: On the Long-term Importance of Current AI Policy with Nicolas Moës and Jared Brown
podcast
January 31, 2020
FLI Podcast: Identity, Information & the Nature of Reality with Anthony Aguirre
podcast
January 16, 2020
AI Alignment Podcast: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson
podcast
December 31, 2019
FLI Podcast: On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark
podcast
December 28, 2019
FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team
podcast
December 16, 2019
AI Alignment Podcast: On DeepMind, AI Safety, and Recursive Reward Modeling with Jan Leike
podcast
December 2, 2019
FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert
podcast
November 26, 2019
Not Cool Ep 26: Naomi Oreskes on trusting climate science
podcast
November 19, 2019
Not Cool Ep 24: Ellen Quigley and Natalie Jones on defunding the fossil fuel industry
podcast
November 15, 2019
AI Alignment Podcast: Machine Ethics and AI Governance with Wendell Wallach
podcast
November 14, 2019
Not Cool Ep 23: Brian Toon on nuclear winter: the other climate change
podcast
November 13, 2019
Not Cool Ep 22: Cullen Hendrix on climate change and armed conflict
podcast
October 31, 2019
Not Cool Ep 19: Ilissa Ocko on non-carbon causes of climate change
podcast
October 31, 2019
FLI Podcast: Cosmological Koans: A Journey to the Heart of Physical Reality with Anthony Aguirre
podcast
October 30, 2019
Not Cool Ep 18: Glen Peters on the carbon budget and global carbon emissions
podcast
October 30, 2019
The Psychology of Existential Risk: Moral Judgments about Human Extinction
post
October 24, 2019
Not Cool Ep 17: Tackling Climate Change with Machine Learning, part 2
podcast
October 22, 2019
Not Cool Ep 16: Tackling Climate Change with Machine Learning, part 1
podcast
October 17, 2019
Not Cool Ep 15: Astrid Caldas on equitable climate adaptation
podcast
October 14, 2019
Not Cool Ep 14: Filippo Berardi on carbon finance and the economics of climate change
podcast
October 8, 2019
Not Cool Ep 12: Kris Ebi on climate change, human health, and social stability
podcast
October 8, 2019
AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell
podcast
October 3, 2019
Not Cool Ep 11: Jakob Zscheischler on climate-driven compound weather events
podcast
October 1, 2019
Not Cool Ep 10: Stephanie Herring on extreme weather events and climate change attribution
podcast
September 30, 2019
FLI Podcast: Feeding Everyone in a Global Catastrophe with Dave Denkenberger & Joshua Pearce
podcast
September 26, 2019
Not Cool Ep 9: Andrew Revkin on climate communication, vulnerability, and information gaps
podcast
September 24, 2019
Not Cool Ep 8: Suzanne Jones on climate policy and government responsibility
podcast
September 19, 2019
Not Cool Ep 7: Lindsay Getschel on climate change and national security
podcast
September 17, 2019
AI Alignment Podcast: Synthesizing a human’s preferences into a utility function with Stuart Armstrong
podcast
September 12, 2019
Not Cool Ep 5: Ken Caldeira on updating infrastructure and planning for an uncertain climate future
podcast
September 10, 2019
Not Cool Ep 4: Jessica Troni on helping countries adapt to climate change
podcast
September 3, 2019
Not Cool Ep 2: Joanna Haigh on climate modeling and the history of climate change
podcast
September 3, 2019
Not Cool Ep 1: John Cook on misinformation and overcoming climate silence
podcast
August 30, 2019
FLI Podcast: Beyond the Arms Race Narrative: AI & China with Helen Toner & Elsa Kania
podcast
August 19, 2019
New Report: Don’t Be Evil – A Survey of the Tech Sector’s Stance on Lethal Autonomous Weapons
resource
August 16, 2019
AI Alignment Podcast: China’s AI Superpower Dream with Jeffrey Ding
podcast
August 1, 2019
The Climate Crisis as an Existential Threat with Simon Beard and Haydn Belfield
podcast
July 22, 2019
AI Alignment Podcast: On the Governance of AI with Jade Leung
podcast
June 28, 2019
FLI Podcast: Is Nuclear Weapons Testing Back on the Horizon? With Jeffrey Lewis and Alex Bell
podcast
May 31, 2019
FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi
podcast
May 23, 2019
AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson
podcast
May 9, 2019
State of AI: Artificial Intelligence, the Military and Increasingly Autonomous Weapons
resource
April 30, 2019
FLI Podcast: The Unexpected Side Effects of Climate Change With Fran Moore and Nick Obradovich
podcast
April 25, 2019
AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 2)
podcast
April 11, 2019
AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 1)
podcast
March 28, 2019
2019 Statement to the United Nations in Support of a Ban on LAWS
post
March 21, 2019
The Problem of Self-Referential Reasoning in Self-Improving AI: An Interview with Ramana Kumar, Part 2
post
March 20, 2019
post
March 19, 2019
The Unavoidable Problem of Self-Improvement in AI: An Interview with Ramana Kumar, Part 1
post
March 13, 2019
Autonomous Weapons Open Letter: Global Health Community
open-letter
March 6, 2019
post
March 6, 2019
AI Alignment Podcast: AI Alignment through Debate with Geoffrey Irving
podcast
February 28, 2019
FLI Podcast (Part 1): From DNA to Banning Biological Weapons With Matthew Meselson and Max Tegmark
podcast
February 28, 2019
FLI Podcast (Part 2): Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark
podcast
February 21, 2019
AI Alignment Podcast: Human Cognition and the Nature of Intelligence with Joshua Greene
podcast
February 5, 2019
The Breakdown of the INF: Who’s to Blame for the Collapse of the Landmark Nuclear Treaty?
post
January 31, 2019
FLI Podcast: AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy
podcast
January 30, 2019
AI Alignment Podcast: The Byzantine Generals’ Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi (Beneficial AGI 2019)
podcast
January 25, 2019
FLI Podcast: Artificial Intelligence – American Attitudes and Trends with Baobao Zhang
podcast
January 17, 2019
AI Alignment Podcast: Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell (Beneficial AGI 2019)
podcast
January 11, 2019
An Open Letter to the United Nations Convention on Certain Conventional Weapons (German)
open-letter
January 11, 2019
$50,000 Award to Stanislav Petrov for helping avert WWIII – but US denies visa (German)
post
January 8, 2019
IPCC 2018 Special Report Paints Dire — But Not Completely Hopeless — Picture of Future (Russian)
post
January 8, 2019
post
January 8, 2019
Hawking, Higgs and Over 3,000 Other Scientists Support UN Nuclear Ban Negotiations (German)
post
January 8, 2019
post
December 18, 2018
AI Alignment Podcast: Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah
podcast
December 13, 2018
How to Create AI That Can Safely Navigate Our World — An Interview With Andre Platzer
post
November 30, 2018
Podcast: Governing Biotechnology, From Avian Flu to Genetically-Modified Babies with Catherine Rhodes
podcast
November 28, 2018
US Government Releases Its Latest Climate Assessment, Demands Immediate Action
post
November 26, 2018
Handful of Countries – Including the US and Russia – Hamper Discussions to Ban Killer Robots at UN
post
October 31, 2018
Podcast: Can We Avoid the Worst of Climate Change? with Alexander Verbeek and John Moorhead
podcast
October 19, 2018
$50,000 Award to Stanislav Petrov for helping avert WWIII – but US denies visa (Russian)
post
October 19, 2018
Governing AI: An Inside Look at the Quest to Ensure AI Benefits Humanity (Russian)
post
October 18, 2018
AI Alignment Podcast: On Becoming a Moral Realist with Peter Singer
podcast
October 16, 2018
IPCC 2018 Special Report Paints Dire — But Not Completely Hopeless — Picture of Future
post
October 12, 2018
Genome Editing and the Future of Biowarfare: A Conversation with Dr. Piers Millett
post
October 11, 2018
Podcast: Martin Rees on the Prospects for Humanity: AI, Biotech, Climate Change, Overpopulation, Cryogenics, and More
podcast
October 8, 2018
Accidental Nuclear War: a Timeline of Close Calls (German)
resource
October 8, 2018
Cognitive Biases and AI Value Alignment: An Interview with Owain Evans
post
October 8, 2018
How AI Handles Uncertainty: An Interview With Brian Ziebart (Russian)
post
September 27, 2018
Podcast: AI and Nuclear Weapons – Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz
podcast
September 26, 2018
$50,000 Award to Stanislav Petrov for helping avert WWIII – but US denies visa
post
September 18, 2018
AI Alignment Podcast: Moral Uncertainty and the Path to AI Alignment with William MacAskill
podcast
September 18, 2018
Can Global Warming Stay Below 1.5 Degrees? Views Differ Among Climate Scientists (Russian)
post
September 17, 2018
Making AI Safe in an Unpredictable World: An Interview with Thomas G. Dietterich
post
September 14, 2018
European Parliament Passes Resolution Supporting a Ban on Killer Robots
post
September 4, 2018
2018 Statement to United Nations on Behalf of LAWS Open Letter Signatories
open-letter
August 30, 2018
Governing AI: An Inside Look at the Quest to Ensure AI Benefits Humanity
post
August 30, 2018
Podcast: Artificial Intelligence – Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins
podcast
August 16, 2018
AI Alignment Podcast: The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce
podcast
July 27, 2018
Machine Reasoning and the Rise of Artificial General Intelligences: An Interview With Bart Selman
post
July 25, 2018
post
July 16, 2018
AI Alignment Podcast: AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy
podcast
June 28, 2018
Podcast: Mission AI – Giving a Global Voice to the AI Discussion with Charlie Oliver and Randi Williams
podcast
June 20, 2018
How Will the Rise of Artificial Superintelligences Impact Humanity?
post
June 14, 2018
AI Alignment Podcast: Astronomical Future Suffering and Superintelligence with Kaj Sotala
podcast
June 6, 2018
post
May 31, 2018
Podcast: Nuclear Dilemmas, From North Korea to Iran with Melissa Hanham and Dave Schmerler
podcast
May 28, 2018
Hawking, Higgs and Over 3,000 Other Scientists Support UN Nuclear Ban Negotiations (Chinese)
post
May 21, 2018
Teaching Today’s AI Students To Be Tomorrow’s Ethical Leaders: An Interview With Yan Zhang
post
April 27, 2018
Podcast: What Are the Odds of Nuclear War? A Conversation With Seth Baum and Robert de Neufville
podcast
April 4, 2018
AI and Robotics Researchers Boycott South Korea Tech Institute Over Development of AI Weapons Technology
post
April 4, 2018
AI Alignment Podcast: Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell
podcast
March 30, 2018
Podcast: Navigating AI Safety – From Malicious Use to Accidents
podcast
March 29, 2018
How Do We Align Artificial Intelligence with Human Values? (German)
post
March 19, 2018
Understanding Artificial General Intelligence — An Interview With Hiroshi Yamakawa (Russian)
post
March 18, 2018
55 Years After Preventing Nuclear Attack, Arkhipov Honored With Inaugural Future of Life Award (Russian)
post
March 8, 2018
Developing Ethical Priorities for Neurotechnologies and AI (Russian)
post
March 8, 2018
post
March 8, 2018
post
February 28, 2018
Podcast: AI and the Value Alignment Problem with Meia Chita-Tegmark and Lucas Perry
podcast
February 26, 2018
Optimizing AI Safety Research: An Interview With Owen Cotton-Barratt
post
February 22, 2018
Artificial Photosynthesis: Can We Harness the Energy of the Sun as Well as Plants? (Russian)
post
February 22, 2018
As Acidification Increases, Ocean Biodiversity May Decline
post
February 13, 2018
Transparent and Interpretable AI: an interview with Percy Liang
post
February 6, 2018
As CO2 Levels Rise, Scientists Question Best- and Worst-Case Scenarios of Climate Change
post
January 31, 2018
Podcast: Top AI Breakthroughs and Challenges of 2017 with Richard Mallah and Chelsea Finn
podcast
January 29, 2018
Is There a Trade-off Between Immediate and Longer-term AI Safety Efforts?
post
January 10, 2018
AI Should Provide a Shared Benefit for as Many People as Possible
post
December 6, 2017
MIRI’s December 2017 Newsletter and Annual Fundraiser
newsletter
November 29, 2017
Podcast: Balancing the Risks of Future Technologies with Andrew Maynard and Jack Stilgoe
podcast
November 20, 2017
Harvesting Water Out of Thin Air: A Solution to Water Shortage Crisis?
post
November 18, 2017
How Do We Align Artificial Intelligence with Human Values? (Japanese)
post
November 16, 2017
15,000 Scientists Sign “Second Notice” Warning About Climate Change
post
November 15, 2017
Three Tweets to Midnight: Nuclear Crisis Stability and the Information Ecosystem
post
November 14, 2017
AI Researchers Create Video to Call for Autonomous Weapons Ban at UN
post
October 30, 2017
Podcast: AI Ethics, the Trolley Problem, and a Twitter Ghost Story with Joshua Greene and Iyad Rahwan
podcast
October 27, 2017
55 Years After Preventing Nuclear Attack, Arkhipov Honored With Inaugural Future of Life Award
post
October 23, 2017
Understanding Artificial General Intelligence — An Interview With Hiroshi Yamakawa
post
October 18, 2017
DeepMind’s AlphaGo Zero Becomes Go Champion Without Human Input
post
October 12, 2017
Full Transcript: Understanding Artificial General Intelligence — An Interview With Dr. Hiroshi Yamakawa
post
September 29, 2017
START from the Beginning: 25 Years of US-Russian Nuclear Weapons Reductions
post
September 29, 2017
The Future of Humanity Institute Releases Three Papers on Biorisks
post
September 29, 2017
Podcast: Choosing a Career to Tackle the World’s Biggest Problems with Rob Wiblin and Brenton Mayer
podcast
September 24, 2017
Can AI Remain Safe as Companies Race to Develop It? (Chinese)
post
September 24, 2017
Preparing for the Biggest Change in Human History (Chinese)
post
September 8, 2017
Understanding the Risks and Limitations of North Korea’s Nuclear Program
post
September 1, 2017
An Open Letter to the United Nations Convention on Certain Conventional Weapons (Japanese)
open-letter
September 1, 2017
An Open Letter to the United Nations Convention on Certain Conventional Weapons (Russian)
open-letter
August 29, 2017
Transcript: Life 3.0: Being Human in the Age of Artificial Intelligence
post
August 29, 2017
Podcast: Life 3.0 – Being Human in the Age of Artificial Intelligence
podcast
August 26, 2017
How Do We Align Artificial Intelligence with Human Values? (Russian)
post
August 25, 2017
How to Design AIs That Understand What Humans Want: An Interview with Long Ouyang
post
August 20, 2017
An Open Letter to the United Nations Convention on Certain Conventional Weapons
open-letter
August 20, 2017
Leaders of Top Robotics and AI Companies Call for Ban on Killer Robots
post
August 20, 2017
An Open Letter to the United Nations Convention on Certain Conventional Weapons (Chinese)
open-letter
August 20, 2017
Killer robots: World’s top AI and robotics companies urge United Nations to ban lethal autonomous weapons
post
August 3, 2017
Op-ed: On AI, Prescription Drugs, and Managing the Risks of Things We Don’t Understand (Chinese)
post
July 31, 2017
post
July 31, 2017
Podcast: The Art of Predicting with Anthony Aguirre and Andrew Critch
podcast
July 7, 2017
Podcast: Banning Nuclear and Autonomous Weapons with Richard Moyes and Miriam Struyk
podcast
June 30, 2017
post
June 26, 2017
post
June 14, 2017
post
June 12, 2017
How Do We Align Artificial Intelligence with Human Values? (Chinese)
post
June 1, 2017
Podcast: Creative AI with Mark Riedl & Scientists Support a Nuclear Ban
podcast
May 31, 2017
Op-ed: On AI, Prescription Drugs, and Managing the Risks of Things We Don’t Understand
post
May 18, 2017
The U.S. Worldwide Threat Assessment Includes Warnings of Cyber Attacks, Nuclear Weapons, Climate Change, etc.
post
May 10, 2017
post
April 27, 2017
Podcast: Climate Change with Brian Toon and Kevin Trenberth
podcast
April 13, 2017
Op-ed: Poll Shows Strong Support for AI Regulation Though Respondents Admit Limited Knowledge of AI
post
March 29, 2017
Testimony by Sue Coleman-Haseldine, Nuclear Bomb Testing Survivor
post
March 27, 2017
Hawking, Higgs and Over 3,000 Other Scientists Support UN Nuclear Ban Negotiations
post
February 28, 2017
Podcast: UN Nuclear Weapons Ban with Beatrice Fihn and Susi Snyder
podcast
January 31, 2017
Podcast: Top AI Breakthroughs, with Ian Goodfellow and Richard Mallah
podcast
January 31, 2017
Transcript: AI Breakthroughs with Ian Goodfellow and Richard Mallah
post
January 17, 2017
FLI 2016 in Review: AI Safety Research to Nuclear Weapons
newsletter
December 8, 2016
Effective Altruism and Existential Risks: a talk with Lucas Perry
post
November 30, 2016
2300 Scientists from All Fifty States Pen Open Letter to Incoming Trump Administration
post
November 30, 2016
Autonomous Weapons: an Interview With the Experts – Heather Roff and Peter Asaro
podcast
November 3, 2016
Accidental Nuclear War: a Timeline of Close Calls (Mandarin)
resource
October 7, 2016
Sam Harris TED Talk: Can We Build AI Without Losing Control Over It?
post
September 30, 2016
Artificial Photosynthesis: Can We Harness the Energy of the Sun as Well as Plants?
post
September 29, 2016
Former Defense Secretary William Perry Launches MOOC on Nuclear Risks
post
August 31, 2016
Transcript: Concrete Problems in AI Safety with Dario Amodei and Seth Baum
post
August 30, 2016
Podcast: Concrete Problems in AI Safety with Dario Amodei and Seth Baum
podcast
August 3, 2016
Analysis: Clopen AI – Openness in Different Aspects of AI Development
post
July 20, 2016
Op-Ed: If AI Systems Can Be “Persons,” What Rights Should They Have?
post
July 15, 2016
Congress Subpoenas Climate Scientists in Effort to Hamper ExxonMobil Fraud Investigation
post
July 9, 2016
US Mayors Commend Nuclear Divestment and Other FLI Highlights
newsletter
July 1, 2016
post
June 23, 2016
post
June 20, 2016
Digital Analogues (Part 2): Would corporate personhood be a good model for “AI personhood”?
post
June 11, 2016
Accidental Nuclear War: a Timeline of Close Calls (Spanish)
resource
June 9, 2016
post
June 8, 2016
The Vicious Cycle of Ocean Currents and Global Warming: Slowing Thermohaline Circulation
post
May 12, 2016
post
May 6, 2016
Computers Gone Wild: Impact and Implications of Developments in Artificial Intelligence on Society
post
April 20, 2016
post
April 11, 2016
Cambridge Divests $1 Billion From Nukes Following Grassroots Campaign
post
April 4, 2016
Hawking Says ‘Don’t Bank on the Bomb’ and Cambridge Votes to Divest $1 Billion From Nuclear Weapons
post
March 27, 2016
post
March 16, 2016
Cheap Lasers and Bad Math: The Coming Revolution in Robot Perception
post
March 11, 2016
Who’s to Blame (Part 6): Potential Legal Solutions to the AWS Accountability Problem
post
March 10, 2016
post
March 2, 2016
Who’s to Blame (Part 5): A Deeper Look at Predicting the Actions of Autonomous Weapons
post
February 28, 2016
X-risk News of the Week: Ocean Warming and Nuclear Protests
post
February 25, 2016
Secretary William Perry Talks at Google: My Journey at the Nuclear Brink
post
February 24, 2016
Who’s to Blame (Part 4): Who’s to Blame if an Autonomous Weapon Breaks the Law?
post
February 19, 2016
X-risk News of the Week: AAAI, Beneficial AI Research, a $5M Contest, and Nuclear Risks
post
February 17, 2016
Who’s to Blame (Part 3): Could Autonomous Weapon Systems Navigate the Law of Armed Conflict?
post
February 17, 2016
AAAI Safety Workshop Highlights: Debate, Discussion, and Future Research
post
February 13, 2016
X-risk News of the Week: Nuclear Winter and a Government Risk Report
post
February 9, 2016
Autonomous Weapons Open Letter: AI & Robotics Researchers
open-letter
February 9, 2016
Autonomous Weapons Open Letter: AI & Robotics Researchers – Signatories List
open-letter
February 4, 2016
Who’s to Blame (Part 1): The Legal Vacuum Surrounding Autonomous Weapons
post
January 25, 2016
A survey of research questions for robust and beneficial AI
resource
January 12, 2016
The Future of AI: Quotes and highlights from Monday’s NYU symposium
post
December 27, 2015
Highlights and impressions from NIPS conference on machine learning
post
December 22, 2015
What’s so exciting about AI? Conversations at the Nobel Week Dialogue
post
December 17, 2015
The AI Wars: The Battle of the Human Minds to Keep Artificial Intelligence Safe
post
December 3, 2015
$15 Million Granted by Leverhulme to New AI Research Center at Cambridge University
post
December 1, 2015
From the MIRI Blog: “Formalizing Convergent Instrumental Goals”
post
November 30, 2015
Risks From General Artificial Intelligence Without an Intelligence Explosion
post
November 17, 2015
From The New Yorker: Will Artificial Intelligence Bring Us Utopia or Dystopia?
post
November 16, 2015
New report: “Leó Szilárd and the Danger of Nuclear Weapons”
post
October 28, 2015
Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter
open-letter
October 28, 2015
New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial
post
October 23, 2015
From Global News Canada: Former Greenpeace president supports biotechnology
post
October 12, 2015
MIRI: Artificial Intelligence: The Danger of Good Intentions
post
May 4, 2015
Chinese Scientists Report Unsuccessful Attempt to Selectively Edit Disease Gene in Human Embryos
post
April 11, 2015
Russell, Horvitz, and Tegmark on Science Friday: Is AI Safety a Concern?
post
November 6, 2014
Martin Rees: Catastrophic Risks: The Downsides of Advancing Technology
event
September 4, 2014
Nick Bostrom: Superintelligence — Paths, Dangers, Strategies
event
October 22, 2013
Bringing biotechnology into the home: Cathal Garvey at TEDxDublin
video