
MIRI March 2017 Newsletter

Published: 16 March 2017
Author: Rob Bensinger

Contents

Research updates

General updates

  • Why AI Safety?: A quick summary (originally posted during our fundraiser) of the case for working on AI risk, including notes on distinctive features of our approach and our goals for the field.
  • Nate Soares attended “Envisioning and Addressing Adverse AI Outcomes,” an event pitting red-team attackers against defenders in a variety of AI risk scenarios.
  • We also attended an AI safety strategy retreat run by the Center for Applied Rationality.

News and links
