Our 2017 fundraiser was a huge success, with 341 donors contributing a total of $2.5 million!
Some of the largest donations came from Ethereum inventor Vitalik Buterin, bitcoin investors Christian Calderon and Marius van Voorden, poker players Dan Smith, Tom Crowley, and Martin Crowley (as part of a matching challenge), and the Berkeley Existential Risk Initiative. Thank you to everyone who contributed!
Research updates
- The winners of the first AI Alignment Prize include Scott Garrabrant’s Goodhart Taxonomy and recent IAFF posts: Vadim Kosoy’s Why Delegative RL Doesn’t Work for Arbitrary Environments and More Precise Regret Bound for DRL, and Alex Mennen’s Being Legible to Other Agents by Committing to Using Weaker Reasoning Systems and Learning Goals of Simple Agents.
- New at AI Impacts: Human-Level Hardware Timeline; Effect of Marginal Hardware on Artificial General Intelligence
- We’re hiring for a new position at MIRI: ML Living Library, a specialist on the newest developments in machine learning.
General updates
- From Eliezer Yudkowsky: A Reply to Francois Chollet on Intelligence Explosion.
- Counterterrorism experts Richard Clarke and R. P. Eddy profile Yudkowsky in their new book Warnings: Finding Cassandras to Stop Catastrophes.
- There have been several recent blog posts recommending MIRI as a donation target: from Ben Hoskin, Zvi Mowshowitz, Putanumonit, and the Open Philanthropy Project’s Daniel Dewey and Nick Beckstead.
News and links
- A generalization of the AlphaGo algorithm, AlphaZero, achieves rapid superhuman performance at chess and shogi.
- Also from Google DeepMind: “Specifying AI Safety Problems in Simple Environments.”
- Victoria Krakovna reports on NIPS 2017: “This year’s NIPS gave me a general sense that near-term AI safety is now mainstream and long-term safety is slowly going mainstream. […] There was a lot of great content on the long-term side, including several oral / spotlight presentations and the Aligned AI workshop.”
- 80,000 Hours interviews Phil Tetlock and investigates the most important talent gaps in the EA community.
- From Seth Baum: “A Survey of AGI Projects for Ethics, Risk, and Policy.” And from the Foresight Institute: “AGI: Timeframes & Policy.”
- The Future of Life Institute is collecting proposals for a second round of AI safety grants, due February 18.