The following post was originally published here.
Our newest publication, “Cheating Death in Damascus,” makes the case for functional decision theory, our general framework for thinking about rational choice and counterfactual reasoning. In other news, our research team is expanding! Sam Eisenstat and Marcello Herreshoff, both previously at Google, join MIRI this month.
Research updates
- New at IAFF: “Formal Open Problem in Decision Theory”
- New at AI Impacts: “Trends in Algorithmic Progress”; “Progress in General-Purpose Factoring”
- We ran a weekend workshop on agent foundations and AI safety.
General updates
- Our annual review covers our research progress, fundraiser outcomes, and other takeaways from 2016.
- We attended the Colloquium on Catastrophic and Existential Risk.
- Nate Soares weighs in on the Future of Life Institute’s Risk Principle.
- “Elon Musk’s Billion-Dollar Crusade to Stop the AI Apocalypse” features quotes from Eliezer Yudkowsky, Demis Hassabis, Mark Zuckerberg, Peter Thiel, Stuart Russell, and others.
News and links
- The Open Philanthropy Project and OpenAI begin a partnership: Holden Karnofsky joins Elon Musk and Sam Altman on OpenAI’s Board of Directors, and Open Philanthropy contributes $30M to OpenAI’s research program.
- Open Philanthropy has also awarded $2M to the Future of Humanity Institute.
- Modeling Agents with Probabilistic Programs: a new book by Owain Evans, Andreas Stuhlmüller, John Salvatier, and Daniel Filan.
- New from OpenAI: “Evolution Strategies as a Scalable Alternative to Reinforcement Learning”; “Learning to Communicate”; “One-Shot Imitation Learning”; and from Paul Christiano, “Benign Model-Free RL.”
- Chris Olah and Shan Carter discuss research debt as an obstacle to clear thinking and the transmission of ideas, and propose Distill as a solution.
- Andrew Trask proposes encrypting deep learning algorithms during training.
- Roman Yampolskiy seeks submissions for a book on AI safety and security.
- 80,000 Hours has updated their problem profile on positively shaping the development of AI, a solid introduction to AI risk — which 80K now ranks as the most urgent problem in the world. See also 80K’s write-up on in-demand skill sets at effective altruism organizations.