The following post was originally published here.
Our newest publication, “Cheating Death in Damascus,” makes the case for functional decision theory, our general framework for thinking about rational choice and counterfactual reasoning. In other news, our research team is expanding! Sam Eisenstat and Marcello Herreshoff, both previously at Google, join MIRI this month.
- New at IAFF: “Formal Open Problem in Decision Theory”
- New at AI Impacts: “Trends in Algorithmic Progress”; “Progress in General-Purpose Factoring”
- We ran a weekend workshop on agent foundations and AI safety.
- Our annual review covers our research progress, fundraiser outcomes, and other take-aways from 2016.
- We attended the Colloquium on Catastrophic and Existential Risk.
- Nate Soares weighs in on the Future of Life Institute’s Risk Principle.
- “Elon Musk’s Billion-Dollar Crusade to Stop the AI Apocalypse” features quotes from Eliezer Yudkowsky, Demis Hassabis, Mark Zuckerberg, Peter Thiel, Stuart Russell, and others.
News and links
- The Open Philanthropy Project and OpenAI begin a partnership: Holden Karnofsky joins Elon Musk and Sam Altman on OpenAI’s Board of Directors, and Open Philanthropy contributes $30M to OpenAI’s research program.
- Open Philanthropy has also awarded $2M to the Future of Humanity Institute.
- Modeling Agents with Probabilistic Programs: a new book by Owain Evans, Andreas Stuhlmüller, John Salvatier, and Daniel Filan.
- New from OpenAI: “Evolution Strategies as a Scalable Alternative to Reinforcement Learning”; “Learning to Communicate”; “One-Shot Imitation Learning”; and from Paul Christiano, “Benign Model-Free RL.”
- Chris Olah and Shan Carter discuss research debt as an obstacle to clear thinking and the transmission of ideas, and propose Distill as a solution.
- Andrew Trask proposes encrypting deep learning algorithms during training.
- Roman Yampolskiy seeks submissions for a book on AI safety and security.
- 80,000 Hours has updated their problem profile on positively shaping the development of AI, a solid introduction to AI risk, which 80K now ranks as the most urgent problem in the world. See also 80K’s write-up on in-demand skill sets at effective altruism organizations.