Contents
Eliezer Yudkowsky has written a new book on civilizational dysfunction and outperformance: Inadequate Equilibria: Where and How Civilizations Get Stuck. The full book will be available in print and electronic formats November 16. To preorder the ebook or sign up for updates, visit equilibriabook.com.
We’re posting the full contents online in stages over the next two weeks. The first two chapters are:
- Inadequacy and Modesty (discussion: LessWrong, EA Forum, Hacker News)
- An Equilibrium of No Free Energy (discussion: LessWrong, EA Forum)
Research updates
- A new paper: “Functional Decision Theory: A New Theory of Instrumental Rationality” (arXiv), by Eliezer Yudkowsky and Nate Soares.
- New research write-ups and discussions: Comparing Logical Inductor CDT and Logical Inductor EDT; Logical Updatelessness as a Subagent Alignment Problem; Mixed-Strategy Ratifiability Implies CDT=EDT
- New from AI Impacts: Computing Hardware Performance Data Collections
- The Workshop on Reliable Artificial Intelligence took place at ETH Zürich, hosted by MIRIxZürich.
General updates
- DeepMind announces a new version of AlphaGo that achieves superhuman performance within three days, using 4 TPUs and no human training data. Eliezer Yudkowsky argues that AlphaGo Zero provides supporting evidence for his position in the AI foom debate; Robin Hanson responds. See also Paul Christiano on AlphaGo Zero and capability amplification.
- Yudkowsky on AGI ethics: “The ethics of bridge-building is to not have your bridge fall down and kill people, and there is a frame of mind in which this obviousness is obvious enough. How not to have the bridge fall down is hard.”
- Nate Soares gave his talk “Ensuring Smarter-Than-Human AI Has a Positive Outcome” at the O’Reilly AI Conference (slides).
News and links
- “Protecting Against AI’s Existential Threat”: a Wall Street Journal op-ed by OpenAI’s Ilya Sutskever and Dario Amodei.
- OpenAI announces “a hierarchical reinforcement learning algorithm that learns high-level actions useful for solving a range of tasks”.
- DeepMind’s Viktoriya Krakovna reports on the first Tokyo AI & Society Symposium.
- Nick Bostrom speaks and CSER submits written evidence to the UK Parliament’s Artificial Intelligence Committee.
- Rob Wiblin interviews Nick Beckstead for the 80,000 Hours podcast.