Eliezer Yudkowsky has written a new book on civilizational dysfunction and outperformance: Inadequate Equilibria: Where and How Civilizations Get Stuck. The full book will be available in print and electronic formats November 16. To preorder the ebook or sign up for updates, visit equilibriabook.com.
We’re posting the full contents online in stages over the next two weeks. The first two chapters are:
- Inadequacy and Modesty (discussion: LessWrong, EA Forum, Hacker News)
- An Equilibrium of No Free Energy (discussion: LessWrong, EA Forum)
Research updates
- A new paper: “Functional Decision Theory: A New Theory of Instrumental Rationality” (arXiv), by Eliezer Yudkowsky and Nate Soares.
- New research write-ups and discussions: Comparing Logical Inductor CDT and Logical Inductor EDT; Logical Updatelessness as a Subagent Alignment Problem; Mixed-Strategy Ratifiability Implies CDT=EDT
- New from AI Impacts: Computing Hardware Performance Data Collections
- The Workshop on Reliable Artificial Intelligence took place at ETH Zürich, hosted by MIRIxZürich.
General updates
- DeepMind announces a new version of AlphaGo that achieves superhuman performance within three days, using 4 TPUs and no human training data. Eliezer Yudkowsky argues that AlphaGo Zero provides supporting evidence for his position in the AI foom debate; Robin Hanson responds. See also Paul Christiano on AlphaGo Zero and capability amplification.
- Yudkowsky on AGI ethics: “The ethics of bridge-building is to not have your bridge fall down and kill people and there is a frame of mind in which this obviousness is obvious enough. How not to have the bridge fall down is hard.”
- Nate Soares gave his talk “Ensuring Smarter-Than-Human AI Has a Positive Outcome” at the O’Reilly AI Conference (slides).
News and links
- “Protecting Against AI’s Existential Threat”: a Wall Street Journal op-ed by OpenAI’s Ilya Sutskever and Dario Amodei.
- OpenAI announces “a hierarchical reinforcement learning algorithm that learns high-level actions useful for solving a range of tasks”.
- DeepMind’s Viktoriya Krakovna reports on the first Tokyo AI & Society Symposium.
- Nick Bostrom speaks and CSER submits written evidence to the UK Parliament’s Artificial Intelligence Committee.
- Rob Wiblin interviews Nick Beckstead for the 80,000 Hours podcast.