We’re in the final weeks of our push to cover our funding shortfall, and we’re now halfway to our $160,000 goal. For potential donors who are interested in an outside perspective, Future of Humanity Institute (FHI) researcher Owen Cotton-Barratt has written up why he’s donating to MIRI this year. (Donation page.)
Research updates
- New at IAFF: postCDT: Decision Theory Using Post-Selected Bayes Nets; Predicting HCH Using Expert Advice; Paul Christiano’s Recent Posts
- New at AI Impacts: Joscha Bach on Remaining Steps to Human-Level AI
- We ran our ninth workshop on logic, probability, and reflection.
General updates
- We teamed up with a number of AI safety researchers to help compile a list of recommended AI safety readings for the Center for Human-Compatible AI. See this page if you would like to get involved with CHCAI’s research.
- Investment analyst Ben Hoskin reviews MIRI and other organizations involved in AI safety.
News and links
- “The Off-Switch Game”: Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell show that an AI agent’s corrigibility is closely tied to the uncertainty it has about its utility function.
- Russell and Allan Dafoe critique an inaccurate summary by Oren Etzioni of a new survey of AI experts on superintelligence.
- Sam Harris interviews Russell on the basics of AI risk (video). See also Russell’s new Q&A on the future of AI.
- Future of Life Institute co-founder Victoria Krakovna and FHI researcher Jan Leike join Google DeepMind’s safety team.
- GoodAI sponsors a challenge to “accelerate the search for general artificial intelligence”.
- OpenAI releases Universe, “a software platform for measuring and training an AI’s general intelligence across the world’s supply of games”. Meanwhile, DeepMind has open-sourced their own platform for general AI research, DeepMind Lab.
- Staff at GiveWell and the Centre for Effective Altruism, along with others in the effective altruism community, explain where they’re donating this year.
- FHI is seeking AI safety interns, researchers, and admins: jobs page.
This newsletter was originally posted here.