Contents
Research updates
- New at IAFF: The Ubiquitous Converse Lawvere Problem; Two Major Obstacles for Logical Inductor Decision Theory; Generalizing Foundations of Decision Theory II.
- New at AI Impacts: Guide to Pages on AI Timeline Predictions
- “Decisions Are For Making Bad Outcomes Inconsistent”: Nate Soares discusses some of the deeper issues raised by our “Cheating Death in Damascus” paper.
- We ran a machine learning workshop in early April.
- “Ensuring Smarter-Than-Human Intelligence Has a Positive Outcome”: Nate’s talk at Google (video) provides probably the best general introduction to MIRI’s work on AI alignment.
General updates
- Our strategy update discusses changes to our AI forecasts and research priorities, new outreach goals, a MIRI/DeepMind collaboration, and other news.
- MIRI is hiring software engineers! If you’re a programmer who’s passionate about MIRI’s mission and wants to directly support our research efforts, apply here to trial with us.
- MIRI Assistant Research Fellow Ryan Carey has taken on an additional affiliation with the Centre for the Study of Existential Risk, and is also helping edit an issue of Informatica on superintelligence.
News and links
- DeepMind researcher Viktoriya Krakovna lists AI safety highlights from ICLR.
- DeepMind is seeking applicants for a policy research position “to carry out research on the social and economic impacts of AI”.
- The Center for Human-Compatible AI is hiring an assistant director. Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere.
- 80,000 Hours lists other potentially high-impact openings, including ones at Stanford’s AI Index project, the White House OSTP, IARPA, and IVADO.
- New papers: “One-Shot Imitation Learning” and “Stochastic Gradient Descent as Approximate Bayesian Inference.”
- The Open Philanthropy Project summarizes its findings on early field growth.
- The Centre for Effective Altruism is collecting donations for the Effective Altruism Funds in a range of cause areas.