Contents
Research updates
- New at IAFF: Some Problems with Making Induction Benign; Entangled Equilibria and the Twin Prisoners’ Dilemma; Generalizing Foundations of Decision Theory
- New at AI Impacts: Changes in Funding in the AI Safety Field; Funding of AI Research
- MIRI Research Fellow Andrew Critch has started a two-year stint at UC Berkeley’s Center for Human-Compatible AI, helping launch the research program there.
- “Using Machine Learning to Address AI Risk“: Jessica Taylor explains our AAMLS agenda (in video and blog versions) by walking through six potential problems with highly capable ML systems.
General updates
- Why AI Safety?: A quick summary (originally posted during our fundraiser) of the case for working on AI risk, including notes on distinctive features of our approach and our goals for the field.
- Nate Soares attended “Envisioning and Addressing Adverse AI Outcomes,” an event pitting red-team attackers against defenders in a variety of AI risk scenarios.
- We also attended an AI safety strategy retreat run by the Center for Applied Rationality.
News and links
- Ray Arnold provides a useful list of ways the average person can help with AI safety.
- New from OpenAI: attacking machine learning with adversarial examples.
- OpenAI researcher Paul Christiano explains his view of human intelligence:
- I think of my brain as a machine driven by a powerful reinforcement learning agent. The RL agent chooses what thoughts to think, which memories to store and retrieve, where to direct my attention, and how to move my muscles. The “I” who speaks and deliberates is implemented by the RL agent, but is distinct and has different beliefs and desires. My thoughts are outputs and inputs to the RL agent; they are not what the RL agent “feels like from the inside.”
- Christiano describes three directions and desiderata for AI control: reliability and robustness, reward learning, and deliberation and amplification.
- Sarah Constantin argues that existing techniques won’t scale up to artificial general intelligence absent major conceptual breakthroughs.
- The Future of Humanity Institute and the Centre for the Study of Existential Risk ran a “Bad Actors and AI” workshop.
- FHI is seeking interns in reinforcement learning and AI safety.
- Michael Milford argues against brain-computer interfaces as an AI risk strategy.
- Open Philanthropy Project head Holden Karnofsky explains why he sees fewer benefits to public discourse than he used to.