MIRI March 2017 Newsletter — Rob Bensinger, 2017-03-16
- New at IAFF: Some Problems with Making Induction Benign; Entangled Equilibria and the Twin Prisoners’ Dilemma; Generalizing Foundations of Decision Theory
- New at AI Impacts: Changes in Funding in the AI Safety Field; Funding of AI Research
- MIRI Research Fellow Andrew Critch has started a two-year stint at UC Berkeley’s Center for Human-Compatible AI, helping launch the research program there.
- “Using Machine Learning to Address AI Risk”: Jessica Taylor explains our AAMLS agenda (in video and blog versions) by walking through six potential problems with high-performing ML systems.
- Why AI Safety?: A quick summary (originally posted during our fundraiser) of the case for working on AI risk, including notes on distinctive features of our approach and our goals for the field.
- Nate Soares attended “Envisioning and Addressing Adverse AI Outcomes,” an event pitting red-team attackers against defenders in a variety of AI risk scenarios.
- We also attended an AI safety strategy retreat run by the Center for Applied Rationality.
News and links
- Ray Arnold provides a useful list of ways the average person can help with AI safety.
- New from OpenAI: attacking machine learning with adversarial examples.
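The OpenAI post above concerns adversarial examples: inputs perturbed slightly so that a model misclassifies them. As a generic illustration (not code from the post), here is a minimal sketch of the fast gradient sign method (FGSM) applied to a toy logistic-regression classifier; the weights and inputs are hypothetical values chosen for the example.

```python
import numpy as np

# Toy logistic-regression classifier: p(y=1|x) = sigmoid(w.x + b).
# The weights below are hypothetical, chosen only for illustration.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Model's confidence that x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm(x, y, eps):
    """Fast gradient sign method: move x a distance eps (per coordinate)
    in the direction that increases the cross-entropy loss for label y.
    For logistic regression the input gradient is exactly (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.5, -0.5])   # confidently classified as y = 1
y = 1.0
x_adv = fgsm(x, y, eps=0.3)
print(predict(x), predict(x_adv))  # confidence in class 1 drops after the attack
```

The point of the construction is that a tiny, structured perturbation (bounded by `eps` in each coordinate) flips the model's decision, even though the input barely changes.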
- OpenAI researcher Paul Christiano explains his view of human intelligence:
- I think of my brain as a machine driven by a powerful reinforcement learning agent. The RL agent chooses what thoughts to think, which memories to store and retrieve, where to direct my attention, and how to move my muscles. The “I” who speaks and deliberates is implemented by the RL agent, but is distinct and has different beliefs and desires. My thoughts are outputs and inputs to the RL agent; they are not what the RL agent “feels like from the inside.”
- Christiano describes three directions and desiderata for AI control: reliability and robustness, reward learning, and deliberation and amplification.
- Sarah Constantin argues that existing techniques won’t scale up to artificial general intelligence absent major conceptual breakthroughs.
- The Future of Humanity Institute and the Centre for the Study of Existential Risk ran a “Bad Actors and AI” workshop.
- FHI is seeking interns in reinforcement learning and AI safety.
- Michael Milford argues against brain-computer interfaces as an AI risk strategy.
- Open Philanthropy Project head Holden Karnofsky explains why he sees fewer benefits to public discourse than he used to.