MIRI September 2016 Newsletter (Rob Bensinger, 2016-09-13)
- New at IAFF: Modeling the Capabilities of Advanced AI Systems as Episodic Reinforcement Learning; Simplified Explanation of Stratification
- New at AI Impacts: Friendly AI as a Global Public Good
- We ran two research workshops this month: a veterans’ workshop on decision theory for long-time collaborators and staff, and a machine learning workshop focusing on generalizable environmental goals, impact measures, and mild optimization.
- AI researcher Abram Demski has accepted a research fellowship at MIRI, pending the completion of his PhD. He’ll be starting here in late 2016 / early 2017.
- Data scientist Ryan Carey is joining MIRI’s ML-oriented team this month as an assistant research fellow.
- MIRI’s 2016 strategy update outlines how our research plans have changed in light of recent developments. We also announce a generous $300,000 gift — our second-largest single donation to date.
- We’ve uploaded nine talks from CSRBAI’s robustness and preference specification weeks, including Jessica Taylor on “Alignment for Advanced Machine Learning Systems” (video), Jan Leike on “General Reinforcement Learning” (video), Paul Christiano on “Training an Aligned RL Agent” (video), and Dylan Hadfield-Menell on “The Off-Switch” (video).
- MIRI COO Malo Bourgon has been co-chairing a committee of IEEE’s Global Initiative for Ethical Considerations in the Design of Autonomous Systems. He recently moderated a workshop on general AI and superintelligence at the initiative’s first meeting.
- We had a great time at Effective Altruism Global, and taught at SPARC.
- We hired two new admins: Office Manager Aaron Silverbook, and Communications and Development Strategist Colm Ó Riain.
News and links
- The Open Philanthropy Project awards $5.6 million to Stuart Russell to launch an academic AI safety research institute: the Center for Human-Compatible AI.
- “Who Should Control Our Thinking Machines?”: Jack Clark interviews DeepMind’s Demis Hassabis.
- Elon Musk explains: “I think the biggest risk is not that the AI will develop a will of its own, but rather that it will follow the will of people that establish its utility function, or its optimization function. And that optimization function, if it is not well-thought-out — even if its intent is benign, it could have quite a bad outcome.”
- Modeling Intelligence as a Project-Specific Factor of Production: Ben Hoffman compares different AI takeoff scenarios.
- Clopen AI: Victoria Krakovna weighs the advantages of closed vs. open AI.
- Google X director Astro Teller expresses optimism about the future of AI in a Medium post announcing the first report of the Stanford AI100 study.
- BuzzFeed reports on efforts to prevent the development of lethal autonomous weapons systems.
- In controlled settings, researchers find ways to detect keystrokes via distortions in WiFi signals and to jump air gaps using hard drive actuator noises.
- Solid discussions on the EA Forum: Should Donors Make Commitments About Future Donations? and Should You Switch Away From Earning to Give?
See the original newsletter on MIRI’s website.