Following up on a post outlining some of the reasons MIRI researchers and OpenAI researcher Paul Christiano are pursuing different research directions, Jessica Taylor has written up the key motivations for MIRI’s highly reliable agent design research.
Research updates
- A new paper: “Toward Negotiable Reinforcement Learning: Shifting Priorities in Pareto Optimal Sequential Decision-Making”
- New at IAFF: Pursuing Convergent Instrumental Subgoals on the User’s Behalf Doesn’t Always Require Good Priors; Open Problem: Thin Logical Priors
- MIRI has a new research advisor: Google DeepMind researcher Jan Leike.
- MIRI and the Center for Human-Compatible AI are looking for research interns for this summer. Apply by March 1!
General updates
- We attended the Future of Life Institute’s Beneficial AI conference at Asilomar. See Scott Alexander’s recap. MIRI executive director Nate Soares took part in a panel discussion on technical safety alongside representatives from DeepMind, OpenAI, and academia (video); the panel also featured a back-and-forth with Yann LeCun, the head of Facebook’s AI research group (at 22:00).
- MIRI staff and a number of top AI researchers are signatories on FLI’s new Asilomar AI Principles, which include cautions regarding arms races, value misalignment, recursive self-improvement, and superintelligent AI.
- The Center for Applied Rationality recounts MIRI researcher origin stories and other cases where their workshops have been a big assist to our work, alongside examples of CFAR’s impact on other groups.
- The Open Philanthropy Project has awarded a $32,000 grant to AI Impacts.
- Andrew Critch spoke at Princeton’s ENVISION conference (video).
- Matthew Graves has joined MIRI as a staff writer. See his first piece for our blog, a reply to “Superintelligence: The Idea That Eats Smart People.”
- The audio version of Rationality: From AI to Zombies is temporarily unavailable due to the shutdown of Castify. However, fans are already putting together a new free recording of the full collection.
News and links
- An Asilomar panel on superintelligence (video) gathers Elon Musk (OpenAI), Demis Hassabis (DeepMind), Ray Kurzweil (Google), Stuart Russell and Bart Selman (CHCAI), Nick Bostrom (FHI), Jaan Tallinn (CSER), Sam Harris, and David Chalmers.
- Also from Asilomar: Russell on corrigibility (video), Bostrom on openness in AI (video), and LeCun on the path to general AI (video).
- From MIT Technology Review‘s “AI Software Learns to Make AI Software”:
  “Companies must currently pay a premium for machine-learning experts, who are in short supply. Jeff Dean, who leads the Google Brain research group, mused last week that some of the work of such workers could be supplanted by software. He described what he termed ‘automated machine learning’ as one of the most promising research avenues his team was exploring.”
- AlphaGo quietly defeats the world’s top Go professionals in a crushing 60-win streak. AI also bests the top human players in no-limit poker.
- More signs that artificial general intelligence is becoming a trendier goal in the field: FAIR proposes an AGI progress metric.
- Representatives from Apple and OpenAI join the Partnership on AI, and MIT and Harvard announce a new Ethics and Governance of AI Fund.
- The World Economic Forum’s 2017 Global Risks Report includes a discussion of AI safety: “given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent – or even morally obligatory – to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.”
- On the other hand, the JASON advisory group reports to the US Department of Defense that “the claimed ‘existential threats’ posed by AI seem at best uninformed,” adding, “In the midst of an AI revolution, there are no present signs of any corresponding revolution in AGI.”
- Data scientist Sarah Constantin argues that ML algorithms are exhibiting linear or sublinear performance returns to linear improvements in processing power, and that deep learning represents a break from trend in image and speech recognition, but not in strategy games or language processing.
- New safety papers discuss human-in-the-loop reinforcement learning and ontology identification, and Jacob Steinhardt writes on latent variables and counterfactual reasoning in AI alignment.