Contents
Research updates
- A new paper: “Alignment for Advanced Machine Learning Systems.” Half of our research team will be focusing on this research agenda going forward, while the other half continues to focus on the agent foundations agenda.
- New at AI Impacts: Returns to Scale in Research
- Evan Lloyd represented MIRIxLosAngeles at AGI-16 this month, presenting “Asymptotic Logical Uncertainty and the Benford Test” (slides).
- We’ll be announcing a breakthrough in logical uncertainty this month, related to Scott Garrabrant’s previous results.
General updates
- Our 2015 in review, with a focus on the technical problems we made progress on.
- Another recap: how our summer colloquium series and fellows program went.
- We’ve uploaded our first CSRBAI talks: Stuart Russell on “AI: The Story So Far” (video), Alan Fern on “Toward Recognizing and Explaining Uncertainty” (video), and Francesca Rossi on “Moral Preferences” (video).
- We submitted our recommendations to the White House Office of Science and Technology Policy, cross-posted to our blog.
- We attended IJCAI and the White House’s AI and economics event. Jason Furman’s talk on technological unemployment (video) and other talks are available online.
- Talks from June’s safety and control in AI event are also online. Speakers included Microsoft’s Eric Horvitz (video), FLI’s Richard Mallah (video), Google Brain’s Dario Amodei (video), and IARPA’s Jason Matheny (video).
News and links
- Complexity No Bar to AI: Gwern Branwen argues that computational complexity theory provides little reason to doubt that AI can surpass human intelligence.
- Bill Nordhaus, a leading climate change economist, has written a paper on the economics of singularity scenarios.
- The Open Philanthropy Project has awarded Robin Hanson a three-year $265,000 grant to study multipolar AI scenarios. See also Hanson’s new argument for expecting a long era of whole-brain emulations prior to the development of AI with superhuman reasoning abilities.
- “Superintelligence Cannot Be Contained” discusses computability-theoretic limits to AI verification.
- The Financial Times runs a good profile of Nick Bostrom.
- DeepMind software reduces Google’s data center cooling bill by 40%.
- In a promising development, US federal regulators argue for the swift development and deployment of self-driving cars to reduce automobile accidents: “We cannot wait for perfect. We lose too many lives waiting for perfect.”
See the original newsletter on MIRI’s website.