Contents
Research updates
- New papers: “Formalizing Convergent Instrumental Goals” and “Quantilizers: A Safer Alternative to Maximizers for Limited Optimization.” Both papers have been accepted to the AAAI-16 workshop on AI, Ethics, and Society. (A minimal sketch of the quantilizer idea appears after this list.)
- New at AI Impacts: Recently at AI Impacts
- New at IAFF: A First Look at the Hard Problem of Corrigibility; Superrationality in Arbitrary Games; A Limit-Computable, Self-Reflective Distribution; Reflective Oracles and Superrationality: Prisoner’s Dilemma
- Scott Garrabrant joins MIRI’s full-time research team this month.
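For readers curious about the core idea behind the quantilizers paper: rather than choosing the single highest-utility action, a q-quantilizer samples from the top q fraction (by utility) of some base distribution over actions, which limits how hard it optimizes. The sketch below is an illustrative gloss, not code from the paper; the finite action set, the function names, and the boundary handling at the q cutoff are all assumptions made for the example.

```python
import random

def quantilize(actions, utility, base_weights, q):
    """Sample an action from the top q fraction of a base
    distribution, ranked by utility, instead of maximizing.

    actions      : list of candidate actions
    utility      : function mapping an action to a real-valued utility
    base_weights : base-distribution probability for each action
    q            : fraction in (0, 1]; q = 1 recovers the base distribution
    """
    # Rank actions from highest to lowest utility.
    ranked = sorted(zip(actions, base_weights),
                    key=lambda pair: utility(pair[0]),
                    reverse=True)
    # Keep the highest-utility actions until their cumulative
    # base-distribution mass reaches q, then sample from that slice
    # in proportion to the base weights.
    kept, mass = [], 0.0
    for action, weight in ranked:
        kept.append((action, weight))
        mass += weight
        if mass >= q:
            break
    choices, weights = zip(*kept)
    return random.choices(choices, weights=weights, k=1)[0]
```

As q approaches 0 this behaves like a maximizer, while q = 1 just reproduces the base distribution; the paper formalizes the safety tradeoff that sits between those extremes.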
General updates
- Our Winter Fundraiser is now live, and includes details on where we’ve been directing our research efforts in 2015, as well as our plans for 2016. The fundraiser will conclude on December 31.
- A 2014 collaboration between MIRI and the Oxford-based Future of Humanity Institute (FHI), “The Errors, Insights, and Lessons of Famous AI Predictions,” is being republished next week in the anthology Risks of Artificial Intelligence. Also included will be Daniel Dewey’s important strategic analysis “Long-Term Strategies for Ending Existential Risk from Fast Takeoff” and articles by MIRI Research Advisors Steve Omohundro and Roman Yampolskiy.
- We recently spent an enjoyable week in the UK comparing notes, sharing research, and trading ideas with FHI. During our visit, MIRI researcher Andrew Critch led a “Big-Picture Thinking” seminar on long-term AI safety (video).
News and links
- In collaboration with Oxford, UC Berkeley, and Imperial College London, Cambridge University is launching a new $15 million research center to study AI’s long-term impact: the Leverhulme Centre for the Future of Intelligence.
- The Strategic Artificial Intelligence Research Centre, a new joint initiative between FHI and the Cambridge Centre for the Study of Existential Risk, is accepting applications for three research positions between now and January 6: research fellows in machine learning and the control problem, in policy work and emerging technology governance, and in general AI strategy. FHI is additionally seeking a research fellow to study AI risk and ethics. (Full announcement.)
- FHI founder Nick Bostrom makes Foreign Policy’s Top 100 Global Thinkers list.
- Bostrom (link), IJCAI President Francesca Rossi (link), and Vicarious co-founder Dileep George (link) weigh in on AI safety in a Washington Post series.
- Future of Life Institute co-founder Viktoriya Krakovna discusses risks from general AI without an intelligence explosion.