Contents
Research updates
- A new paper: “Defining Human Values for Value Learners”
- New at IAFF: Analysis of Algorithms and Partial Algorithms; Naturalistic Logical Updates; Notes from a Conversation on Act-Based and Goal-Directed Systems; Toy Model: Convergent Instrumental Goals
- New at AI Impacts: Global Computing Capacity
- A revised version of “The Value Learning Problem” (pdf) has been accepted to an AAAI spring symposium.
General updates
- MIRI and other Future of Life Institute (FLI) grantees participated in an AAAI workshop on AI safety this month.
- MIRI researcher Eliezer Yudkowsky discusses Ray Kurzweil, the Bayesian brain hypothesis, and an eclectic mix of other topics in a new interview.
- Alexei Andreev and Yudkowsky are seeking investors for Arbital, a new technology for explaining difficult topics in economics, mathematics, computer science, and other disciplines. As a demo, Yudkowsky has written a new and improved guide to Bayes’s Rule.
News and links
- Should We Fear or Welcome the Singularity? (video): a conversation between Kurzweil, Stuart Russell, Max Tegmark, and Harry Shum.
- The Code That Runs Our Lives (video): Deep learning pioneer Geoffrey Hinton expresses his concerns about smarter-than-human AI (at 10:00).
- The State of AI (video): Russell, Ya-Qin Zhang, Matthew Grob, and Andrew Moore share their views on a range of issues at Davos, including superintelligence (at 21:09).
- Bill Gates discusses AI timelines.
- Paul Christiano proposes a new AI alignment approach: algorithm learning by bootstrapped approval-maximization.
- Robert Wiblin asks the effective altruism community: If tech progress might be bad, what should we tell people about it?
- FLI collects introductory resources on AI safety research.
- Raising for Effective Giving, a major fundraiser for MIRI and other EA organizations, is seeking a Director of Growth.
- Murray Shanahan answers questions about the new Leverhulme Centre for the Future of Intelligence. Leverhulme CFI is presently seeking an Executive Director.