- A new paper: “Defining Human Values for Value Learners”
- New at IAFF: Analysis of Algorithms and Partial Algorithms; Naturalistic Logical Updates; Notes from a Conversation on Act-Based and Goal-Directed Systems; Toy Model: Convergent Instrumental Goals
- New at AI Impacts: Global Computing Capacity
- A revised version of “The Value Learning Problem” (pdf) has been accepted to an AAAI spring symposium.
- MIRI and other Future of Life Institute (FLI) grantees participated in an AAAI workshop on AI safety this month.
- MIRI researcher Eliezer Yudkowsky discusses Ray Kurzweil, the Bayesian brain hypothesis, and an eclectic mix of other topics in a new interview.
- Alexei Andreev and Yudkowsky are seeking investors for Arbital, a new platform for explaining difficult topics in economics, mathematics, computer science, and other disciplines. As a demo, Yudkowsky has written a new and improved guide to Bayes’s Rule.
News and links
- Should We Fear or Welcome the Singularity? (video): a conversation between Kurzweil, Stuart Russell, Max Tegmark, and Harry Shum.
- The Code That Runs Our Lives (video): Deep learning pioneer Geoffrey Hinton expresses his concerns about smarter-than-human AI (at 10:00).
- The State of AI (video): Russell, Ya-Qin Zhang, Matthew Grob, and Andrew Moore share their views on a range of issues at Davos, including superintelligence (at 21:09).
- Bill Gates discusses AI timelines.
- Paul Christiano proposes a new AI alignment approach: algorithm learning by bootstrapped approval-maximization.
- Robert Wiblin asks the effective altruism community: If tech progress might be bad, what should we tell people about it?
- FLI collects introductory resources on AI safety research.
- Raising for Effective Giving, a major fundraiser for MIRI and other EA organizations, is seeking a Director of Growth.
- Murray Shanahan answers questions about the new Leverhulme Centre for the Future of Intelligence. Leverhulme CFI is presently seeking an Executive Director.