Contents
Research updates
- Two new papers, “Uniform Coherence” and “Asymptotic Convergence in Online Learning with Unbounded Delays,” split logical uncertainty into two distinct subproblems.
- New at IAFF: An Approach to the Agent Simulates Predictor Problem; Games for Factoring Out Variables; Time Hierarchy Theorems for Distributional Estimation Problems
- We will be presenting “The Value Learning Problem” at the IJCAI-16 Ethics for Artificial Intelligence workshop instead of the AAAI Spring Symposium where it was previously accepted.
General updates
- We’re launching a new research program with a machine learning focus. Half of MIRI’s team will be investigating potential ways to specify goals and guard against errors in advanced neural-network-inspired systems.
- We ran a type theory and formal verification workshop this past month.
News and links
- The Open Philanthropy Project explains its strategy of high-risk, high-reward hits-based giving and its decision to make AI risk its top focus area this year.
- Also from OpenPhil: Is it true that past researchers over-hyped AI? Is there a realistic chance of AI fundamentally changing civilization in the next 20 years?
- From Wired: Inside OpenAI, and Facebook is Building AI That Builds AI.
- The White House announces a public workshop series on the future of AI.
- The Wilberforce Society suggests policies for narrow and general AI development.
- Two new AI safety papers: “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis” and “The AGI Containment Problem.”
- Peter Singer weighs in on catastrophic AI risk.
- Digital Genies: Stuart Russell discusses the problems of value learning and corrigibility in AI.
- Nick Bostrom is interviewed at CeBIT (video) and also gives a presentation on intelligence amplification and the status quo bias (video).
- Jeff McMahan critiques philosophical critiques of effective altruism.
- Yale political scientist Allan Dafoe is seeking research assistants for a project on political and strategic concerns related to existential AI risk.
- The Center for Applied Rationality is accepting applications for a free workshop for machine learning researchers and students.
This newsletter was originally posted here.