Post-fundraiser update: Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we’ll need to make up the remaining $160k gap over the next month if we’re going to move forward on our 2017 plans. We’re in a good position to expand our research staff and trial a number of potential hires, but only if we feel confident about our funding prospects over the next few years.
Since we don’t have an official end-of-the-year fundraiser planned this time around, we’ll be relying more on word-of-mouth to reach new donors. To help us with our expansion plans, donate at https://intelligence.org/donate/ — and spread the word!
Research updates
- Critch gave an introductory talk on logical induction (video) for a grad student seminar, going into more detail than our previous talk.
- New at IAFF: Logical Inductor Limits Are Dense Under Pointwise Convergence; Bias-Detecting Online Learners; Index of Some Decision Theory Posts
- We ran a second machine learning workshop.
General updates
- We ran an “Ask MIRI Anything” Q&A on the Effective Altruism forum.
- We posted the final videos from our Colloquium Series on Robust and Beneficial AI, including Armstrong on “Reduced Impact AI” (video) and Critch on “Robust Cooperation of Bounded Agents” (video).
- We attended OpenAI’s first unconference; see Victoria Krakovna’s recap.
- Eliezer Yudkowsky spoke on fundamental difficulties in aligning advanced AI at NYU’s “Ethics of AI” conference.
- A major development: Barack Obama and a recent White House report discuss intelligence explosion, Nick Bostrom’s Superintelligence, open problems in AI safety, and key questions for forecasting general AI. See also the submissions to the White House from MIRI, OpenAI, Google Inc., AAAI, and other parties.
News and links
- The UK Parliament cites recent AI safety work in a report on AI and robotics.
- The Open Philanthropy Project discusses methods for improving individuals’ forecasting abilities.
- Paul Christiano argues that AI safety will require that we align a variety of AI capacities with our interests, not just learning — e.g., Bayesian inference and search.
- See also new posts from Christiano on reliability amplification, reflective oracles, imitation + reinforcement learning, and the case for expecting most alignment problems to arise first as security problems.
- The Leverhulme Centre for the Future of Intelligence has officially launched, and is hiring postdoctoral researchers: details.