MIRI's November 2016 Newsletter
Post-fundraiser update: Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we'll need to make up the remaining $160k gap over the next month if we're going to move forward on our 2017 plans. We're in a good position to expand our research staff and trial a number of potential hires, but only if we feel confident about our funding prospects over the next few years. Since we don't have an official end-of-the-year fundraiser planned this time around, we'll be relying more on word-of-mouth to reach new donors. To help us with our expansion plans, donate at https://intelligence.org/donate/ — and spread the word!
- Critch gave an introductory talk on logical induction (video) for a grad student seminar, going into more detail than our previous talk.
- New at IAFF: Logical Inductor Limits Are Dense Under Pointwise Convergence; Bias-Detecting Online Learners; Index of Some Decision Theory Posts
- We ran a second machine learning workshop.
- We ran an “Ask MIRI Anything” Q&A on the Effective Altruism forum.
- We posted the final videos from our Colloquium Series on Robust and Beneficial AI, including Armstrong on “Reduced Impact AI” (video) and Critch on “Robust Cooperation of Bounded Agents” (video).
- We attended OpenAI’s first unconference; see Viktoriya Krakovna’s recap.
- Eliezer Yudkowsky spoke on fundamental difficulties in aligning advanced AI at NYU’s “Ethics of AI” conference.
- A major development: Barack Obama and a recent White House report discuss intelligence explosion, Nick Bostrom's Superintelligence, open problems in AI safety, and key questions for forecasting general AI. See also the submissions to the White House from MIRI, OpenAI, Google, AAAI, and other parties.
News and links
- The UK Parliament cites recent AI safety work in a report on AI and robotics.
- The Open Philanthropy Project discusses methods for improving individuals’ forecasting abilities.
- Paul Christiano argues that AI safety will require aligning a variety of AI capacities with our interests, not just learning — e.g., Bayesian inference and search.
- See also new posts from Christiano on reliability amplification, reflective oracles, imitation + reinforcement learning, and the case for expecting most alignment problems to arise first as security problems.
- The Leverhulme Centre for the Future of Intelligence has officially launched, and is hiring postdoctoral researchers: details.