The Machine Intelligence Research Institute (MIRI) recently completed its latest round of fundraising, and to mark the occasion Jed McCaleb wrote a brief post explaining why MIRI's AI research matters. You can find a copy of that message below, followed by MIRI's January newsletter, compiled by Rob Bensinger.
A few months ago, several leaders in the scientific community signed an open letter calling for oversight of artificial intelligence research and development, in order to mitigate the risks and ensure that this advanced technology benefits society. Researchers largely agree that AI is likely to begin outperforming humans on most cognitive tasks within this century.
Similarly, I believe we'll see the promise of human-level AI come to fruition much sooner than most expect. Its effects will likely be transformational: for the better if it is used to help improve the human condition, or for the worse if it is used carelessly.
As AI agents become more capable, it becomes more important to analyze and verify their decisions and goals. MIRI's research focuses on how to create highly reliable agents that can learn human values, and on the broader need for better decision-making processes to power these new technologies.
The past few years have seen a vibrant and growing AI research community. As the field continues to flourish, the need for collaboration will grow as well. Organizations like MIRI that are dedicated to security and safety engineering help fill this need. And, as a nonprofit, MIRI conducts research free from profit obligations. This independence matters because it leads to safer and more impartial results.
By supporting organizations like MIRI, we are putting safeguards in place to make sure this immensely powerful technology is used for the greater good. For humanity's benefit, we need to ensure that AI systems can reliably pursue goals aligned with human values. If organizations like MIRI can help engineer this level of reliability and awareness in AI systems, the possibilities for improving our world are immense. It's critical that we build the infrastructure needed to ensure that AI is used to make people's lives better. This is why I've donated to MIRI, and why I believe it's a worthy cause that you should consider as well.
Research updates

- A new paper: “Proof-Producing Reflection for HOL”
- A new analysis: Safety Engineering, Target Selection, and Alignment Theory
- New at IAFF: What Do We Need Value Learning For?; Strict Dominance for the Modified Demski Prior; Reflective Probability Distributions and Standard Models of Arithmetic; Existence of Distributions That Are Expectation-Reflective and Know It; Concise Open Problem in Logical Uncertainty
General updates

- Our Winter Fundraiser is over! A total of 176 people donated $351,411, including some surprise matching donors. All of you have our sincere thanks.
- Jed McCaleb writes on why MIRI matters, while Andrew Critch writes on the need to scale MIRI’s methods.
- We attended NIPS, which hosted a symposium on the “social impacts of machine learning” this year. Viktoriya Krakovna summarizes her impressions.
- We’ve moved to a new, larger office with the Center for Applied Rationality (CFAR), a few floors up from our old one.
- Our paper announcements now have their own MIRI Blog category.
News and links
- “The 21st Century Philosophers”: AI safety research gets covered in OZY.
- Sam Altman and Elon Musk have brought together leading AI researchers to form a new $1 billion nonprofit, OpenAI. Andrej Karpathy explains OpenAI’s plans (link), and Altman and Musk provide additional background (link).
- Alphabet chairman Eric Schmidt and Google Ideas director Jared Cohen write on the need to “establish best practices to avoid undesirable outcomes” from AI.
- A new Future of Humanity Institute (FHI) paper: “Learning the Preferences of Ignorant, Inconsistent Agents.”
- Luke Muehlhauser and The Telegraph signal-boost FHI’s AI safety job postings (deadline Jan. 6). The Global Priorities Project is also seeking summer interns (deadline Jan. 10).
- CFAR is running a matching fundraiser through the end of January.