The following article and video were originally posted here. I recently gave a talk at Google on the problem of aligning smarter-than-human AI with operators’ goals. The talk was inspired by “AI Alignment: Why It’s Hard, and Where to Start,” and serves as an introduction to the subfield of alignment research in AI.
Artificial intelligence capabilities research is aimed at making computer systems more intelligent — able to solve a wider range of problems more effectively and efficiently. We can distinguish this from research specifically aimed at making AI systems at various capability levels safer, or more “robust and beneficial.” In this post, I distinguish three kinds of direct research that might be thought of as “AI safety” work.
Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute (MIRI), a research institute devoted to studying the technical challenges of ensuring desirable behavior from highly advanced AI agents, including those capable of recursive self-improvement. In this guest blog post, he delves into MIRI’s new research agenda. MIRI’s current research agenda — summarized