MIRI’s October Newsletter collects recent news and links related to the long-term impact of artificial intelligence. Highlights:
— New introductory material on MIRI can be found on our information page.
— An Open Philanthropy Project update discusses investigations into global catastrophic risk and U.S. policy reform.
— “Research Suggests Human Brain Is 30 Times As Powerful As The Best Supercomputers.” Tech Times reports on new research by the AI Impacts project, which has “developed a preliminary method for comparing AI to a brain, which they call traversed edges per second, or TEPS. TEPS essentially determines how rapidly information is passed along a system.”
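The quoted TEPS metric comes from graph benchmarking: count how many graph edges a traversal examines per second of wall-clock time. This is not AI Impacts' actual measurement procedure, but a minimal toy sketch of the idea, using a breadth-first search over a small hypothetical ring graph:

```python
import time
from collections import deque

def measure_teps(adjacency, source):
    """Run a breadth-first search from `source`, counting every edge
    examined, and return edges traversed per second (a TEPS-style rate)."""
    visited = {source}
    queue = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            edges_traversed += 1          # each edge examined counts
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    elapsed = time.perf_counter() - start
    return edges_traversed / elapsed

# Toy example: a ring of 1000 nodes, each linked to its two neighbors.
n = 1000
graph = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rate = measure_teps(graph, 0)  # edges per second on this machine
```

Real TEPS benchmarks (e.g. Graph500) use far larger graphs and careful timing rules; the point here is only that the unit measures communication through a graph rather than raw arithmetic, which is why AI Impacts find it a natural bridge between brains and computers.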
— MIRI research associates develop a new approach to logical uncertainty in software agents. “The main goal of logical uncertainty is to learn how to assign probabilities to logical sentences which have not yet been proven true or false. […] By giving the system a type of logical omniscience, you make it predictable, which allows you to prove things about it. However, there is another way to make it possible to prove things about a logical uncertainty system. We can take a program which assigns probabilities to sentences, and let it run forever. We can then ask about whether or not the system eventually gives good probabilities.”
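The "let it run forever and ask whether it eventually gives good probabilities" idea can be illustrated with a toy forecaster (this is an illustrative assumption, not the research associates' actual construction). Here the "sentences" are claims like "the n-th Fibonacci number is even," which happens to be true exactly when n is divisible by 3, and the forecaster assigns each new sentence the observed frequency of truth among sentences already proven:

```python
def fib_is_even(n):
    # Truth value of the sentence "the n-th Fibonacci number is even".
    # With F(1) = F(2) = 1, this holds exactly when 3 divides n.
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, (a + b) % 2  # only parity matters, so work mod 2
    return a == 0

def assign_probability(history):
    # Hypothetical forecaster: predict via the observed frequency of
    # truth so far, falling back to a 0.5 prior with no evidence.
    if not history:
        return 0.5
    return sum(history) / len(history)

history = []
for n in range(1, 3001):
    p = assign_probability(history)   # probability assigned before proof
    history.append(fib_is_even(n))    # sentence n is then "proven"

# Run long enough, the assigned probability approaches the limiting
# frequency 1/3 -- the asymptotic sense of "good probabilities".
final_p = assign_probability(history)
```

The forecaster is wrong on any given early sentence, but the question asked of it is asymptotic: does its probability assignment converge to something sensible in the limit? That reframing is what makes the system analyzable without granting it full logical omniscience.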
— Tom Dietterich and Eric Horvitz discuss the rise of concerns about AI. “[W]e believe scholarly work is needed on the longer-term concerns about AI. Working with colleagues in economics, political science, and other disciplines, we must address the potential of automation to disrupt the economic sphere. Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems. If we find there is significant risk, then we must work to develop and adopt safety practices that neutralize or minimize that risk.” See also Luke Muehlhauser’s response.