Professor Stuart Russell recently gave a public lecture, The Long-Term Future of (Artificial) Intelligence, hosted by the Centre for the Study of Existential Risk in Cambridge, UK. In the talk, he discusses key research problems in keeping future AI beneficial, such as containment and value alignment, and addresses many common misconceptions about the risks posed by AI.
“The news media in recent months have been full of dire warnings about the risk that AI poses to the human race, coming from well-known figures such as Stephen Hawking, Elon Musk, and Bill Gates. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, I will argue instead that a fundamental reorientation of the field is required.”