FLI June 2020 Newsletter
Steven Pinker/Stuart Russell Podcast, AI Regulation Open Letter & More
Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI
Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, bestselling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more. Listen here.
You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. Or listen on SoundCloud, iTunes, Google Play, and Stitcher.
Policy & Advocacy Efforts
Open Letter: Foresight in AI Regulation
FLI published an open letter to the European Commission, which plans to propose AI regulatory legislation early next year. The letter has over 120 signatories, including a number of the world’s top AI researchers. Find an excerpt below, or read the full letter here.
“We applaud the European Commission for tackling the challenge of determining the role that government can and should play and support meaningful regulations of AI systems in high-risk application areas. The stakes are high, and the potential ability of AI to remake institutions means that it is wise to consider novel approaches to governance and regulation, rather than assuming that existing structures will suffice. The Commission will undoubtedly receive detailed feedback from many corporations, industry groups, and think tanks representing their own and others’ interests, which in some cases involve weakening regulation and downplaying potential risks related to AI. We hope that the Commission will stand firm in doing neither. […] The EU has already shown foresight and clear leadership in adopting meaningful regulations in other technology issues. We, the co-signed experts, support the Commission in taking a meaningful, future-oriented approach regarding the effects of AI systems on the rights and safety of EU citizens.”
Read more about our letter and the European Commission in AI: Decoded, Politico Europe’s artificial intelligence newsletter.
More Podcast Episodes
Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI
The AI alignment literature is clear about what happens when an AI system learns or is given an objective that doesn’t fully capture what we want: human preferences and values are inevitably left out, and the AI, likely being a powerful optimizer, will exploit the degrees of freedom afforded by the misspecified objective and push them to extreme values. This may allow for better optimization on the goals specified in the objective function, but it can have catastrophic consequences for the human preferences and values the system fails to consider. Can misalignment also occur between the model being trained and the objective function used for training? The answer appears to be yes. Evan Hubinger of the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model and the objective function used to train it, and to evaluate three proposals for building safe advanced AI. Listen here.
Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)
Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam creates euphoric soundscapes inspired by the writings of David Pearce, exemplified in his latest album, aptly named “Utility.” Sam’s music, motivated by blissful visions of the future, and David’s philosophical and technological writings on the biological domestication of heaven make for a natural fusion of artistic, moral, and intellectual excellence. This podcast explores the significance Sam found in David’s work, how it informed his music production, and both guests’ optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content. Listen here.
FLI in the News
THE HILL: Pandemic is showing us we need safe and ethical AI more than ever
Opinion piece by Jessica Cussins Newman, FLI’s AI Policy Specialist
BIG THINK: Is AI a species-level threat to humanity?
Video interview with Max Tegmark, FLI’s President
UNESCO: Implementation of the UN Secretary-General’s Roadmap on Digital Cooperation