
FLI June 2020 Newsletter

Published
July 15, 2020
Author
Revathi Kumar


Steven Pinker/Stuart Russell Podcast, AI Regulation Open Letter & More

Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI


Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, bestselling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more. Listen here.

You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. Or listen on SoundCloud, iTunes, Google Play, and Stitcher.

Policy & Advocacy Efforts

Open Letter: Foresight in AI Regulation


FLI published an open letter to the European Commission, which hopes to pass AI regulatory legislation early next year. The letter has over 120 signatories, including a number of the world’s top AI researchers. Find an excerpt below, or read the full letter here.

“We applaud the European Commission for tackling the challenge of determining the role that government can and should play and support meaningful regulations of AI systems in high-risk application areas. The stakes are high, and the potential ability of AI to remake institutions means that it is wise to consider novel approaches to governance and regulation, rather than assuming that existing structures will suffice.

The Commission will undoubtedly receive detailed feedback from many corporations, industry groups, and think tanks representing their own and others’ interests, which in some cases involve weakening regulation and downplaying potential risks related to AI. We hope that the Commission will stand firm in doing neither. […] The EU has already shown foresight and clear leadership in adopting meaningful regulations in other technology issues. We, the co-signed experts, support the Commission in taking a meaningful, future-oriented approach regarding the effects of AI systems on the rights and safety of EU citizens.”

Read more about our letter and the European Commission in AI: Decoded, Politico Europe's artificial intelligence newsletter.

More Podcast Episodes


Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out, and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but it can have catastrophic consequences for the human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer appears to be yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, as well as to evaluate three proposals for building safe advanced AI. Listen here.


Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)

Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam creates euphoric soundscapes inspired by the writings of David Pearce, exemplified in his latest album, aptly named "Utility." Sam's artistry, motivated by blissful visions of the future, and David's philosophical and technological writings on the potential for the biological domestication of heaven make for a natural fusion of artistic, moral, and intellectual excellence. This podcast explores the significance Sam found in David's work, how it informed his music production, and Sam and David's optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content. Listen here.

FLI in the News


THE HILL: Pandemic is showing us we need safe and ethical AI more than ever
Opinion piece by Jessica Cussins Newman, FLI’s AI Policy Specialist

BIG THINK: Is AI a species-level threat to humanity?
Video interview with Max Tegmark, FLI’s President

UNESCO: Implementation of the UN Secretary-General’s Roadmap on Digital Cooperation
