
FLI September 2020 Newsletter

September 5, 2020
Anna Yelizarova

New Podcasts: Climate Change, AI Existential Safety & More

Kelly Wanser on Climate Change as a Possible Existential Threat

Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change. Listen here.

You can also check out a video recording of the podcast here on our YouTube channel. Kelly shows some slides during the conversation, and these can be seen in the video version. (The video podcast is unedited, so it's a bit longer than the audio-only version and contains some sound hiccups and more filler words.)

You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. Or listen on SoundCloud, iTunes, Google Play, and Stitcher.

More Podcast Episodes

Andrew Critch on AI Research Considerations for Human Existential Safety

In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger, AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety, to the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as the most likely source of existential risk: externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety are not naturally covered by industry incentives. Listen here.

Iason Gabriel on Foundational Philosophical Questions in AI Alignment

In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely explicitly enter the picture. In the realm of AI alignment, however, the normative and technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI brings its own normative and metaethical beliefs that will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and technical in AI alignment and to discuss his recent paper Artificial Intelligence, Values and Alignment. Listen here.

