
FLI September, 2020 Newsletter

Published
5 September, 2020
Author
Anna Yelizarova


New Podcasts: Climate Change, AI Existential Safety & More

Kelly Wanser on Climate Change as a Possible Existential Threat


Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change. Listen here.

You can also check out a video recording of the podcast here on our YouTube channel. Kelly shows some slides during the conversation, and these can be seen in the video version. (The video podcast's audio and content are unedited, so it's a bit longer than the audio-only version and contains some sound hiccups and more filler words.)

You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. Or listen on SoundCloud, iTunes, Google Play and Stitcher.

More Podcast Episodes


Andrew Critch on AI Research Considerations for Human Existential Safety

In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety and the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as being the most likely source of existential risk: the possibility of externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives. Listen here.


Iason Gabriel on Foundational Philosophical Questions in AI Alignment

In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely enter the picture explicitly. In the realm of AI alignment, however, the normative and the technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI carries its own normative and metaethical commitments, which will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and technical in AI alignment and to discuss his recent paper Artificial Intelligence, Values and Alignment. Listen here.

FLI in the News
