FLI September 2020 Newsletter
New Podcasts: Climate Change, AI Existential Safety & More
Kelly Wanser on Climate Change as a Possible Existential Threat
Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change. Listen here.
You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. Or listen on SoundCloud, iTunes, Google Play, and Stitcher.
More Podcast Episodes
Andrew Critch on AI Research Considerations for Human Existential Safety
In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger, titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety, to the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as the most likely source of existential risk: the possibility of externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives. Listen here.
Iason Gabriel on Foundational Philosophical Questions in AI Alignment
In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely enter the picture explicitly. In the realm of AI alignment, however, the normative and the technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Any procedure or set of values chosen for aligning AI carries its own normative and metaethical commitments, which will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and the technical in AI alignment and to discuss his recent paper Artificial Intelligence, Values and Alignment. Listen here.