
FLI September 2019 Newsletter

Published
23 October, 2019
Author
Revathi Kumar


NOT COOL: A Climate Podcast, Preparing for Global Catastrophe & More


In case you missed it: We’ve launched a new podcast…

Climate change, to state the obvious, is a huge and complicated problem. That’s why we’ve launched a new series in our podcast line-up: Not Cool: A Climate Podcast. We started this series because the news about climate change seems to get worse with each new article and report — but the solutions, at least as reported, remain vague and elusive. In this new series, hosted by Ariel Conn, we’ll hear directly from climate experts from around the world as they answer every question we can think of about the climate crisis.

You can find a short trailer here that highlights what we’ll be covering in the coming months. And of course you can jump right into the first episode: we’ve listed the available episodes below, and they can also be found at futureoflife.org/notcool. You can always listen to all FLI podcasts on any of your favorite podcast platforms by searching for “Future of Life Institute.” The Not Cool episodes are all there, and we’ll be releasing new ones every Tuesday and Thursday for at least the next couple of months. We hope these interviews will help you better understand the science and policies behind the climate crisis and what we can all do to prevent the worst effects of climate change.

This week, we’ll be focusing on extreme events and compound events. Tuesday’s episode will feature Stephanie Herring of the National Oceanic & Atmospheric Administration; on Thursday, we’ll be joined by Jakob Zscheischler from Switzerland’s University of Bern. Future topics include ocean acidification, climate economics, the adverse health effects of climate change, how to adapt, using machine learning to tackle climate change, and much more.



Episode 1: John Cook on misinformation and overcoming climate silence
Episode 2: Joanna Haigh on climate modeling and the history of climate change
Episode 3: Tim Lenton on climate tipping points
Episode 4: Jessica Troni on helping countries adapt to climate change
Episode 5: Ken Caldeira on updating infrastructure and planning for an uncertain climate future
Episode 6: Alan Robock on geoengineering
Episode 7: Lindsay Getschel on climate change and national security
Episode 8: Suzanne Jones on climate policy and government responsibility
Episode 9: Andrew Revkin on climate communication, vulnerability, and information gaps

More Podcast Episodes

FLI Podcast: Feeding Everyone in a Global Catastrophe with Dave Denkenberger & Joshua Pearce

Most of us working on catastrophic and existential threats focus on trying to prevent them — not on figuring out how to survive the aftermath. But what if, despite everyone’s best efforts, humanity does undergo such a catastrophe? This month’s podcast is all about what we can do in the present to ensure humanity’s survival in a future worst-case scenario. Ariel is joined by Dave Denkenberger and Joshua Pearce, co-authors of the book Feeding Everyone No Matter What, who explain what would constitute a catastrophic event, what it would take to feed the global population, and how their research could help address world hunger today. They also discuss infrastructural preparations, appropriate technology, and why it’s worth investing in these efforts. Listen here.

AI Alignment Podcast: Synthesizing a human’s preferences into a utility function with Stuart Armstrong

In his Research Agenda v0.9: Synthesizing a human’s preferences into a utility function, Stuart Armstrong develops an approach for generating friendly artificial intelligence. His alignment proposal can broadly be understood as a kind of inverse reinforcement learning where most of the task of inferring human preferences is left to the AI itself. It’s up to us to build the correct assumptions, definitions, preference learning methodology, and synthesis process into the AI system such that it will be able to meaningfully learn human preferences and synthesize them into an adequate utility function. In order to get this all right, his agenda looks at how to understand and identify human partial preferences, how to ultimately synthesize these learned preferences into an “adequate” utility function, the practicalities of developing and estimating the human utility function, and how this agenda can assist in other methods of AI alignment. Listen here.


You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. Or listen on SoundCloud, iTunes, Google Play, and Stitcher.

What We’ve Been Up to This Month


Max Tegmark and Meia Chita-Tegmark, together with Emilia Javorsky, organized a workshop on lethal autonomous weapons on August 28-29, hosting an ideologically diverse group of world experts to see if they could agree on more than a blank page. The group produced a roadmap with fascinating ideas for first steps, including a five-year use moratorium and strategies for verification, non-proliferation, and de-escalation: https://www.cc.gatech.edu/ai/robot-lab/online-publications/AWS.pdf

Richard Mallah participated in the Partnership on AI’s All Partners Meeting in London.

FLI in the News


CORNELL CHRONICLE: AI helps shrink Amazon dams’ greenhouse gas emissions

C4ISRNET: How the Pentagon is tackling deepfakes as a national security problem

We’re new to Instagram!
Please check out our profile and give us a follow:
@futureoflifeinstitute

