
FLI December 2019 Newsletter

January 10, 2020
Revathi Kumar


Yuval Noah Harari Podcast & More 2019 Highlights

FLI Podcast: On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark

Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise –– in physics, artificial intelligence, history, philosophy and anthropology –– to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you'll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us. Listen here.

You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. All of our podcasts are also now on Spotify and iHeartRadio, or find us on SoundCloud, iTunes, Google Play and Stitcher.

December Highlights

Sandra Faber Joins FLI Board

FLI is thrilled to welcome Dr. Sandra Faber to our Scientific Advisory Board, where she will fill the vacancy left by the late Stephen Hawking. Dr. Faber is the University Professor of Astronomy and Astrophysics at the University of California, Santa Cruz, where she was the first woman to join the Lick Observatory. She received the National Medal of Science from President Obama, and she is the namesake for a minor planet. In recent years Dr. Faber has turned her attention and cosmic perspective to the question of the long-term future of humanity and life on Earth. Her unique expertise and creative insight will be an enormous asset to FLI, and we are honored to have her join our team.

FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team

As 2019 comes to an end and the opportunities of 2020 begin to emerge, it’s a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that could lead to the extinction of Earth-originating intelligent life. While this is important, much has been done at FLI and in the broader world to address these issues, and it can be useful to reflect on this progress to see how far we’ve come, to develop hope for the future, and to map out our path ahead. This podcast is a special end-of-year episode focused on meeting and introducing the FLI team, discussing what we’ve accomplished and are working on, and sharing our feelings and reasons for existential hope in 2020 and beyond. Listen here.

AI Alignment Podcast: On DeepMind, AI Safety, and Recursive Reward Modeling with Jan Leike

Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind's technical AGI group; each team focuses on different aspects of ensuring advanced AI systems are aligned and beneficial. Jan’s journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empirical AI safety research — why empirical safety research is important and how this has led him to his work on recursive reward modeling. We also discuss research directions he’s optimistic will lead to safely scalable systems, more facets of his own thinking, and other work being done at DeepMind. Listen here.

Gene Drives: Assessing the Benefits & Risks

Over the course of the 20th century, malaria claimed an estimated 150 million to 300 million lives. Many researchers believe CRISPR gene drives could be key to eradicating the disease, saving millions of lives and trillions of dollars in associated healthcare costs. But in order to wipe it out, we would need to use anti-malaria gene drives to force three mosquito species into extinction. This would be one of the most audacious attempts by humans to engineer the planet’s ecosystem, a realm where we already have a checkered past. Regardless of whether the technology is being deployed to save a species or to force one into extinction, a number of scientists are wary. Gene drives permanently alter an entire population; in many cases, there is no going back. If scientists fail to properly anticipate all of the effects and consequences, the impact on a particular ecological habitat — and the world at large — could be dramatic. Read more here.

2019 Highlights

Beneficial AGI 2019

FLI brought together a prominent group of AI researchers from academia and industry, along with thought leaders in economics, law, policy, ethics and philosophy, for five days dedicated to beneficial AGI.

Lethal Autonomous Weapons Video

Some of the world's leading AI researchers came together in one short video to explain some of their reasons for supporting a ban on lethal autonomous weapons.

The 2019 Future of Life Award

Dr. Matthew Meselson became the third recipient of the $50,000 Future of Life Award. Meselson was a driving force behind the 1972 Biological Weapons Convention, an international ban that has prevented one of the most inhumane forms of warfare known to humanity.

The FLI Podcast

FLI's namesake podcast took on everything from nuclear weapons testing and lethal autonomous weapons to effective altruism and existential hope. This year's most popular episodes were The Unexpected Side Effects of Climate Change and Applying AI Safety & Ethics Today.

The AI Alignment Podcast

Host Lucas Perry continued to explore the problem of AI alignment on this series blending science, philosophy, ethics and more. This year's most popular episodes: On Consciousness, Qualia, & Meaning and An Overview of Technical AI Alignment (Part 1).

Not Cool: A Climate Podcast

Host Ariel Conn interviewed more than 30 climate experts for this short podcast series on the climate crisis. The most popular episodes: Tim Lenton on climate tipping points and John Cook on misinformation, social consensus, & overcoming climate silence.

Women for the Future

This Women’s History Month, FLI celebrated with Women for the Future, a campaign to honor the women who’ve made it their job to create a better world for us all.

Short Fiction Contest

In honor of Earth Day, FLI teamed up with Sapiens Plurum to sponsor a short fiction writing contest. Read the winning story here.

2019 News Highlights


