Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st-century discourse around science, technology, society, and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise in physics, artificial intelligence, history, philosophy, and anthropology to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies, and the role of myths and stories in fostering societal collaboration and meaning. We hope that you’ll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us. Listen here.
FLI is thrilled to welcome Dr. Sandra Faber to our Scientific Advisory Board, where she will fill the vacancy left by the late Stephen Hawking. Dr. Faber is the University Professor of Astronomy and Astrophysics at the University of California, Santa Cruz, where she was the first woman to join the Lick Observatory. She received the National Medal of Science from President Obama, and she is the namesake for a minor planet. In recent years Dr. Faber has turned her attention and cosmic perspective to the question of the long-term future of humanity and life on Earth. Her unique expertise and creative insight will be an enormous asset to FLI, and we are honored to have her join our team.
As 2019 comes to an end and the opportunities of 2020 begin to emerge, it’s a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that could lead to the extinction of Earth-originating intelligent life. While this is important, much has been done at FLI and in the broader world to address these issues, and it can be useful to reflect on this progress to see how far we’ve come, to develop hope for the future, and to map out our path ahead. This podcast is a special end-of-year episode focused on meeting and introducing the FLI team, discussing what we’ve accomplished and are working on, and sharing our feelings and reasons for existential hope in 2020 and beyond. Listen here.
Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind’s technical AGI group; each focuses on a different aspect of ensuring that advanced AI systems are aligned and beneficial. Jan’s journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empirical AI safety research: why empirical safety research is important and how it has led him to his work on recursive reward modeling. We also discuss research directions he’s optimistic will lead to safely scalable systems, more facets of his own thinking, and other work being done at DeepMind. Listen here.
Over the course of the 20th century, malaria claimed an estimated 150 million to 300 million lives. Many researchers believe CRISPR gene drives could be key to eradicating the disease, saving millions of lives and trillions of dollars in associated healthcare costs. But in order to wipe it out, we would need to use anti-malaria gene drives to force three species into extinction. This would be one of the most audacious attempts by humans to engineer the planet’s ecosystem, a realm where we already have a checkered past. Regardless of whether the technology is being deployed to save a species or to force it into extinction, a number of scientists are wary. Gene drives will permanently alter an entire population. In many cases, there is no going back. If scientists fail to properly anticipate all of the effects and consequences, the impact on a particular ecological habitat — and the world at large — could be dramatic. Read more here.
FLI brought together a prominent group of AI researchers from academia and industry, along with thought leaders in economics, law, policy, ethics and philosophy, for five days dedicated to beneficial AGI.
Dr. Matthew Meselson became the third recipient of the $50,000 Future of Life Award. Meselson was a driving force behind the 1972 Biological Weapons Convention, an international ban that has prevented one of the most inhumane forms of warfare known to humanity.