Entries by Lucas Perry

Sam Harris on Global Priorities, Existential Risk, and What Matters Most

Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and […]

FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church

Progress in synthetic biology and genetic engineering promises to bring advancements in human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents which could pose global catastrophic and existential risks to life […]

AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

Just a year ago we released a two-part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation provided details on the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned systems. Much has happened in the […]

FLI Podcast: Lessons from COVID-19 with Emilia Javorsky and Anthony Aguirre

The global spread of COVID-19 has put tremendous stress on humanity’s social, political, and economic systems. The breakdowns triggered by this sudden stress indicate areas where national and global systems are fragile, and where preventative and preparedness measures may be insufficient. The COVID-19 pandemic thus serves as an opportunity for reflecting […]

FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity” has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar […]

AI Alignment Podcast: On Lethal Autonomous Weapons with Paul Scharre

Lethal autonomous weapons represent the novel miniaturization and integration of modern AI and robotics technologies for military use. This emerging technology thus represents a potentially critical inflection point in the development of AI governance. Whether we allow AI to make the decision to take human life and where we draw lines […]

FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O’Keefe

As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, […]

AI Alignment Podcast: On the Long-term Importance of Current AI Policy with Nicolas Moës and Jared Brown

From Max Tegmark’s Life 3.0 to Stuart Russell’s Human Compatible and Nick Bostrom’s Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance and policy-related solutions become an attractive area of consideration. But […]

AI Alignment Podcast: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson

In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of light. There, a similar machine recreates […]

FLI Podcast: On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark

Neither Yuval Noah Harari nor Max Tegmark need much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial […]

FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert

We could all be more altruistic and effective in our service of others, but what exactly is it that’s stopping us? What are the biases and cognitive failures that prevent us from properly acting on existential risks, the welfare of statistically large numbers of people, and long-term future considerations? How can we […]

AI Alignment Podcast: Machine Ethics and AI Governance with Wendell Wallach

Wendell Wallach has been at the forefront of contemporary emerging technology issues for decades now. As an interdisciplinary thinker, he has engaged at the intersections of ethics, governance, AI, bioethics, robotics, and philosophy since the earliest formulations of what we now know as AI alignment were being codified. Wendell began with a broad interest […]

FLI Podcast: Cosmological Koans: A Journey to the Heart of Physical Reality with Anthony Aguirre

There exist many facts about the nature of reality which stand at odds with our commonly held intuitions and experiences of the world. For example, the simultaneity of events is relative, and there is no universal “now.” Are these facts baked into our experience of the world? Or are our experiences and intuitions […]

AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell

Stuart Russell is one of AI’s true pioneers and has been at the forefront of the field for decades. His expertise and forward thinking have culminated in his newest work, Human Compatible: Artificial Intelligence and the Problem of Control. The book is a cornerstone piece, alongside Superintelligence and Life 3.0, that articulates the civilization-scale […]

AI Alignment Podcast: Synthesizing a human’s preferences into a utility function with Stuart Armstrong

In his Research Agenda v0.9: Synthesizing a human’s preferences into a utility function, Stuart Armstrong develops an approach for generating friendly artificial intelligence. His alignment proposal can broadly be understood as a kind of inverse reinforcement learning where most of the task of inferring human preferences is left to the AI itself. It’s up to […]