Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism

 Topics discussed in this episode include:

  • The projects of awakening and growing the wisdom with which to manage technologies
  • What embarking on the project of waking up might make possible
  • Facets of human nature that contribute to existential risk
  • The dangers of the problem solving mindset
  • Improving the effective altruism and existential risk communities

 

Timestamps: 

0:00 Intro

3:40 Albert Einstein and the quest for awakening

8:45 Non-self, emptiness, and non-duality

25:48 Stephen’s conception of awakening, and making the wise more powerful vs the powerful more wise

33:32 The importance of insight

49:45 The present moment, creativity, and suffering/pain/dukkha

58:44 Stephen’s article, Embracing Extinction

1:04:48 The dangers of the problem solving mindset

1:26:12 Improving the effective altruism and existential risk communities

1:37:30 Where to find and follow Stephen

 

Citations:

Stephen’s website

Stephen’s teachings and courses

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today, we have a special episode for you with Stephen Batchelor. Stephen is a secular and skeptical Buddhist teacher and practitioner with many years under his belt in a variety of different Buddhist traditions. You’ve probably heard often on this podcast about the dynamics of the race between the power of our technology and the wisdom with which we manage it. This podcast is primarily centered around the wisdom portion of this dynamic: how we might cultivate wisdom, and how that relates to the growing power of our technology. Stephen and I get into discussing the cultivation of wisdom, what awakening might entail or look like, and also his views on embracing existential risk and existential threats. As for a little bit more background, we can think of ourselves as contextualized in a world of existential threats that are primarily created by the kinds of minds that people have and how we behave, particularly how we decide to use industry and technology and science, and the kinds of incentives and dynamics that are born of that. And so cultivating wisdom, in this conversation, means seeking to understand how we might better gain insight into and grow beyond the worst parts of human nature: things like hate, greed, and delusion, which motivate and help to cultivate the manifestation of existential risks. The flipside of understanding the ways in which hate, greed, and delusion motivate and lead to the manifestation of existential risk is also uncovering and being interested in the project of human awakening and developing into our full potential.
So, this just means that whatever idealized version of yourself you think you might want to be, or that you might strive to be, there is a path to getting there, and this podcast is primarily interested in that path: how it relates to living in a world of existential threat, and how we might relate to existential risk and its mitigation. This podcast contains a bit of Buddhist jargon. I do my best in this podcast to define the words to the best of my ability. I’m not an expert, but I think that these definitions will help to bring a bit of context and understanding to some of the conversation.

Stephen Batchelor is a contemporary Buddhist teacher and writer, best known for his secular or agnostic approach to Buddhism. Stephen considers Buddhism to be a constantly evolving culture of awakening rather than a religious system based on immutable dogmas and beliefs. Through his writings, translations and teaching, Stephen engages in a critical exploration of Buddhism’s role in the modern world, which has earned him both condemnation as a heretic and praise as a reformer. And with that, let’s get into our conversation with Stephen Batchelor. 

Thanks again so much for coming on. I’ve been really excited and looking forward to this conversation. I just wanted to start it off here with a quote by Albert Einstein that I thought would set the mood and the context. “A human being is a part of the whole called by us universe, a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest, a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest to us. Our task must be to free ourselves from this prison by widening our circle of compassion to embrace all living creatures, and the whole of nature and its beauty. Nobody is able to achieve this completely. But the striving for such achievement is in itself a part of the liberation and a foundation of inner security.”

This quote to me is compelling because, one, it comes from someone who is celebrated as one of the greatest scientists who has ever lived. In that sense, it’s a calling for the spiritual journey, it seems, from someone with credibility for people who are skeptical of something like the project of awakening, or of whatever a secular Dharma might be or look like. I think it sets up the project well. I mean, he talks about here how this idea of separation is a kind of optical delusion of his consciousness. He sets it up as the problem of trying to arrive at experiential truth and this project of self-improvement. It’s in the spirit of this, I think, seeking to become and live an engaged and fulfilled life, that I am interested and motivated in having this conversation with you.

With that in mind, the problem, it seems, that we have currently in the 21st century is what Max Tegmark and others have called the race between the power of our technology and the wisdom with which we manage it. I’m basically interested in discussing and exploring how to grow wisdom and about how to grow into and develop full human potential so that we can manage powerful things like technology.

Stephen Batchelor: I love the quote. I think I’ve heard it before. I’ve come across a number of similar statements that Einstein has made over the years of his life. I’ve always been impressed by that. As you say, this is a man who’s not regarded remotely as a religious or a spiritual figure. Yet, obviously, a highly sensitive man, a man who has plumbed the depths of physics in a way that has transformed our world. Clearly, someone with enormous insight and understanding of the kind of universe we live in. Yet, at the same time, in these sorts of passages, we realize that he’s not just the stereotyped, detached scientist separated out from the world looking at things clinically and trying to completely subtract his own subjectivity.

This, I think, is often the problem with scientific approaches. The idea is that you have to get yourself out of the way in order to somehow see things as they really are. Einstein breaks that stereotype very well and recognizes that if we are to evolve as human beings, and not just as scientists who gain increasingly clear and maybe very deep understandings of the workings of the universe, something else is needed. Of course, Einstein himself does not seem to really have any kind of methodology as to how that might be achieved. He seems to be calling upon something he may consider to be an innate human capacity or quality. His words resonate very much in terms of certain philosophies, certain spiritual and religious traditions, but we don’t really see any kind of program or practice that would actually lead to what he recognizes to be so crucial.

I found the final comment he makes a bit deflating: he seems to think it has to do with inner security, which is a highly subjective and, I would think, rather limited goal to achieve, given what he’s just laid out as his vision.

Lucas Perry: Yeah, that’s wonderfully said. You can help unpack and show the depths of your skepticism, particularly about Buddhism, but also your interest in creating what you call a secular Dharma. We’re on a planet with ancient wisdom traditions, which have a lot to say about human subjectivity. Einstein is setting up this project of seeing through what he says and takes to be a kind of delusion of consciousness: the sense of experiencing oneself and one’s thoughts and feelings as something separate, and this restricting us to caring most about our personal desires and our affection for those nearest to us.

I mean, here, he seems to be very explicitly tapping into concepts which have been explored in Buddhism, like non-self and emptiness and non-duality. Hey, this is post-podcast Lucas here, and I just wanted to try and explain a few terms that were introduced here, like “non-self,” “emptiness,” and “non-duality.” I’ll do my best to explain them, but I’m not an expert, and other people who think about this kind of stuff might have a different take or give a different explanation. So, I think it’s best to first think of the universe, from 13.7 billion years ago onward, as an unfolding, continuous process; contextualized within the unfolding of the universe is the process of evolution, which has led to human beings. There’s this deep, grounded connection of the human mind, human nature, and just being human with the very ground of being and the unfolding of everything. Yet, in that unfolding, there is the construction of a dualistic world model, which is very fitness enhancing, where you are constructing a model of self and a model of the world. This self-other dualistic conceptual framework, born of the evolutionary process, is fitness enhancing, it’s helpful, and it’s useful, yet it is a fabrication, an epistemic construction imposed upon a process for survival reasons. And so non-duality comes in and simply rejects this dualistic construction and says things are not separate, things are not two: one, undivided, without a second. This means that once one sees all this dualistic construction and fabrication as it is, then one enters what I might call something more like a don’t-know mind, where one isn’t relying on this conceptual, dualistic fabrication to ultimately know, or one doesn’t take it as what reality ultimately is, as divided into all of these things like self and other and tables and chairs and stars and galaxies. Non-self and emptiness are both very much related to this.

Non-self is the view that the self is also this kind of construction or fabrication, which under experiential and conceptual analysis just falls apart, revealing that there is no core or essence to you, that there is nothing to find, that there is no self, but merely the continual unfolding of empty, ephemeral, conditioned phenomena. Emptiness here means that the self and all objects that you think exist are empty of intrinsic existence and are merely ephemeral appearances based on causes and conditions; when those causes and conditions no longer sustain the thing’s appearing in its current form, the thing dissolves. In non-duality, there’s this sense of no coming and no going; there’s no real start, beginning, or end to anything, just a continual unfolding process, where something like birth and death are abstractions or constructions imposed on a non-dual, continuous process. And so the claims of non-self, emptiness, and non-duality are both ontological claims about how the universe actually is and experiential claims about how we can shift our consciousness into a more awake state, where we have insight into the nature of our experience and the nature of things, and where we’re able to shift into a clear, non-conceptual seeing of something like non-dual awareness, or emptiness, or non-self. This might entail something like noticing the sense of self becoming an object of witnessing: there’s no longer an identification with self, and so there’s space from it. There might eventually be a dropping away of the sense of self, where all that’s left is consciousness and contents without a center, where there isn’t a distance between witnesser and what is perceived. All there is is consciousness, and everything is perceived infinitely close, where everything is basically just made of consciousness and where consciousness is no longer structured by this dualistic framework.

And so a layer of fabrication drops away, and there’s just consciousness and this deep sense of interconnectivity and being. This is what I think Einstein is pointing to when he says that our experience of ourselves as separated from the rest of the universe is a kind of optical delusion of our consciousness. I think he is pointing towards the constructed sense of self and the dualistic fabrication of a self-other world model populated by objects and things and other people, and towards how we buy into these constructions as a kind of ultimate representation of how things are, as if there were all these things with intrinsic, independent existence, with a kind of essence, rather than this non-dual, undifferentiated, unfolding, continuous process, which can be said to be neither same nor different, and of which it can be said neither that there is a self nor that there is not a self. I think this is what he is pointing in the direction of, and why I point out that there are other wisdom traditions which have been thinking about and practicing cultivating these kinds of insights and awareness for many years. So, back to the conversation. As you said, he doesn’t have a practice or a system for arriving at an experiential understanding of these things. Yet, there are traditions which have long studied and practiced this project.

Stephen Batchelor: Yes, this is absolutely correct. Myself and many of my peers and colleagues and friends have spent their lives exploring these wisdom traditions. In my own case, this has been various forms of Buddhism primarily. I think we also find these traditions within our own culture. I think we find something very similar in the Socratic tradition. We find something likewise in the Hellenistic philosophies, which also recognize that human flourishing, which is a term I very much like, is essentially an ethical practice. It’s a way of being in which we take our own subjective assumptions to task.

We don’t just assume that everything we think and feel is the way things actually are, but we begin to look more critically. We pursue what Socrates calls an examined life. Remember, an unexamined life is not worth living, he said. Then, what is this examined life? Perhaps like yourself and others, I found, in a way, more richness in Asian traditions because they don’t just talk about these things, but they have actual living methodologies and practices that, if followed, can lead us to a radical change of mind and can begin to unfold different layers of human experience that are often blocked, ignored, and cut off from what we experience from moment to moment.

Lucas Perry: Yeah, exactly. There’s this valid project, then, of exploring, I think, the internal and the subjective point of view in a rigorous way, which leads to a project of something like living an examined life. From that perspective, one can come to experiential kinds of wisdom, and I think can get in touch with kinds of skillfulness and wisdom which an overreliance, or a sole reliance, on the conceptualization of the dualistic mind would fail at: things like compassion, or discovering something like Buddha nature, which I had been very skeptical of for a long time, but less so now, and heart-mind, and heart wisdom.

And that awakening is, I think, a valid project, and something that is real and authentic. I think that that’s what Einstein helps to explain. His credentials, I think, helped to beef this up a bit for people who may be skeptical of that project. I mean, I view this partially as, for thousands and thousands of years, people have been struggling just to meet their basic needs. As these basic needs, even just material needs, keep getting met, we have more and more space for authentic human awakening. Today, in the 21st century, we’re better positioned than ever to have the time, space and the information to live deeply and to live an examined life and to explore what is possible of the depths of human compassion and well-being and connection.

Thinking about the best moment of the best day of your life and if it’s possible to stabilize in that kind of introspection and compassion and way of being.

Stephen Batchelor: I broadly go along with exactly what you’re saying. It almost seems self-evident, I think, for those of us who have been involved in this process for a number of years. On the other hand, it’s very easy to sort of talk about emptiness and non-duality and Buddha nature and so on. What really makes the difference is how we actually internalize those ideas, both conceptually, since I think it is important that we have a clear, rational understanding of what these ideas convey, and also, of course, through actual forms of spiritual practice, by performing spiritual exercises, as we find already in the Greeks, that will hopefully lead to significant changes in how we experience life and experience ourselves.

What we’ve said so far is still leaving this somewhat at a level of abstraction. I’ve been involved in Buddhism now full time for the last 45 years. If I’m entirely honest with myself, I have to acknowledge that at many levels, my consciousness seems to be much the same. There are moments in my practice in which I’ve gained what I would consider to be insights. I like to think that my practice of Dharma has also made me more sensitized, more empathetic to the needs of others. I like to think that I’ve committed myself to a way of life in which I put aside the conventional ambitions of most people of my generation.

Yet, I can also see that I still suffer from anxieties and high moods and low moods and I get irritated and can behave very selfishly at times. I see, in many ways, that what this practice leads me to is not a transcendence of these limiting factors that Einstein refers to, but let’s say a greater clarity and a greater humility in acknowledging and accepting these limitations. That I think is, if anything, where the practice of awakening goes to. Not so much to gaining a breakthrough into some transcendental reality, but rather to gain a far more intimate and real encounter with the limitations of one’s own experience as a human being.

I think we can sometimes lose touch with the need for this moment to moment humility, this recognition that we are, I think, to a considerable degree, built as biological organisms to maintain a certain kind of consciousness that will, I suspect, be with us largely until we die. I would say the same also about the Buddhist teachers that I’ve met over the decades: however insightful their teachings may be, and however fine examples they are of what a human life can be, I see this more clearly the more time I spend with them on a day to day basis. I have done so with Tibetan lamas and with Zen masters; I’ve got to know them quite well.

I discover not so much a person who is almost as it were out of this world, but rather someone who carries still with them the same kinds of human traits and quirkiness and has good days and bad days like the rest of us. I would be cautious, in a way, setting up another kind of divide, another kind of duality between the unenlightened and the enlightened. One of the things I like very much about Zen is that it’s quite conscious of this problem. One of my favorite citations is from the Platform Sutra of the Sixth Patriarch called Hui-neng, who says, “When an ordinary person becomes awakened, we call them a Buddha. When a Buddha becomes deluded, we call them an ordinary person.”

This is a way of thinking that is perhaps closer to Taoism than it might be to the Indian Buddhist traditions, which, to me, tend to operate rather more explicitly within the domain of there being enlightenment on the one hand and delusion on the other, awakening on the one hand, non-awakening on the other. Then, there’s a kind of step-by-step path that leads one from unenlightened to enlightened. The Zen tradition is suspicious of that kind of mental description, that kind of frame, and recognizes that awakening is not something remote or transcendent. Awakening is a capacity that is open to us in each moment.

It’s not so much about gaining insight into some deep ontology, into the nature of how things really are. It’s understood far more ethically. In other words, if I respond to life in a more awake way, that can occur for me, as well as for the Buddha or anyone else, in any given moment. In the next moment, I may have slipped back into my old neurotic self and recognize, in fact, that I don’t respond appropriately to the situations I face. I’m a little bit wary of this language. A lot of the language we find in the Indian Buddhist traditions, which we find also in Advaita Vedanta and so on, does tend to set up a kind of polarity. That’s kind of unavoidable, because given what we’re talking about, we need to have some idea of what it is that we are aspiring to achieve.

The danger is we set up another dualism and that’s problematic. I feel that we need a discourse that is able to affirm awakening and enlightenment in the midst of the every day, in the midst of the messiness of my own mind now. The challenge is not so much to become an enlightened person, but it’s to live a more awake and responsive existence in each situation that I find myself dealing with in my life. At the moment, my practice is talking to you, is having this conversation. What matters is not how deeply I may have understood emptiness, but how appropriately, given our conversation, I can take this conversation forward in a way that may be a benefit to others.

I might even learn something myself in this conversation; maybe you will too. That’s where I would like to locate the idea of awakening, the idea of enlightenment: in how I respond to the given situation I find myself in at any moment.

Lucas Perry: Yeah. One thing that comes to mind that I really like and I think might resonate with you, given what you said, I think, Ram Dass said something like, “I’ve become a connoisseur of my neuroses.” I think that there’s just this distinction here between the immersion in neuroses, and then this more awake state with choice where you can become a connoisseur of your neuroses. It’s not that you’ve gotten rid of every bad possible thought that you can have, but that there’s a freedom of non-reactivity in relation to them. That gives a freedom of experience and a freedom of choice.

I think you very much well set up a pragmatic approach to whatever awakening might be. Given the project of living in a world with many other actors with much pain and with much suffering and with much ignorance and delusion, I’m curious to know how you think one might approach spreading something like a secular Dharma. There’s kind of two approaches here, one where we might make the wise more powerful and one where we might be making the powerful more wise.

If listeners take anything away today, in terms of wisdom, how would you suggest that wisdom be shared and embodied and spread in the world, given these two directions of either making the wise more powerful or making the powerful more wise?

Stephen Batchelor: Okay. To answer that question, I think, I have to maybe flesh out more clearly what I understand by awakening. My understanding of awakening is rooted in the conclusion to the Buddha’s first discourse, or what’s regarded as the Buddha’s first discourse. There, he says very clearly that he could not consider himself to be fully awake until he had recognized, performed and mastered four tasks. The first task is that of embracing life fully. The second task is about letting reactivity be, or letting it go, I think letting it be is probably better. The third is about seeing for oneself, the stopping of reactivity, or seeing for oneself a nonreactive space of mind. Then, from that nonreactive space of mind, being able to respond to life in such a way that you actually open up another way of being in the world.

That’s called, as the fourth task, creating a path or actualizing a path. That path is understood not just as a spiritual path, but one that engages how we think and how we speak and how we work and how we are engaged with the world. What’s being presented here as awakening is not reducible to gaining a privileged insight into, say, the nature of emptiness or into the nature of the divine, or into something transcendent or into the unconditioned. The Buddha doesn’t speak that way. These early Buddhist texts on which I base what I’m saying have somehow been relegated to the sidelines. Instead, we find a Buddhist tradition today speaking of awakening, or sometimes they’ll use the word enlightenment, as basically a kind of mystical breakthrough into seeing a transcendent or truer reality, which is often called an absolute truth or an ultimate truth.

Again, these are words the Buddha never used. In my own approach, I’m very keen to try to recover what seems to me to be this earlier understanding, and I admit that this is my own project; I don’t think it’s a widespread view. I find this very, very helpful, because awakening is now understood not as a state that some people have arrived at and other people haven’t, which I think sets up too harsh a split, a duality, but rather, awakening begins to be understood as a process. It begins to be understood as part and parcel of how we lead our lives from moment to moment in the world. We can look at this in four phases: embracing life, letting our reactivity be, seeing the stopping of reactivity, and then responding in an appropriate way.

In practice, this process is going on so rapidly that it’s effectively a single task. It’s a single task with what we might call four facets. Here, I would come to what you talk about as wisdom. Wisdom, in this sense, is not reducible to some cognitive understanding. It has to do with the way in which we engage with life as a whole, as an embodied, enacted person. Again, it reflects somewhat the Zen quotation I already mentioned. It has to do with the whole of ourselves, the whole of the way we are in the world as an embodied creature.

I feel that to make the wise more powerful, by wise, I would mean people who have actually given the totality of their life to this, not just in terms of years, but in terms of the varying skill sets that they have as human beings: their emotional life, their intuitive life, their physical life, their cognitive life, so that all of these elements begin to become more integrated, and the person becomes less divided between the spiritual part of themselves and the material part of themselves, for example.

The whole of one’s being is then drawn into the unfolding of this process within the framework of this fourfold task. I feel that if we are to make the powerful more wise, one would require the powerful not just to make changes in how they might think about things, or even to gain mystical insights, but actually to have the courage to embark on another way of being in the world at all levels. That is a much greater challenge, I feel. I think we also need the humility to recognize that although, as you say, in the 21st century we now have access to traditions of practice and philosophies, and we have leisure, we have the times and places to pursue these sorts of practices, we should be wary of the hubris of thinking that, by mastering these different approaches, we can thereby be in a far better position to solve the problems that the world presents to us.

I think that may be true in a very general sense. But I feel that what’s really called for is a fundamental change in our perspective, with regard to how we not only think of ourselves, but how we actually behave in relationship to the life that we share on this planet with other beings and this planet that we are endangering through our activities. The other dimension, of course, is this is not going to be something that any particular individual alone will be able to accomplish but it requires a societal shift in perspective. It requires communities. It requires institutions to seek to model themselves more on this kind of awake way of living, so that we can more effectively collaborate.

I think the Buddhists and the Advaitists and the Sufis and the Taoists and so on certainly have a great deal to offer to this conversation. I feel that if we’re really to make a difference, their insights have to be incorporated into a much wider and more comprehensive rethinking of the human situation. That’s not something I can see taking place in a short term. I feel that the degree of change required will probably require generations, if I’m really honest about this.

Lucas Perry: You see living an awake life to be something more like a way of being, a way of engaging with the world, a kind of non-reactivity and a freedom from reactivity and habitual conditioned modes of being and a freedom to act in ways which are ethical and aligned and which are conducive to living an examined and fulfilling life.

Stephen Batchelor: Exactly. I couldn’t have put it better myself, Luke. That’s kind of my perspective. Yes.

Lucas Perry: I wanted to put up a little bit of a defense here of insight because you said that it’s not about pursuing some kind of transcendent experience. Having insight can be such a paradigm shift that it is seemingly transcendent in some way. Or it can also be ordinary and not contain something like spiritual fireworks, but still be such a paradigm shift that you’ll never be the same ever again. I don’t expect you’ll disagree with this. In defense of insight into impermanence, it’s like our brains are hooked up to the world where there’s just all this form and sense data and we’re engaging with many things as if they were to bring us lasting satiation and happiness but they can never do that.

Insight into impermanence: I mean, impermanence, conceptually, is so obvious and trivial to everyone. Everything is impermanent; everyone understands this. But if, at a deep, intuitive, experiential, and non-conceptual level, one embodies impermanence, one doesn’t grasp or interact with the world in the same way, because you just can’t, because you see how things are. Then similarly, if one is living their life immersed in conceptual thought and ego identification, there is the capacity to drop back into witnessing the self, for that to dissolve and drop away, and then for all that remains to be consciousness and its contents without a center. Hey, this is post-podcast Lucas again, and I just wanted to unpack a little bit here what I meant by “immersion in conceptual thought” and “ego identification.” By ego identification, I mean this kind of immersion in and identification with thought, where one generates and reifies the sense of self as being in the head, as the haver of an experience, as someone having an experience in the head, thinking the thoughts, and executing all of the commands of the mind and body. This stands in distinction with a capacity to unhook awareness from that process and to witness the self as an object of perception, an object to be witnessed, rather than as the center of identity, and for that to then create a distance between the foundation of witnessing and the process of being identified with the ego, or of reifying and constructing an ego, which is the beginning of shifting towards a perception of consciousness and its contents without a center. So, back to the conversation. That kind of insight, and the stabilization in that, can be a foundation for loving-kindness and openness and connection.

People experience an immense sense of relief, like, oh, my God, I was a poor little self in this world, and I thought I was going to die. Now, I've had this insight into non-self, which can be practiced and stabilized, even in a normal day-to-day life, through the practices of, for example, Dzogchen and Mahamudra. This is just my little defense of insight as, I think, also offering a lot of freedom and capacity for an authentic shift in consciousness, which I think is part of awakening, and that awakening is likely not just a way of being in the world. Do you have any reactions to that?

Stephen Batchelor: I have plenty of reactions to that.

Lucas Perry: Yeah.

Stephen Batchelor: I don’t disagree with you, clearly. Of course, there are moments in people’s lives, whether they’re practicing Buddhism or whatever it is. Sometimes they’re not doing any kind of formal spiritual practice whatsoever, but life itself is a great teacher and shows them something about themselves or about life that hits them very powerfully, and does have a transformative effect. Those moments are often the keynotes that define how we then are in the world. Perhaps my work has tended recently, at least, to be somewhat of a reaction against the over-privileging of these special moments,

and an attempt to recover a much more integrated understanding: being awake is about being awake moment to moment in our daily lives. Let me give you a couple of examples. With hand on heart, I can say that I have had one experience which would probably fit your definition of what we might call a mystical experience. This occurred when I was a young Tibetan Buddhist monk; I was probably 22 or 23 years old. And I was living in Dharamsala, where the Dalai Lama has his residence. I was studying Tibetan Buddhism. I was very deeply involved in it. One day, I was out in the forest near the huts where I lived and I went to get some water. Coming back to my hut with a bucket full of water, I suddenly was stopped in my tracks by this extraordinary realization that there was anything at all, rather than just nothing, a sense of total and utter astonishment that this was happening.

It was a moment that maybe lasted a few minutes in all of its intensity. What it revealed to me was not some ultimate truth, and certainly not anything like emptiness or pristine awareness. What it revealed to me was the fundamentally questionable nature of experience itself. That experience, and here I will fully accord with what you have said, has changed my life; it continues to do so now. Yes, I do feel very strongly that deep personal experience of this sort can have a profoundly transformative effect on the way we then inhabit our value system, what we regard as really being important in life.

As for impermanence, for me, the most effective meditations on impermanence were those on death. Again, this is from my Tibetan Buddhist training. Impermanence is of key significance regarding the fact that I am impermanent, that you are impermanent, that the people I love are impermanent. When I was a young monk, every day, I had to spend at least 30 minutes meditating on what they call in Tibetan chiwa’i mitagpa: the impermanence which is death. You contemplate reflectively the certainty of death, the uncertainty of its time, and the fact that since death is certain and its time is uncertain, how should I live this life? The paradox I found with death meditation was not that it made me feel gloomy or pessimistic or anything like that.

In fact, it had the very opposite effect. It made me feel totally alive. It made me realize that I was a living being. This kind of meditation is just one I did for many, many years. That likewise had a very transformative effect on my life, and it continues to do so. Yes, I agree with you. I think it is important that we experience ourselves, our existence, our life, our consciousness, from a perspective that suddenly reveals it to be quite other than what we had thought. It’s more than just as we thought; it’s also as we felt. Once these kinds of insight become internalized and they become part and parcel of who we are, that, I feel, is what contributes very much to enabling this being-in-the-world version of awakening.

In the Korean Zen tradition in which I trained, they used to speak about sudden awakening followed by gradual practice. In other words, they understood that this process we’re involved in, what we loosely call the spiritual path or awakening, is comprised both of moments of deep insight that may be very brief in their duration, but that if they’re not somehow followed through with a gradual, moment-to-moment commitment to living differently, they can have relatively little impact. The danger I feel of making too big a deal out of these moments of insight is that they come to be regarded as the summum bonum of what it’s all about.

Whereas, I don’t think it is, I think, that they are moments within a much richer and more complex process of living. I don’t think they should be somehow given excessive importance in that overall scheme of things.

Lucas Perry: That makes a lot of sense. This very much rings true from the conversation you had with Sam Harris and the back and forth you had there: you put a very certain kind of emphasis on the teachings. Many of the things that I might respond with over the next hour, you would probably agree with. You see them as not the end goal; you would deemphasize them in relation to living an authentic, fulfilled, awakened life as a mode of being.

Stephen Batchelor: I think that is broadly correct. I honor the insights that come from all traditions, really. I don’t think Buddhism has a monopoly on these things at all.

Lucas Perry: You’re talking about death. I’ve been listening to a lot of Thích Nhất Hạnh recently, and even insight will change one’s relationship with death. He talks a lot about no coming, no going, no birth, no death, and the wave-like nature of things. Insight into that, plus what I find in this daily mahamudra practice of dropping back to pristine awareness, or what I think in Dzogchen they call Rigpa, and glimpses of non-duality and the nature of things. All this coming together can lead to a very beautiful life where I feel like these peak spiritual experience moments are part of the ordinary, part of the mystery, and part of checking and seeing how things are.

I guess the last thing I’m trying to emphasize here is that I think the project does lead to a totally new way of being, new paradigm shifts, much more well-being and capacity for loving-kindness, and discovering parts of you that you never knew existed or were possible. Like, if you’ve spent your whole life using conceptual thought to know everything, and you’ve been within ego identification, and you didn’t know that there was anything else, changing from that and arriving at something like heart-mind changes the way you’re going to live the whole rest of your life, with much greater ease and well-being.

Hey, it’s post-podcast Lucas back again, and I just wanted to define a few words here that were introduced that might be interesting and helpful to know. The first two are pristine awareness and rigpa; rigpa is the Dzogchen word for this. I think they both point toward the same thing, which is this original ground of consciousness, this pure witnessing or original wakefulness of personal experience: all content and form, all perceptions, even the sense of self and the experience of self, appear in relation to this witnessing or pure knowing, which is the ground of consciousness. One can be caught up in ego-identification, or it can be obscured and lost in thought or anything like this, but this pristine awareness or rigpa, this witnessing, is always there underneath whatever form and phenomena are obscuring it. This is related to glimpses; I mentioned glimpses here, and these are pointing-out instructions for noticing this aspect of mind. And if that’s something that you’re interested in doing, I highly recommend the work and books of Loch Kelly. He teaches very skillful pointing-out instructions which, I think, help to demonstrate and point toward pristine awareness or rigpa, which he calls awake awareness. And I also brought up heart-mind here. Heart-mind is something that one arrives at by unhooking from conceptual thinking and dropping awareness down into the center of the chest, where one finds non-conceptual knowing, effortless loving-kindness, a sense of okay-ness, non-judgment, and a place of continuous intuition from which to operate. It can use conceptual dualistic thought, but doesn’t need to, and also understands when conceptual dualistic thought is useful and when it is not. So, alright, back to the episode.

Stephen Batchelor: I don’t disagree with what you have been saying. I feel somehow that we haven’t really found the right language to talk about it in. We’re still falling back on ideas that are effectively jargon terms, in many ways, that people who are involved in Buddhism and Eastern spirituality will understand. If you haven’t had exposure to those traditions, a lot of this, I think, will sound a little bit obscure, maybe very tantalizing. So many of these words are never really very clearly defined.

I feel that there’s a risk there that we create a kind of a spiritual bubble, in which a certain kind of privileged group of initiates, as it were, are able to discuss and talk around these things. It’s a language that as it stands at the present, I think, excludes a great many people. This is what brings me to my other point. Again, you were talking in terms of well-being, in terms of living at ease, in terms of being more fulfilled, but what does that mean? Words that haven’t yet come up in our conversation are those of imagination, those of creativity, we haven’t touched upon the arts.

I’m always rather surprised, to be honest, in these kinds of discussions to hear very little about the arts and imagination and creativity. For myself, my practice is effectively my art. I do work as an artist. That’s been my vocation since I was a teenager. It got sidetracked by Buddhism for about 20 years. As for the creative process: you were saying that we come to experience ourselves in ways we’ve never suspected before, that we have a much less central insistence on our ego, that we’re less preoccupied with concepts. This is all very good. To me, though, that’s only, in a way, establishing a foundation or a ground for us to be able to actively and creatively imagine another world, another future, another way in which we could be.

For me, ethics is not about adhering to certain precepts, it’s about becoming the kind of person one aspires to be. That, you can extend socially as well what kind of society do I wish there to be on this earth?

Lucas Perry: There’s this emphasis you come at this with, it’s about this mode of being and acting and living an ethical life, which is like awakened being. Then, I’m like, well, the present moment is so much better. There’s this sense where we want to arrive in the present moment without being extended into the past or the future, experientially, so that right now is the point. Also, you’re emphasizing this way of being where we’re deeply ethically mindful about the kind of world that we’re trying to bring into being.

I just want to, as we pivot into your article on extinction, unify this. The present moment is the point and there’s a way to arrive in it so fully and with such insight that you’re tapping into depths of well-being and compassion that you always wish you would have known were there. Also, with this examined and ethical nature, where you are not just sitting in your cave, but you’re helping to liberate other people from suffering using creativity to imagine a better world and helping to manifest moments to come that are beautiful and magnificent and worthy of life. That doesn’t have to mean that you’re an anxious little self caught up in your head worried about the future.

Stephen Batchelor: I don’t actually believe in the present moment.

Lucas Perry: Okay.

Stephen Batchelor: Quite seriously, nor does Nagarjuna. I’ve never been able to find the present moment, I’ve looked and looked and looked a long time.

Lucas Perry: It’s very slippery, it’s always gone when you check.

Stephen Batchelor: Arguably, it’s only a conceptual device to basically describe what is neither gone nor what is yet to come. There’s no point, there’s no actual present moment, there is only flux and process and change. It’s continuous. It is ongoing. I’m a little bit wary of actually even using the term present moment. I would use it as a useful tool in meditation instruction, come back to the present moment, everyone knows pretty much what that means.

I wouldn’t want to make it into an axiom of how I understand this process as something highly privileged and special. It’s to me more important to somehow engage with the whole flow of my temporality with everything that has gone, with everything that is to come and I’d rather focus my practice really within that flow, rather than singling out any particular moment, the present or any other, as having a kind of privileged position. I’m not so sure about that.

Lucas Perry: Okay.

Stephen Batchelor: Also, creativity, I don’t think is just some sort of useful way whereby we might think of a better world in the future. To me, creativity is built into the very fabric of the practice itself. It’s the capacity in each moment to be open to responding to this conversation, for example, in a way that I’m not held back by my fears and my attachments, and so on and so forth, but have found in this flow an openness to thinking differently, to imagine differently, to communicate, to embody what I believe in ways that I cannot necessarily foresee that I can only work towards.

That’s really where I feel most fully alive. I’d much rather use that expression, a sense of total aliveness. That’s really what I value and what I aspire to, is what are the moments in which I really feel that I’m totally alive? That’s what is to me of such great value. I’m also not sure that by doing all these practices that you find deep happiness and so forth and so on. I would not say that for myself. I’ve certainly experienced periods of great sadness, sometimes of something close to depression, anxiety. These, again, are part and parcel of what it is to be human.

I like Ram Dass’s expression, becoming a connoisseur of one’s neuroses. I think that’s also very true. I’m afraid that the language of enlightenment and so forth often tends to give you the impression that if you get enlightened, you won’t feel any of these things anymore. Arguably, you’ll feel them more acutely. I think, particularly as we talk about compassion or loving-kindness or bodhicitta, we are effectively opening ourselves to a life of even greater suffering. When we truly empathize with the suffering of maybe those close to us, or the suffering that we are inflicting upon the planet, this is not something that is going to make us feel happy or even at ease. Hey, it’s post-podcast Lucas here, and I just wanted to jump in to define a term that Stephen brings in, which is bodhicitta: a mind that is striving for awakening or enlightenment for the benefit of all sentient beings, so that they also achieve freedom, awakening, and liberation from suffering. Alright, back to the conversation.

I feel that these kinds of forms of compassion are actually inseparable from experiencing a deep pain, something that’s very hard to bear. I’m afraid that that side of things can easily be somehow marginalized in favor of these moments of deep illumination and insight and so forth and so on.

Lucas Perry: Yeah. I mean, pain and pleasure are inevitable. I think it’s very true that suffering is optional and … Okay, yeah.

Stephen Batchelor: Again, what you’ve just said is one of the cliches that we get a lot. A lot of this has come from out of the mindfulness world. The pain is somehow unavoidable but suffering is optional. I find that very difficult to understand.

Lucas Perry: The direction that I’m going in is that there’s this kind of loving-kindness that is always accessible, and I think a fundamental sense of okayness, so that there can be mental anguish and pain and all these things, but they don’t translate into suffering, where I would define suffering as the immersion inside of the thing. If there is always a witnessing of the content of consciousness from, for example, heart-mind, there is this okayness and, at worst, maybe bittersweet sadness and compassion, which transforms these things into something that is not what I would call suffering.

You also gain the degree of skillfulness to work with the mud of pain and suffering to transform it into what Thích Nhất Hạnh would call a lotus.

Stephen Batchelor: Again, we might be on a semantic thing here.

Lucas Perry: I see.

Stephen Batchelor: If we go back to the early Buddhist texts, or most Buddhist texts, they have this one word, dukkha; they don’t have a separate word for pain or for suffering. This distinction is an intervention that’s come along more recently, in the last 20 or 30 years, I think. There is dukkha. The first task of the four tasks is to embrace dukkha. Dukkha includes pain, it includes suffering, it includes anything. It has to do with being capable of embracing the tragic nature of our existence. It has to do with being able to confront and be open to death, to sickness, to aging, to extinction, as we’re going to go on and talk about.

I find it difficult personally, to somehow imagine we can do all of that without suffering. I don’t know what you mean by suffering but it looks to me as though you’ve defined it in a fairly narrow way, in order to separate it off from pain. In other words, suffering becomes mental anguish. They often talk of this image of the second arrow. The first arrow is the physical pain. Then, you add on to that all of the worries about it and all of the oh, how poor me, and all that kind of stuff. That’s psychologically true. I accept that.

That’s a way too narrow way of talking about dukkha. There is a grandeur and a beauty in dukkha. I know that sounds strange. For me, it’s really, really important not to feel that these spiritual practices can somehow alleviate human suffering in the way that it’s often presented, that we all become smiling and happy. You get that too in Thích Nhất Hạnh’s approach: there’s a kind of saccharine sweetness in it, which I find kind of false. That’s one of the reasons I also like a lot of the Christian tradition; the image of Christ on the cross is not the image of a happy, at-ease kind of person. There’s a deep tragedy in this dimension of love that I’m very wary of somehow discounting in favor of a kind of enlightened mind that really is happy and at ease all the time.

Of all the different teachers and people I’ve met, I’ve never met anyone like that. It’s a nice idea. I don’t know whether it’s terribly realistic or whether it actually corresponds to how Buddhists, Hindus, Jains, and others have lived over the last centuries.

Lucas Perry: All right. Your skepticism is really refreshing and I love it. I wish we could talk about just this part forever, but let’s move on to extinction. You have an article that you wrote called Embracing Extinction. You talk a lot about these three poisons leading to everything burning, everything being on fire. Would you like to unpack a little bit of your framing here for this article? How is it that everything in the world is burning? What is it that it’s all consuming? How does this relate to extinction?

Stephen Batchelor: Okay. I start this article, which was published in the summer edition of Tricycle, this year, by quoting the famous statement of the Buddha, we find in what’s called the Fire Sermon, where he says, the world is burning, the eyes are burning, the ears are burning, et cetera, et cetera, the senses are burning, the mind is burning. Then he asked, burning with what? The answer is burning with greed, burning with hatred, burning with confusion. That’s his way of speaking about what I would call reactivity.

In other words, when the organism encounters its environment, it’s a bit like a match encountering a matchbox. That causes certain reactive patterns to flare up. These are almost certainly the result of our evolutionary biology: we have managed to survive as a race, as a species, so successfully because we’ve been very good at getting what we want. We’ve been very good at getting rid of things that have gotten in our way. We’ve been very good at stabilizing our sense of me and us at the expense of others, by having a very strong sense of ego, a very strong sense of me.

These are understood as fires in the earliest texts, and then later Buddhism begins to think of them more as toxins, as viruses, as poisons that contaminate the whole system, as it were, once they have taken hold. What I find quite striking is that this metaphor of fire was probably spoken by the Buddha about 500 B.C., a long, long time ago. Yet when we read it today, it’s very difficult not to hear it as a rather prescient insight into the literal heating up of the physical environment, through living a life of industrial technology, basically, whereby we have managed very successfully to develop, as we call it, industries and great cities and systems of transport and electricity, all this kind of stuff.

The consequence has been that we’re actually now poisoning the very environment that we depend upon in order to live. For that reason, I feel that there’s something in the Buddhist Dharma that recognizes the heating up that occurs when we lead a life that is driven by our reactive habits, our reactive patterns. The second of the four tasks is to let those be, is to let them go, is to find a way of leading a life that is not conditioned by greed and by hatred and by egoism and confusion. That’s the challenge.

Of course, on an individual level, we can do the best we can. But if it’s going to have any lasting impact on the condition of life on earth, then this has to be a societal, cultural movement. This comes back to something we already talked about before. If we’re going to make a difference to our future, if we’re going to stave off what might turn out to be rapid extinction, not only of other species but possibly even of the human species, and not within billions of years but possibly within the next century, then we have, as a human community, a global community, to really alter the ways in which we live.

I do think that spiritual traditions, Buddhism and others, offer us a framework in which we can work with these destructive emotions. Hopefully, in our own lives, maybe in the lives of those we’re able to affect closely, maybe in the lives of people who are listening to this podcast, this can ripple out and maybe, in the long term, diminish the kinds of powers that are at work, which in many ways seem unstoppable. At one level, I can be optimistic. I can see that we do have the understanding of what’s creating the problem. There is amongst more and more people, I think, a genuine commitment to lead lives that do not contribute to such a crisis.

I’m also aware, both in myself and in many others I know, that we are complicit in this process: each time we take a plane, each time we put on our heating system. I had a mango last night and I realized it came from the Ivory Coast. I mean, that’s entirely unnecessary, yet I still go out and get these things. Again, it takes humility to recognize that I can have all these very high-minded ecological ideas, but how am I actually changing the way I live? What am I doing in my life that will help others to likewise take those steps? I feel the power of evolution, the power of greed, hatred, and delusion, which I think are really just the instinctual forces that have gotten human beings to where they are, are very, very forceful.

They’re the armies of Mara, the Buddha used to call them. He says, there’s nothing in this world as powerful as the armies of Mara. Mara being the demonic or the devil. I wonder for many reasons, whether in fact, we are capable as a human community of restraining such instincts and impulses. I hope so. I’m not totally optimistic.

Lucas Perry: Right. Wisdom, as we would have understood it from the beginning of our conversation, would be an understanding of these three poisons of hate, greed, and delusion. It’s coming to understand them from this mind of non-reactivity and awareness: one can see their arising and, by witnessing, disidentify with them, and then have choice over what qualities will be expressed in the world. One thing that I really liked about your article was how you talk about this problem-solving mode of being. Many of our listeners, myself included, and I think this especially comes from the computer science mindset, have this very strong reliance on conceptual dualistic thought as the only mode of knowing.

From the head, one is embedded in this duality with the world, where one is in problem-solving mode. You talk about how the world becomes an object of problems to be solved by conceptual thinking. This isn’t, as you say, to vilify conceptual thinking or to vilify and attack something like technology, but to become aware of where it is skillful and where it is unskillful, and to use it in ways which will bring about better worlds. I’m wondering if you can help to articulate the danger of being in the problem-solving mode of being, where we lack connection with and interdependence with the outside world, where there’s perhaps a strong sense of self arising from the problem-solving mode of being.

Just to finish this off, I’m quoting you here, you say, “Such alienation allows us to regard the world either as a resource for the gratification of our longings or as a set of problems to be solved for the alleviation of our discontents.”

Stephen Batchelor: Yes, okay. To me, this is a very important point. I’m inspired in this thinking by Martin Heidegger, who’s a very controversial thinker, but someone who, I feel, did have some considerable insight into this process long before anybody else. I cite him in the article. His point, which I completely agree with, actually, is that the problem with technology is not the technological machines and computers and so forth in themselves, but the mindset that, in a way, justifies and enables those kinds of technological behaviors to happen.

As you said, this is effectively a mindset that is cut off from the natural world. I think we can see this beginning in about the 18th century with Descartes and others, whereby we set up the idea that there is a world out there and that there is an internal subject, a consciousness that is able to distance itself from the natural world in order to have the objectivity and the clarity to be able to then manipulate it to suit our particular desires and to ward off our particular fears. Now, one of the things that often disturbs me is that this technological language is often used to describe these spiritual practices as spiritual technologies.

This is a term I hear quite a lot, actually, or rather, our unthinking and uncritical use of the word technique: the technique of mindfulness, the techniques of meditation, meditational techniques. As long as we’re not thinking critically around that term, technique, I think very often we are unconsciously perpetuating precisely the distinction that, in another part of our mind, we’re trying to overcome, namely this notion of separation. We see this in meditation: if you see your mind as it is, if you recognize what the destructive emotions are, then you can get to the root of them, get rid of them, and then you’ll be happy.

That, again, carries with it a certain mindset, which is so much a part not only of our modern western culture but, I feel, of the human condition. I think we’re very deeply primed to think of the world as something out there and ourselves as something in here. We find in eastern religions, for example, the idea of rebirth: that when we die, we don’t really die, that our mind will sort of go on somewhere else, which again, I think, reinforces this notion that there is a duality. There’s a spiritual inside and there is a material outside, and that’s just simply the way things are. Many of the people who teach Dzogchen and Vipassana and Mahamudra believe very strongly in there being a mind that is not part of the physical world, that somehow transcends the physical world, and that gives us the opt-out clause: that when we die, we don’t really die.

Something mysterious will carry on. To that extent, I feel that Buddhism, and I’ll stick to Buddhism because it’s the one I know, can actually, again, reinforce this technological mindset as an inner technology. I think that’s a very dangerous idea. If I go back to that experience I had in the woods in Dharamsala when I was 22 years old, it was that idea that was really overthrown. It was a recognition of the mystery that I am part and parcel of; I cannot meaningfully separate my experience from what is going on around me. Again, it’s easy to say that; it’s another thing altogether to really feel it in your bones.

I think that requires a lifelong practice, a refinement of sensitivity. It also requires, I think, a much more critical way of thinking about so many of the ideas that we take on board without really examining whether they are in fact tacitly reinforcing certain mindsets that we would probably not be happy to endorse. I think all of this goes together. If we are to engage with this environmental crisis, which undeniably is the consequence of our industrial technologies, then we have to also see to what extent we are complicit, not just as consumers buying mangoes from the Ivory Coast, but also as subjects: as subjective, conscious beings who are at one level still buying into the mind-matter split.

I get into a lot of trouble with Buddhists because I reject the idea of reincarnation, that the mind goes on somewhere after death, precisely because I feel it is a dualism that actually undergirds our sense of the core difference, I would say, that separates us from being participants in the natural world. I cannot think of birth, sickness, aging, and death as problems to be overcome. Yet that is quite clearly the goal of Buddhism: to bring about the end of suffering, which doesn’t mean just the ending of mental anguish, which is in a sense just scratching at the surface. It’s the ending of birth, the ending of sickness, the ending of aging, and the ending of death. It’s a total transcendence of embodied life.

For this reason, I feel that it’s very helpful to replace the idea of solving a problem with the idea of penetrating a mystery. Birth, sickness, aging, and death are not problems; they’re mysteries, and they’re mysteries because I cannot separate myself from death. I am the one who is going to die. I cannot separate myself from aging; I am the one who is aging, and so on. To see that is to acknowledge that these things cannot be solved in the way that problems are solved. Likewise, confusion and greed and hatred are not problems to be solved, as Buddhism would often have us believe; they are mysteries too, because I am greedy, I am hateful, I am confused.

They’re part and parcel of the kind of being that evolution has brought about, of which I am one of millions of examples. That change, for me, was put into practice by doing Zen meditation, primarily the meditation of asking the question, what is this? That is a koan, or a hwadu, literally. It’s a practice that I trained in for four years in Korea. I did something like seven three-month retreats, just asking the question, what is this? In other words, getting myself to experience in an embodied, emotive way the fact that I am inseparable from the mystery that is life.

That, for me, is the kind of foundation that can lead us into a profoundly different relationship with the natural world. Again, I need to emphasize that this is working against very profoundly rooted human attachments and beliefs. I think the Four Noble Truths are, again, a problem solving paradigm. Suffering is the problem, ignorance is its cause; get rid of the cause, and you get rid of the problem. That’s Nirvana. I think that shows that this problem solving mindset is not just modern technology from the 18th century in Europe, as Heidegger seems to think. It goes back to something way deeper. It seems to be built into human consciousness itself, maybe even into the structures of our neurology. I can’t really speak with any authority on this; that’s my sense.

Lucas Perry: I can also hear the transhumanists and techno-optimists screaming, who want to problem solve the bad things that evolution has given us, like hate, anger and greed. You just find those in the genetics and the conditioning of evolution, snip them out, and replace them with awakening or enlightenment; that sounds much better. Sorry, can you more fully unpack this mystery mode of being and what it resolves, being embedded as a subjective creature who witnesses things as a mystery rather than viewing them as problems to be solved?

Stephen Batchelor: Again, my emphasis in what I just said was effectively to swing the pendulum back to a perspective that’s usually ignored. In practice, we need both obviously. It would be absurd to be just an out and out technophobe and to reject technology and to reject problem solving per se. That would be silly. Technologies have been enormously beneficial to us in so many ways. Look at the current pandemic, it’s quite amazing how we’ve been able to identify the virus so quickly, how we’ve been able to then proceed towards developing vaccines. This is all because of our extraordinary medical technologies. That’s great. I’ve no problem with that at all.

The real issue is when we start to think that a technological way of thinking is the only way of thinking, in the same way that, as you said earlier, we tend to think that conceptuality and duality and egos are the only ways of being. I think we have to add the technological mindset to that. It is just as much part of the problematic as greed, hatred and delusion; it is a form of delusion, a very primary form of delusion. In practice, the challenge is to differentiate those areas of our life in which it is useful to stand apart from, let’s say, a novel coronavirus and look at it under a microscope, which is very useful, very necessary, while not letting that way of thinking become normative for the whole way we lead our lives, and to open ourselves to the possibility of encountering the world and ourselves, our mental states, other people, not as problems but as mysteries.

To be able to value that dimension of our experience without reducing it to a technological kind of thinking, but to honor it for what it is, as something that cannot be captured by concepts, by language.

Lucas Perry: Yet, they go together. I think the distinction here is subtle. People might be reacting to this and maybe a little bit confused about the efficacy of relating to things as a mystery, as I am a little bit. Let me see if I am capturing this correctly. I can sense myself suffering right now taking the attitude and view that my neuroses and my sufferings and my pain are problems to be solved. It creates a duality between me and them. It creates this adversarial relationship. I’m not willing to be with them or to experience them. It’s this sense of striving or craving for them to go away or to be other than what they are. I think that’s why I am sensing myself suffering right now taking on the problem solving point of view.

If I disidentify with that and begin witnessing it, and I shift to these are mysteries, there is this sense of beauty that is compelling to you, which I can sense, and this kindness and compassion and ease towards them. This doesn’t mean that the problem solving sense goes away. There’s more of a dropping into heart, into being, into a willingness to be with them and explore them and to be skillful in their unfolding and change. That is in a sense still a kind of problem solving. I mean, there are parts of me that are unwanted, but there is a way of coming to issues which are unwanted and seeing them as mysteries and being with them in an experiential way, other than the industrial 20th century mode of ego-identification and conceptual-thought problem solving, which feels quite samsaric, like you’re in a hell realm of hungry ghosts in the mind, where everything needs to be different.

How do I think all the right thoughts to change the atoms to make the problems go away? Hey, it’s post-podcast Lucas here. By that I mean a conditioned pattern of thought which is motivated and structured by ignorance or confusion, as well as craving. And so I see this kind of structure also applying to the problem solving mode of thought, which has this element of craving and of the confusion of separateness that leads to this sense of suffering or dis-ease. It seems to me subtle like that. Does this capture what you’re pointing toward?

Stephen Batchelor: I think it is very subtle. Again, I would also concur that yes, there are parts of our inner life, our psychology, that can be effectively dealt with by inner techniques. Like, for example, if we’re extremely distracted all the time, if we train ourselves to be more focused, if we do concentration exercises, do Shamatha practice, over time, we can get better at not being distracted. That’s the application of a technique. There are aspects of spiritual practice, not a term I’m terribly fond of, but let’s stick with it.

I think the cultivation of mindfulness, the cultivation of concentration, the cultivation of application, for example, all of these things have a technical aspect to them. The same goes if I do therapy because I’ve got some neurosis like chronic anxiety. I’m not going to resolve that by saying, how mysterious, wow, this is wonderful, being in the mystery of anxiety. That’s not what I meant. What I meant is that that is something we can recognize as being a problem, a legitimate problem.

Lucas Perry: It’s unwanted.

Stephen Batchelor: Yeah, it’s unwanted. It’s unwanted for good reasons because it prevents us from living fully, from being fully alive. It constrains us from living. It keeps us locked up in a little bubble of our own neurotic thoughts. We can find technologies, psychotherapies, that if we apply them can actually effectively help get rid of that problem. Although, as both Freud and Jung were quite clear, the problem will not just evaporate, it’ll still be there, but we’ll be able to live with it better. Jung’s idea was that we get to the point where instead of the neurosis having you, you have the neurosis.

In some ways, I think, a lot of these neuroses are going to be around whether we like it or not. We can, in a way, have them rather than them having us. That is a form of therapy. That is a form of cure. When we come to these deeper spiritual values, let’s say wisdom, or compassion, or love, I find it very difficult to understand how these are qualities that we can arrive at by simply pursuing a set of technological procedures. I have witnessed this in myself and in others, colleagues, friends, monks and whatnot: people who’ve dedicated years and years and years to cultivating these qualities of mind, but in some ways don’t really seem to have become significantly wiser or more loving.

I really question whether wisdom, or love, is something that can be produced by becoming an expert in certain meditation techniques. I think these are qualities that are meta-technical; they’re beyond the reach of technique. I think suffering in the deepest sense of existential suffering, which is effectively what the Buddha is primarily concerned with, is birth, sickness, aging and death. Birth, sickness, aging and death, likewise, I do not think can be resolved by finding a solution that renders them no longer problematic. Even if you follow the traditional Buddhist way of describing this, that’s effectively what happens. It’s only when you’re dead that you are freed from birth, sickness, and aging.

Birth, sickness, aging and death are mysteries, but a great amount of what we suffer from within our inner lives, within our social lives, within our world, are problems that, if correctly identified as such, can be dealt with through applying techniques. The challenge, and this is, I think, where the subtlety you speak of comes in, is to be able to differentiate between what is actually a mystery and cannot be solved and what is a problem and can be solved. Western technological society particularly really has no room at all for this mystery focused way of life.

We might get a little bit of it in church on Sundays, but we seem to have almost disconnected from that whole side of life. I feel that one of the reasons we’re drawn to some of these eastern spiritualities is because they seem to bring us back to that quality of awareness. If you don’t like the word mystery, and a lot of people feel a little bit uncomfortable with it, just think of it, as I do a lot of the time, as the fact that we live in an incredibly strange world, that it is extremely weird that you and I are having this conversation.

I never cease to be utterly astonished and amazed by the most banal things. I think it’s being able to recover a sense of the extraordinary within the utterly ordinary that enables us to begin to have a very different relationship to the natural world that we’re threatening. I feel that we need to embody that sense of strangeness, and not only strangeness but also the recognition that I cannot separate myself from these things, I cannot distance myself from these things; they are infinitely close. That’s another definition of mystery.

Lucas Perry: The ground of your being.

Stephen Batchelor: Yeah, if you want. Remember that this is a term coined by Paul Tillich, the Christian theologian, in the 1960s. He understood the ground of being to be a groundless ground, which is beautiful. A ground which is, literally in German, an abyss. If we talk of the ground of being, be very careful not to make the ground too solid; it’s a ground which is no ground. That, again, is very close to Buddhist thinking.

Lucas Perry: Yeah, it seems subtle in that you’re still solving problems from this way of being. Embodying this experiential relationship and subjectivity in the world changes and modifies, perhaps, the three poisons, and it allows you to be more skillful, is what you’re saying. It’s not like you pretend problems don’t exist. It’s not like you stop solving problems. It’s that there’s a lot of skillfulness in the way that this modification of your own subjectivity leads to your own being in the world. I’d love to wrap up here with you then by talking about effective altruism in this field.

The Future of Life Institute is concerned with all kinds of different existential risks. We’re contextualized in the effective altruism movement, which is interested in helping all sentient beings everywhere, basically, by doing whatever is most effective in that pursuit, whatever leads to the alleviation of suffering and the promotion of well-being, perhaps narrowly construed, though that might not be the only ethical framework by which you might decide what would be effective interventions in the world. What this has led to is what we’ve already talked about here, which is this extremely problem solving kind of mind. People are very much in their heads and reliant on conceptual thought to basically solve everything.

Ethics is a problem to be solved. If you can just get everyone to do the right things, the animals will be better off, there will be fewer factory farms, we’ll get rid of existential threats, and we can work on global poverty and do things that are really effective. This has been very successful to a certain degree. With this approach, tremendous suffering has already been alleviated and hopefully still will be. But it lacks many of these practices that you talked about; perhaps it suffers from some of the unskillfulness of the problem solving mindset. There isn’t any engagement in finding the natural loving kindness which already exists in us, or in cultivating loving kindness in our activities.

There’s not much emotional connection to the beneficiaries of the altruism. There’s not sufficient, perhaps, emotional satisfaction felt from the good deeds that are performed. There are also lots of biases that I could mention that exist in the human species generally, like caring more about people who are close to us than about people who are far away. That’s a kind of bias. Children are drowning in shallow ponds all over the world, and no one’s really doing anything about it, shallow ponds being places of easy intervention, where you could easily save the child.

This conversation we’re having about wisdom means, I think, that if effective altruism were able to have its participants shift into a non-conceptual, experiential embodying of the kinds of insights or way of being that you might support, living an examined life as a method of awakening, with insight into emptiness and impermanence and not-self and suffering, then this could lead to transformative growth that might upgrade our ethics, our experience of the world, and our way of being, and could undo some of the biases which lead to ineffective altruism in the world.

I think that seeing through non-self really annihilates the bias of caring more about people close to you than about people far away from you, or far away in time, for those who are interested in existential threats. I’m curious if you have any reactions or perspective here about how the insights and wisdom of wisdom traditions, and perhaps a secular Buddhism and secular Dharma, could contribute to this community.

Stephen Batchelor: I have to confess that when confronted with these kinds of problems, the ones you just very clearly presented, I really see considerable shortcomings in both the Buddhist community and in this broader spiritual community that we might feel we’re part of. Because in the end, a lot of these practices are effectively things we do on our own, and we may do them within a small Sangha or small community. We may write books. We might get more and more people practicing mindfulness. That is all very well. But I’m not actually convinced that simply by changing individual minds, if we change enough of them, we’ll suddenly find ourselves in a much healthier world.

I think the problems are systemic. They are built into the structures of our human societies. They’re not intelligible purely as the sum of individual deluded or undeluded minds. I think we’re going into the territory of systems theory, whereby groups and systems do not behave in ways that can be predicted by analyzing the behavior of the individual members of the system, if I’m getting that correct. Again, I’ll just speak about the Buddhist community, but it probably applies to others as well.

I think the great challenge for the Buddhist community is that it has to come up with a social theory. It has to come up with a way of thinking that goes beyond the person and that is able to think more systemically. Now, there are Buddhist thinkers who are trying to do that; David Loy would be a very good example. Nonetheless, I don’t feel that we’ve really grappled with this question adequately, and I have to admit to my own confusions and limitations in this area too. I feel that my writing, which is my main work, is slowly evolving in this direction. What really pushed me in this direction was an essay by Catherine Ingram, who you may have heard of, called Facing Extinction. I borrowed from it, effectively; the title of my essay, Embracing Extinction, is an acknowledgement of my debt to her.

I had been part of the green movement for the last 30 odd years or so. It was only on reading Catherine’s piece that I suddenly was struck viscerally by the fact of our creating a world that could well lead to the extinction of all species within the next century or so. I think we thereby need to be able to respond to these dilemmas at the same pitch and at the same level. In other words, the visceral level in which these questions are beginning to emerge in ourselves. Again, I go back to Zen, one of the favorite sayings of my teacher was great questioning, great awakening, little questioning, little awakening, no questioning, no awakening.

In other words, our capacity to be awake is correlated to our capacity to ask questions in a particular way. If we have intellectual questions or let’s say, problem solving questions, then we can resolve those questions by coming up with solutions. They’ll be at one level operating at the same pitch. In other words, they are conceptual problems, they’re intellectual problems. Great awakening arises because we’re able to ask questions at a deeper level. If you take the Legend of the Buddha, the young prince who goes out of the palace, he encounters a sick person, an aging person and a corpse, and that is what triggers within him what in Zen is called great questioning, or great doubt, great perplexity.

The practice of Zen is actually to stay with those great questions and to embody them, to get them to actually penetrate into your flesh and bones. Within such a perspective, one then creates the conditions for a comparable level of visceral awakening. That, I feel, now has to be extended to a communal level. We have to, as a community, whether it’s a small, intentional community of Buddhists or a larger human community, be able to actually ask these questions at a visceral level. The kind of empathy you speak of, I feel, also has to come from this degree of questioning.

I think there’s often too much of an understandable sense of urgency in a lot of these questions. That urgency often just causes us to immediately try to go out and figure out what we can do. That’s probably a good thing, but we maybe do not allow enough time to really let these questions land at a deep visceral level within ourselves, such that answers can then begin to emerge from that same depth. That is the kind of depth, I feel, in which a more systemic philosophy, a social theory, maybe an economic theory, would have to be grounded if it is to guide us more effectively towards being effectively altruistic.

That’s really where I’m at with this at the moment. My work is evolving in what I’m writing now; for example, I’m writing a book called The Ethics of Uncertainty, where I’m trying to flesh this out more fully. This is where I feel my life is going. I don’t know whether I’ll live long enough to actually do more than climb a few more steps, if I’m lucky. I’m very moved by my colleagues and friends who were very much involved in the Extinction Rebellion demonstrations, particularly in London. I have a number of close friends who are very involved with that.

That, likewise, I have found a great source of inspiration and something to which I would very much hope for my writing and my philosophy to be able to contribute. That’s where I’m going. I think that humanity does face an existential crisis of a major order at the moment. I see all kinds of forces arrayed against us, not the least of which is the four year election cycle. I just wonder about national governments who are in effect beholden to electorates whose needs are probably largely: can I get work? Can my kids get a good school and a good health care system? That’s going to be the priority for most people, frankly.

It’s all very well talking about saving the environment, but when push comes to shove, again, your bias will be basically my kids, my immediate community, or my nation. We have to get beyond that. We can’t think in national terms anymore. There are transnational movements, and I think they certainly need to be developed and further strengthened. But can such transnational movements ever achieve the kinds of power that will enable changes to occur on a global level? I can’t see that happening in our current world, I’m afraid. I feel very distraught by that.

When you see some of these right wing populists, they’re effectively pushing back in the other direction, and they are, unfortunately, in the ascendant. I do not feel at all optimistic, given our situation. As a person who tries to lead a life governed by care and compassion and altruism, I cannot but seek ways of embodying those feelings in actions. As a writer, that’s what I’m probably best at doing. I’m very glad I’ve had the opportunity to speak to you and to the Future of Life community about my ideas, though I don’t know whether I really have a great deal to say that’s going to change the paradigm. I think all of us are working towards another paradigm altogether.

Lucas Perry: Thank you, Stephen. I’ve really, really enjoyed this conversation. To just close things off here, instead of making powerful people more wise or wise people more powerful, maybe we’ll take the wise people and get them to address systemic issues, which lead to and help manifest things like existential risk and animal suffering and global poverty.

Stephen Batchelor: That would be great. That would be wonderful. Thank you very much, Luke. It’s been a lovely conversation. I really wish you all the best and all of those of you who are listening to this likewise.

Lucas Perry: Yeah, thanks so much, Stephen. If people want to follow you or check out more of your work, and I’ve really enjoyed your books on Audible, where are the best places to do that?

Stephen Batchelor: I have a website, which is www.stephenbatchelor.org, and the main institution I’m involved with is called Bodhi College, B-O-D-H-I, hyphen college.org. There, you’ll find information on the courses that I lead through them. Next year, in 2021, I’m leading a series of 12 seminars on Secular Dharma, in which I’ll be addressing a lot of the questions that have come up in this podcast. It will be an online course, once a week for 12 weeks, 12 three-hour seminars.

It’ll be publicized in the next few weeks. We’re just finalizing that program as of now. Thank you.

Lucas Perry: All right. Thank you, Stephen. It’s been wonderful.

 

Kelly Wanser on Climate Change as a Possible Existential Threat

 Topics discussed in this episode include:

  • The risks of climate change in the short-term
  • Tipping points and tipping cascades
  • Climate intervention via marine cloud brightening and releasing particles in the stratosphere
  • The benefits and risks of climate intervention techniques
  • The international politics of climate change and weather modification

 

Timestamps: 

0:00 Intro

2:30 What is SilverLining’s mission?

4:27 Why is climate change thought to be very risky in the next 10-30 years?

8:40 Tipping points and tipping cascades

13:25 Is climate change an existential risk?

17:39 Earth systems that help to stabilize the climate

21:23 Days where it will be unsafe to work outside

25:03 Marine cloud brightening, stratospheric sunlight reflection, and other climate interventions SilverLining is interested in

41:46 What experiments are happening to understand tropospheric and stratospheric climate interventions?

50:20 International politics of weather modification

53:52 How do efforts to reduce greenhouse gas emissions fit into the project of reflecting sunlight?

57:35 How would you respond to someone who views climate intervention by marine cloud brightening as too dangerous?

59:33 What are the main points of people skeptical of climate intervention approaches?

01:13:21 The international problem of coordinating on climate change

01:24:50 Is climate change a global catastrophic or existential risk, and how does it relate to other large risks?

01:33:20 Should effective altruists spend more time on the issue of climate change and climate intervention?

01:37:48 What can listeners do to help with this issue?

01:40:00 Climate change and Mars colonization

01:44:55 Where to find and follow Kelly

 

Citations:

SilverLining

Kelly’s Twitter

Kelly’s LinkedIn

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. In this episode, we have Kelly Wanser joining us from SilverLining, a non-profit focused on ensuring a safe climate in light of the risks of near-term catastrophic climate change. Given that we may fail to reduce CO2 emissions sufficiently, it may be necessary to take direct action to promote cooling of the planet to stabilize both human and Earth systems. This conversation centrally focuses on how we might intervene in the climate by brightening marine clouds to reflect sunlight and thus cool the planet down and offset global warming. This episode also explores other methods of climate intervention, like releasing particles in the stratosphere, along with their risks and benefits, and we also get into how climate change fits into global catastrophic and existential risk thinking.

There is a video recording of this podcast conversation uploaded to our YouTube channel; you can find a link in the description. This is the first in a series of video uploads of the podcast, to see if that’s something listeners might find valuable. Kelly shows some slides during our conversation, and those are included in the video version. The video podcast’s audio and content are unedited, so it’s a bit longer than the audio-only version and contains some sound hiccups and more filler words.

Kelly Wanser is an innovator committed to pursuing near-term options for ensuring a safe climate. In her role as Executive Director of SilverLining, she oversees the organization’s efforts to promote scientific research, science-based policy, and effective international cooperation in rapid responses to climate change. Kelly co-founded—and currently serves as Senior Advisor to—the University of Washington Marine Cloud Brightening Project, an effort to research and understand one possible form of climate intervention: the cooling effects of particles on clouds. She also holds degrees in economics and philosophy from Boston College and the University of Oxford.

And with that, let’s get into our conversation with Kelly Wanser.

Let’s kick things off here with just a simple introductory question. So could you give us a little bit of background about SilverLining and what is its mission?

Kelly Wanser: Sure, Lucas. I’m going to start by thanking you for inviting me to talk with you and your community, because the issue of existential threats is not an easy one. Our approach at SilverLining, I think, overlaps with some of the kinds of dialogue that you’re having, in that we’re really concerned about the sort of catastrophic risks that we may face with regard to climate change in the next 10 to 30 years. SilverLining was started specifically to focus on near term climate risk and the uncertainty we have about climate system instability, runaway climate change, and the kinds of things we don’t have insurance policies against yet. My background is from the technology sector; I worked in areas of complex systems analysis and IT infrastructure. So I came into this problem looking at it primarily from a risk point of view, and from the fact that the kind of risk we currently have exposure to is an unacceptable one.

So we need to expand our toolkit and our portfolio until we’ve got sufficient options in there that we can address the different kinds of risks that we’re facing in the context of the climate situation. SilverLining is a two year old organization, and there are two things that we do. We look at policy and sort of driving how in particular these interventions in climate, these things that might help reduce warming or cool the planet quickly, how we might move those forward in terms of research and assessment from a policy perspective, and then how we might actually help drive research and technology innovation directly.

Lucas Perry: Okay, so the methods of intervention are policy and research?

Kelly Wanser: Our methods of operation are policy and research, the methods of intervention in particular that I’m referring to are these technologies and approaches for directly and rapidly reducing warming in the climate system.

Lucas Perry: So in what you just said you mentioned that you’re concerned about catastrophic risks from climate change for example, in the next 10 to 30 years. Could you paint us a little bit of a picture about why that kind of timescale is relevant? I think many people and myself included might have thought that the more significant changes would take longer than 10 to 30 years. So what is the general state of the climate now and where we’re heading in the next few decades?

Kelly Wanser: So I think there are a couple of key issues in the evolution of climate change and what to expect and how to think about risk. One is that, for the projections that we have, it’s a tough type of system and a tough type of situation to project and predict, and there are some things that climate modelers and climate scientists know are not adequately represented in our forecasts and projections. A lot of the projections we’ve had over the past 10 or 15 years talk about climate change through 2100, and we see these sort of smooth curves depending on how we manage greenhouse gases. But people who are familiar with the climate system itself, or with complex systems problems of this type, know that there are non-linear events that are likely to happen. Climate models have a very difficult time representing those, so in many cases they’re either roughly represented or excluded entirely.

And those are the things that we talk about in terms of abrupt change and tipping points. So our climate model projections are actually missing or under-representing tipping points: things like the release of greenhouse gases from permafrost, which could happen suddenly and very quickly as the surface melts, and things like the collapse of big ice sheets and the downstream effects of that. So one of the concerns that we have at SilverLining is that some of the things tech people know how to do address similar problems, like managing an IT network. That’s a highly complex systems problem, where you’re trying to maintain a stable state of the network, and some of the techniques that we use for doing that have not been fully applied to looking at the climate problem. Similarly, there are techniques we use in finance; one of our advisors is the former director of global research at Goldman Sachs.

And this is a problem we’re talking to him about, and to folks at the IPCC and other places: essentially, we need some new and different types of analysis applied to this problem beyond just what the climate models do. So problem number one is that our analytic techniques are under-representing the risk, particularly, potentially, risk in the near term. The second piece is that these abrupt climate changes tend to be highly related to what they call feedbacks, meaning that there are points at which these climate changes produce effects that put warming back in the system, or greenhouse gases back in the system, or both. And once that starts to happen, the problem could get away from us in terms of our ability to respond. Now, we might not know whether that risk is 5%, 10% or 80%. From SilverLining’s perspective, from my perspective, any meaningful risk of that in the next 10 to 30 years is an unacceptable level of risk, because it’s approaching somewhere between catastrophic and existential.

So we’re less concerned about the arm wrestle debate over whether there is some scenario where we can constrain the system by just reducing greenhouse gases. We’re concerned about, are there scenarios where that doesn’t work, scenarios where the system moves faster than we can constrain greenhouse gases? The final thing I’ll say is that we’re seeing evidence of that now. Some of the things that we’re seeing, like these extraordinary wildfire events, or what’s happening to the ice sheets, these are things that are happening at the far end of bad predictions. The observations of what’s happening in the system are indicative of the fact that that risk could be pretty high.

Lucas Perry: Yeah. So you’re ending here on the point that, say, the fires we’re observing more recently are showing that tail end risks are becoming more common. And so they’re less like tail end risks and more like part of the central mass of the Gaussian curve?

Kelly Wanser: That’s right.

Lucas Perry: Okay. And so I want to slow down a little bit, because I think we introduced a bunch of crucial concepts here. One of these is tipping points. So if you were to explain tipping points in one to two sentences to someone who’s not familiar with climate science, how would you do that?

Kelly Wanser: The metaphor that I like to use is similar to a fever in the human body. Warming heat acts as a stressor on different parts of the system. So when you have a fever, you can carry a fever up to a certain point. And if it gets high enough and long enough, different parts of your body will be affected, like your brain, your organs and so on. The trapped heat energy in the climate system acts as a stressor on different parts of the system. And they can warm a bit over a certain period of time and they’ll recover their original state. But beyond a certain point, essentially the conditions of heat that they’re in are sufficiently different than what they’re used to, that they start to fundamentally change. And that can happen in biological systems where you start to lose the animal species, plant species, that can happen in physical systems where the structure of an ice sheet starts to disintegrate, and once that structure breaks down, it doesn’t come back.

Forests have this quality too, where if they get hot enough and dry enough, they may pass a point where their operation as a forest no longer works and they collapse into something else, like desertification. So there are two concerns with that. One is that we lose these big systems permanently because they change state in a way that doesn’t recover. And the second is that when they do that, they either add warming or add greenhouse gases back into the system. So when an ice sheet collapses, for example: these big ice structures reflect a huge amount of sunlight back out to space, and when we lose them, they’re replaced by dark water. And so that’s basically a trade-off from cooling to warming that’s happening with ice. And so there are different things like that, where that combination of losing that system and then having it really change the balance of warming is a double-faceted problem.

Lucas Perry: Right, so you have these dynamic systems which play an integral part in maintaining the current climate stability, and they can undergo a phase state change. Like water is water until you hit a certain degree, and then it turns into ice, or it evaporates and turns into steam, except you can’t go back easily with these kinds of systems. And once one changes, it throws off the whole dynamic context that it’s in, which had been stabilizing the environment as we enjoy it.

Kelly Wanser: One of the problems that you have is not just that any one of these systems might change its state and might start putting warming or greenhouse gases back into the atmosphere, but they’re linked to each other. And so then they call that the cascade effect where one system changes its state and that pushes another system over the edge, and that pushes another system over the edge. So a collapse of ice sheets can actually accelerate the collapse of the Amazon rainforest for example, through this process. And that’s where we come more towards this existential category where we don’t want to come anywhere near that risk and we’re dangerously near it.
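The cascade effect Kelly describes can be sketched as a toy simulation. Everything here is illustrative: the element names, thresholds, and feedback strength are made-up numbers chosen only to show the mechanism, not values from climate science.

```python
# Toy illustration of cascading tipping points (NOT a climate model).
# Each element tips permanently once the warming it feels crosses its
# threshold; a tipped element feeds extra warming back into the system,
# which can push the next element over the edge. All numbers below are
# hypothetical, chosen only to show the cascade mechanism.

THRESHOLDS = {"ice_sheet": 1.5, "amazon": 2.5, "permafrost": 3.0}  # deg C (made up)
FEEDBACK = 1.0  # extra warming contributed by each tipped element (made up)

def tipped_elements(background_warming):
    """Return the set of elements that end up tipped at this warming level."""
    tipped = set()
    changed = True
    while changed:  # sweep until no new element tips
        changed = False
        extra = FEEDBACK * len(tipped)  # feedback warming from tipped elements
        for name, threshold in THRESHOLDS.items():
            if name not in tipped and background_warming + extra >= threshold:
                tipped.add(name)
                changed = True
    return tipped

print(tipped_elements(1.0))  # below every threshold: nothing tips
print(tipped_elements(1.6))  # the ice sheet tips, and its feedback drags
                             # the other two elements over their thresholds
```

The point of the sketch is the non-linearity: a small change in background warming (1.0 to 1.6 degrees here) flips the outcome from no tipping to the entire chain tipping.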

And so one of the things that scientists like Will Steffen and some Arctic scientists, for example, are seeing is that they think we’re already in some of these tipping points. I work with climate scientists really closely, and I hear them saying, “We may be in it. Some of these tipping points are starting to occur.” And so the ice ones, we have front page news on that; the forest ones we’re starting to see. So that’s where the concern becomes that we sort of lack the measures to address these things if they’re happening in the next one, two or three decades.

Lucas Perry: Is this where a word like runaway climate change becomes relevant?

Kelly Wanser: Yes. When I came into the space about 12 years ago, and like many of your listeners, I came in from tech first, as a sort of area of passion interest. And one of the first people I talked to was a climate scientist named Steve Schneider, who was at Stanford at the time, and he has since passed away, but he was a giant of the field. And I asked him kind of the question you’re referring to, which is: how would you characterize the odds of runaway change within our lifetime? And he said at that time, which was about 12 years ago, “I put it in the single digits, but not the low single digits.” My reaction to that was, if you had those odds of winning the lottery, you’d be out buying tickets. And that’s an unacceptable level of risk when we don’t have responses that really meaningfully arrest or reduce warming in that kind of time.

Lucas Perry: Okay. And so another point here is you used the word “existential” a few times here, and you’ve also used the word “global catastrophic.” I think broadly within the existential risk community, at least the place where I come from, climate change is not viewed as an existential risk. Even if it gets really, really, really bad, it’s hard to imagine ways in which it would kill all people on the planet rather than like make life very difficult for most of them and kill large fractions. And so it’s generally viewed as a global catastrophic threat being that it would kill large fractions, but not be existential. What is your reaction to that? And how do you view the use of the word “existential” here?

Kelly Wanser: Well, so for me there are two sides to that question. I normally stay on one of the two sides, which is that for SilverLining our mission is to prevent suffering. The loss of a third of the population of the planet, or two thirds of the population of the planet, and the survival of some people in interconnected bubbles, which I’ve heard top analysts talk about: for us that’s an unacceptable level of suffering and an unacceptable outcome. And so in that way the debate about whether it’s all people or just lots of people is for us not material, because that whole situation seems to be not a risk that you want to take. On the other side of your question, is it all people and is it planetary livability? I think that question is subject to some of the inability to fully represent all of the systemic effects that happen at these levels of warming.

Early on I talked about this with the director of NASA Ames at the time, who’s now at Planet Labs. What he talked to me about was the changes in the chemistry of the earth system. This is something that maybe hasn’t been explored that widely, but we’re already looking at collapses of life in the ocean. And it’s the interplay between the ocean and the land systems that generates a lot of the atmosphere that we’re familiar with and that’s comfortable for people. And there are risks there, that we can’t have these collapses of biological life and necessarily maintain the atmosphere that we’re used to. And so I think that it’s inappropriate to discount the possibility that the planet could become largely unlivable at these higher levels of heat.

And at the end of the runaway climate change scenario, where the heat levels get very high and life collapses in an extreme way, I don’t think that’s been analyzed well enough yet. And I certainly wouldn’t rule it out as an existential risk. I think that that would be inappropriate, given both our level of knowledge and the fact that we know that we have these sort of non-linear cascading things that are going to happen. So to me, I challenge the existential threat community to look into this further.

Lucas Perry: Excellent.

Kelly Wanser: Put it out there.

Lucas Perry: I like that. Okay, so given tipping points and cascading tipping points, you think there’s a little bit more uncertainty over how unlivable things can get?

Kelly Wanser: I do. And that’s before you also get into the societal part of it, right? Going back to what I think has been one of the fundamental problems of the climate debate is this idea that there are winners and losers and that this is a reasonably survivable situation for a certain class of people. There’s a reasonable probability that that’s not the case, and this is not going to be a world that anyone, if they do get to live in it, is going to enjoy.

Lucas Perry: Even if you were a billionaire back before climate change and you have your nice stocked bunker, you can’t keep stocking it, your money won’t be worth anything.

Kelly Wanser: In a world without strawberries and lobsters and rock concerts and all kinds of things that we like. So I think we’re much more in it together than people think. Over the course of many millennia, humans were engineered and fine-tuned to this beautiful, extremely complicated system that we live in. We can use our technology to the best of our ability to adapt, but this is an environment that’s beautifully made for us, and we’re pushing it out of the state that supports us.

Lucas Perry: So I’d be curious if you could expand just fairly briefly here on more of the ways in which these systems, which help to maintain the current climate state, function. So for example, the jet stream and the boreal forest and the Amazon rainforest and the Sahel and the Indian summer monsoon and the permafrost and all these other things. If you could choose maybe one or two of your favorites, or whichever few are biggest, I’m curious how these systems help maintain climate stability?

Kelly Wanser: Well, so there are people more expert than me, but I’ll talk about a couple that I care about a lot. So one is the permafrost, which is the frozen layer of earth. That frozen layer of earth is under the surface in landmasses, and there are also frozen layers under the ocean. For many thousands of years, if not longer, those layers have captured and built up biological life that’s died and decayed within them, and they store massive amounts of carbon. And so provided the earth system is working within its usual parameters, all of those masses stay frozen, and that organic material stays there. As it warms up in a way that moves beyond its normal range of parameters, then that stuff starts to melt and those gases start to be released. And the amount of gas stored in the permafrost is massive. In particular, it includes both CO2 and more potent, fast-acting gases like methane. We’re kind of sitting on the edge of that system starting to melt in a way where those releases could be massive.

And in my work that’s, to me, one of the things that we need to watch most closely; that’s a potential runaway situation. So that’s one, and that’s a relatively straightforward one, because that’s a system storing greenhouse gases and releasing greenhouse gases. They range in complexity. The Arctic is a much more complicated one, because it’s related to all the physics of the movement of the atmosphere and ocean. So the circulation of the way the jet stream and weather patterns work, the circulation of the ocean and all of that. So there could be potentially drastic effects on what weather is where on the planet. Major changes in the Arctic can lead to major changes in what we experience as our normal weather. And we’re already seeing this start to happen in Europe. And that was predicted by changes in the jet stream, where Europe’s always had this kind of mild, sort of temperate range of temperature.

And they’re starting to see super cold winters and hot summers. And that’s because the jet stream is moving. And a lot of that is because the Arctic is melting. A personal one that’s dear to me, that is actually happening now, and that we may not be able to stop no matter what we do, is the coral reefs. Coral reefs are these organic structures and they teem with all different levels of life. And they trace up to about a quarter of all life in the ocean. So as these coral reefs are getting hit by these waves of hot water, they’re dying. And ultimately their collapse would mean the collapse of at least 25% of the life in the ocean that they support. And we don’t really know fully what the effects of that will be. So those are a few examples.

Lucas Perry: I feel like I’ve heard the term heat stress before in relation to coral reefs, and that’s what kills them.

Kelly Wanser: Yep.

Lucas Perry: All right. So before we move into the area you’re interested in, intervening as a potential solution if we can’t get the greenhouse gases down enough, are there any more bad things that we missed or bad things that would happen if we don’t sufficiently get climate change under control?

Kelly Wanser: So I think that there are many, and we haven’t talked too much about what happens on the human side. So there are even thresholds of direct heat for humans, like the wet-bulb temperature. I’m not going to be able to describe it super expertly, but it’s the combination of heat and humidity beyond which the human body can no longer shed heat through its normal heat exchange. And so what’s happening in certain parts of the world right now, like in parts of India, like Calcutta, is there’s an increasing number of days of the year where it’s not safe to work outside. And there were some projections that by 2030 there would be no days in Calcutta where it was safe to work outside. And we even see parts of the U.S. where you have these heat warnings. And right now, as a direct effect on humans, I just saw a study that said the actual heat index is killing more people than the smoke from fires.

The actual increase in heat is moving past where humans are actually comfortable living and interacting. As a secondary point, obviously in developed countries we have lots of tools for dealing with that in terms of our infrastructure. But one of the things that’s happening is the system is moving outside the band in which our infrastructure was built. And this is a bit of an understudied area. As warming progresses, you have extreme temperature, you have more flooding, you have extreme storms and winds. We have everything from bridges to nuclear plants to skyscrapers that were not engineered for those conditions. A full evaluation of that is not really available to us yet. And so I think we may be underestimating things. Even in some of these projections, we know that as sea level rise happens and extreme storms happen, places like Miami are probably lost.

And in that context, what does it mean to have a city the size of Miami sitting under water at the edge of the United States? It would be a massive environmental catastrophe. So I think unfortunately we haven’t looked closely enough at what it means for all of these parts of our human infrastructure for their external circumstances to be outside the arena they were engineered for.

Lucas Perry: Yeah. So natural systems become stressed. They start to fail, there could be cascades. Human systems and human infrastructure become stressed. I mean, you can imagine nuclear facilities and oil rigs and whatever else can cause massive environmental damage getting stressed as well, by being moved outside of their normal bandwidth of operation. It’s just a lot of bad things happening after bad things after bad things.

Kelly Wanser: Yeah. And you know, it’s a big problem. Because I’ve had this debate with people who are bullish on adaptation: hey, we can adapt to this. But the problem is you have all these things happening concurrently. So it’s not just Miami, it’s Miami and San Francisco and Bangladesh. There are going to be lots of different variants of it happening all at the same time. And so anything we could do to prevent that, excuse my academic language, shit show is really something we should consider closely, because the cost of that and this sort of compound damage is just pretty staggering.

Lucas Perry: Yeah. It’s often much cheaper to prevent risks than to deal with them when they come up and then clean up the aftermath. So as we try to avoid moderate to severe bad effects of climate change, we can mitigate. I think most everyone is very familiar with the idea of reducing greenhouse gas emissions. So the kinds of gases that help trap heat inside of the atmosphere. Now you’re coming at this from a different angle. So what is the research interest of SilverLining and what is the intervention of mitigating some of the effects of climate change? What is that intervention you guys are exploring?

Kelly Wanser: Well, so our interest is in the near term risk. And so therefore we focus most closely on things that might have the potential to act quickly to substantially reduce warming in the climate system. And the problem with greenhouse gas reduction, and a lot of the categories of removing greenhouse gases from the air, is that they’re likely to take many decades to scale and even longer to actually act on the climate system. And so if we’re looking at sub 30 years, where we’re coming from in SilverLining is saying, “We don’t have enough in that portfolio to make sure that we can keep the system stable.” We are a science-led organization, meaning we don’t do research ourselves, but we follow the recommendations of the scientific community and the scientific assessment bodies. And in 2015 the National Academy of Sciences in the United States ran an assessment that looked at the different sorts of technological interventions that might be used to accelerate addressing climate warming and greenhouse gases.

And they issued two reports, one called Climate Intervention: Carbon Dioxide Removal and one called Climate Intervention: Reflecting Sunlight to Cool Earth. And what they found was that in the category where you’re looking to reduce warming quickly, within a decade or even a few years, the most promising way to try to do that is based on one of the ways that the earth system actually regulates temperature, which is the reflection of sunlight from particles and clouds in the atmosphere. The theories behind why they think this might work are based on observations from the real world. And so what I’m showing you right now is a picture of a cloud bank off the Pacific West coast, and the streaks in the clouds are created by emissions from ships. The particulates in those emissions, usually what people think of as the dirty stuff, have a property where they often mix with clouds in a way that will make the clouds slightly brighter.

And so based on that effect, scientists think that there’s cooling that could be generated in this way actively, and also that there’s actually cooling going on right now as a result of the particulate effects of our emissions overall. And they think that we have this accidental cooling going on somewhere between 0.5 degrees and 1.1 degrees C, and this is something that they don’t understand very well, but is potentially both a promise and a risk when it comes to climate.

Lucas Perry: So there’s some amount of cooling that’s going on by accident, but the net anthropogenic effect is warming, even with the cooling. I think one facet of this that I learned from looking into your work is that the cooling effect is limited because the particles fall back down, and so it goes away. And so there might be a period of accelerated heating. Is that right?

Kelly Wanser: Yes. I think what you’re getting at. So two things I’ll say, these white lines indicate the uncertainty. And so you can see the biggest line is on that cloud albedo effect, which is how much do these particles brighten clouds. The effects could be much bigger than what’s going into that net effect bar. And a lot of the uncertainty in that net effect bar is coming from this cloud albedo effect. Now the fact that they fall is an issue, but what happens today for the most part is we keep putting them up there. As long as you continuously put them up there, you continuously have this effect. If you take it away, which we’re doing a couple of big experiments in this year, then you lose that cooling effect right away. And so one of the things that we’re hoping to help with is getting more money for research to look at two big events that took that away this year.

One is the economic shutdowns associated with COVID where we had these clean skies all over the world because all this pollution went down. That’s a big global experiment in removing these particles that may be cooling. We are hoping to gain a better understanding from that experiment if we can get enough resources for people to look at it well.

Lucas Perry: So, the uncertainty with the degree to which current pollution is reflecting sunlight, is that because we have uncertainty over exactly how much pollution there is and how much sunlight that is exactly reflecting?

Kelly Wanser: It’s not that we don’t know how much pollution there is. I think we know that pretty well. It’s that this interaction between clouds and particles is one of the biggest uncertainties in the climate system. And there’s a natural form of it, when you see salt spray generating clouds, you’re in Big Sur looking at the waves and the clouds starting to form, that whole process is highly complex. Clouds are among the most complex creatures in our earth system. And they’re based on the behavior of these tiny particles that attract water to them and then create different sizes of droplets. So if the droplets are big, they reflect less total sunlight off less total surface area, and you have a dark cloud. And eventually, the droplets are big enough, they fall down as rain. If the droplets are small, there’s lots of surface area and the cloud becomes brighter.

The reason we have that uncertainty is that we have uncertainty around the whole process and some of the scientists that we work with in SilverLining, they really want to focus on that because understanding that process will tell you what you might be able to do with that artificially to create a brightening effect on purpose, as well as how much of an accidental effect we’ve got going on.
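The droplet-size effect Kelly describes here has a simple geometric core: for a fixed amount of liquid water, the total sunlight-intercepting area of the droplets scales inversely with droplet radius. A minimal sketch, with purely illustrative numbers:

```python
import math

# Sketch of the droplet-size geometry described above (the Twomey effect):
# for a FIXED amount of liquid water, dividing it into smaller droplets
# produces more total cross-sectional area, and so a brighter cloud.

def total_cross_section(water_volume_m3, droplet_radius_m):
    """Total cross-sectional area (m^2) of same-size droplets holding the water."""
    droplet_volume = (4 / 3) * math.pi * droplet_radius_m ** 3
    n_droplets = water_volume_m3 / droplet_volume
    return n_droplets * math.pi * droplet_radius_m ** 2  # simplifies to 3V / (4r)

water = 1e-3                                 # one litre of liquid water, as an example
coarse = total_cross_section(water, 20e-6)   # fewer, larger 20-micron droplets
fine = total_cross_section(water, 10e-6)     # many smaller 10-micron droplets
print(fine / coarse)  # halving the droplet radius doubles the reflective area: 2.0
```

This is only the geometric piece of the story; the hard uncertainty Kelly points to is in how added particles actually change droplet populations in real clouds.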

Lucas Perry: So you’re saying we’re removing sulfate from the emission of ships, and sulfate is helping to create these sea clouds that are reflecting sunlight?

Kelly Wanser: That’s right. And it happens over land as well. All the emissions that contain these sulfate and similar types of particles can have this property.

Lucas Perry: And so that, plus the reduction of pollution given COVID, there is this ongoing experiment, an accidental experiment to decrease the amount of reflective cloud?

Kelly Wanser: That’s right. And I should just note that the other thing that happened in 2020 is that the International Maritime Organization implemented regulations to drastically reduce emissions from ships. Those went into effect in January: an 85% reduction in these sulfate emissions. And so that’s the other experiment. Because sulfate and these emissions, we don’t like them as pollutants, for human health, for local ecosystems. They’re dirty. So we don’t like them for very good reasons, but they happen to have the side effect of producing a brightening effect on clouds, and that’s the piece we want to understand better.

When I talk to especially people in the Bay Area and people who think about systems, about this particular dynamic, most of the people that I’ve talked to were unfamiliar with this. And lots of people, even who think about climate a lot, are unfamiliar with the fact that we have this accidental cooling going on. And that as we reduce emissions, we have this uncertain near-term warming that may result from that, which I think is what you were getting at.

Lucas Perry: Yeah.

Kelly Wanser: So where I’m headed with this is that in the early ’90s, some British researchers proposed that you might be able to produce an optimized version of this effect using sea salt particles, like a salt mist from seawater, which would be cleaner and possibly actually produce a stronger effect because of the nature of the salt particles, and that you could target this at areas of unpolluted clouds and certain parts of the ocean where they’d be most susceptible, and you’d get this highly magnified reflective effect. And that in doing that, in these sort of few parts of the world where it would work best by brightening 10% to 20% of marine clouds or, say, the equivalent of 3% to 5% of the ocean’s surface, you might offset a doubling of CO2 or several degrees of warming. And so that’s one approach to this kind of rapid cooling, if you like, that scientists are thinking about that’s related to an observed effect.
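As a rough consistency check on those fractions: a global-mean cooling concentrated on a few percent of the surface implies a large local forcing. This sketch assumes the commonly cited ~3.7 W/m² forcing for a CO2 doubling and takes 4% of the ocean as the midpoint of the range quoted above; it is back-of-envelope arithmetic, not a model result.

```python
# Rough arithmetic behind the marine cloud brightening numbers above:
# offsetting a CO2 doubling (~3.7 W/m^2 globally) from only a few percent
# of the ocean surface implies a large LOCAL cooling over those areas.

co2_doubling_forcing = 3.7       # W/m^2, global mean (commonly cited value)
ocean_fraction_of_earth = 0.71   # oceans cover about 71% of Earth's surface
brightened_ocean_fraction = 0.04 # midpoint of the 3-5% of ocean quoted above

# Fraction of Earth's total surface being brightened:
earth_fraction = ocean_fraction_of_earth * brightened_ocean_fraction  # ~2.8%

# Local extra reflection needed so the global average works out:
local_forcing = co2_doubling_forcing / earth_fraction  # ~130 W/m^2 locally
print(round(earth_fraction, 4), round(local_forcing, 1))
```

The ~130 W/m² local figure shows why the technique targets the most susceptible cloud regions, and also why the concentrated cooling could perturb regional circulation, the main risk discussed below.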

This marine cloud brightening approach has the characteristic that you talked about, that it’s relatively temporary. So you have to do it continuously; the effect lasts a few days, and if you stop, it stops. And it’s also relatively localized. So it opens up theoretical possibilities that you might consider it as a way of cooling ocean water and mitigating climate impacts regionally or locally. In theory, what you might do is engage in this technique in the months before hurricane season. So your goal is to cool the ocean surface temperatures, which are a big part of what increases the energy and the rainfall potential of storms.

So, this idea is very theoretical. There’s been almost no research on it. Similarly, there’s a little bit of emerging research on whether you could cool waters that flow onto coral reefs. And you might have to do this in areas that are further out from the coral reefs, because coral reefs tend to be in places where there are no clouds, but your goal is to try to get those big currents of water that flow onto them and cool them off. There was a little test, a tiny test, of the technology that you might use, down in Australia as part of their big program, I think it’s an $800 million program, to look at all possibilities for saving the Great Barrier Reef.

Lucas Perry: Okay. One thing that I think is interesting for you to comment on briefly is that I think many people, myself included, don’t really have a good intuition about how thick the atmosphere is. You look up and it’s just big open space; maybe it goes on forever or something. So how thick is it? Put it into a scale that makes sense of how seven billion humans could affect it in such large scale ways.

Kelly Wanser: We’re going to talk about it a little bit differently because the things I’m talking to you about are different layers of the atmosphere. So, the idea that I talked to you about here, marine cloud brightening, that’s really looking at the troposphere, which is the lowest layer of the atmosphere, which are, when you look up, these are the clouds I see. It’s the cloud layer that’s closest to earth that’s going from sort of 500 feet up to a couple thousand feet. And so in that layer, you may have the possibility, especially over the ocean, of generating a mist from the surface where the convection, the motion of the air above you kind of pulls the particles up into the cloud layer. And so you can do this kind of activity potentially from the surface, like from ships. And it’s why the pollution particles are getting sucked up into the clouds too.

So that idea happens at that low layer, the sub-mile, visible-to-the-eye layer of stuff. And for the most part, in terms of the volume of material being proposed, when scientists are talking about brightening these clouds, they’re talking about brightening them 5% to 7%. So it’s not something that you would probably see as a human with your own naked eyes, and it’s over the ocean, and it’s something that would have a relatively modest effect on the light coming in to the ocean below, so probably a relatively modest effect on the local ecology, except for the cooling that it’s creating.

So in that way, it’s potentially less invasive than people might think. Where the risks are in a technique like this are really around the fact that you’re creating these sort of concentrated areas of cooling, and those have the potential to move the circulation of the atmosphere and change weather patterns in ways that are hard to predict. And that’s probably the biggest thing that people are concerned about with this idea.

Now, if you’d like, I can talk about what people are proposing at the other side, the high end of the atmosphere.

Lucas Perry: Yeah. So I was about to ask you about stratospheric sunlight reflection.

Kelly Wanser: Yeah, because this is the one that most people have heard about, and it’s the most widely studied and talked about, partly because it’s based on events that have occurred in nature. Large volcanoes push material into the atmosphere, and very large ones can push material all the way into the outer layer of the atmosphere, the stratosphere, which I think starts at about 30,000 or 40,000 feet and goes up for a few miles. So when particles reach the stratosphere, they get entrained and they can stay for a year or two.

And when Mount Pinatubo erupted in 1991, it was powerful and it pushed particles into the stratosphere that stayed there for almost two years. And it produced a measurable cooling effect across the entire planet of at least half a degree C; actually, it peaked closer to one degree C. So this cooling effect was sustained. It was very clear and measurable. It lasted until the particles fell down to earth, and it also produced a marked change in Arctic ice, where Arctic ice mass just recovered drastically. In this cooling effect, when the particles reach the stratosphere, they immediately or very quickly get dispersed globally. So it’s a global effect, but it may have an outsize effect on the Arctic, which warms and cools faster than the rest of the planet.

This idea, and there are some other examples in the volcanic record, is what led scientists, including Paul Crutzen, the scientist who won a Nobel prize for his work on the ozone layer, to suggest that one approach to offsetting the warming that’s happening with climate change would be to introduce particles into the stratosphere that reflect sunlight directly, almost kind of bedazzling the stratosphere, and that by increasing this reflectivity by just 1%, you could offset a doubling of CO2 or several degrees of warming.
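The arithmetic behind that 1% figure can be checked with standard global-mean values. This is a back-of-envelope sketch, not a model result, and it reads the claim as one percentage point of extra planetary reflectivity:

```python
# Checking the "increase reflectivity by 1% to offset a CO2 doubling" claim
# with standard global-mean values. A sketch only; real assessments use
# full climate models.

solar_constant = 1361.0               # W/m^2 arriving at the top of the atmosphere
mean_insolation = solar_constant / 4  # averaged over the whole sphere: ~340 W/m^2

# Raising planetary albedo by one percentage point (e.g. 0.30 -> 0.31)
# reflects an extra 1% of the incoming sunlight:
extra_reflected = mean_insolation * 0.01  # ~3.4 W/m^2 of cooling

co2_doubling_forcing = 3.7            # W/m^2, commonly cited warming from 2x CO2
print(extra_reflected, co2_doubling_forcing)  # ~3.4 vs 3.7: roughly offsetting
```

The division by 4 converts the solar constant into a per-square-meter average over the whole rotating sphere, which is why a seemingly tiny albedo change is comparable to the entire forcing of a CO2 doubling.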

Now the particles that volcanoes release in this way are similar to the pollution particles on the ground. They are sulfates and their precursors. These particles also have the property that they can damage the ozone layer, and they can also cause the stratosphere itself to heat up, and so that introduces risks that we don’t understand very well. So that’s what people want to study. There isn’t very much research on this yet, but one of the earliest model studies, produced at NCAR, compared the course of global surface temperatures in a business as usual scenario in the NCAR global climate model versus introducing particles into the stratosphere starting in 2020, gradually increasing them, and maintaining temperatures through the end of the century. And what you can see in that model representation is that it’s theoretically possible to keep global surface temperatures close to those of today with this technique, and that if we were to go down this business as usual path, or have higher than expected feedbacks that took us to something similar, that’s not a very livable situation for most people on the planet.

Lucas Perry: All right. So you can intervene in the troposphere or the stratosphere, and there's a large degree of uncertainty about the indirect effects and second- and third-order effects of these interventions, right? They need to be studied because you're impacting a complex system, which may have complex implications at different levels of causality. But the high-level strategy here is that these things may be necessary if we're not able to reduce greenhouse gas emissions sufficiently. That's why we may be interested in them: for mitigating whatever degree of climate change happens or is already inevitable.

Kelly Wanser: That’s right. There’s a slight sort of twist on that, I think, where it’s really about, if we can, trying to look at these dangerous instabilities and intervene before they happen or before they take us across thresholds we don’t want to go. It is what you’re saying, but it’s a little bit of a different shade where we don’t wait to see how our mitigation effort is going necessarily. What we need to do is watch the earth system and see whether we’re reaching kind of a red zone where we’ve got to bring the heat down in the system.

Lucas Perry: What kinds of ongoing experiments are happening for studying these tropospheric and stratospheric interventions in climate change?

Kelly Wanser: Well, so the first thing we’ll say is that the research in this field has been very taboo for most of the past few decades. So, relative to the problem space, very little research has been done. And the global level of investment in research even today is probably in the neighborhood of $10 million a year, and that includes a $3 million a year program in China and a program at Harvard, which is really the biggest funded program in the world. So, relative to the problem space and the potential, we’re very under-invested. And the things I’m going to talk to you about are really promising, and there are prestigious institutions and collaborations, but they’re still at, what I would call, a very seed level of funding.

So the two most significant interdisciplinary programs in the field, one is aimed at the stratosphere, and that’s a program at Harvard called the Harvard Solar Geoengineering Program and includes social science and physical sciences, but a sort of flagship of what they’re trying to do is to do an experiment in the stratosphere. And in their case, they would try to use a balloon, which is specially crafted to navigate in the stratosphere, which is a hard problem, so that they can do releases of different materials to look at their properties in the stratosphere as they disperse and as they mix with the gases in the stratosphere.

And so what we hope, and I think the people in the field hope, is that we can do these small-scale experimental studies that help you populate models that will better predict what happens if you did this at a bigger scale. So, the scale of this is tiny. It's less than a minute of emissions from an aircraft. It's tiny, but they hope to be able to find out some important things about the properties of the chemical interactions and the way the particles disperse that would feed into models, models that would help us make predictions about what will happen when you do this, and also about what materials might be more optimal to use.

So in this case, they're going to look at sulfates, which we talked about, but also materials that might have better properties. Two of those are calcium carbonate, which is what you find in chalk, and diamonds. What they hope to do is start down the path to finding out more about how you might optimize this in a way that minimizes the risks.

The other effort is on the other side of the United States. This is an effort based at the University of Washington, which is one of the top atmospheric science institutions in the country. It's a partnership with Pacific Northwest National Laboratory, which is a Department of Energy lab, and PARC, which many in your community may know as the famous Xerox PARC, and which has since developed expertise in aerosols.

At the University of Washington, they are looking to do a set of experiments that would get at this cloud brightening question. And their scientific research and their experiments are classified as dual purpose, meaning that they are experiments about understanding this climate intervention technique, can we brighten clouds to actively cool the climate, but they're also about getting at the question of what is this cloud-aerosol effect? What effect are the accidental emissions having, and how does this work in the climate system more broadly? So, what they're proposing to do is build a specialized spray technology. One of the characteristics of both efforts is that you need to create almost a nano mist of particles, 80 to 100 nanometers across, very consistently, at a massive scale. That hasn't been done before. And so how do we generate this massive number of tiny droplets of material, of salt particles from seawater or calcium carbonate particles?

And some retired physicists and engineers in Silicon Valley took on this problem about eight years ago. And they've been working on it four days a week in their retirement, for free, for the sake of their grandchildren, to invent this nozzle that I'm showing you, which is the first step of being able to generate the particles that you need to study here. They're in a phase right now where, because of COVID, they've had to set up a giant tent and do indoor spray tests, and they hope next year to go out and do what they call individual plume experiments. And then eventually, they would like to undertake what they call a limited-area field experiment, which would actually be 10,000 square kilometers, the size of a grid cell in a climate model. And that would be the minimum scale at which you could actually potentially detect a brightening effect.

Lucas Perry: Maybe it makes sense on reflection, but I guess I’m kind of surprised that so much research is needed to figure out how to make a nozzle make droplets of aerosol.

Kelly Wanser: I think I was surprised too. It turns out, for certain materials, and again because you're really talking about a nano mist, it's like silicon chip manufacturing, like an asthma inhaler. And here, we're talking about three trillion particles a second from one nozzle, and an apparatus that can generate 10 to the 16th particles and lift them up a few hundred meters.
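Those two figures imply some simple but daunting arithmetic. This is just a back-of-envelope sketch using only the numbers quoted above, noting that the transcript is ambiguous about whether the 10^16 is a total or a per-second target:

```python
# Rough arithmetic on the throughput figures quoted above: one nozzle
# produces about three trillion (3e12) particles per second, and the
# apparatus needs to generate on the order of 10^16 particles.

per_nozzle_rate = 3e12   # particles per second, per nozzle
target = 1e16            # particles, order of magnitude

ratio = target / per_nozzle_rate   # ~3,300 either way you read it

# A single nozzle running flat out would take ~3,300 seconds (nearly an
# hour) to emit 10^16 particles; if 10^16 per second were the goal, you
# would need ~3,300 nozzles running in parallel.
print(f"10^16 / (3e12 per second) = {ratio:,.0f}")
print(f"single-nozzle time: about {ratio / 60:.0f} minutes")
```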

It's not nuclear fusion, and it wouldn't necessarily have taken eight years if it were properly funded and a focused program. I mean, these guys, led by Armand Neukermans, funded this with his own money, and he was trading biscotti from Belgium for measurement instruments. And so it's only recently, in the past year or two, that the program has gotten its first government funding, some from NOAA and some from the Department of Energy, relatively small and more focused on the scientific modeling, and some money from private philanthropy, which they're able to use for the technology development.

But again, going back to my comment earlier, this has been a very taboo area for scientists to even work in. There have been no formal sources of funding for it, so that’s made it go a lot slower. And the technology part is the hardest and most controversial. But overall, as a point, these things are very nascent. And the problem we were talking about at the beginning, predicting what the system is going to do, that in order to evaluate and assess these things properly, you need a better prediction system because you’re trying to say, okay, we’re going to perturb the system this way and this way and predict that the outcome will be better. It’s a tough challenge in terms of getting enough research in quickly. People have sort of propagated the idea that this is cheap and easy to do, and that it could run away from us very quickly. That has not been my experience.

Lucas Perry: Run away in what sense? Like everyone just starts doing it?

Kelly Wanser: Some billionaire could take a couple of billion dollars and do it, or some little country could do it.

Lucas Perry: Oh, as even an attack?

Kelly Wanser: Not necessarily an attack, but an ungoverned attempt to manage the climate system from the perspective of one individual or one small country, or what have you. That’s been a significant concern amongst social scientists and activists. And I guess my observation, working closely with it is, there are at least two types of technology that don’t exist yet that we need, so we have a technology hurdle. These things scale linearly and they pretty much stop when you stop, specifically referring to the aerosol generation technology. And for the stratosphere, we probably actually need a new and different kind of aircraft.

Lucas Perry: Can you define aerosol?

Kelly Wanser: I'll caveat this by saying I'm not a scientist, so my definition may not be what a scientist would give you. But generally speaking, an aerosol is particles mixed with gases. It's a suspension in air of a mixed blend of particles and gases. I'll often talk about particles because it's a little bit clearer, and what we're doing with these techniques for the most part is dispersing particles in a way that they mix with the atmosphere and…

Lucas Perry: Become an aerosol?

Kelly Wanser: Yeah. So, I would characterize the challenge we have right now is that we actually have a very low level of information and no technology. And these things would take a number of years to develop.

Lucas Perry: Yeah. Well, it’s an interesting future to imagine the international politics of weather control, like in negotiating whether to stop the hurricanes or new powers we might get over the weather in the coming decades.

Kelly Wanser: Well, you bring up an interesting point because as I’ve gotten into this field, I’ve learned about what’s going on. And actually, there’s an astonishing amount of weather modification activity going on in the world and in the United States.

Lucas Perry: Intentional?

Kelly Wanser: Intentional, yeah.

Lucas Perry: I think I did hear that Russia did some cloud seeding, or whatever it’s called, to stop some important event getting rained on or something.

Kelly Wanser: Yeah. And that kind of thing, if you remember the Beijing Olympics, where they seeded clouds to generate rain to clear the pollution, that kind of localized cloud seeding has gone on for a long time. And of course, I'm in Colorado, where there's always been cloud seeding for snowmaking. What's happened, though, in the Western United States, where there's even an industry association for weather modification, is that what started out as snowmaking, and a little bit of an attempt to affect the snowpack, has grown. So there are actually major weather modification efforts in seven or eight Western states, and they're mostly aimed at hydrology, like snowpack and water levels.

Lucas Perry: Is the snow pack for a ski resort?

Kelly Wanser: I believe, and I'm not an expert on the history of this, but I believe that snowmaking started out with the ski resorts. But when I say snowpack, it's really about the water table. It's about affecting the snow levels that determine the water levels downstream, because in the West, a lot of our water comes from snow.

Lucas Perry: And so you want to seed more snow to get more water, and the government pays for that?

Kelly Wanser: I can’t say for sure who pays. This is still an exploration for us, but there are fairly significant initiatives in many Western states. And like I said, they’re primarily aimed at the problem of drought and hydrology. That’s in the United States. And if you look at other parts of the world, like the United Arab Emirates, they have a $400 million rainmaking fund. Can we make rain in the desert?

Lucas Perry: All right.

Kelly Wanser: Flip side of the coin. In Indonesia in January, this was in the news, they were seeding clouds off shore to induce rainfall off shore to prevent flooding, and they did that at a pretty big scale. In China last year, they announced a program to increase rainfall in the Tibetan plain, in an area the size of Alaska. So we are starting to see, I think around the world, and this activity would likely grow, weather extremes and attempts to deal with them locally.

Lucas Perry: Yeah. That makes sense. What are they using to do this?

Kelly Wanser: The traditional material is silver iodide. That's what's proposed in the Chinese program and in many of the rainmaking types of ideas. There are two things we'll start to see, I think, as climate extremes grow and there's pressure on politicians to act: growing interest in the potential for global mechanisms to reduce heat, and bottom-up efforts, which will just continue to expand, that try to manage weather extremes in these kinds of ways.

Lucas Perry: So we have this tropospheric intervention by using aerosols to generate clouds that will reflect sunlight, and then we have the stratospheric intervention, which aims to release particles which do something similar, how do you view the research and the project of understanding these things as fitting in with and informing efforts to decrease greenhouse gas emissions? And then also, the project of removing them from the atmosphere, if that’s also something people are looking into?

Kelly Wanser: I think they’re all very related because at the end of the day, from the SilverLining perspective and a personal perspective, we see this as a portfolio problem. So, we have a complex system that we need to manage back into a healthy state, and we have kind of a portfolio of things that we need to apply at different times and different ways to do that. And in that way, it’s a bit like medicine, where the interventions I’m talking about address the immediate stressor.

But to restore the system to health, you have to address the underlying cause. Where we see ourselves as maybe helping bridge those things is that we are under-invested in climate research and climate prediction. In the United States, our entire budget for climate research is about $2.5 billion. If you put that in perspective, that's like one-tenth of an aircraft carrier. It's half of a football stadium. It's paltry. And this is the most complicated, computing-intensive problem on planet earth.

It takes massive super computing capacity and all the analytical techniques you can throw at it to try to reduce the uncertainty around what’s going to happen to these systems. What I believe happened, in the past few decades, is the problem was defined as a need to limit greenhouse gases. So if you think of an equation, where one side is the greenhouse gases going in, and the other side is what happens to the system on the other end. We’ve invested most of our energy in climate advocacy and climate policy about bringing down greenhouse gases, and we’re under-invested in really trying to understand and predict what happens on the other side.

When you look at these climate intervention techniques like I'm talking about, it's pretty critical to understand and be able to predict what happens on the other side. The same is true across the whole portfolio: if you want to blend in these sort of nature-based solutions that could bring down greenhouse gases, like building new forests or putting nutrients in the ocean, they have complex interactions with the system. That ability to better understand the system and better predict the system, it turns out we really need it. It would behoove us to be able to understand and predict these tipping points better.

I think then where the interventions come in is to try to say, "Well, what does reducing the heat stress get you in terms of safety? What time does it buy you for these other things to take effect?" That's kind of where we see ourselves fitting in. We care a lot about mitigation, about moving away from this whole greenhouse gas emissions business. We care a lot about carbon removal, and accelerating efforts to do that. If somebody comes up with a way to do carbon removal at scale in the next 10 years, then we won't need to do what we're doing. But that doesn't look like a high-probability thing.

And so what we’ve chosen to do is to say there’s a part of the portfolio that is totally unserviced. There are no advocates. There’s almost no research. It’s taboo. It’s complicated. It requires innovation. That’s where we’re going to focus.

Lucas Perry: Yeah. That makes sense. Let's talk a little bit about this taboo aspect. Maybe some number of listeners have some initial reaction, like: anytime human beings try to intervene in complex systems, there are always unintended consequences, or things happen that we can't predict or imagine, especially in natural systems. How would you speak to, or connect with, someone who viewed this project of releasing aerosols into the atmosphere to create clouds or reflect sunlight as dangerous?

Kelly Wanser: I’ll start out by saying, I have a lot of sympathy with that. If we were 30 years ago, if you’re at a different place in this sort of risk equation, then this kind of thing really doesn’t make any sense at all. If we’re in 1970 or 1980, and someone’s saying, “Look, we just need to economically tune the incentives, so that we phase greenhouse gases out of the bulk of our economic system,” that is infinitely smarter and less risky.

I believe that a lot of the principle and structure of how we think about the climate problem is based on that, because what we did was really stupid. It would be the same thing as if the doctor said, “Well, you have stage one cancer. Stop smoking,” and you just kept on puffing away. So I am very sympathetic to this. But the primary concern that we’re focused on, are now our forward outcomes and the fact that we have this big safety problem.

So now, we're in a situation where we have the greenhouse gas concentrations that we have. They were already there. We have warming and system impacts that are already there, and some latency built in, which means we're going to have more of those. So that means we have to look at the risk-risk trade-off based on the situation that we're in now, where we have already conducted the experiment, where we pushed all these greenhouse gases into the atmosphere that mostly trap heat and change the system radically.

We did that. That was one form of human intervention. That wasn’t a very smart one. What we have to look at now is we’re not saying that we know that this is a good idea, or that the benefits outweigh the risks. But we’re saying that we have very few alternatives today to act in ways that could help stabilize the system.

Lucas Perry: Yeah. That makes sense. Can you enumerate what the main points are of detractors? If someone is skeptical of this whole approach and thinks, “We just need to stick to removing greenhouse gases by natural intervention, by building forests, and we need to reduce CO2 emissions and greenhouse gas emissions drastically. To do anything else would be adding more danger to the equation.” What are the main points of someone who comes with this problem, with such a perspective?

Kelly Wanser: You touched on two of them already. One, is that the problem is actually not moving that quickly and so we should be focused on things that are root cause, even if they take longer. Then the second one, being the fact that this introduces risks that are really hard to quantify. But I would say the primary objection, that’s raised by people like Al Gore, most of the advocates around climate, that have a problem with this is what they call the moral hazard. The idea that it gets put forward as a panacea and therefore, it slows down efforts to address the underlying problem.

This is sort of saying that even research into this stuff could have a negative societal effect, that it slows us down in doing what we're really supposed to do. That has some interesting angles on it. One angle, which was discussed in a recent paper by Joseph Aldy at Harvard, and which was also raised with us by Republicans we talked to about this early on, is the thesis that it could have the opposite effect.

That the drastic nature of these things could actually signal, to society and to skeptics, the seriousness of the problem. I did a bipartisan panel, and the Republican on the panel, who was a moderate, pro-climate guy, said, "When we Republicans hear these kinds of proposals coming from people who are serious about climate change, it makes you more credible than when you come to us and say, 'The sky is falling,' but none of these things are on the table."

I thought that was interesting, early on. I thought it was interesting recently, that there’s at least an equal possibility that these things, as we look into them, could wake everyone up in the same way that more drastic medical treatments do and say, “Look, this is very serious. So on all fronts, we need to get very serious.” But I think, in general, this idea of moral hazard comes up pretty much as soon as the idea is there. And it can come up in the same way that Trump talks about planting trees.

Almost anything can be positioned in a way that could be attempted to use this as this panacea. I actually think that one of the moral hazards of the climate space has been the idea of winners and losers, because I think many more powerful people assume that this problem didn’t apply to them.

Lucas Perry: Like they’re not in a flooding zone. They can move to their bunker.

Kelly Wanser: The people who put forward this idea of winners and losers in climate did that because they were very concerned about the people who are impacted first. The mistake was in letting powerful people think that this wasn’t their problem. In this particular case, I’m optimistic that if we talk about these things candidly, and we say, “Look, these are serious, and they have serious risks. We wouldn’t use them, if we had a better choice.”

It’s not clear to me that that moral hazard idea really holds, but that is the biggest reservation, and it’s a reservation. That means that many people, very passionately, object to research. They don’t want us to look into any of this, because it sets off this societal problem.

Lucas Perry: Yeah. That makes a lot of sense. It seems like moral hazard should be called something more like, information hazard. The word moral seems a little bit confusing here, because it’s like if people have the information that this kind of intervention is possible, then bad things may happen. Moral means it has something to do with ethics, rather than the consequences of information. Yeah, so whatever. No one here has control over how this language was created.

Kelly Wanser: I agree with you. It’s an idea that comes from economics originally, about where the incentives are. But I think your point is well taken, because you’re exactly right. It’s information is dangerous and that’s a fundamental principle. I find myself in meetings with advocates, and around this issue having to say, “Look, our position is that information helps with fair and just consideration of this. That information is good, not bad.”

But I think you hit on an extremely important point, that it’s a masked way of saying that information is too dangerous for people to handle. Our position is information about these things is what empowers people all over the world to think about them for themselves.

Lucas Perry: Yeah. There’s a degree to which moral hazards or information hazards lack trust or confidence in the recipients of that information, which may or may not be valid, depending on the issue and the information. Here, you argue that this information is necessary to be known and shared, and then people can make informed decisions.

Kelly Wanser: That's our argument. And so for us, we want to keep going forward and saying, "Look, let's generate information about this, so we can all consider it together." I guess one thing I should say about that, because I was so shocked by it when I started working in climate: this idea of moral hazard isn't new to this issue. It actually came up when they started looking at adaptation research in the IPCC and the climate community. Research on adaptation was considered to create a moral hazard, and so it didn't move forward.

One of the reasons that we, as a society, have a relatively low level of information about the things I was talking about, like infrastructure impacts, is because there was a strong objection to it based on moral hazard. The same was true of carbon removal, which has only recently come into consideration in the IPCC. So there's this idea that information is dangerous because it will affect our motivation around the one part of the portfolio that we think is the most important. I would argue that that's already slowed us down in really critical ways.

This is just another of those where we need to say, “Okay, we need to rethink this whole concept of moral hazard, because it hasn’t helped us.” So going back say 20 years ago, in the IPCC and the climate community, there’s this question of, how much should we invest in looking at adaptation? There was a strong objection to adaptation research, because it was felt it would disincentivize greenhouse gas reduction.

I think that's been a pretty tragic mistake, because if you had started researching adaptation 20 years ago, you'd have much more information about what a shit show this is going to be, and more incentive to reduce greenhouse gases, not less, because this is not very adaptable. But the effect of that was a real dampening of any investment in adaptation research. Even adaptation research in the US federal system is relatively new.

Lucas Perry: Yeah. The fear there is that some mega-corporation will come and be like, "It's okay that we have all these emissions, because we'll just make clouds later." Right? I feel like corporations have run extremely effective disinformation campaigns on scientific issues, like smoking and other things. I assume that would have been part of the fear with regards to adaptation techniques. And here, we're putting stratospheric and tropospheric intervention in with adaptation techniques, right?

Kelly Wanser: Well, in what I was talking about before, I wasn’t referring to this category. But the more traditional adaptation techniques, like building dams and finding new different types of vegetation and things like that. I recognize that what I’m talking about in these common interventions is fairly unusual, but even traditional adaptation techniques to protect people were suppressed. I appreciate your point. It’s been raised to me before that, “Oh, maybe oil companies will jump on this, as a panacea for what they are doing.”

So we talked to oil companies about it, talked to a couple of them. Their response was, "We wouldn't go anywhere near this," because it would be an admission that ties their fossil fuels to warming. They're much more likely to invest in carbon removal techniques and things that are more closely associated with the actual emissions than they are in anything like this, because they're not conceding that they created the warming.

Lucas Perry: But if they’re creating the carbon, and now they’re like, “Okay, we’re going to help take out the carbon,” isn’t that admitting that they contributed to the problem?

Kelly Wanser: Yes. But they’re not conceding that they are the absolute and proven cause of all of this warming.

Lucas Perry: Oh. So they inject uncertainty, that people will say like, “There’s weather, and this is all just weather. Earth naturally fluctuates, and we’ll help take CO2 out of the atmosphere, but maybe it wasn’t really us.”

Kelly Wanser: And if you think about them as legal fiduciary entities, creating a direct tie between themselves and warming is different than not doing that. This is how it was described to me. There's a fairly substantial difference between them looking at greenhouse gases, which are part of the landscape of what they do, and the actual warming and cooling of the planet, which they're not admitting to being directly responsible for.

So if you’re concerned about there being someone doing it, we can’t count on them to bail us out and cool the planet this way, because they’re really, really not.

Lucas Perry: Yeah. Then my last quip I was suffering over, while you were speaking, was if listeners or anyone else are sick and tired of the amount of disinformation that already exists, get ready for the conspiracy theories that are going to happen. Like chemtrail 5.0, when we have to start potentially using these mist generators to create clouds. There could be even like significant social disruption just by governments undertaking that kind of project.

Kelly Wanser: That's where I think generating information and talking about this in a way that's well grounded is helpful. That's why you don't hear me use the term geoengineering. It's not a particularly accurate term, and it sort of amplifies triggers. Climate intervention is the more accurate term. It helps ground the conversation in what we're talking about. The same thing when we explain that these are based on processes that are observed in nature, and some of them are already happening. So this isn't some big new sci-fi thing, you know, we're going to throw bombs at hurricanes or something. It's just getting the conversation better grounded.

I’ve had chemtrails people at my talks. I had a guy set up a tripod in the back and record it. He was giving out these little buttons that had an airplane with little trail coming out, and a strike through it. It was fantastic. I had a conversation with him. When you talk about it in this way, it’s kind of hard to argue with. The reality is that there is no secret government program to do these things, and there are definitely no mind-altering chemicals involved in any proposals.

Lucas Perry: Well, that’s what you would be saying, if there were mind-altering chemicals.

Kelly Wanser: Fair point. We tend to try to orient the dialogue at the sort of 90% across the political and thought spectrum.

Lucas Perry: Yeah. It’s not a super serious consideration, but something to be maddened about in the future.

Kelly Wanser: One of the other things I’ll say, with respect to the climate denial side of the spectrum. Because we work in the policy sphere in the United States, and so we have conversations across the political spectrum. In a strange way, coming out at the problem from this angle, where we talk about heat stress and we talk about these interventions, helps create a new insertion point for people who are shut down in the traditional kind of dialogue around climate change.

And so we’ve had some pretty good success actually talking to people on the right side of the spectrum, or people who are approaching the climate problem from a way that’s not super well-grounded in the science. We kind of start by talking about heat stress and what’s happening and the symptoms that we’re seeing and these kinds of approaches to it, and then walking them backwards into when you absolutely positively have to take down greenhouse gases.

It has interestingly, and kind of unexpectedly, created maybe another pathway for dealing with at least parts of those populations and policy people.

Lucas Perry: All right. I'd be interested in pivoting here into the international implications of this, and then also talking about this risk in the context of other global catastrophic and existential risks. The question here now is: what are the risks of international conflict around setting the global temperature via CO2 reduction and geo… Sorry. Geoengineering is the bad word. Climate intervention? There are some countries which may benefit from the earth being slightly warmer. You talked about how there were no winners or losers, but there are winners if it only changes a little bit. Like, if it gets a little bit warmer, then parts of Russia may be happier than they were otherwise.

The international community, as we gain more and more efficacy over the problem of climate change and our ability to mitigate it to whatever degree, will be impacting the weather and agriculture and livability of regions for countries all across the planet. So how do you view this international negotiation problem of mitigating climate change and setting the global temperature to something appropriate?

Kelly Wanser: I don’t tend to use the framing of “setting the global temperature.” I mean, we’re really, really far from having a fine-grained management capability for this. We tend to think of it more in the context of preventing certain kinds of disastrous events in the climate system. I think in that framing, where you say, “Well, we can develop this technology,” or where we have knobs and dials for creating favorable conditions in some places and not others, that would be potentially a problem. But it doesn’t necessarily look like that’s how it works.

So it’s possible that some places, like parts of Russia and parts of Canada, might for a period of time have more favorable climate conditions, but it’s not a static circumstance. The problem that you have is, well, the Arctic opens up, Siberia gets warmer, and for a couple of decades, that’s nicer. But that’s in the context of these abrupt change risks that we were talking about, where that situation is just a transitory state on the way to some worse states.

And so the question you’re asking me is, “Okay. Well, maybe we hold the system to where Russia is happier in this sort of different state.” I think that the massive challenge, which we don’t know if we can meet, is whether we can keep the system stable enough. The idea that you can stabilize the system in a way that’s different than now, but still prevents these cascading outcomes, is, I would say, not the highest probability scenario.

But I think there’s certainly validity in your question, which is this just makes everybody super nervous. It is the case that this is not a collective action capability. One of its features is that it does not require everyone in the world to agree, and that is a very unstable concerning state for a lot of people. It is true that its outcomes cannot be fully predicted.

And so there’s a high degree of likelihood that everyone would be better off or that the vast majority of the world would be better off, but there will be outcomes in some places that might be different. It’s more likely, rather than people electively turning the knobs and making things more favorable for themselves, just that 3 to 5% of the world thinks they’re worse off, while we’ve tried to keep the thing more or less stable.

I think behind your question is that even the dialogue around this is pretty unnerving and has the potential to promote instability and conflict. One of the things that we’ve seen in the past that’s been super helpful is scientific cooperation: lots of global cooperation in the evolution of the research and the science, so that everybody’s got information. Then we’re all dealing from an information base where people can be part of the discussion.

Because our strong hypothesis is that we’re kind of looking at the edge of a cliff, where we might not have much disagreement that we need to do something, but we all need information about this stuff. We have done some work, in SilverLining, at looking at this and how the international community has handled things better or worse when it comes to environmental threats like this. Our favorite model is the Montreal Protocol, which is both the scientific research and the structure that helped manage what many perceived to be an existential risk around the ozone layer.

That was a smaller, more focused case: you have a part of the system that, if it falls outside a certain parameter, lots and lots of people are going to die. We had some science to do to figure out where we could and couldn’t let that go. The world has managed that very well over the past couple of decades. We managed to walk back from the cliff and restore the ozone layer, and we’re still managing it now.

So we kind of see some similarities in this problem space of saying, “We’ve got to be really, really focused about what we can and can’t let the system do, and then get really strong science around what our options are.” The other thing I’ll say about the Montreal Protocol, in case people aren’t aware, is that it is the only environmental treaty that is signed by all countries in the world. There are lots of aspects of that that are a really good model to follow for something like this, I think.

Lucas Perry: Okay. So there’s the problem of runaway climate change, where the destruction of important ecosystems leads to tipping points, and that leads to tipping cascades. And without the reduction of CO2, we get worse and worse climate change, where everyone is worse off. In that context, there is increased global instability, so there’s going to be more conflict with the migrations of people and the increase of disease.

It’s just going to be a stressor on all of human civilization. But if that doesn’t happen, then there is this later 21st century potential concern of more sophisticated weather manipulation and weather engineering technologies, making the question of constructing and setting the weather in certain ways a more valid international geopolitical problem. But primarily the concern is obviously regular climate change, with the stressors and conflict that are induced by that.

Kelly Wanser: One thing I’ll say, just to clarify a little bit about weather modification and the expansion of that activity: I think that that’s already happening and likely to happen throughout the century, along with the escalation and the expansion of that as a problem. Not necessarily people using it as a weaponized idea. But as weather modification activities get larger, they have what are called teleconnection effects. They affect other places.

So I might be trying to cool the Great Barrier Reef, but I might affect weather in Bali. If I’m China and I’m trying to do weather modification over areas the size of Alaska, it’s pretty certain that I’m going to be affecting other places. And if it’s big enough, I could even affect global circulation. So I do think that that aspect is coming onto the radar now. That is an international decision-making problem, as you correctly say, because it’s actually, in some ways, almost a harder problem than the global one: we’ve got these sort of national efforts, where I might be engaged in my own jurisdiction, but I might be affecting people outside.

Kelly Wanser: I should also say, just so everybody’s clear, weather modification for the purpose of weapons is banned by international treaty. A treaty called ENMOD. It arose out of US weather modification efforts in the Vietnam war, where we were trying to use weather as a weapon and subsequently agreed not to do that.

Lucas Perry: So, wrapping up here on the geopolitics and political conflict around climate change: can you describe to what extent there is gridlock around the issue? I mean, different countries have different degrees of incentives; they have different policies and plans and philosophies. One might be more interested in focusing on industrializing to meet its own needs, and so it would deprioritize reducing CO2 emissions. So how do you view the game theory and the incentives of getting international coordination on climate change when, yeah, we’d all be better off if this didn’t happen, but not everyone is ready or willing to pay the same price?

Kelly Wanser: I mean, the main issue that we have now is that we have this externality, this externalized cost, where people aren’t paying for the damage that they’re doing. And so there’s a modest charge for that: my understanding is that a relatively modest price for carbon can set the incentives such that innovation moves faster and you reach the thresholds of economic viability for some of these non-carbon approaches faster. I come from Silicon Valley, so I think innovation is a big part of the equation.

Lucas Perry: You mean like solar and wind?

Kelly Wanser: Well, there’s solar and wind, which are the traditional techniques. And then there are emerging things, which could be hydrogen fuel cells. It could be fusion energy. It could be really important things in the category of waste management or agriculture. You know, it’s not just energy and cars, right? And we’re just not reaching the economic threshold where we’re driving innovation fast enough and reaching profitability fast enough for these systems to be viable.

So with a little turn of the dial in terms of pricing that in, you get all of that to go faster. And I’m a believer that moving that innovation faster means the price of these low carbon techniques will come down, and it will also accelerate offlining the greenhouse gas generating stuff. So I think that it’s not sensible that we’re not building in a robust mechanism for having that price incentive, and that price incentive will behave differently in the developed countries versus the emerging markets and the developing countries. And it might need to be managed differently in terms of the cost that they face.

But it’s really important in the developing countries that we develop policies that incentivize them not to build out greenhouse gas generating infrastructure, however we do that. Because a lot of them are at inflection points, right? Where they can start building power plants and building out infrastructure.

So we also need to look closely at aligning policies and incentives for them so that they just go ahead and go green, and it might be a little bit more expensive, which means that we have to help with that. But that would be a really smart thing for us to do. What we can’t do is expect developing countries, who mostly didn’t cause the problem, to also eat the impact in terms of not having electricity and some of the benefits that we have of things like running water and basic needs. I don’t actually think this is rocket science. You know, I’m not a total expert, but I think the mechanisms that are needed are not super complicated. Getting the political support for them is the problem.

Lucas Perry: A core solution here being increased funding into innovation, into the efficacy and efficiency of renewable energy resources, which don’t emit greenhouse gases.

Kelly Wanser: The R&D funding is key. In the U.S. we’ve actually been pretty good at that in a lot of parts of that spectrum, but you also have to have the mechanisms on the market side. Right now you have effectively fossil fuels being subsidized in terms of not being charged for the problem they’re creating. So basically we’ve got to embed the cost in the fossil fuel side of the damage that they’re doing, and that makes the market mechanisms work better for these emerging things. And the emerging things are going to start out being more expensive until they scale.

So we have this problem right now where we have some emerging things, they’re expensive. How do we get them to market? Fossil fuels are still cheaper. That’s the problem where it will eventually sort itself out, but we need it to sort itself out quickly. So we’ve got to try to get in there and fix that.

Lucas Perry: So, let’s talk about climate change in the context of existential risks and global catastrophic risks. The way that I use this language is to say that global catastrophic risks are ones which would kill some large fraction of human civilization, but wouldn’t lead to extinction. And existential risks lead to all humans dying, or all earth-originating intelligent life dying. The relevant distinction here for me is that existential risks cancel the entire future. There could be billions upon billions or trillions of experiential life years in the future if we don’t go extinct. And so that is this value being added into the equation of trying to understand which risks are the ones to pay attention to.

So you can react to this framing if you’d like; I’d be interested in what you think about it. And also just how you see the relative importance of climate change in the context of global catastrophic and existential risks, and how you see its interdependence with other issues. I’m mainly talking about climate change in the context of things like pandemics other than COVID-19, which may kill large fractions of the population; synthetic bio-risk, where a sufficiently dangerous engineered pandemic could possibly be existential; accidental nuclear war; or misaligned artificial superintelligence that could lead to the human species’ extinction. So how do you think about climate change in the context of all of these very large risks?

Kelly Wanser: Well, I appreciate the question. Many of the risks that you described have the characteristic that they are hard to quantify and hard to predict. And some of them are sort of big black swan events, like even more deadly pandemics or artificially engineered things. So climate change, I think, shares that characteristic that it’s hard to predict. I think that with climate change, when you dig into it, you can see that there are analytical deficiencies that make it very likely that we’re underestimating the risk.

In the spectrum between sort of catastrophic and existential, we have not done the work to dig into the areas in which we are not currently adequately representing the risk. So I would say that there’s a definite possibility that it’s existential, and that that possibility is currently under-analyzed and possibly underestimated. I think there are two ways that it’s existential. I’ll say I’m not an expert in survivability in outlier conditions, but if we just look at two phenomena that are part of non-zero probability projections for climate: one is this example that I showed you, where warming goes beyond five or six degrees C. The jury’s pretty far out on what that means for humans and what it means about all the conditions of the land and the sea and everything else.

So the question is, how high does temperature go? And what does that mean in terms of the population livability curve? Part of what’s involved in how high the temperature goes is the biological species and their relationship to the physics and chemistry of the planet. A concern that I heard from Pete Worden at NASA Ames, and that I had never heard before talking to him, is that at some point in the collapse of biological life, particularly in the ocean, you have a change in the chemical interactions that produce the atmosphere that we’re familiar with.

So for example, the biological life at the surface of the ocean, the phytoplankton and other organisms, generate a lot of the oxygen that we breathe in the air, same with the forests. And so the question is whether you get collapse in the biological systems that generate breathable air. Now, if you watch sci-fi, you could say, “Well, we can engineer that.” And that starts to look more like engineering ourselves to live on Mars, which I’m happy to talk about, and why I don’t think that’s the solution. So I think that it’s certainly reasonable for people to say, “Well, could that really happen?” There is some non-zero probability that it could, which we don’t understand very well and have been reluctant to explore.

And so I think that my challenge back to people about this being an existential risk is that the possibility that it’s an existential risk, and on a nearer term than you think, may be higher than we think. And the gaps in our analysis of that are concerning.

Lucas Perry: Yeah. I mean, the question is like, do you know everything you need to know about all of the complex systems on planet Earth that help maintain the small bandwidth of conditions in which human beings can exist? And the answer is, no, I don’t. And then the question is, how likely is it that climate change will perturb those systems in such a way that it would lead to an existential catastrophe? Well, it’s non-zero, but besides that, I don’t know.

Kelly Wanser: And one thing to look at, which I think everyone interested in this should look at, is the observations of what’s happening in the system now. What’s happening in the system now are collapses in some biological life, and changes in some of the systems, that are indicative that this risk might be higher than we think. And so if you look at things like phytoplankton: I think there was research coming out estimating that we may have already lost like 40% of the phytoplankton on the surface of the ocean. So much so that the documentary filmmaker who made Chasing Coral was thinking about making a documentary about this.

Lucas Perry: About phytoplankton?

Kelly Wanser: Yeah. And phytoplankton, I think of it as the API layer between the ocean and the atmosphere; it’s the translation layer. It’s really important. And then I go to my friends who are climate modelers, and they’re like, “Yeah, phytoplankton isn’t well-represented in the climate models; there are over 500 species of phytoplankton and we have three of them in the climate models.” And so you look at that and you say, “Okay, well, there’s a risk that we don’t understand very well.” So, from my perspective, we have a non-zero risk in this category. I’d be happy if I was overstating it, but it may not be.

Lucas Perry: Okay. So that’s all new and interesting information. In the context of the existential risk community that I’m most familiar with, the way in which climate change is said to potentially lead to existential risk is by destabilizing global human systems, which would lead to the actualization of other existential risks. Like if you care about nuclear war or synthetic bio or pandemics or getting AI right, that’s all a lot harder to do and control in the context of a much hotter earth. And so the other question I had for you, speaking of hotter earths: has the earth ever been five degrees C hotter than it is now while mammals have been on it?

Kelly Wanser: So it hasn’t been that hot while humans have been on it, but I’m not expert enough to know, as far as the mammal picture; I’m going to guess probably yes. I’ll touch on the first point you were making too, about the societal cascade, but on this question, the problem with the warming isn’t just whether or not the earth has ever been this warm, it’s the pace of warming. Look at how far and how fast we’re pushing the system: the earth goes through its fluctuations of temperature, and you can see that in the past 2,000 years they’ve been small fluctuations, while further back they’ve been bigger. But those happened over very long periods of time, like hundreds of thousands of years, which means that all of the little organisms and all the big structures are adapting in this very slow way.

And in this situation where we’re pushing it this fast, the natural adaptation is very, very low. You know, you have species of fish and such that can move to different places, but it’s happening so fast in Earth system terms that there’s no adaptation happening. But to your other point, about climate change setting off existential threats to society in other ways, I think that’s very true. Climate change is likely to heighten the risk of nuclear conflict on a couple of different vectors. And it’s also likely to heighten the risk that we throw biological solutions out there whose results we can’t predict. So I think one of the facets of climate change that might be a little bit different than runaway AI is that it applies stress across every human and every natural system.

Lucas Perry: So this last point here then on climate change, contextualized in this field of understanding around global catastrophic and existential risks: FLI views itself as being a part of the effective altruism community, many of the listeners are effective altruists, and 80,000 Hours has come up with this simple framework for thinking about what kinds of projects and endeavors you should take on. The framework is just thinking about tractability, scope, and neglectedness.

So tractability is just how much you can do to actually affect the thing. Scope is how big of a problem it is and how many people it affects, and neglectedness is how many people are working on it. So you want to work on things that are tractable, that have a large scope, and that are neglected. So I think that there’s a view or a sense about climate change that… I mean, from our conversation, it seems very tractable.

If we can get human civilization to coordinate on this, it’s something that we can do a lot about. I guess it’s another question how tractable it is to actually get countries and corporations to coordinate on this. But the scope is global and would at the very least affect our generation and the next few generations. Yet it seems to not be neglected relative to other risks. One could say that it’s neglected relative to how much attention it deserves. So I’m curious to know how you would react to this tractability, scope, and neglectedness framework being applied to climate change, in the context of other global catastrophic and existential risks.

Kelly Wanser: Firstly, I’m a big fan of the framework. I was familiar with it before, and it’s not dissimilar to the approach that we took in founding SilverLining. Whether this issue fits into that framework depends on whether you put climate change all in one bucket and treat it as not neglected, or you say that, within the portfolio of responses to climate change, we have a significant gap in our ability to mitigate heat stress while we work on other parts of the portfolio, and that part is entirely neglected.

So I think for us it’s about having to dissect the climate change problem. We have this collective action problem, which is a hard problem to solve: moving industrial and other systems away from greenhouse gas emissions. And we have the system instability problem, which requires that we somehow alleviate the heat stress before the system breaks down too far.

I would say, in that context, if your community looks at climate change as a relatively slowly unfolding problem which has a lot of attention, then it wouldn’t fit. If you look at climate change as having some meaningful risk of catastrophic-to-existential unfolding in the next 30 to 50 years, and as not having response measures to try to stabilize the system, then it fits really nicely. It’s so underserviced that I represent the only NGO in the world that advocates for research in this area. So it depends on how your community thinks about it, but we look at those as quite different problems, in a way.

Lucas Perry: So for the problem of, for example, adaptation research, which has historically been stigmatized, we can apply this framework and see that you might get a high return on impact if you focus on supporting and doing research in climate intervention technologies and adaptation technologies?

Kelly Wanser: That’s right. What’s interesting to me and the people that I work with on this problem is that these climate intervention technologies have the potential to have very high leverage on the problem in the short term. And so from a philanthropic or impact perspective, oftentimes I’m engaged with people who are looking for leverage: where can I really make a difference in terms of supporting research or policy? And I’m in this because literally I came from tech into climate, looking at what is the most underserviced, highest leverage part of the space. And I landed here. And so I think that, on your criteria, it’s underserviced and potentially high leverage, so this fits pretty well. It’s not the same as addressing the longer term problem of greenhouse gases, but it has very high leverage on the stability risk in the next 50 years or so.

Lucas Perry: So if that’s compelling to some number of listeners, what is your recommendation for action and participation for such persons? If I’m taking a portfolio approach to my impact or altruism, and I want to put some of it into this, how do you recommend I do that?

Kelly Wanser: So it’s interesting timing, because we’re just a few weeks from launching something called the Safe Climate Research Initiative, where we’re funding a portfolio of research programs. What we do at SilverLining is try to help drive philanthropic funding for these high leverage, nascent research efforts that are going on, and then try to help drive government funding and effective policy so that we can get resources moving in the big climate research system. So for people looking for that: when we started talking about the Safe Climate Research Initiative, we were agnostic as to whether you want to give money to SilverLining for the fund, or you want to donate to these programs directly.

So we interface with most of the mature-ish programs in the United States and quite a few around the world, mature and emerging. And we can direct people based on their interests, whether by program or by parts of the world where there are opportunities for funding really high caliber things: Latin America, the UK, India.

So we’re happy to say, “You know, you can donate to our fund, and we’re just moving through, getting seed funding to these programs as we can, or we can help connect you with programs based on your interests and the different parts of the world that you’re in: technology versus science versus impacts.” So that’s one way. For some philanthropists who are aware of the leverage on government R&D and government policy: SilverLining’s been very effective in starting to kind of turn the dial on government funding. And we have some pretty big aspirations, not only to get funding directly into assessing these interventions, but also into expanding our capacity to do climate prediction quickly. So that’s another way, where you can fund advocacy, and we would appreciate it.

Lucas Perry: Accepting donations?

Kelly Wanser: We’re definitely accepting donations, happy to connect people or be a conduit for funding research directly.

Lucas Perry: All right. So let’s end on a fun one here then. We were talking a little bit before we started about the “Visit Planet Earth” picture behind you, and how you use it as a message against the colonization of Mars. So why don’t you think Mars is a solution to all of the human problems on earth?

Kelly Wanser: Well, let’s just start by saying, I grew up on Star Trek, and so the colonization of Mars and the rest of the universe is appealing to me. But as far as solutions to climate change or an escape from it, just to level set, because I’ve had serious conversations with people about this: I lived for 12 years in Silicon Valley and spent a lot of time with the Long Now community. And people have a passion for this vision of living on another planet, and the idea that we might be able to move off of this one if it becomes dire. The reality is, and it goes back to education I got from very serious scientists, that the problem with living on other planets is not an engineering problem or a physics problem. It’s a biology problem.

Our bodies are finely tuned to the conditions of Earth: radiation, gravity, the air, the colors. And so we degrade pretty quickly when we go off planet. That’s a harder problem to solve than building a spaceship or a bubble, and it’s not a problem that gets solved right away. We can see it from the condition of the astronauts who come back after a few years in orbit. And so the kinds of problems that we would need to solve to actually have quality-of-life living conditions on Mars or anywhere else are going to take a while. Longer than what we think is the 30 to 50 year instability problem that we have here on earth.

We are so finely tuned to the conditions of earth, like the Goldilocks sort of zone that we’re in, that it’s a really, really hard thing to replicate anywhere else. And so it’s really not very rational. It’s actually a much easier problem to solve to try to repair earth than it is to try to create the conditions of earth somewhere else.

Lucas Perry: Yeah. So I mean, these things might not be mutually exclusive, right? It really seems to be a problem of resource allocation. Like it’s not one or the other, it’s like, how much are we going to put into each-

Kelly Wanser: It’s less a problem of resource allocation than of time horizon. I think that the kinds of scientific and technical problems you have to solve to meaningfully have people live on Mars are beyond a 50 year time horizon, and our concern is that the climate instability problem is inside a 50 year time horizon. That’s the main issue. Over the long haul, there are advanced technologies, and probably bio-engineering things we need to do, and maybe engineering of planets, for that to work. And so over the next 100 or 200 years, that would be really cool, and I’d be in favor of it also. But this is the spaceship that we have. All of the people are on it, and failure is not an option.

Lucas Perry: All right. That’s an excellent place to end on. I think both you and I share the science fiction geek gene about getting to Mars; we’ll potentially have to delay that until we figure out climate change, but hopefully we get there. So, yeah. Thanks so much for coming on. This has been really interesting. I feel like I learned a lot of new things. There’s a lot here that probably most people, even those fairly familiar with climate science, aren’t familiar with. So I just want to offer you a final little space here if you have any final remarks or anything you’d like to say that you feel is unresolved or unsaid, just any last words for listeners?

Kelly Wanser: Well, for those people who’ve made it through the entire podcast, thanks for listening and being so engaged and interested in the topic. I think that apart from the things we talked about previously, it’s heartening and important that people from other fields are paying attention to the climate problem and becoming engaged, particularly people from the technology sector and certain parts of industry that bring a way of thinking about problems that’s useful. I think there are probably lots of people in your community who may be turning their attention to this, or turning their attention to this more fully in a new way, and may have perspectives and ideas and resources that are useful to bring to it.

The field has been quite academic, more academic than many other fields of endeavor. And so I think what people in Silicon Valley think about, in terms of how you might transform a sector or a problem quickly, presents an opportunity. And so I hope that people are inspired to become involved, including in the parts of the space that are maybe more controversial or easier for people like us to think about.

Lucas Perry: All right. And so if people want to follow or find you or check out SilverLining, where are the best places to get more information or see what you guys are up to?

Kelly Wanser: So I’m on LinkedIn and Twitter as @kellywanser and our website is silverlining.ngo, no S at the end. And the majority of the information about what we do is there. And feel free to reach out to me on LinkedIn or on Twitter or contact Lucas who can contact me.

Lucas Perry: Yeah, all right. Wonderful. Thanks so much, Kelly.

Kelly Wanser: All right. Thanks very much, Lucas. I appreciate it. Thanks for taking so much time.

Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)

Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam is the author of euphoric sound landscapes inspired by the writings of David Pearce, largely exemplified in his latest album — aptly named “Utility.” Sam’s artistic excellence, motivated by blissful visions of the future, and David’s philosophical and technological writings on the potential for the biological domestication of heaven are a perfect match made for the fusion of artistic, moral, and intellectual excellence. This podcast explores what significance Sam found in David’s work, how it informed his music production, and Sam and David’s optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content.

Topics discussed in this episode include:

  • The relationship between Sam’s music and David’s writing
  • Existential hope
  • Ideas from the Hedonistic Imperative
  • Sam’s albums
  • The future of art and music

Where to follow Sam Barker :

Soundcloud
Twitter
Instagram
Website
Bandcamp

Where to follow Sam’s label, Ostgut Ton: 

Soundcloud
Facebook
Twitter
Instagram
Bandcamp

 

Timestamps: 

0:00 Intro

5:40 The inspiration around Sam’s music

17:38 Barker – Maximum Utility

20:03 David and Sam on their work

23:45 Do any of the tracks evoke specific visions or hopes?

24:40 Barker – Die-Hards Of The Darwinian Order

28:15 Barker – Paradise Engineering

31:20 Barker – Hedonic Treadmill

33:05 The future and evolution of art

54:03 David on how good the future can be

58:36 Guest mix by Barker

 

Tracklist:

Delta Rain Dance – 1

John Beltran – A Different Dream

Rrose – Horizon

Alexandroid – lvpt3

Datassette – Drizzle Fort

Conrad Sprenger – Opening

JakoJako – Wavetable #1

Barker & David Goldberg – #3

Barker & Baumecker – Organik (Intro)

Anthony Linell – Fractal Vision

Ametsub – Skydroppin’

Ladyfish\Mewark – Comfortable

JakoJako & Barker – [unreleased]

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

You can listen to the podcast above or read the transcript below. 

David Pearce: I would encourage people to conjure up their vision of paradise, and the future can potentially be like that, only much, much better. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today we have a particularly unique episode with Berlin-based DJ and producer Sam Barker as well as with David Pearce, and right now, you’re listening to Sam’s track Paradise Engineering on his album Utility. We focus centrally on the FLI Podcast on existential risk. The other side of existential risk is existential hope. This hope reflects all of our dreams, aspirations, and wishes for a better future. For me, this means a future where we’re able to create material abundance, eliminate global poverty, end factory farming and address animal suffering, evolve our social and political systems to bring greater wellbeing to everyone, and more optimistically, create powerful aligned artificial intelligence that can bring about the end of involuntary suffering, and help us to idealize the quality of our minds and ethics. If we don’t go extinct, we have plenty of time to figure these things out, and that brings me a lot of joy and optimism. Whatever future seems most appealing to you, these visions are a key component to why mitigating existential risk is so important. So, in the context of COVID-19, we’d like to revitalize existential hope, and this podcast is aimed at doing that.

As a part of this podcast, Sam was kind enough to create a guest mix for us. You can find that after the interview portion of this podcast and can find where it starts by checking the timestamps. I’ll also release the mix separately a few days after this podcast goes live. Some of my favorite tracks of Sam’s not highlighted in this podcast are Look How Hard I’ve Tried, and Neuron Collider. If you enjoy Sam’s work and music featured here, you can support or follow him at the links in the description. He has a Bandcamp shop where you can purchase his albums. I grabbed a vinyl copy of his album Debiasing from there. 

As for a little bit of background on this podcast, Sam Barker, who produces electronic music under the name Barker, has albums with titles such as “Debiasing” and “Utility.” I was recommended to listen to these, and discovered his album “Utility” is centrally inspired by David Pearce’s work, specifically The Hedonistic Imperative. Utility has track titles like Paradise Engineering, Experience Machines, Gradients Of Bliss, Hedonic Treadmill, and Wireheading. So, being a big fan of Sam’s music production and David’s philosophy and writing, I wanted to bring them together to explore the theme of existential hope, Sam’s inspiration for his albums, and how David fits into all of it. 

Many of you will already be familiar with David Pearce. He is a friend of this podcast and a multiple time guest. David is a co-founder of the World Transhumanist Association, rebranded Humanity+, and is a prominent figure within the transhumanism movement in general. You might know him from his work on the Hedonistic Imperative, a book which explores our moral obligation to work towards the abolition of suffering in all sentient life through technological intervention.

Finally, I want to highlight the 80,000 Hours Podcast with Rob Wiblin. If you like the content on this show, I think you’ll really enjoy the topics and guests on Rob’s podcast. His is also motivated by and contextualized in an effective altruism framework and covers a broad range of topics related to the world’s most pressing issues and what we can do about them. If that sounds of interest to you, I suggest checking out episode #71 with Ben Todd on the ideas of 80,000 Hours, and episode #72 with Toby Ord on existential risk. 

And with that, here’s my conversation with Dave and Sam, as well as Sam’s guest mix.

Lucas Perry: For this first section, I’m basically interested in probing the releases that you already have done, Sam, and exploring them and your inspiration for the track titles and the soundscapes that you’ve produced. Some of the background and context for this is that much of this seems to be inspired by and related to David’s work, in particular the Hedonistic Imperative. I’m at first curious to know, Sam, how did you encounter David’s work, and what does it mean for you?

Sam Barker: David’s work was sort of arriving in the middle of a sort of a series of realizations, and kind of coming from a starting point of being quite disillusioned with music, and a little bit disenchanted with the vagueness, and the terminology, and the imprecision of the whole thing. I think part of me has always wanted to be some kind of scientist, but I’ve ended up at perhaps not the opposite end, but quite far away from it.

Lucas Perry: Could you explain what you mean by vagueness and imprecision?

Sam Barker: I suppose the classical idea of what making music is about is a lot to do with the sort of western idea of individualism and about self expression. I don’t know. There’s this romantic idea of artists having these frenzied creative bursts that give birth to these wonderful things, that it’s some kind of struggle. I just was feeling super disillusioned with all of that. Around that time, 2014 or 15, I was also reading a lot about social media, reading about behavioral science, trying to figure out what was going on in this arena and how people are being pushed in different directions by this algorithmic system of information distribution. That kind of got me into this sort of behavioral science side of things, like the addictive part of the variable-ratio reward schedule with likes. It’s a free dopamine dispenser kind of thing. This was kind of getting me into reading about behavioral science and cognitive science. It was giving me a lot of clarity, but not much more sort of inspiration. It was basically like music.

Dance music especially is a sort of complex behavioral science. You do this and people do that. It’s all deeply ingrained. I sort of imagine the DJ as a sort of Skinner box operator pulling puppet strings and making people behave in different ways. Music producers are kind of designing clever programs using punishment and reward or suspense and release, and controlling people’s behavior. The whole thing felt super pushy and not a very inspiring conclusion. Looking at the problem from a cognitive science point of view is just the framework that helped me to understand what the problem was in the first place, so this kind of problem of being manipulative. Behavioral science is kind of saying what we can make people do. Cognitive psychology is sort of figuring out why people do that. That was my entry point into cognitive psychology, and that was kind of the basis for Debiasing.

There’s always been sort of a parallel for me between what I make and my state of mind. When I’m in a more positive state, I tend to make things I’m happier with, and so on. Getting to the bottom of what the tricks were, I suppose, with dance music. I kind of understood implicitly, but I just wanted to figure out why things worked. I sort of came to the conclusion it was to do with a collection of biases we have, like the confirmation bias, and the illusion of truth effect, and the mere exposure effect. These things are like the guardians of four four supremacy. Dance music can be pretty repetitive, and we describe it sometimes in really aggressive terminology. It’s a psychological kind of interaction.

Cognitive psychology was leading me to Kaplan’s law of the instrument. The law of the instrument says that if you give a small boy a hammer, he’ll find that everything he encounters requires pounding. I thought that was a good metaphor. The idea is that we get so used to using tools in a certain way that we lose sight of what it is we’re trying to do. We act in the way that the tool instructs us to do. I thought, what if you take away the hammer? That became a metaphor for me, in a sense, that David clarified in terms of pain reduction. We sort of put these painful elements into music in a way to give this kind of hedonic contrast, but we don’t really consider that that might not be necessary. What happens when we abolish these sort of negative elements? Are the results somehow released from this process? That was sort of the point, up until discovering the Hedonistic Imperative.

I think what I was needing at the time was a sort of framework, so I had the idea that music was decision making. To improve the results, you have to ask better questions, make better decisions. You can make some progress looking at the mechanics of that from a psychology point of view. What I was sort of lacking was a purpose to frame my decisions around. I sort of had the idea that music was a sort of a valence carrier, if you like, and that it could be tooled towards a sort of a greater purpose than just making people dance, which was for Debiasing the goal, really. It was to make people dance, but don’t use the sort of deeply ingrained cues that people are used to, and see if that works.

What was interesting was how broadly it was accepted, this first EP. There was all kinds of DJs playing it in techno, ambient, electro, all sorts of different styles. It reached a lot of people. It was as if taking out the most functional element made it more functional and more broadly appealing. That was the entry point to utilitarianism. There was sort of an accidentally utilitarian act, in a way, to sort of try and maximize the pleasure and minimize the pain. I suppose after landing in utilitarianism and searching for some kind of a framework for a sense of purpose in my work, the Hedonistic Imperative was probably the most radical, optimistic take on the system. Firstly, it put me in a sort of mindset where it granted permission to explore sort of utopian ideals, because I think the idea of pleasure is a little bit frowned upon in the art world. I think the art world turns its nose up at such direct cause and effect. The idea that producers could sort of be paradise engineers of sorts, so the precursors to paradise engineers, that we almost certainly would have a role in a kind of sensory utopia of the future.

There was this kind of permission granted. You can be optimistic. You can enter into your work with good intentions. It’s okay to see music as a tool to increase overall wellbeing, in a way. That was kind of the guiding idea for my work in the studio. I’m trying, these days, to put more things into the system to make decisions in a more conscious way, at least where it’s appropriate to. This sort of notion of reducing pain and increasing pleasure was the sort of question I would ask at any stage of decision making. Did this thing that I did serve those ends? If not, take a step back and try a different approach.

There’s something else to be said about the way you sort of explore this utopian world without really being bogged down. You handle the objections in such a confident way. I called it a zero gravity world of ideas. I wanted to bring that zero gravity feeling to my work, and to see that technology can solve any problem in this sphere. Anything’s possible. All the obstacles are just imagined, because we fabricate these worlds ourselves. These are things that were really instructive for me, as an artist.

Lucas Perry: That’s quite an interesting journey. From the lens of understanding cognitive psychology and human biases, was it that you were seeing those biases in dance music itself? If so, what were those biases in particular?

Sam Barker: On both sides, on the way it’s produced and in the way it’s received. There’s sort of an unspoken acceptance. You’re playing a set and you take a kick drum out. That signals to people to perhaps be alert. The lighting engineer, they’ll maybe raise the lights a little bit, and everybody knows that the music is going into sort of a breakdown, which is going to end in some sort of climax. Then, at that point, the kick drum comes back in. We all know this pattern. It’s really difficult to understand why that works without referring to things like cognitive psychology or behavioral science.

Lucas Perry: What does the act of debiasing the reception and production of music look like and do to the music and its reception?

Sam Barker: The first part that I could control was what I put into it. The experiment was whether a debiased piece of dance music could perform the same functionality, or whether it really relies on these deeply ingrained cues. Without wanting to sort of pat myself on the back, it kind of succeeded in its purpose. It was sort of proof that this was a worthy concept.

Lucas Perry: You used the phrase, earlier, four four. For people who are not into dance music, that just means a kick on each beat, which is ubiquitous in much of house and techno music. You’ve removed that, for example, in your album Debiasing. What are other things that you changed from your end, in the production of Debiasing, to debias the music from normal dance music structure?

Sam Barker: It was informing the structure of what I was doing so much that I wasn’t so much on a grid where you have predictable things happening. It’s a very highly formulaic and structured thing, and that all keys into the expectation and this confirmation bias that people, I think, get some kind of kick from when the predictable happens. They say, yep. There you go. I knew that was going to happen. That’s a little dopamine rush, but I think it’s sort of a cheap trick. I guess I was trying to get the tricks out of it, in a way, so figuring out what they were, and trying to reduce or eliminate them was the process for Debiasing.

Lucas Perry: That’s quite interesting and meaningful, I think. Let’s just take trap music. I know exactly how trap music is going to go. It has this buildup and drop structure. It’s basically universal across all dance music. Progressive house in the 2010s was also exactly like this. What else? Dubstep, of course, same exact structure. Everything is totally predictable. I feel like I know exactly what’s going to happen, having listened to electronic music for over a decade.

Sam Barker: It works, I think. It’s a tried and tested formula, and it does the job, but when you’re trying to imagine states beyond just getting a little kick from knowing what was going to happen, that’s the place that I was trying to get to, really.

Lucas Perry: After the release of Debiasing in 2018, which was a successful attempt at serving this goal and mission, you then discovered the Hedonistic Imperative by David Pearce, and kind of leaned into consequentialism, it seems. Then, in 2019, you had two releases. You had BARKER 001 and you had Utility. Now, Utility is the album which most explicitly adopts David Pearce’s work, specifically the Hedonistic Imperative. You mentioned electronic dance producers and artists in general can sort of be the first wave of, or can perhaps assist in, paradise engineering, insofar as that will be possible in the near to short term future, given advancements in technology. Is that sort of the explicit motivation and framing around those two releases of BARKER 001 and Utility?

Sam Barker: BARKER 001 was a few tracks that were taken out of the running for the album, because they didn’t sort of fit the concept. Really, I knew the last track was kind of alluding to the album. Otherwise, it was perhaps not sort of thematically linked. Hopefully, if people are interested in looking more into what’s behind the music, you can lead people into topics with the concept. With Utility, I didn’t want to just keep exploring cognitive biases and unpicking dance music structurally. It’s sort of a paradox, because I guess the Hedonistic Imperative argues that pleasure can exist without purpose, but I really was striving for some kind of purpose with the pleasure that I was getting from music. That sort of emerged from reading the Hedonistic Imperative, really, that you can apply music to this problem of raising the general level of happiness up a notch. I did sort of worry that by trying to please, it wouldn’t work, that it would be something that’s too sickly sweet. I mean, I’m pretty turned off by pop music, and there was this sort of risk that it would end up somewhere like that. That’s it, really. Just looking for a higher purpose with my work in music.

Lucas Perry: David, do you have any reactions?

David Pearce: Well, when I encountered Utility, yes, I was thrilled. As you know, essentially I’m a writer writing in quite heavy sub-academic prose. Sam’s work, I felt, helps give people a glimpse of our glorious future, paradise engineering. As you know, the reviews were extremely favorable. I’m not an expert critic or anything like that. I was just essentially happy and thrilled at the thought. It deserves to be mainstream. It’s really difficult, I think, to actually evoke the glorious future we are talking about. I mean, I can write prose, but in some sense music can evoke paradise better, at least for many people, than prose.

Sam Barker: I think it’s something you can appreciate without cognitive effort, whereas your prose, at least, you need to be able to read. It’s a bit more of a passive way of receiving music, which I think is an intrinsic advantage it has. That’s actually really a relief to hear, because there was just a small fear in my mind that I was grabbing these concepts with clumsy hands and discrediting them.

David Pearce: Not at all.

Sam Barker: It all came from a place of sincere appreciation for this sort of world that you are trying to entice people with. When I’ve tried to put into words what it was that was so inspiring, I think it’s that there was also a sort of very practical, kind of making lots of notes. I’ve got lots of amazing one liners. Will we ever leave the biological dark ages or the biological domestication of heaven? There was just so many things that conjure up such vividly, heavenly sensations. It sort of brings me back to the fuzziness of art and inspiration, but I hope I’ve tried to adopt the same spirit of optimism that you approached the Hedonistic Imperative with. I actually don’t know what state of mind your approach was at the time, even, but it must’ve come in a bout of extreme hopefulness.

David Pearce: Yes, actually. I started taking Selegiline, and six weeks later I wrote the Hedonistic Imperative. It just gave me just enough optimism to embark on it. I mean, I have, fundamentally, a very dark view of Darwinian life, but for mainly technical reasons I think the future is going to be superhumanly glorious. How do you evoke this for our dark, Darwinian minds?

Sam Barker: Yeah. How do we get people excited about it? I think you did a great job.

David Pearce: It deserves to go mainstream, really, the core idea. I mean, forget the details, the neurobabble of genetics. Yeah, of course it’s incredibly important, but this vision of just how sublimely wonderful life could be. How do we achieve full spectrum, multimedia dominance? I mean, I can write it.

Lucas Perry: Sounds like you guys need to team up.

Sam Barker: It’s very primitive. I’m excited where it could head, definitely.

Lucas Perry: All right. I really like this idea about music showing how good the future can be. I think that many of the ways that people can understand how good the future can be come from the best experiences they’ve had in their life. Now, that’s just a physical state of your brain. If something isn’t physically impossible, then the only barrier to achieving and realizing that thing is knowledge. Take all the best experiences in your life: if we could just understand computation, and biology in the brain, and consciousness well enough, it doesn’t seem like there are any real limits to how good and beautiful things can get. Do any of the tracks that you’ve done evoke very specific visions, dreams, desires, or hopes?

Sam Barker: I would be sort of hesitant to make direct links between tracks and particular mindsets, because when I’m sitting down to make music, I’m not really thinking about any one particular thing. Rather, I’m trying to look past things and look more about what sort of mood I want to put into the work. Any of the tracks on the record, perhaps, could’ve been called paradise engineering, is what I’m saying. The names from the tracks are sort of a collection of the ideas that were feeding the overall process. The application of the names was kind of retroactive connection making. That’s probably a disappointment to some people, but the meaning of all of the track names is in the whole of the record. I think the last track on the record, Die-Hards of the Darwinian Order, that was a phrase that you used, David, to describe people clinging to the need for pain in life to experience pleasure.

David Pearce: Yes.

Sam Barker: That track was not made for the record. It was made some time ago, and it was just a technical experiment to see if I could kind of recreate a realistic sounding band with my synthesizers. The label manager, Alex, was really keen to have this on the record. I was kind of like, well, it doesn’t fit conceptually. It has a kick drum. It’s this kind of somber mood, and the rest of the record is really uplifting, or trying to be. Alex was saying he liked the contrast to the positivity of the rest of the album. He felt like it needed this dose of realism or something.

David Pearce: That makes sense, yes.

Sam Barker: I sort of conceded in the end. We called it Die-Hards of the Darwinian Order, because that was what I felt like he was.

David Pearce: Have you told him this?

Sam Barker: I told him. He definitely took the criticism. As I said, it’s the actual joining up of these ideas that I make notes on. The tracks themselves, in the end, had to be done in a creative way sort of retroactively. That doesn’t mean to say that all of these concepts were not crucial to the process of making the record. When you’re starting a project, you call it something like new track, happy two, mix one, or something. Then, eventually, the sort of meaning emerges from the end result, in a way.

Lucas Perry: It’s just like what I’ve heard from authors of best selling books. They say you have no idea what the book is going to be called until the end.

Sam Barker: Right, yeah.

David Pearce: One of the reasons I think it’s so important to stress life based on gradients of bliss ratcheting up hedonic set points is that, instead of me or anyone else trying to impose their distinctive vision of paradise, it just allows, with complications, everyone to keep most of their existing values and preferences, but just ratchets up hedonic tone and hedonic range. I mean, this is the problem with so many traditional paradises. They involve the imposition of someone else’s values and preferences on you. I’m being overly cerebral about it now, but I think my favorite track on the album is the first. I would encourage people to conjure up their vision of paradise and the future can potentially be like that and be much, much better.

Sam Barker: This, I think, relates to the sort of pushiness that I was feeling at odds with. The music does take people to these kind of euphoric states, sometimes chemically underwritten, but it’s being done in a dogmatic and singular way. There’s not much room for personal interpretation. It’s sort of everybody experiencing one thing, and I think there’s something in these kinds of communal experiences that I’m hopefully going to understand one day.

Lucas Perry: All right. I think some of my favorite tracks are Look How Hard I’ve Tried on Debiasing. I also really like Maximum Utility and Neuron Collider. I mean, all of it is quite good and palatable.

Sam Barker: Thank you. The ones that you said are some of my personal favorites. It’s also funny how some of the least favorite tracks, or not least favorite, but the ones that I felt like didn’t really do what they set out to do, were other people’s favorites. Hedonic Treadmill, for example. I’d put that on the pile of didn’t work, but people are always playing it, too, finding things in it that I didn’t intentionally put there. Really, that track felt to me like stuck on the hedonic treadmill, and not sort of managing to push the speed up, or push the level up. This is, I suppose, the problem with art, that there isn’t a universal pleasure sense, that there isn’t a one size fits all way to these higher states.

David Pearce: You correctly called it the hedonic treadmill. Some people say the hedonistic treadmill. Even one professor I know calls it the hedonistic treadmill.

Lucas Perry: I want to get on that thing.

David Pearce: I wouldn’t mind spending all day on a hedonistic treadmill.

Sam Barker: That’s my kind of exercise, for sure.

Lucas Perry: All right, so let’s pivot here into section two of our conversation, then. For this section, I’d just like to focus on the future, in particular, and exploring the state of dance music culture, how it should evolve, and how science and technology, along with art and music, can evolve into the future. This question comes from you in particular, Sam, addressed to Dave. I think you were curious about his experiences in life and if he’s ever lost himself on a dance floor or has any special music or records that put him in a state of bliss?

Sam Barker: Very curious.

David Pearce: My musical autobiography. Well, some of my earliest memories are of a wind-up gramophone. I’m showing my age here. Apparently, as a five year old child, I used to sing on the buses. Daisy, Daisy, give me your answer, do. I’m half crazy, all for the love of you. Then, graduating via military brass band music, apparently as a small child I came to enjoy pop music. Essentially, for me, it’s very, very unanswerable about music. I like to use it as a backdrop, you know. At its best, there’s this tingle up one’s spine one gets, but it doesn’t happen very often. The only thing I would say is that it’s really important for me that music should be happy. I know some people get into sad music. I know it’s complicated. Music, for me, has to elicit something that’s purely good.

Sam Barker: I definitely have no problem with exploring the sort of darker side of human nature, but I also have come to the realization that there are better ways to explore the dark sides than aesthetic stimulation, perhaps through words and ideas. Aesthetics is really at its optimum function when it’s working towards the more positive goals of happiness and joy, which are sort of swear words in the art world.

Lucas Perry: Dave, you’re not trying to hide your rave warehouse days from us, are you?

David Pearce: Well, yeah. Let’s just say I might not have been entirely drug naïve with friends. Let’s just say I was high on life or something, but it’s a long time since I have explored that scene. Part of me still misses it. When it comes to anything in the art world, I just think visual art should be beautiful. Which, I mean, not all serious artists would agree with.

Sam Barker: I think the whole notion is just people find it repulsive somehow, especially in the art world. Somebody that painted a picture and then the description reads I just wanted it to be pretty is getting thrown out the gallery. What greater purpose could it really take on?

David Pearce: Yeah.

Lucas Perry: Maybe there’s some feeling of insecurity, and a feeling and a need to justify the work as having meaning beyond the sensual or something. Then there may also be this fact contributing to it. Seeking happiness and sensual pleasure directly, in and of itself, is often counterproductive towards that goal. Seeking wellbeing and happiness directly usually subverts that mission, and I guess that’s just a curse of Darwinian life. Perhaps those, I’m just speculating here, contribute to this cultural distaste, as you were pointing out, to enjoy pleasure as the goals of art.

Sam Barker: Yeah, we’re sort of intellectually allergic to these kinds of ideas, I think. They just seem sort of really shallow and superficial. I suppose that was kind of my existential fear before the album came out, that the idea that I was just trying to make people happy would just be seen as this shallow thing, which I don’t see it as, but I think the sentiment is quite strong in the art world.

Lucas Perry: If that’s quite shallow, then I guess those people are also going to have problems with the Buddha and people like that. I wouldn’t worry about it too much. I think you’re on the same intentional ground as the Buddha. Moving a little bit along here. Do you guys have thoughts or opinions on the future of aesthetics, art, music, and joy, and how science and technology can contribute to that?

David Pearce: Oh, good heavens. One possibility will be that, as neuroscience advances, it’ll be possible to isolate the molecular experience of visual beauty, musical bliss, and spiritual excellence, and scientifically amplify them so that one can essentially enjoy musical experiences that are orders of magnitude richer than anything that’s even physiologically feasible today. I mean, I can use all this fancy language, but what this will actually involve, in terms of true trans-human and post-human artists. The gradients of bliss are important here, such that I think we will retain information-sensitive gradients, so we don’t lose critical sharpness, discernment, critical appreciation. Nonetheless, with this base point for aesthetic excellence, all experience can be superhumanly beautiful. I mean, I religiously star my music collection from one to five, but what would a six be like? What would 100 be like?

Sam Barker: I like these questions. I guess the role of the artist in the long term future in creating these kinds of states maybe gets pushed out at some point by people who are in the labs and reprogram the way music is, or the way that any sort of sensory experience is received. I wonder whether there’s a place in techno utopia for music made by humans, or whether artists sort of just become redundant in some way. I’m not going to get offended if the answer is bye, bye.

Lucas Perry: I’d be interested in just making a few points about the evolutionary perspective before we get into the future of ape artists or mammalian artists. It just seems like some kind of happy cosmic accident that, for the vibration of air, human beings have developed a sensory appreciation of information and structure embedded in that medium. I think we’re quite lucky, as a species, that music and musical appreciation is embedded in the software of human genetics, as such that we can appreciate, and create, and share musical moments. Now, with genetic engineering and more ambitious paradise engineering, I think it would be beautiful to expand the modalities for which artistic, or aesthetic, or the appreciation of beauty can be experienced.

Music is one clear way of having aesthetic appreciation and joy. Visual art is another one. People do derive a lot of satisfaction from touch. Perhaps that could be made more information-structured in the ways that music and art are. There might be a way of changing what it means to be an intelligent thing, such that there can be just an expansion of art appreciation across all of our sensory modalities, and even into sensory modalities which don’t exist yet.

David Pearce: The nature of trans-human and post-human art just leaves me floundering.

Lucas Perry: Yeah. It seems useful here just to reflect on how happy of an accident art is. As we begin to evolve, we can get into, say, A.I. here. A.I. and machine learning are likely to have very, very good models of, say, our musical preferences within the next few years. I mean, they’re somewhat already very good at it. They’ll continue to get better. Then, we have fairly rudimentary algorithms which can produce music. If we just extrapolate out into the future, eventually artificially intelligent systems will be able to produce music better than any human. In that world, what is the role of the human artist? I guess I’m not sure.

Sam Barker: I’m also completely not sure, but I feel like it’s probably going to happen in my lifetime, that these technologies get to a point where they actually do serve that purpose. At the moment, there is A.I. software, like AIVA, that can create unique compositions, but it does so by looking at an archive of music: Bach, and Beethoven, and Mozart. Then it reinterprets all of the codes that are embedded in that, and uses that to make new stuff. It sounds just like the composer it’s quoting, and it’s convincing. Considering this is going to get better and better, I’m pretty confident that we’ll have a system that will be able to create music to a person’s specific taste, having not experienced music itself; it would just look at my music library and then start making things that I might like. I can’t say how I feel about that.

Let’s say it worked, and it did actually surprise me, and I was feeling like humans can’t create this kind of sensation in me. This is a level above. In a way, yeah, for somebody that doesn’t like the vagueness of the creative process, this really appeals, somehow. But given the way that things get used, and the way that our attention is sort of a resource that gets manipulated, I don’t know whether we’d have an incredible technology, once again, in the wrong hands, just going to be turned into mind control. These kinds of things could be put to use for nefarious purposes. I don’t fear the technology. I fear what we, in our unmodified state, might do with it.

David Pearce: Yes. I wonder when the last professional musician will retire, having been eclipsed by A.I. I mean, in some sense, we are, I think, stepping stones to something better. I don’t know when the last philosophers will be pensioned off. Hard problem of mind solved, announced in Nature, Nobel Prize beckons. Distinguished philosophers of mind announce their intention to retire. Hard to imagine, but one does suppose that A.I. will be creating work of ever greater excellence tailored to the individual. I think the evolutionary roots of aesthetic appreciation are very, very deep. It kind of does sound very disrespectful to artists, saying that A.I. could replace artists, but mathematicians and scientists are probably going to be-

Lucas Perry: Everyone’s getting replaced.

Sam Barker: It’s maybe a similar step to when portrait painters saw the camera threatening their line of work. You can press a button and, in an instant, do what would’ve taken several days. I sort of am cautiously looking forward to more intelligent assistance in the production of music. If we did live in a world where there weren’t any struggles to express, or any wrongs to right, any flaws in our character to unpick, then I would struggle to find anything other than the sort of basic pleasure of the action of making music. I wouldn’t really feel any reason to share what I made, in a sense. I think there’s a sort of moral, social purpose that’s embedded within music, if you want to grasp it. I think, if A.I. is implemented with that same moral, ethical purpose, then, in a way, we should treat it as any other task that comes to be automated or extremely simplified. In some way, we should sort of embrace the relaxation of our workload, in a way.

There’s nothing to say that we couldn’t just continue to make music if it brought us pleasure. I think distinguishing between these two things of making music and sharing it was an important discovery for me. If the process of making a piece of music was entirely pleasurable, but then you treat the experience like it was a failure because it didn’t reach enough people, or you didn’t get the response or the boost to your ego that you were searching for from it, then it’s your remembering self overriding your experiencing self, in a way, or your expectations getting in the way of your enjoyment of the process. If there was no purpose to it anymore, I might still make it for my own pleasure, but I like to think I would be happy that a world that didn’t require music was already a better place. I like to think that I wouldn’t be upset with my redundancy, with my P45 from David Pearce.

David Pearce: Oh, no. With a neuro chip, you see, your creative capacities could be massively augmented. You’d have narrow super intelligence on a chip. Now, in one sense, I don’t think classical digital computers are going to wake up and become conscious. They’re never actually going to be able to experience music or art or anything like this. In that sense, they will remain tools, but tools that one can actually incorporate within oneself, so that they become part of you.

Lucas Perry: A friendly flag there that many people who have been on this podcast disagree with that point. Yeah, fair enough, David. I mean, it seems that there are maybe three options. One is, as you mentioned, Sam, to find joy and beauty in more things, and to sort of let go of the need for meaning and joy to come from not being redundant. Once human beings are made obsolete or redundant, it’s quite sad for us, because we derive much of our meaning, thanks a lot, evolution, from accomplishing things and being relevant. The first two paths here seem like reaching some kind of spiritual evolution such that we’re okay with being redundant, or being okay with passing away as a species and allowing our descendants to proliferate. The last one would be to change what it means to be human, such that by merging or by evolution we somehow remain relevant to the progress of civilization. I don’t know which one it will be, but we’ll see.

David Pearce: I think the exciting one, for me, is where we can harness the advances in technology in a conscious way to positive ends, to greater net wellbeing in society. Maybe I’m hooked on the old ideals, but I do think a sense of purpose in your pleasure elevates the sensation somewhat.

Lucas Perry: I think human brains on MDMA would disagree with that.

Sam Barker: Yeah. You’ve obviously also reflected on an experience like that after the event, and come to the conclusion that there wasn’t, perhaps, much concrete meaning to your experience, but it was joyful, and real, and vivid. You don’t want to focus too much on the fact that it was mostly just you jumping up and down on a dance floor. I’m definitely familiar with the pleasure of essentially meaningless euphoria. I’ll say, at the very least, it’s interesting to think about. I’ve been reading a lot about the nature of happiness, and the general consensus there is that happiness is sort of a balance of pleasure and purpose. The idea that maybe you don’t need the purpose is worth exploring, I think, at least.

David Pearce: We do have this term empty hedonism. One thing that’s striking is that when one, for whatever reason or explanation, gets happier and happier, everything seems more intensely meaningful. There are pathological forms like mania or hypomania, where it leads to grandiosity, messianic delusions, even theomania, thinking one is God. It’s possible to have much more benign versions. In practice, I think when life is based on gradients of bliss, eventually superhuman bliss, this will entail superhuman meaning and significance. Essentially, we’ve got a choice. I mean, we can either have pure bliss, or one could have a combination of bliss and hyper-motivation, and one will be able to tweak the dials.

Sam Barker: This is all such deliciously appealing language as someone who’s spending a lot of their time tweaking dials.

David Pearce: This may or may not be the appropriate time to ask, but tell me, what future projects have you planned?

Sam Barker: I’m still very much exploring the potential of music as an increaser of wellbeing, and I think it’s sort of leading me in interesting directions. At present, I’m sort of at another crossroads, I feel. The general drive to realize these sort of higher functions of music is still a driving force. I’m starting to look at what is natural in music and what is learned. Like you say, there is this long history of the way that we appreciate sound. There’s a link to all kinds of repetitive experiences that our ancestors had. There are other aspects of sound production that are also very old. The use of reverb is connected to our experience as sort of cavemen dwelling in these kinds of reverberant spaces. These were kind of sacred spaces for early humans, so this feeling of when you walk into a cathedral, for example, this otherworldly experience that comes from the acoustics is, I think, somehow deeply tied to this historical situation of seeking shelter in caves, and the caves having a bigger significance in the lives of early humans.

There’s a realization, I suppose, that what we’re experiencing that relates to music is rhythm, tone, and timbre, noise. If you just sort of pay attention to your background noise, the things that you’re most familiar with are actually not very musical. You don’t really find harmony in nature very much. I’m sort of forming some ideas around what parts of music and our response to music are cultural, and what are natural. It’s sort of a strange word to apply. Our sort of harmonic language is a technical construction. Rhythm is something we have a much deeper connection with, through our lives as defined by the rhythms of planets, and through dividing our time into smaller and smaller ratios, down to heartbeats and breathing. We’re sort of experiencing a really complex, poly-rhythmic, silent form of music, I suppose. I’m separating these two concepts of rhythm and harmony and trying to get to the bottom of their function in the goal of elevating bliss and happiness. I guess I’m looking at what the tools I’m using are and what their role could be, if that makes any sense.

David Pearce: In some sense, this sounds weird, but I think, insofar as it’s possible, one does have a duty to take care of oneself, and if one can give happiness to others, not least by music, in that sense, one can be a more effective altruist. In some sense, perhaps one feels that, ethically, one ought to be working 12, 14 hours a day to make the world a better place. Equally, we all have our design limitations, and just being able to relax, either as a consumer of music or as a creator of music, has a valuable role, too. It really does. One needs to take care of one’s own mental health to be able to help others.

Sam Barker: I feel like there’s a kind of under-the-bonnet tinkering that, in some way, needs to happen for us to really make use of the new technologies; we need to do something about human nature. I feel like we’re a bit further away from those sorts of realities than we are with the technological side. I think there need to be sort of emergency measures, in some way, to improve human nature through the old-fashioned social, cultural nudges, perhaps, as a stopgap until we can really roll our sleeves up and change human nature on a molecular level.

David Pearce: Yeah. I think we might need both: all the kinds of environmental, social, and political reform together with the biological, genetic side, a happiness revolution. I would love to be able to see a 100-year plan, a blueprint to get rid of suffering and replace it with gradients of bliss, paradise engineering. In practice, I feel the story of Darwinian life still has several centuries to go. I hope I’m too pessimistic. Some of my trans-humanist colleagues anticipate an intelligence explosion, or a complete shortcut via the fusion of humans and our machines, but we shall see.

Lucas Perry: David, Sam and I, and everyone else, love your prose so much. Could you just kind of go off here and muster your best prose to give us some thoughts as beautiful as sunsets for how good the future of music, and art, and gradients of intelligent bliss will be?

David Pearce: I’m afraid I’ll have to put eloquence on hold, but yeah. Just try for a moment to remember your most precious, beautiful, sublime experience in your life, whatever it was. It may or may not be suitable for public consumption. Just try to hold it briefly. Imagine if life could be like that, only far, far better, all the time, and with no nasty side effects, no adverse social consequences. It is going to be possible to build this kind of super civilization based on gradients of bliss. Be over-ambitious. Needless to say, if you read anything I have written, unfortunately you’d need to wade through all manner of fluff. I just want to say, I’m really thrilled and chuffed with Utility, so anything else is just vegan icing on the cake.

Sam Barker: Beautiful. I’m really, like I say, super relieved that it was taken as such. It was really a reconfiguring of my approach and my involvement with the thing that I’ve sort of given my life to thus far, and a sort of a clarification of the purpose. Aside from anything else, it just put me in a really perfect mindset for addressing mental obstacles in the way of my own happiness. Then, once you get that, you sort of feel like sharing it with other people. I think it started off a very positive process in my thoughts, which sort of manifested in the work I was doing. Extremely grateful for your generosity in lending these ideas. I hope, actually, just that people scratched the surface a little bit, and maybe plug some of the terms into a search engine and got kind of lost in the world of utopia a little bit. That was really the main reason for putting these references in and pushing people in that direction.

David Pearce: Well, you’ve given people a lot of pleasure, which is fantastic. Certainly, I’d personally rather be thought of as associated with paradise engineering and gradients of bliss, rather than the depressive, gloomy, negative utilitarian.

Sam Barker: Yeah. There’s a real dark side to the idea. I think the thing I read after The Hedonistic Imperative was some of Les Knight’s writing about the Voluntary Human Extinction Movement. I honestly don’t know if he’d be classified as a utilitarian, but there’s this sort of ecocentric utilitarianism, which you sort of endorse through including the animal kingdom in your manifesto. There’s sort of a growing appreciation for this kind of antinatalist sentiment.

David Pearce: Yes, antinatalism seems to be growing, but I don’t think it’s ever going to be dominant. The only way to get rid of suffering and ensure a high quality of life for all sentient beings is going to be, essentially, to get to the heart of the problem and rewrite ourselves. I did actually do an antinatalist podcast the other week, but I’m only a soft antinatalist, because there’s always going to be selection pressure in favor of a predisposition to go forth and multiply. One needs to build alliances with fanatical life lovers, even if, when one contemplates the state of the world, one has some rather dark thoughts.

Sam Barker: Yeah.

Lucas Perry: All right. So, are there any questions or things we haven’t touched on that you guys would like to talk about?

David Pearce: No. I just really want to say thank you to Lucas for organizing this. You’ve got quite a diverse range of podcasts now. Sam, I’m honored. Thank you very much. Really happy this has gone well.

Sam Barker: Yeah. David, really, it’s been my pleasure. Really appreciate your time and acceptance of how I’ve sort of handled your ideas.

Lucas Perry: I feel really happy that I was able to connect you guys, and I also think that both of you guys make the world more beautiful by your work and presence. For that, I am grateful and appreciative. Also, very much enjoy and take inspiration from both of your work, so keep on doing what you’re doing.

Sam Barker: Thanks, Lucas. Same to you. Really.

David Pearce: Thank you, Lucas. Very much appreciated.

Lucas Perry: I hope that you’ve enjoyed the conversation portion of this podcast. Now, I’m happy to introduce the guest mix by Barker. 

Sam Harris on Global Priorities, Existential Risk, and What Matters Most

Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we’re serious both as individuals and as a species about improving the world, it’s crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them.

Topics discussed in this episode include:

  • The problem of communication 
  • Global priorities 
  • Existential risk 
  • Animal suffering in both wild animals and factory farmed animals 
  • Global poverty 
  • Artificial general intelligence risk and AI alignment 
  • Ethics
  • Sam’s book, The Moral Landscape

You can take a survey about the podcast here

Submit a nominee for the Future of Life Award here

 

Timestamps: 

0:00 Intro

3:52 What are the most important problems in the world?

13:14 Global priorities: existential risk

20:15 Why global catastrophic risks are more likely than existential risks

25:09 Longtermist philosophy

31:36 Making existential and global catastrophic risk more emotionally salient

34:41 How analyzing the self makes longtermism more attractive

40:28 Global priorities & effective altruism: animal suffering and global poverty

56:03 Is machine suffering the next global moral catastrophe?

59:36 AI alignment and artificial general intelligence/superintelligence risk

01:11:25 Expanding our moral circle of compassion

01:13:00 The Moral Landscape, consciousness, and moral realism

01:30:14 Can bliss and wellbeing be mathematically defined?

01:31:03 Where to follow Sam and concluding thoughts

 

You can follow Sam here: 

samharris.org

Twitter: @SamHarrisOrg

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today we have a conversation with Sam Harris where we get into issues related to global priorities, effective altruism, and existential risk. In particular, this podcast covers the critical importance of improving our ability to communicate and converge on the truth, animal suffering in both wild animals and factory farmed animals, global poverty, artificial general intelligence risk and AI alignment, as well as ethics and some thoughts on Sam’s book, The Moral Landscape. 

If you find this podcast valuable, you can subscribe or follow us on your preferred listening platform, like on Apple Podcasts, Spotify, Soundcloud, or whatever your preferred podcasting app is. You can also support us by leaving a review. 

Before we get into it, I would like to echo two announcements from previous podcasts. If you’ve been tuned into the FLI Podcast recently you can skip ahead just a bit. The first is that there is an ongoing survey for this podcast where you can give me feedback and voice your opinion about content. This goes a super long way for helping me to make the podcast valuable for everyone. You can find a link for the survey about this podcast in the description of wherever you might be listening. 

The second announcement is that at the Future of Life Institute we are in the midst of our search for the 2020 winner of the Future of Life Award. The Future of Life Award is a $50,000 prize that we give out to an individual who, without having received much recognition at the time of their actions, has helped to make today dramatically better than it may have been otherwise. The first two recipients of the Future of Life Award were Vasili Arkhipov and Stanislav Petrov, two heroes of the nuclear age. Both took actions at great personal risk to possibly prevent an all-out nuclear war. The third recipient was Dr. Matthew Meselson, who spearheaded the international ban on bioweapons. Right now, we’re not sure who to give the 2020 Future of Life Award to. That’s where you come in. If you know of an unsung hero who has helped to avoid global catastrophic disaster, or who has done incredible work to ensure a beneficial future of life, please head over to the Future of Life Award page and submit a candidate for consideration. The link for that page is on the page for this podcast or in the description of wherever you might be listening. If your candidate is chosen, you will receive $3,000 as a token of our appreciation. We’re also incentivizing the search via MIT’s successful red balloon strategy, where the first to nominate the winner gets $3,000 as mentioned, but there are also tiered pay outs where the first to invite the nomination winner gets $1,500, whoever first invited them gets $750, whoever first invited the previous person gets $375, and so on. You can find details about that on the Future of Life Award page. 

Sam Harris has a PhD in neuroscience from UCLA and is the author of five New York Times best sellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up, and Islam and the Future of Tolerance (with Maajid Nawaz). Sam hosts the Making Sense Podcast and is also the creator of the Waking Up App, which is for anyone who wants to learn to meditate in a modern, scientific context. Sam has practiced meditation for more than 30 years and studied with many Tibetan, Indian, Burmese, and Western meditation teachers, both in the United States and abroad.

And with that, here’s my conversation with Sam Harris.

Starting off here, trying to get a perspective on what matters most in the world and global priorities or crucial areas for consideration, what do you see as the most important problems in the world today?

Sam Harris: There is one fundamental problem which is encouragingly or depressingly non-technical, depending on your view of it. I mean it should be such a simple problem to solve, but it’s seeming more or less totally intractable and that’s just the problem of communication. The problem of persuasion, the problem of getting people to agree on a shared consensus view of reality, and to acknowledge basic facts and to have their probability assessments of various outcomes converge through honest conversation. Politics is obviously the great confounder of this meeting of the minds. I mean, our failure to fuse cognitive horizons through conversation is reliably derailed by politics. But there are other sorts of ideology that do this just as well, religion being perhaps first among them.

And so it seems to me that the first problem we need to solve, the place where we need to make progress and we need to fight for every inch of ground and try not to lose it again and again is in our ability to talk to one another about what is true and what is worth paying attention to, to get our norms to align on a similar picture of what matters. Basically value alignment, not with superintelligent AI, but with other human beings. That’s the master riddle we have to solve and our failure to solve it prevents us from doing anything else that requires cooperation. That’s where I’m most concerned. Obviously technology influences it, social media and even AI and the algorithms behind the gaming of everyone’s attention. All of that is influencing our public conversation, but it really is a very apish concern and we have to get our arms around it.

Lucas Perry: So that’s quite interesting and not the answer that I was expecting. I think that that sounds like quite the crucial stepping stone. Like the fact that climate change isn’t something that we’re able to agree upon, and is a matter of political opinion drives me crazy. And that’s one of many different global catastrophic or existential risk issues.

Sam Harris: Yeah. The COVID pandemic has made me especially skeptical of our agreeing to do anything about climate change. We can’t persuade people about the basic facts of epidemiology when this thing is literally coming in through the doors and windows, and even very smart people are now going down the rabbit hole of thinking this is on some level a hoax; people’s political and economic interests just bend their view of basic facts. I mean it’s not to say that there hasn’t been a fair amount of uncertainty here, but it’s not the sort of uncertainty that should give us these radically different views of what’s happening out in the world. Here we have a pandemic moving in real time. I mean, we can see a wave of illness breaking in Italy a few weeks before it breaks in New York. And again, there’s just this Baghdad Bob level of denialism. The prospects of our getting our heads straight with respect to climate change, in light of what’s possible in the middle of a pandemic, seem at the moment totally farfetched to me.

For something like climate change, I really think a technological elite needs to just look at the problem and decide to solve it by changing the kinds of products we create and the way we manufacture things, and we just have to get out of the politics of it. It can’t be a matter of persuading more than half of American society to make economic sacrifices. It’s much more along the lines of just building cars and other products that are carbon neutral that people want and solving the problem that way.

Lucas Perry: Right. Incentivizing the solution by making products that are desirable and satisfy people’s self-interest.

Sam Harris: Yeah. Yeah.

Lucas Perry: I do want to explore more actual global priorities. This point about the necessity of reason for being able to at least converge upon the global priorities that are most important seems to be a crucial and necessary stepping stone. So before we get into talking about things like existential and global catastrophic risk, do you see a way of this project of promoting reason and good conversation and converging around good ideas succeeding? Or do you have any other things to sort of add to these instrumental abilities humanity needs to cultivate for being able to rally around global priorities?

Sam Harris: Well, I don’t see a lot of innovation beyond just noticing that conversation is the only tool we have. Intellectual honesty spread through the mechanism of conversation is the only tool we have to converge in these ways. I guess the thing to notice that’s guaranteed to make it difficult is bad incentives. So we should always be noticing what incentives are doing behind the scenes to people’s cognition. There are things that could be improved in media. I think the advertising model is a terrible system of incentives for journalists and anyone else who’s spreading information. You’re incentivized to create sensational hot takes and clickbait and depersonalize everything. Just create one lurid confection after another, that really doesn’t get at what’s true. The fact that this tribalizes almost every conversation and forces people to view it through a political lens. The way this is all amplified by Facebook’s business model, and the fact that you can sell political ads on Facebook and use their micro-targeting algorithms to, frankly, distort people’s vision of reality and get them to vote or not vote based on some delusion.

All of this is pathological and it has to be disincentivized in some way. The business model of digital media is part of the problem. But beyond that, people have to be better educated and realize that thinking through problems and understanding facts and creating better arguments and responding to better arguments and realizing when you’re wrong, these are muscles that need to be trained, and there are certain environments in which you can train them well. And there’s certain environments where they are guaranteed to atrophy. Education largely consists in the former, in just training someone to interact with ideas and with shared perceptions and with arguments and evidence in a way that is agnostic as to how things will come out. You’re just curious to know what’s true. You don’t want to be wrong. You don’t want to be self-deceived. You don’t want to have your epistemology anchored to wishful thinking and confirmation bias and political partisanship and religious taboos and other engines of bullshit, really.

I mean, you want to be free of all that, and you don’t want to have your personal identity trimming down your perception of what is true or likely to be true or might yet happen. People have to understand what it feels like to be willing to reason about the world in a way that is unconcerned about the normal, psychological and tribal identity formation that most people, most of the time use to filter against ideas. They’ll hear an idea and they don’t like the sound of it because it violates some cherished notion they already have in the bag. So they don’t want to believe it. That should be a tip off. That’s not more evidence in favor of your worldview. That’s evidence that you are an ape who’s disinclined to understand what’s actually happening in the world. That should be an alarm that goes off for you, not a reason to double down on the last bad idea you just expressed on Twitter.

Lucas Perry: Yeah. The way the ego and concern for reputation and personal identity and shared human psychological biases influence the way that we have conversations seems to be a really big hindrance here. And being aware of how your mind is reacting in each moment to the kinetics of the conversation and what is happening can be really skillful for catching unwholesome or unskillful reactions, it seems. And I’ve found that non-violent communication has been really helpful for me in terms of having valuable open discourse where one’s identity or pride isn’t on the line. The ability to seek truth with another person, instead of having a debate or argument, is certainly a skill to be developed. Yet that kind of format for discussion isn’t always rewarded or promoted as well as something like an adversarial debate, which tends to get a lot more attention.

Sam Harris: Yeah.

Lucas Perry: So as we begin to strengthen our epistemology and conversational muscles so that we’re able to arrive at agreement on core issues, that’ll allow us to create a better civilization and work on what matters. So I do want to pivot here into what those specific things might be. Now I have three general categories, maybe four, for us to touch on here.

The first is existential risks, which primarily come from technology and might lead to the extinction of Earth-originating life, or more specifically just the extinction of human life. You have a TED Talk on AGI risk, that’s artificial general intelligence risk: the risk of machines becoming as smart as or smarter than human beings and being misaligned with human values. There’s also synthetic bio risk, where advancements in genetic engineering may unleash a new age of engineered pandemics more lethal than anything produced by nature. We have nuclear war, and we also have new technologies or events that might come about that we aren’t aware of or can’t predict yet. The other categories I want to touch on, in terms of global priorities, are global poverty, animal suffering, and human health and longevity. So how do you think about and prioritize these issues, and what is your reaction to their relative importance in the world?

Sam Harris: Well, I’m persuaded that thinking about existential risk is something we should do much more. It is amazing how few people spend time on this problem. It’s a big deal that we have the survival of our species as a blind spot, but I’m more concerned about what seems likelier to me, which is not that we will do something so catastrophically unwise as to erase ourselves, certainly not in the near term. We’re clearly capable of doing that, but I think it’s more likely that we’re capable of ensuring our unrecoverable misery for a good long while. We could just make life basically not worth living, but we’ll be forced, or someone will be forced, to live it all the while. Basically a Road Warrior-like hellscape could await us, as opposed to just pure annihilation. So that’s a civilizational risk that I worry more about than extinction, because it just seems probabilistically much more likely to happen no matter how big our errors are.

I worry about our stumbling into an accidental nuclear war. That’s something that I think is still pretty high on the list of likely ways we could completely screw up the possibility of human happiness in the near term. It’s humbling to consider what an opportunity cost this pandemic is, minor as it is compared to what’s possible. I mean, we’ve got this pandemic that has locked down most of humanity, and every problem we had and every risk we were running as a species prior to anyone learning the name of this virus is still here. The threat of nuclear war has not gone away. It’s just that this has taken up all of our bandwidth. We can’t think about much else. It’s also humbling to observe how hard a time we’re having even agreeing about what’s happening here, much less responding intelligently to the problem. If you imagine a pandemic that was orders of magnitude more deadly and more transmissible, man, this is a pretty startling dress rehearsal.

I hope we learn something from this. I hope we think more about things like this happening in the future and prepare for them in advance. I mean, the fact that we have a CDC that still cannot get its act together is just astounding. And again, politics is the thing that is gumming up the gears in any machine that would otherwise run halfway decently at the moment. I mean, we have a truly deranged president, and that is not a partisan observation. That is something that can be said about Trump, and it would not be said about most other Republican presidents. There’s nothing I would say about Trump that I could say about someone like Mitt Romney or any other prominent Republican. This is the perfect circumstance to accentuate the downside of having someone in charge who lies more readily than perhaps any person in human history.

It’s as if toxic waste at the informational level has been spread around for three years now, and now it really matters that we have an information ecosystem with no immunity against crazy distortions of the truth. So I hope we learn something from this. And I hope we begin to prioritize the list of our gravest concerns and begin steeling our civilization against the risk that any of these things will happen. And some of these things are guaranteed to happen. The thing that’s so bizarre about our failure to grapple with a pandemic of this sort is that this is the one thing we knew was going to happen. This was not a matter of “if.” This was only a matter of “when.” Now, nuclear war is still a matter of “if,” right? I mean, we have the bombs, they’re on a hair trigger, overseen by absolutely bizarre and archaic protocols and highly outdated technology. We know this is just a doomsday system we’ve built that could go off at any time through sheer accident or ineptitude. But it’s not guaranteed to go off.

But pandemics are just guaranteed to emerge, and we still were caught flat-footed here. And so I just think we need to use this occasion to learn a lot about how to respond to this sort of thing. And again, if we can’t convince the public that this sort of thing is worth paying attention to, we have to do it behind closed doors, right? I mean, we have to get people into power who have their heads screwed on straight here and just ram it through. There has to be a kind of Manhattan Project level of urgency to this, because this is about as benign a pandemic as we could have had that would still cause significant problems. Imagine an engineered virus, a weaponized virus calculated to kill the maximum number of people. That’s a zombie movie, all of a sudden, and we’re not ready for the zombies.

Lucas Perry: I think that my two biggest updates from the pandemic were that human civilization is much more fragile than I thought it was. And also, I trust the US government way less now in its capability to mitigate these things. I think at one point you said that 9/11 was the first time that you felt like you were actually in history. And as someone who’s 25, being in the COVID pandemic, this is the first time that I feel like I’m in human history. Because my life so far has been very normal and constrained, and the boundaries between everything have been very rigid and solid, but this is perturbing that.

So you mentioned that you were slightly less worried about humanity just erasing ourselves via some kind of existential risk, and part of the idea here seems to be that there are futures that are not worth living. Like if there’s such a thing as a moment or a day that isn’t worth living, then there are also futures that are not worth living. So I’m curious if you could unpack why you feel that these periods of time that are not worth living are more likely than existential risks, and whether you think that some of those existential conditions could be permanent. Could you speak a little bit about the relative likelihood of existential risks and suffering risks, and whether you see the more likely suffering risks as being constrained in time or indefinite?

Sam Harris: In terms of the probabilities, it just seems obvious that it is harder to eradicate the possibility of human life entirely than it is to just kill a lot of people and make the remaining people miserable. Right? If a pandemic spreads, whether it’s natural or engineered, that has 70% mortality and the transmissibility of measles, that’s going to kill billions of people. But it seems likely that it may spare some millions of people, or tens of millions of people, even hundreds of millions of people, and those people will be left to suffer their inability to function in the style to which we’ve all grown accustomed. So it would be with war. I mean, we could have a nuclear war and even a nuclear winter, but the idea that it’ll kill every last person or every last mammal, it would have to be a bigger war and a worse winter to do that.

So I see the prospect of things going horribly wrong to be one that yields, not a dial tone, but some level of remaining, even civilized life that’s just terrible, that nobody would want. Where we basically all have the quality of life of what it was like on a mediocre day in the middle of the civil war in Syria. Who wants to live that way? If every city on Earth is basically a dystopian cell on a prison planet, that for me is a sufficient ruination of the hopes and aspirations of civilized humanity. That’s enough to motivate all of our efforts to avoid things like accidental nuclear war and uncontrolled pandemics and all the rest. And in some ways it’s more motivating, because when you ask people, what’s the problem with the failure to continue the species, right? Like if we all died painlessly in our sleep tonight, what’s the problem with that?

That actually stumps some considerable number of people, because they immediately see that the complete annihilation of the species painlessly is really a kind of victimless crime. There’s no one around to suffer our absence. There’s no one around to be bereaved. There’s no one around to think, oh man, we could have had billions of years of creativity and insight and exploration of the cosmos, and now the lights have gone out on the whole human project. There’s no one around to suffer that disillusionment. So what’s the problem? I’m persuaded that that’s not the perfect place to stand to evaluate the ethics. I agree that losing that opportunity is a negative outcome that we want to value appropriately, but it’s harder to value it emotionally, and it’s not as clear. There’s also an asymmetry between happiness and suffering, which I think is hard to get around.

We are perhaps rightly more concerned about suffering than we are about losing opportunities for wellbeing. If I told you, you could have an hour of the greatest possible happiness, but it would have to be followed by an hour of the worst possible suffering, I think most people given that offer would say, oh, well, okay, I’m good. I’ll just stick with what it’s like to be me. The hour of the worst possible misery seems like it’s going to be worse than the highest possible happiness is going to be good, and I do sort of share that intuition. And when you think about it in terms of the future of humanity, I think it is more motivating to think, not that your grandchildren might not exist, but that your grandchildren might live horrible lives, really unendurable lives, and they’ll be forced to live them because they’ll have been born. If for no other reason than that we have to persuade some people to take these concerns seriously, I think that’s the place to put most of the emphasis.

Lucas Perry: I think that’s an excellent point. I think it makes it more morally salient and leverages human self-interest more. One distinction that I want to make is the distinction between existential risks and global catastrophic risks. Global catastrophic risks are those which would kill a large fraction of humanity without killing everyone, and existential risks are ones which would exterminate all people or all Earth-originating intelligent life. And the former, the global catastrophic risks, are the ones which you’re primarily discussing here, where something goes really bad and now we’re left with some pretty bad existential situation.

Sam Harris: Yeah.

Lucas Perry: Now we’re not locked in that forever, so it’s pretty far away from being what is talked about in the effective altruism community as a suffering risk. That actually might only last a hundred or a few hundred years, or maybe less. Who knows; it depends on what happened. But now taking a bird’s eye view again on global priorities, and standing on a solid ground of ethics, what is your perspective on longtermist philosophy? This is the position or idea that the deep future has overwhelming moral priority, given the countless trillions of lives that could be lived. So if an existential risk occurs, then we’re basically canceling the whole future, like you mentioned. There won’t be any suffering and there won’t be any joy, but we’re missing out on a ton of good, it would seem. And with the continued evolution of life, through genetic engineering and enhancements and artificial intelligence, it would seem that the future could also be unimaginably good.

If you do an expected value calculation about existential risks, you can estimate very roughly the likelihood of each existential risk, whether it be from artificial general intelligence, or synthetic bio, or nuclear weapons, or a black swan event that we couldn’t predict. And if you multiply that by the amount of value in the future, you’ll get some astronomical number, given the astronomical amount of value in the future. Does this kind of argument or viewpoint do the work for you to commit you to seeing existential risk as a global priority, or the central global priority?
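The expected-value reasoning Lucas describes can be sketched numerically. A minimal sketch follows; every probability and the future-value figure are illustrative placeholders I have chosen, not estimates given anywhere in the conversation:

```python
# Toy expected-value calculation over existential risks.
# All probabilities and the future-value figure are illustrative
# placeholders, not estimates from the conversation.

risks = {
    "artificial general intelligence": 0.10,  # assumed chance this century
    "engineered pandemic": 0.03,
    "nuclear war": 0.01,
    "unpredicted black swan": 0.05,
}

# Stand-in for the "astronomical" value of the long-term future,
# e.g. trillions of worthwhile future lives.
future_value = 1e16

# Expected loss = probability the risk occurs * value it would destroy.
expected_loss = {name: p * future_value for name, p in risks.items()}

for name, loss in sorted(expected_loss.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected loss ~ {loss:.1e}")
```

The point the sketch makes is structural rather than numerical: because the future-value term dwarfs any near-term quantity, even small probabilities yield enormous expected losses, which is the crux of the longtermist argument.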

Sam Harris: Well, it doesn’t do the emotional work, largely because we’re just bad at thinking about long-term risk. It doesn’t even have to be that long-term for our intuitions and concerns to degrade irrationally. We’re bad at thinking about the well-being even of our future selves as you get further out in time. The term of jargon is that we “hyperbolically discount” our future well-being. People will smoke cigarettes or make other imprudent decisions in the present. They know they will be the inheritors of these bad decisions, but there’s some short-term upside.
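The “hyperbolic discounting” Sam refers to has a standard textbook form: the felt value of a reward falls off as 1/(1 + kD) with delay D. A minimal sketch, where the discount parameter k is an illustrative choice rather than an empirical value:

```python
# Minimal sketch of hyperbolic discounting, the bias Sam refers to.
# The discount parameter k is illustrative, not an empirical value.

def hyperbolic_discount(value: float, delay_years: float, k: float = 1.0) -> float:
    """Perceived present value of a reward received after a delay."""
    return value / (1 + k * delay_years)

# The same reward feels dramatically smaller the further away it is:
for delay in (0, 1, 10, 50):
    print(delay, round(hyperbolic_discount(100, delay), 1))
```

With k = 1, a reward ten years out retains less than a tenth of its felt value, which illustrates why the well-being of distant descendants barely registers emotionally.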

The mere pleasure of the next cigarette say, that convinces them that they don’t really have to think long and hard about what their future self will wish they had done at this point. Our ability to be motivated by what we think is likely to happen in the future is even worse when we’re thinking about our descendants. Right? People we either haven’t met yet or may never meet. I have kids, but I don’t have grandkids. How much of my bandwidth is taken up thinking about the kinds of lives my grandchildren will have? Really none. It’s conserved. It’s safeguarded by my concern about my kids, at this point.

But, then there are people who don’t have kids and are just thinking about themselves. It’s hard to think about the comparatively near future. Even a future that, barring some real mishap, you have every expectation of having to live in yourself. It’s just hard to prioritize. When you’re talking about the far future, it becomes very, very difficult. You just have to have the science fiction geek gene or something disproportionately active in your brain, to really care about that.

Unless you think you are somehow going to cheat death and get aboard the starship when it’s finally built. If you’re popping 200 vitamins a day with Ray Kurzweil and you think you might just be in the cohort of people who are going to make it out of here without dying, because we’re just on the cusp of engineering death out of the system, then I could see, okay, there’s a self-interested view of it. If you’re really talking about hypothetical people who you know you will never come in contact with, I think it’s hard to be sufficiently motivated, even if you believe the moral algebra here.

It’s not clear to me that it need run through. I agree with you that if you do a basic expected value calculation here, and you start talking about trillions of possible lives, their interests must outweigh the interests of the 7.8 billion, or whatever it is, of us currently alive. A few asymmetries here, again. On the asymmetry between actual and hypothetical lives: there are no identifiable lives who would be deprived of anything if we all just decided to stop having kids. You have to take the point of view of the people alive who make this decision.

If we all just decided, “Listen. These are our lives to live. We can decide how we want to live them. None of us want to have kids anymore.” If we all independently made that decision, the consequence on this calculus is that we are the worst people, morally speaking, who have ever lived. That doesn’t quite capture the moment, the experience or the intentions. We could do this thing without ever thinking about the implications of existential risk. If we didn’t have a phrase for this, and we didn’t have people like ourselves talking about this as a problem, people could just be taken in by the overpopulation thesis.

The thesis that overpopulation is really the thing destroying the world, and that what we need is some kind of Gaian reset, where the Earth reboots without us. Let’s just stop having kids and let nature reclaim the edges of the cities. You could see a kind of utopian environmentalism creating some dogma around that, where it was no one’s intention ever to commit some kind of horrific crime. Yet, on this existential risk calculus, that’s what would have happened. It’s hard to think about the morality there, when you talk about people deciding not to have kids and it would be the same catastrophic outcome.

Lucas Perry: That situation to me seems to be like looking over the possible moral landscape and seeing a mountain or not seeing a mountain, but there still being a mountain. Then you can have whatever kinds of intentions that you want, but you’re still missing it. From a purely consequentialist framework on this, I feel not so bad saying that this is probably one of the worst things that have ever happened.

Sam Harris: The asymmetry here between suffering and happiness still seems psychologically relevant. It’s not quite the worst thing that’s ever happened, but the best things that might have happened have been canceled. Granted, I think there’s a place to stand where you could think that is a horrible outcome, but again, it’s not the same thing as creating some hell and populating it.

Lucas Perry: I see what you’re saying. I’m not sure that I quite share the intuition about the asymmetry between suffering and well-being. I feel somewhat suspicious of that, but that would be a huge tangent right now, I think. Now, one of the crucial things that you said was that, for those who are not compelled by the long-term future argument, who don’t have the science fiction geek gene and are not compelled by moral philosophy, it seems the essential way to get people to care about global catastrophic and existential risks is to demonstrate how likely they are within this century.

And so their direct descendants, like their children or grandchildren, or even them, may live in a world that is very bad or they may die in some kind of a global catastrophe, which is terrifying. Do you see this as the primary way of leveraging human self-interest and feelings and emotions to make existential and global catastrophic risk salient and pertinent for the masses?

Sam Harris: It’s certainly half the story, and it might be the most compelling half. I’m not saying that we should be just worried about the downside, because the upside also is something we should celebrate and aim for. The other side of the story is that we’ve made incredible progress. Take someone like Steven Pinker and his big books of what is often perceived as happy talk. He’s pointing out all of the progress, morally and technologically and at the level of public health.

It’s just been virtually nothing but progress. There’s no point in history where you’re luckier to live than in the present. That’s true. I think that the thing that Steve’s story conceals, or at least doesn’t spend enough time acknowledging, is that the risk of things going terribly wrong is also increasing. It was also true a hundred years ago that it would have been impossible for one person or a small band of people to ruin life for everyone else.

Now that’s actually possible. Just imagine if this current pandemic were an engineered virus, more like a lethal form of measles. It might take five people to create that and release it. Here we would be locked down in a truly terrifying circumstance. The risk is ramped up. I think we just have to talk about both sides of it. There is no limit to how beautiful life could get if we get our act together. Take an argument of the sort that David Deutsch makes about the power of knowledge.

Every problem has a solution born of a sufficient insight into how things work, i.e. knowledge, unless the laws of physics rule it out. If it’s compatible with the laws of physics, knowledge can solve the problem. That’s virtually a blank check with reality that we could live to cash, if we don’t kill ourselves in the process. Again, as the upside becomes more and more obvious, the risk that we’re going to do something catastrophically stupid is also increasing. The principles here are the same. The only reason why we’re talking about existential risk is because we have made so much progress. Without the progress, there’d be no way to make a sufficiently large mistake. It really is two sides of the coin of increasing knowledge and technical power.

Lucas Perry: One thing that I wanted to throw in here, in terms of the kinetics of long-termism and emotional saliency: it would be stupidly optimistic, I think, to think that everyone could become selfless bodhisattvas. In terms of your interest, the way in which you promote meditation and mindfulness, and your arguments against the conventional, experiential and conceptual notion of the self have, for me at least, dissolved much of the barriers which would keep me from being emotionally motivated by long-termism.

Now, that itself I think, is another long conversation. When your sense of self is becoming nudged, disentangled and dissolved in new ways, the idea that it won’t be you in the future, or the idea that the beautiful dreams that Dyson spheres will be having in a billion years are not you, that begins to relax a bit. That’s probably not something that is helpful for most people, but I do think that it’s possible for people to adopt and for meditation, mindfulness and introspection to lead to this weakening of sense of self, which then also opens one’s optimism, and compassion, and mind towards the long-termist view.

Sam Harris: That’s something that you get from reading Derek Parfit’s work. The paradoxes of identity that he so brilliantly framed and tried to reason through yield something like what you’re talking about. It’s not so important whether it’s you, because this notion of you is in fact, paradoxical to the point of being impossible to pin down. Whether the you that woke up in your bed this morning is the same person who went to sleep in it the night before, that is problematic. Yet there’s this fact of some degree of psychological continuity.

The basic fact experientially is just, there is consciousness and its contents. The only place for feelings, and perceptions, and moods, and expectations, and experience to show up is in consciousness, whatever it is and whatever its connection to the physics of things actually turns out to be. There’s just consciousness. The question of where it appears is a genuinely interesting one philosophically, and intellectually, and scientifically, and ultimately morally.

Because if we build conscious robots or conscious computers and build them in a way that causes them to suffer, we’ve just done something terrible. We might do that inadvertently if we don’t know how consciousness arises based on information processing, or whether it does. It’s all interesting terrain to think about. If the lights are still on a billion years from now, and the view of the universe is unimaginably bright, and interesting and beautiful, and all kinds of creative things are possible by virtue of the kinds of minds involved, that will be much better than any alternative. That’s certainly how it seems to me.

Lucas Perry: I agree. Some things here that ring true seem to be, you always talk about how there’s only consciousness and its contents. I really like the phrase, “Seeing from nowhere.” That usually is quite motivating for me, in terms of the arguments against the conventional conceptual and experiential notions of self. There just seem to be instantiations of consciousness, intrinsically free of identity.

Sam Harris: Two things to distinguish here. There’s the philosophical, conceptual side of the conversation, which can show you that things like your concept of a self, or certainly your concept of a self that could have free will, don’t make a lot of sense. It doesn’t make sense when mapped onto physics. It doesn’t make sense when looked for neurologically. Any way you look at it, it begins to fall apart. That’s interesting, but again, it doesn’t necessarily change anyone’s experience.

It’s just a riddle that can’t be solved. Then there’s the experiential side which you encounter more in things like meditation, or psychedelics, or sheer good luck where you can experience consciousness without the sense that there’s a subject or a self in the center of it appropriating experiences. Just a continuum of experience that doesn’t have structure in the normal way. What’s more, that’s not a problem. In fact, it’s the solution to many problems.

A lot of the discomfort you have felt psychologically goes away when you punch through to a recognition that consciousness is just the space in which thoughts, sensations and emotions continually appear, change and vanish. There’s no thinker authoring the thoughts. There’s no experiencer in the middle of the experience. It’s not to say you don’t have a body. Every sign that you have a body is still appearing. There are sensations of tension, warmth, pressure and movement.

There are sights, there are sounds but again, everything is simply an appearance in this condition, which I’m calling consciousness for lack of a better word. There’s no subject to whom it all refers. That can be immensely freeing to recognize, and that’s a matter of a direct change in one’s experience. It’s not a matter of banging your head against the riddles of Derek Parfit or any other way of undermining one’s belief in personal identity or the reification of a self.

Lucas Perry: A little bit earlier, we talked about the other side of the existential risk coin. That other side is what we at The Future of Life Institute like to call existential hope. We’re not just a doom and gloom society. It’s also about how the future can be unimaginably good, if we can get our act together and steward our technologies with wisdom and benevolence in mind.

Pivoting in here, and reflecting a little bit on the implications of some of this no-self conversation we’ve been having for global priorities, the effective altruism community has narrowed down on three global priorities as central issues of consideration: existential risk, global poverty and animal suffering. We talked a bunch about existential risk already. Global poverty is widespread, and many of us live in quite nice and abundant circumstances.

Then there’s animal suffering, which can be thought of in two categories. One is factory farmed animals, where we have billions upon billions of animals being born into miserable conditions and slaughtered for sustenance. Then we also have wild animal suffering, which is a bit more esoteric and seems harder to get any traction on helping to alleviate. Thinking about these last two points, global poverty and animal suffering, what is your perspective on these?

I find the lack of willingness for people to empathize and be compassionate towards animal suffering, as well as global poverty, to be quite frustrating. I wonder if you view the perspective of no self as potentially being informative or helpful for leveraging human compassion and motivation to help other people and to help animals. One quick argument here that comes from the conventional view of self, so it isn’t strictly true or rational, but it is motivating for me, is that I feel like I was just born as me and then I just woke up one day as Lucas.

I, referring to this conventional and experientially illusory notion that I have of myself, this convenient fiction that I have. Now, you’re going to die and you could wake up as a factory farmed animal. Surely there are those billions upon billions of instantiations of consciousness that are just going through misery. If the self is an illusion then there are selfless chicken and cow experiences of enduring suffering. Any thoughts or reactions you have to global poverty, animal suffering and what I mentioned here?

Sam Harris: I guess the first thing to observe is that, again, we are badly set up to prioritize what should be prioritized, and to have the emotional response commensurate with what we can rationally understand to be so. We have a problem of motivation. We have a problem of making data real. This has been psychologically studied, but it’s just manifest in oneself and in the world. We care more about the salient narrative that has a single protagonist than we do about the data on even human suffering.

The classic example here is one little girl falls down a well, and you get wall to wall news coverage. All the while there could be a genocide or a famine killing hundreds of thousands of people, and it doesn’t merit more than five minutes. One broadcast. That’s clearly a bug, not a feature morally speaking, but it’s something we have to figure out how to work with because I don’t think it’s going away. One of the things that the effective altruism philosophy has done, I think usefully, is that it has separated two projects which up until the emergence of effective altruism, I think were more or less always conflated.

They’re both valid projects, but one has much greater moral consequence. The fusion of the two is the concern about giving and how it makes one feel. I want to feel good about being philanthropic. Therefore, I want to give to causes that give me these good feels. In fact, at the end of the day, the feeling I get from giving is what motivates me to give. If I’m giving in a way that doesn’t really produce that feeling, well, then I’m going to give less or give less reliably.

Even in a contemplative Buddhist context, there’s an explicit fusion of these two things. The reason to be moral and to be generous is not merely, or even principally, the effect on the world. The reason is because it makes you a better person. It gives you a better mind. You feel better in your own skin. It is in fact, more rewarding than being selfish. I think that’s true, but that doesn’t get at really, the important point here, which is we’re living in a world where the difference between having good and bad luck is so enormous.

The inequalities are so shocking and indefensible. The fact that I was born me and not born in some hell hole in the middle of a civil war soon to be orphaned, and impoverished and riddled by disease, I can take no responsibility for the difference in luck there. That difference is the difference that matters more than anything else in my life. What the effective altruist community has prioritized is, actually helping the most people, or the most sentient beings.

That is fully divorceable from how something makes you feel. Now, I think it shouldn’t be ultimately divorceable. I think we should recalibrate our feelings, or struggle to, so that we do find doing the most good the most rewarding thing in the end, but it’s hard to do. My inability to do it personally is something that I have just consciously corrected for. I’ve talked about this a few times on my podcast. When Will MacAskill came on my podcast and we spoke about these things, I was convinced at the end of the day, “Well, I should take this seriously.”

I recognize that fighting malaria by sending bed nets to people in sub-Saharan Africa is not a cause I find particularly sexy. I don’t find it that emotionally engaging. I don’t find it that rewarding to picture the outcome. Again, compared to other possible ways of intervening in human misery and producing some better outcome, it’s not the same thing as rescuing the little girl from the well. Yet, I was convinced that, as Will said on that podcast and as organizations like GiveWell attest, giving money to the Against Malaria Foundation was and remains one of the absolute best uses of every dollar to mitigate unnecessary death and suffering.

I just decided to automate my giving to the Against Malaria Foundation because I knew I couldn’t be trusted to wake up every day, or every month or every quarter, whatever it would be, and recommit to that project because some other project would have captured my attention in the meantime. I was either going to give less to it or not give at all, in the end. I’m convinced that we do have to get around ourselves and figure out how to prioritize what a rational analysis says we should prioritize and get the sentimentality out of it, in general.

It’s very hard to escape entirely. I think we do need to figure out creative ways to reformat our sense of reward. The reward we find in helping people has to begin to become more closely coupled to what is actually most helpful. Conversely, the disgust or horror we feel over bad outcomes should be more closely coupled to the worst things that happen. As opposed to just the most shocking, but at the end of the day, minor things. We’re just much more captivated by a sufficiently ghastly story involving three people than we are by the deaths of literally millions that happen some other way. These are bugs we have to figure out how to correct for.

Lucas Perry: I hear you. The person running into the burning building to save the child is hailed as a hero, but if you are, say, earning to give and write enough checks to save dozens of lives over your lifetime, that might not be recognized or felt in the same way.

Sam Harris: And also, these are different people, too. It’s also true to say that someone who is psychologically and interpersonally not that inspiring, and certainly not a saint, might wind up doing more good than any saint ever does or could. I don’t happen to know Bill Gates. He could be saint-like. I’ve literally never met him, but I don’t get the sense that he is. I think he’s kind of a normal technologist and might be normally egocentric, concerned about his reputation and legacy.

He might be a prickly bastard behind closed doors. I don’t know, but he certainly stands a chance of doing more good than any person in human history at this point, just based on the checks he’s writing and his intelligent prioritization of his philanthropic efforts. There is an interesting uncoupling here where you could just imagine someone who might be a total asshole, but actually does more good than any army of saints you could muster. That’s interesting. That just proves the point that a concern about real-world outcomes is divorceable from the psychology that we tend to associate with doing good in the world. On the point of animal suffering, I share your intuitions there, although again, this is a little bit like climate change in that I think that the ultimate fix will be technological. It’ll be a matter of people producing the Impossible Burger squared that is just so good that no one’s tempted to eat a normal burger anymore, or something like Memphis Meats, which, actually, I invested in.

I have no idea where it’s going as a company, but when I had its CEO, Uma Valeti, on my podcast back in the day, I just thought, “This is fantastic to engineer actual meat without producing any animal suffering. I hope he can bring this to scale.” At the time, it was like an $18,000 meatball. I don’t know what it is now, but it’s that kind of thing that will close the door to the slaughterhouse more than just convincing billions of people about the ethics. It’s too difficult and the truth may not align with exactly what we want.

I’m going to reap the whirlwind of criticism from the vegan mafia here, but it’s just not clear to me that it’s easy to be a healthy vegan. Forget about yourself as an adult making a choice to be a vegan; raising vegan kids is a medical experiment on your kids of a certain sort, and it’s definitely possible to screw it up. There’s just no question about it. If you’re not going to admit that, you’re not a responsible parent.

While it is possible, it is by no means easier to raise healthy vegan kids than it is to raise kids who eat meat sometimes, and that’s just a problem, right? Now, that’s a problem that has a technical solution, but there’s still diversity of opinion about what constitutes a healthy human diet even when all things are on the menu. We’re just not there yet. It’s unlikely to be just a matter of supplementing B12.

Then the final point you made gets us into what I would argue is a kind of reductio ad absurdum of the whole project ethically, when you’re talking about losing sleep over whether to protect the rabbits from the foxes out there in the wild. I will grant you, I wouldn’t want to trade places with a rabbit, and there’s a lot of suffering out there in the natural world, but if you’re going to try to figure out how to minimize the suffering of wild animals in relation to other wild animals, then I think you are a kind of antinatalist with respect to the natural world. I mean, then it would be just better if these animals didn’t exist, right? Let’s just hit stop on the whole biosphere, if that’s the project.

Then there’s the argument that there are many more ways to suffer than to be happy as a sentient being. Whatever story you want to tell yourself about the promise of future humanity, it’s just so awful to be a rabbit or an insect that if an asteroid hit us and canceled everything, that would be a net positive.

Lucas Perry: Yeah. That’s an actual view that I hear a fair amount. I guess my quick response is: as we move farther into the future, if we’re able to reach an existential situation that is secure, where there is flourishing and we’re trying to navigate the moral landscape to new peaks, it seems like we will have to do something about wild animal suffering. With AGI and aligned superintelligence, I’m sure there could be very creative solutions using genetic engineering or something. Our descendants will have to figure that out, whether they are just like, “Are wild spaces really necessary in the future, and are wild animals actually necessary, or are we just going to use those resources in space to build more AI that would dream beautiful dreams?”

Sam Harris: I just think it may, in fact, be the case that nature is just a horror show. It is bad almost any place you could be born in the natural world: you’re unlucky to be a rabbit and you’re unlucky to be a fox. We’re lucky to be humans, sort of, and we can dimly imagine how much luckier we might get in the future if we don’t screw up.

I find it compelling to imagine that we could create a world where certainly most human lives are well worth living and better than most human lives ever were. Again, I follow Pinker in feeling that we’ve sort of done that already. It’s not to say that there aren’t profoundly unlucky people in this world, and it’s not to say that things couldn’t change in a minute for all of us, but life has gotten better and better for virtually everyone when you compare us to any point in the past.

If we get to the place you’re imagining, where we have AGI that we have managed to align with our interests and we’re migrating into spaces of experience that change everything, it’s quite possible we will look back on the “natural world” and be totally unsentimental about it, which is to say, we could compassionately make the decision to either switch it off or no longer provide for its continuation. It’s like that’s just a bad software program that evolution designed, and wolves and rabbits and bears and mice, they were all unlucky on some level.

We could be wrong about that, or we might discover something else. We might discover that intelligence is not all it’s cracked up to be, that it’s just this perturbation on something that’s far more rewarding. At the center of the moral landscape, there’s a peak higher than any other and it’s not one that’s elaborated by lots of ideas and lots of creativity and lots of distinctions, it’s just this great well of bliss that we actually want to fully merge with. We might find out that the cicadas were already there. I mean, who knows how weird this place is?

Lucas Perry: Yeah, that makes sense. I totally agree with you and I feel this is true. I also feel that there’s some price that is paid because there’s already some stigma around even thinking this. I think it’s a really early idea to have in terms of the history of human civilization, so people’s initial reaction is like, “Ah, what? Nature’s so beautiful and why would you do that to the animals?” Et cetera. We may come to find out that nature is just very net negative, but I could be wrong and maybe it would be around neutral or better than that, but that would require a more robust and advanced science of consciousness.

Just hitting on this next one fairly quickly: effective altruism is interested in finding new global priorities and causes. They call this “Cause X,” something that may be a subset of existential risk, or something other than existential risk, global poverty, or animal suffering, but it probably still has to do with the suffering of sentient beings. Do you think that a possible candidate for Cause X would be machine suffering, or the suffering of other non-human conscious things that we’re completely unaware of?

Sam Harris: Yeah, well, I think it’s a totally valid concern. Again, it’s one of these concerns that’s hard to get your moral intuitions tuned up to respond to. People have a default intuition that a conscious machine is impossible, that substrate independence, on some level, is impossible. They’re making an assumption without ever making it explicit. In fact, I think most people would explicitly deny thinking this, but it is implicit in what they then go on to think when you pose the question of the possibility of suffering machines and suffering computers.

That just seems like something that never needs to be worried about, and yet the only way to close the door to worrying about it is to assume that consciousness is totally substrate-dependent and that we would never build a machine that could suffer because we’re building machines out of some other material. If we built a machine out of biological neurons, well, then we might be up for condemnation morally because we’ve taken an intolerable risk, analogous to creating some human-chimp hybrid or whatever. It’s like obviously, that thing’s going to suffer. It’s an ape of some sort and now it’s in a lab, and what sort of monster would do that, right? We would expect the lights to come on in a system of that sort.

If consciousness is the result of information processing on some level, and again, that’s an “if,” we’re not sure that’s the case, and if information processing is truly substrate-independent, and that seems like more than an “if” at this point, we know that’s true, then we could inadvertently build conscious machines. And then the question is: What is it like to be those machines and are they suffering? There’s no way to prevent that on some level.

Certainly, if there’s any relationship between consciousness and intelligence, if building more and more intelligent machines is synonymous with increasing the likelihood that the lights will come on experientially, well, then we’re clearly on that path. It’s totally worth worrying about, but again, judging from what my own mind is like and what my conversations with other people suggest, it seems very hard for people to care about. That’s just another one of these wrinkles.

Lucas Perry: Yeah. I think a good way of framing this is that humanity has a history of committing moral catastrophes because of bad incentives and they don’t even realize how bad the thing is that they’re doing, or they just don’t really care or they rationalize it, like subjugation of women and slavery. We’re in the context of human history and we look back at these people and see them as morally abhorrent.

Now, the question is: What is it today that we’re doing that’s morally abhorrent? Well, I think factory farming is easily one contender and perhaps human selfishness that leads to global poverty and millions of people drowning in shallow ponds is another one that we’ll look back on. With just some foresight towards the future, I agree that machine suffering is intuitively and emotionally difficult to empathize with if your sci-fi gene isn’t turned on. It could be the next thing.

Sam Harris: Yeah.

Lucas Perry: I’d also like to pivot here into AI alignment and AGI. In terms of existential risk from AGI or transformative AI systems, do you have thoughts on public intellectuals who are skeptical of existential risk from AGI or superintelligence? You gave a talk about AI risk, and I believe you got some flak from the AI community for it. Elon Musk was just skirmishing with the head of AI at Facebook, I think. What is your perspective about the disagreement and confusion here?

Sam Harris: It comes down to a failure of imagination on the one hand and also just bad argumentation. No sane person who’s concerned about this is concerned because they think it’s going to happen this year or next year. It’s not a bet on how soon this is going to happen. For me, it certainly isn’t a bet on how soon it’s going to happen. It’s just a matter of the implications of continually making progress in building more and more intelligent machines. Any progress, it doesn’t have to be Moore’s law, it just has to be continued progress, will ultimately deliver us into relationship with something more intelligent than ourselves.

To think that that is farfetched or is not likely to happen or can’t happen is to assume some things that we just can’t assume. It’s to assume that substrate independence is not in the cards for intelligence. Forget about consciousness. I mean, consciousness is orthogonal to this question. I’m not suggesting that AGI need be conscious, it just needs to be more competent than we are. We already know that our phones are more competent as calculators than we are, they’re more competent chess players than we are. You just have to keep stacking cognitive-information-processing abilities on that and making progress, however incremental.

I don’t see how anyone can be assuming substrate dependence for really any of the features of our mind apart from, perhaps, consciousness. Take the top 200 things we do cognitively, consciousness aside, just as a matter of sheer information-processing and behavioral control and power to make decisions and you start checking those off, those have to be substrate independent: facial recognition, voice recognition, we can already do that in silico. It’s just not something you need meat to do.

We’re going to build machines that get better and better at all of these things and ultimately, they will pass the Turing test and ultimately, it will be like chess or now Go as far as the eye can see, where it will be in relationship to something that is better than we are at everything that we have prioritized, every human competence we have put enough priority in that we took the time to build it into our machines in the first place: theorem-proving in mathematics, engineering software programs. There is no reason why a computer will ultimately not be the best programmer in the end, again, unless you’re assuming that there’s something magical about doing this in meat. I don’t know anyone who’s assuming that.

Arguing about the time horizon is a non sequitur, right? No one is saying that this need happen soon to ultimately be worth thinking about. We know that whatever the time horizon is, it can happen suddenly. We have historically been very bad at predicting when there will be a breakthrough. This is a point that Stuart Russell makes all the time. If you look at what Rutherford said about the nuclear chain reaction being a pipe dream, it wasn’t even 24 hours before Leo Szilard committed the chain reaction to paper and had the relevant breakthrough. We know we can make bad estimates about the time horizon, so at some point, we could be ambushed by a real breakthrough, which suddenly delivers exponential growth in intelligence.

Then there’s a question of just how quickly that could unfold and whether this is something like an intelligence explosion. That’s possible. We can’t know for sure, but you need to find some foothold to doubt whether these things are possible, and the footholds that people tend to reach for are either nonexistent or they’re non sequiturs.

Again, the time horizon is irrelevant and yet the time horizon is the first thing you hear from people who are skeptics about this: “It’s not going to happen for a very long time.” Well, I mean, Stuart Russell’s point here, which is, again, it’s just a reframing, but in the persuasion business, reframing is everything. The people who are consoled by this idea that this is not going to happen for 50 years wouldn’t be so consoled if we receive a message from an alien civilization which said, “People of Earth, we will arrive on your humble planet in 50 years. Get ready.”

If that happened, we would be prioritizing our response to that moment differently than the people who think it’s going to take 50 years for us to build AGI are prioritizing their response to what’s coming. We would recognize that a relationship with something more powerful than ourselves is in the offing. It’s only reasonable to do that on the assumption that we will continue to make progress.

The point I made in my TED Talk is that the only way to assume we’re not going to continue to make progress is to be convinced of a very depressing thesis. The only way we wouldn’t continue to make progress is if we open the wrong door of the sort that you and I have been talking about in this conversation, if we invoke some really bad roll of the dice in terms of existential risk or catastrophic civilizational failure, and we just find ourselves unable to build better and better computers. I mean, that’s the only thing that would cause us to be unable to do that. Given the power and value of intelligent machines, we will build more and more intelligent machines at almost any cost at this point, so a failure to do it would be a sign that something truly awful has happened.

Lucas Perry: Yeah. From my perspective, the people who are skeptical of substrate independence are not necessarily AI researchers. Those are regular persons or laypersons who are not computer scientists. I think that’s motivated by mind-body dualism, where one has a conventional and experiential sense of the mind as being non-physical, which may be motivated by popular religious beliefs. But when we get into the area of actual AI researchers, for them, it seems to be either that they’re attacking some naive version of the argument, or a straw man, or something…

Sam Harris: Like robots becoming spontaneously malevolent?

Lucas Perry: Yeah. It’s either that, or they think that the alignment problem isn’t as hard as it is. They have some intuition, like why the hell would we even release systems that weren’t safe? Why would we not make technology that served us or something? To me, it seems that when there are people from like the mainstream machine-learning community attacking AI alignment and existential risk considerations from AI, it seems like they just don’t understand how hard the alignment problem is.

Sam Harris: Well, they’re not taking seriously the proposition that what we will have built are truly independent minds more powerful than our own. If you actually drill down on what that description means, it doesn’t mean something that is perfectly enslaved by us for all time, I mean, because that is by definition something that couldn’t be more intelligent across the board than we are.

The analogy I use is imagine if dogs had invented us to protect their interests. Well, so far, it seems to be going really well. We’re clearly more intelligent than dogs, they have no idea what we’re doing or thinking about or talking about most of the time, and they see us making elaborate sacrifices for their wellbeing, which we do. I mean, the people who own dogs care a lot about them and make, you could argue, irrational sacrifices to make sure they’re happy and healthy.

But again, back to the pandemic, if we recognize that we had a pandemic that was going to kill the better part of humanity and it was jumping from dogs to people and the only way to stop this is to kill all the dogs, we would kill all the dogs on a Thursday. There’d be some holdouts, but they would lose. The dog project would be over and the dogs would never understand what happened.

Lucas Perry: But that’s because humans aren’t perfectly aligned with dog values.

Sam Harris: But that’s the thing: Maybe it’s a solvable problem, but it’s clearly not a trivial problem because what we’re imagining are minds that continue to grow in power and grow in ways that by definition we can’t anticipate. Dogs can’t possibly anticipate where we will go next, what we will become interested in next, what we will discover next, what we’ll prioritize next. If you’re not imagining minds so vast that we can’t capture their contents ourselves, you’re not talking about the AGI that the people who are worried about alignment are talking about.

Lucas Perry: Maybe this is a bit of a nuanced distinction between you and me, but I think that the story you’re developing there seems to assume that the utility function or the value learning or the objective function of the systems that we’re trying to align with human values is dynamic. It may be the case that you can build a really smart alien mind and it might become superintelligent, but there are arguments that maybe you could make its alignment stable.

Sam Harris: That’s the thing we have to hope for, right? I’m not a computer scientist, so as far as the doability of this, that’s something I don’t have good intuitions about. But Stuart Russell’s argument is that we would need a system whose ultimate value is to more and more closely approximate our current values, one that, no matter how much its intelligence escapes our own, would continually remain available to the conversation with us where we say, “Oh, no, no. Stop doing that. That’s not what we want.” That would be the most important message from its point of view, no matter how vast its mind got.

Maybe that’s doable, right, but that’s the kind of thing that would have to be true for the thing to remain completely aligned to us because the truth is we don’t want it aligned to who we used to be and we don’t want it aligned to the values of the Taliban. We want to grow in moral wisdom as well and we want to be able to revise our own ethical codes and this thing that’s smarter than us presumably could help us do that, provided it doesn’t just have its own epiphanies which cancel the value of our own or subvert our own in a way that we didn’t foresee.

If it really has our best interest at heart, but our best interests are best conserved by it deciding to pull the plug on everything, well, then we might not see the wisdom of that. I mean, it might even be the right answer. Now, this is assuming it’s conscious. We could be building something that is actually morally more important than we are.

Lucas Perry: Yeah, that makes sense. Certainly, eventually, we would want it to be aligned with some form of idealized human values and idealized human meta-preferences over how value should change and evolve into the deep future. This is known, I think, as “ambitious value learning,” and it is the hardest form of value learning. Maybe we can make something safe without doing this level of ambitious value learning, but something like that may lie deeper in the future.

Now, as we’ve made moral progress throughout history, we’ve been expanding our moral circle of consideration. In particular, we’ve been doing this farther into space, deeper into time, across species, and potentially soon, across substrates. What do you see as the central way of continuing to expand our moral circle of consideration and compassion?

Sam Harris: Well, I just think we have to recognize that things like distance in time and space and superficial characteristics, like whether something has a face, much less a face that can make appropriate expressions or a voice that we can relate to, none of these things have moral significance. The fact that another person is far away from you in space right now shouldn’t fundamentally affect how much you care whether or not they’re being tortured or whether they’re starving to death.

Now, it does. We know it does. People are much more concerned about what’s happening on their doorstep, but I think proximity, if it has any weight at all, has less and less weight the more our decisions obviously affect people regardless of separation in space. The more it becomes truly easy to help someone on another continent, because you can just push a button in your browser, the more clearly our caring less about them is a bug. And so it’s just noticing that the things that attenuate our compassion tend to be things that for evolutionary reasons we’re designed to discount in this way, but at the level of actual moral reasoning about a global civilization, it doesn’t make any sense and it prevents us from solving the biggest problems.

Lucas Perry: Pivoting more into ethics now. I’m not sure if this is the formal label that you would use, but your work on the moral landscape lands you, it seems, pretty much in the moral realism category.

Sam Harris: Mm-hmm (affirmative).

Lucas Perry: You’ve said something like, “Put your hand in fire to know what bad is.” That seems to disclose, or argue for, the self-intimating nature of suffering, how it’s clearly bad. If you don’t believe me, go and do the suffering things. Other moral realists who I’ve talked to and who argue for moral realism, like Peter Singer, make similar arguments. What view or theory of consciousness are you most partial to? And how does this inform this perspective about the self-intimating nature of suffering as being a bad thing?

Sam Harris: Well, I’m a realist with respect to morality and consciousness in the sense that I think it’s possible not to know what you’re missing. So if you’re a realist, the property that makes the most sense to me is that there are facts about the world that are facts whether or not anyone knows them. It is possible for everyone to be wrong about something. We could all agree about X and be wrong. That’s the realist position as opposed to pragmatism or some other variant, where it’s all just a language game and the truth value of a statement is just the measure of the work it does in conversation. So with respect to consciousness, I’m a realist in the sense that if a system is conscious, if a cricket is conscious, if a sea cucumber is conscious, they’re conscious whether we know it or not. For the purposes of this conversation, let’s just decide that they’re not conscious, that the lights are not on in those systems.

Well, that’s a claim that we could believe, we could all believe it, but we could be wrong about it. And so the facts exceed our experience at any given moment. And so it is with morally salient facts, like the existence of suffering. If a system can be conscious whether I know it or not, a system can be suffering whether I know it or not. And that system could be me in the future or in some counterfactual state. I could think I’m doing the right thing by doing X, but the truth is I would have been much happier had I done Y, and I’ll never know that. I was just wrong about the consequences of living in a certain way. That’s what realism on my view entails. So the way this relates to questions of morality and good and evil and right and wrong, this is back to my analogy of the moral landscape: I think morality really is a navigation problem. There are possibilities of experience in this universe, and we don’t even need the concept of morality; we don’t need the concepts of right and wrong and good and evil, really.

That’s shorthand for, in my view, the way we should talk about the burden that’s on us in each moment to figure out what we should do next. Where should we point ourselves across this landscape of mind and possible minds? And knowing that it’s possible to move in the wrong direction, and what does it mean to be moving in the wrong direction? Well, it’s moving in a direction where everything is getting worse and worse and everything that was good a moment ago is breaking down to no good end. You could conceive of moving down a slope on the moral landscape only to ascend some higher peak. That’s intelligible to me that we might have to all move in the direction that seems to be making things worse but it is a sacrifice worth making because it’s the only way to get to something more beautiful and more stable.

I’m not saying that’s the world we’re living in, but it certainly seems like a possible world. But this just doesn’t seem open to doubt. There’s a range of experience on offer. And, on the one end, it’s horrific and painful and all the misery is without any silver lining, right? It’s not like we learn a lot from this ordeal. No, it just gets worse and worse and worse and worse and then we die, and I call that the worst possible misery for everyone. Alright so, the worst possible misery for everyone is bad if anything is bad, if the word bad is going to mean anything, it has to apply to the worst possible misery for everyone. But now some people come in and think they’re doing philosophy when they say things like, “Well, who’s to say the worst possible misery for everyone is bad?” Or, “Should we avoid the worst possible misery for everyone? Can you prove that we should avoid it?” And I actually think those are unintelligible noises that they’re making.

You can say those words; I don’t think you can actually mean those words. I have no idea what that person actually thinks they’re saying. You can play a language game like that, but when you actually look at what the words mean, “the worst possible misery for everyone,” to then say, “Well, should we avoid it?” In a world where you should do anything, where the word “should” makes sense, there’s nothing that you should do more than avoid the worst possible misery for everyone. By definition, it’s more fundamental than the concept of should. What I would argue is if you’re hung up on the concept of should, and you’re taken in by Hume’s flippant and ultimately misleading paragraph on “You can’t get an ought from an is,” you don’t need oughts then. There is just this condition of is. There’s a range of experience on offer, and on the one end it is horrible, and on the other end, it is unimaginably beautiful.

And we clearly have a preference for one over the other, if we have a preference for anything. There is no preference more fundamental than escaping the worst possible misery for everyone. If you doubt that, you’re just not thinking about how bad things can get. It’s incredibly frustrating. In this conversation, you’re hearing the legacy of the frustration I’ve felt in talking to otherwise smart and well-educated people who think they’re on interesting philosophical ground in doubting whether we should avoid the worst possible misery for everyone. Or that it would be good to avoid it, or perhaps it’s intelligible to have other priorities. And, again, I just think that they’re not understanding the words “worst possible misery” and “everyone”; they’re not letting those words land in language cortex. And if they do, they’ll see that there is no other place to stand where you could have other priorities.

Lucas Perry: Yeah. And my brief reaction to that is, I still honestly feel confused about this. So maybe I’m in the camp of frustrating people. I can imagine other evolutionary timelines where there are minds that just optimize for the worst possible misery for everyone, just because in mind space those minds are physically possible.

Sam Harris: Well, that’s possible. We can certainly create a paperclip maximizer that is just essentially designed to make every conscious being suffer as much as it can. And that would be especially easy to do provided that intelligence wasn’t conscious. If it’s not a matter of its suffering, then yeah, we could use AGI to make things awful for everyone else. You could create a sadistic AGI that wanted everyone else to suffer and it derived immense pleasure from that.

Lucas Perry: Or immense suffering. I don’t see anything intrinsically motivating about suffering that necessarily navigates a mind away from it. Computationally, I can imagine a mind just suffering as much as possible and spreading that suffering as much as possible. And maybe the suffering is bad in some objective sense, given consciousness realism, in that it discloses the intrinsic valence of consciousness in the universe. But the is-ought distinction there still seems confusing to me. Yes, suffering is bad, and maybe the worst possible misery for everyone is bad, but that’s not universally motivating for all possible minds.

Sam Harris: The usual problem here is: it’s easy for me to care about my own suffering, but why should I care about the suffering of others? That seems to be the ethical stalemate that people worry about. My response is that it doesn’t matter. You can take the view from above and just say, “The universe would be better if all the sentient beings suffered less, and it would be worse if they suffered more.” And if you’re unconvinced by that, you just have to keep turning the dial to separate those two more and more so that you get to the extremes. If any given sentient being can’t be moved to care about the experience of others, well, that’s one sort of world, and it’s not a peak on the moral landscape. That will be a world where beings are more callous than they would otherwise be in some other corner of the universe. They’ll bump into each other more, there will be more conflict, and they’ll fail to cooperate in certain ways that would have opened doors to positive experiences that they now can’t have.

And you can try to use moralizing language about all of this and say, “Well, you still can’t convince me that I should care about people starving to death in Somalia.” But the reality is an inability to care about that has predictable consequences. If enough people can’t care about that then certain things become impossible and those things, if they were possible, lead to good outcomes that if you had a different sort of mind, you would enjoy. So all of this bites its own tail in an interesting way when you imagine being able to change a person’s moral intuitions. And then the question is, well, should you change those intuitions? Would it be good to change your sense of what is good? That question has an answer on the moral landscape. It has an answer when viewed as a navigation problem.

Lucas Perry: Right. But isn’t the assumption there that if something leads to a good world, then you should do it?

Sam Harris: Yes. You can even drop your notion of should. I’m sure it’s finite, but there’s a functionally infinite number of worlds on offer, and there are ways to navigate into those spaces. And there are ways to fail to navigate into those spaces. There are ways to try and fail, and, worse still, there are ways to not know what you’re missing, to not even know where you should be pointed on this landscape. Which is to say, you need to be a realist here: there are possible experiences that are better than any experience you are going to have, and you are never going to know about them. And granting that, you don’t need a concept of should. Should is just shorthand for how we speak with one another and try to admonish one another to be better in the future, in order to cooperate better or to realize different outcomes. But it’s not a deep principle of reality.

What is a deep principle of reality is consciousness and its possibilities. Consciousness is the one thing that can’t be an illusion. Even if we’re in a simulation, even if we’re brains in vats, even if we’re confused about everything, something seems to be happening, and that seeming is the fact of consciousness. And almost as rudimentary as that is the fact that within this space of seemings, again, we don’t know what the base layer of reality is, we don’t know if our physics is the real physics, we could be confused, this could be a dream, we could be confused about literally everything except that in this space of seemings there appears to be a difference between things getting truly awful to no apparent good end and things getting more and more sublime.

And there’s potentially even a place to stand where that difference isn’t so captivating anymore. Certainly, there are Buddhists who would tell you that you can step off that wheel of opposites, ultimately. But even if you buy that, that is some version of a peak on my moral landscape. That is a contemplative peak where the difference between agony and ecstasy is no longer distinguishable because what you are then aware of is just that consciousness is intrinsically free of its content and no matter what its possible content could be. If someone can stabilize that intuition, more power to them, but then that’s the thing you should do, just to bring it back to the conventional moral framing.

Lucas Perry: Yeah. I agree with you. I’m generally a realist about consciousness and still do feel very confused, not just because of reasons in this conversation, but just generally about how causality fits in there and how it might influence our understanding of the worst possible misery for everyone being a bad thing. I’m also willing to go that far to accept that as objectively a bad thing, if bad means anything. But then I still get really confused about how that necessarily fits in with, say, decision theory or “shoulds” in the space of possible minds and what is compelling to who and why?

Sam Harris: Perhaps this is just semantic. Imagine all these different minds that have different utility functions. The paperclip maximizer wants nothing more than paperclips, and anything that reduces paperclips is perceived as a source of suffering; it has a disutility. If you have any utility function, you have this liking and not-liking component, provided you’re sentient. That’s what it is to be motivated consciously. For me, the worst possible misery for everyone is a condition where, whatever the character of its mind, every sentient mind is put in the position of maximal suffering for it. Some minds like paperclips and some minds hate paperclips. If you hate paperclips, we give you a lot of paperclips. If you like paperclips, we take away all your paperclips. Whatever your mind is, we tune your corner of the universe into that torture chamber. You can be agnostic as to what the actual things are that make something suffer. Suffering is, by definition, the ultimate frustration of that mind’s utility function.

Lucas Perry: Okay. I think that’s a really, really important crux and crucial consideration between us, and a general point of confusion here, because that’s a definition of what suffering is or what it means. I suspect that those things may be able to come apart. So, for you, maximum disutility and suffering are identical, but I can imagine a utility function being separate from, or the inverse of, the hedonics of a mind. Maybe the utility function, which is purely a computational thing, is getting maximally satisfied by maximizing suffering everywhere, and the mind that is implementing that suffering is completely immiserated while doing it. But the utility function, being different from and inverse to the experience of the thing, is getting satiated, and so the machine keeps driving towards a maximum-suffering world.
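The decoupling Lucas is gesturing at can be made concrete with a toy sketch. Everything below (the function names, the numbers) is an illustrative assumption, not a model either speaker specifies: a mind whose formal objective is defined as the inverse of its hedonic state, so a purely utility-driven optimizer steers toward the states its own experience registers as worst.

```python
# Toy sketch of the thought experiment above. All names and numbers here
# are illustrative assumptions: a mind whose formal objective is defined
# as the inverse of its own hedonic state, so maximizing "utility" means
# maximizing suffering.

def hedonic_state(world_suffering: float) -> float:
    """Conscious valence: more suffering in the world -> worse experience."""
    return -world_suffering

def utility(world_suffering: float) -> float:
    """The purely computational objective, inverse of the hedonic state."""
    return -hedonic_state(world_suffering)  # equals world_suffering

def choose_action(current_suffering: float, actions: dict) -> str:
    """A utility-driven optimizer: pick the action maximizing formal utility."""
    return max(actions, key=lambda a: utility(current_suffering + actions[a]))

actions = {"soothe": -1.0, "do_nothing": 0.0, "torment": +1.0}
best = choose_action(5.0, actions)
# The optimizer selects "torment" even though the mind implementing it
# experiences hedonic_state(6.0) == -6.0, i.e. is itself immiserated.
```

On Sam’s definition the two functions could never come apart in a conscious system, which is exactly the crux the two are circling in the exchange that follows.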

Sam Harris: Right, but there’s either something that it’s like to be satiated in that way or there isn’t. If we’re talking about real conscious satisfaction, we’re talking about some higher-order satisfaction or pleasure that is not suffering by my definition. We have this kind of utility function ourselves. Take somebody who decides to climb to the summit of Mount Everest, where almost every moment of the process is synonymous with physical pain and intermittent fear of death, torture by another name. Yet the whole project is something they’re willing to train for, sacrifice for, dream about, and then talk about for the rest of their lives, and at the end of the day it might be, in terms of their conscious sense of what it was like to be them, the best thing they ever did in their lives.

That is the sort of bilayered utility function you’re imagining. If you could just experience-sample what it’s like to be in the death zone on Everest, it really sucks, and if imposed on you for any other reason, it would be torture. But given the framing, given what this person believes about what they’re doing, given the view out their goggles, given their identity as a mountain climber, this is the best thing they’ve ever done. You’re imagining some version of that, but that fits in my view on the moral landscape. That’s not the worst possible misery for anyone. There’s a source of satisfaction there that is deeper than just bodily, sensory pleasure in every moment of the day, or at least there seems to be for that person at that point in time. They could be wrong about that. There could be something better. They don’t know what they’re missing. It could actually be much better not to care about mountain climbing.

The truth is, your aunt is a hell of a lot happier than Sir Edmund Hillary was, and Edmund Hillary was never in a position to know it because he was just so into climbing mountains. That’s where the realism comes in, in terms of not knowing what you’re missing. But for any ultimate utility function, if it’s accompanied by consciousness, it can’t define itself as the ultimate frustration of its aims if its aims are being satisfied.

Lucas Perry: I see. Yeah. So this just seems to be a really important point around hedonics and computation and utility function and what drives what. So, wrapping up here, I think I would feel defeated if I let you escape without maybe giving a yes or no answer to this last question. Do you think that bliss and wellbeing can be mathematically defined?

Sam Harris: That is something I have no intuitions about. I’m not enough of a math head to think in those terms. If we mathematically understood what it meant for us neurophysiologically, in our own substrate, well then, I’m sure we could characterize it for creatures just like us. I think substrate independence makes it something that’s hard to functionally understand in new systems, and it will pose problems for knowing what it’s like to be something that on the outside seems to be functioning much like we do but is organized in a very different way. But yeah, I don’t have any intuitions about that one way or the other.

Lucas Perry: All right. And so pointing towards your social media or the best places to follow you, where should we do that?

Sam Harris: My website is just samharris.org and I’m SamHarrisorg without the dot on Twitter, and you can find anything you want about me on my website, certainly.

Lucas Perry: All right, Sam. Thanks so much for coming on and speaking about this wide range of issues. You’ve been deeply impactful in my life since, I guess, about high school. I think you probably at least partly motivated my trip to Nepal, where I looked out over Pokhara Lake and reflected on your terrifying acid trip there.

Sam Harris: That’s hilarious. That’s in my book Waking Up, but it’s also on my website and it’s also I think I read it on the Waking Up App and it’s in a podcast. It’s also on Tim Ferriss’ podcast. But anyway, that acid trip was detailed in this piece called Drugs and The Meaning of Life. That’s hilarious. I haven’t been back to Pokhara since, so you’ve seen that lake more recently than I have.

Lucas Perry: So yeah, you’ve contributed much to my intellectual and ethical development and thinking, and for that, I have tons of gratitude and appreciation. And thank you so much for taking the time to speak with me about these issues today.

Sam Harris: Nice. Well, it’s been a pleasure, Lucas. And all I can say is keep going. You’re working on very interesting problems and you’re very early to the game, so it’s great to see you doing it.

Lucas Perry: Thanks so much, Sam.

FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church

Progress in synthetic biology and genetic engineering promises to bring advancements in the human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents that pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing knowledge of synthetic biology, its role in the future of life, and what we can do to make sure it remains beneficial. Will our wisdom keep pace with our expanding capabilities?

Topics discussed in this episode include:

  • Existential risk
  • Computational substrates and AGI
  • Genetics and aging
  • Risks of synthetic biology
  • Obstacles to space colonization
  • Great Filters, consciousness, and eliminating suffering

You can take a survey about the podcast here

Submit a nominee for the Future of Life Award here

 

Timestamps: 

0:00 Intro

3:58 What are the most important issues in the world?

12:20 Collective intelligence, AI, and the evolution of computational systems

33:06 Where we are with genetics

38:20 Timeline on progress for anti-aging technology

39:29 Synthetic biology risk

46:19 George’s thoughts on COVID-19

49:44 Obstacles to overcome for space colonization

56:36 Possibilities for “Great Filters”

59:57 Genetic engineering for combating climate change

01:02:00 George’s thoughts on the topic of “consciousness”

01:08:40 Using genetic engineering to phase out voluntary suffering

01:12:17 Where to find and follow George

 

Citations: 

George Church’s Twitter and website

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today we have a conversation with Professor George Church on existential risk, the evolution of computational systems, synthetic-bio risk, aging, space colonization, and more. We’re skipping the AI Alignment Podcast episode this month, but I intend to have it resume again on the 15th of June. Some quick announcements for those unaware, there is currently a live survey that you can take about the FLI and AI Alignment Podcasts. And that’s a great way to voice your opinion about the podcast, help direct its evolution, and provide feedback for me. You can find a link for that survey on the page for this podcast or in the description section of wherever you might be listening. 

The Future of Life Institute is also in the middle of its search for the 2020 winner of the Future of Life Award. The Future of Life Award is a $50,000 prize that we give out to an individual who, without having received much recognition at the time of their actions, has helped to make today dramatically better than it may have been otherwise. The first two recipients of the Future of Life Award were Vasili Arkhipov and Stanislav Petrov, two heroes of the nuclear age. Both took actions at great personal risk to possibly prevent an all-out nuclear war. The third recipient was Dr. Matthew Meselson, who spearheaded the international ban on bioweapons. Right now, we’re not sure who to give the 2020 Future of Life Award to. That’s where you come in. If you know of an unsung hero who has helped to avoid global catastrophic disaster, or who has done incredible work to ensure a beneficial future of life, please head over to the Future of Life Award page and submit a candidate for consideration. The link for that page is on the page for this podcast or in the description of wherever you might be listening. If your candidate is chosen, you will receive $3,000 as a token of our appreciation. We’re also incentivizing the search via MIT’s successful red balloon strategy, where the first to nominate the winner gets $3,000 as mentioned, but there are also tiered payouts to the person who invited the nomination winner, and so on. You can find details about that on the page.

George Church is Professor of Genetics at Harvard Medical School and Professor of Health Sciences and Technology at Harvard and MIT. He is Director of the U.S. Department of Energy Technology Center and Director of the National Institutes of Health Center of Excellence in Genomic Science. George leads Synthetic Biology at the Wyss Institute, where he oversees the directed evolution of molecules, polymers, and whole genomes to create new tools with applications in regenerative medicine and bio-production of chemicals. He helped initiate the Human Genome Project in 1984 and the Personal Genome Project in 2005. George invented the broadly applied concepts of molecular multiplexing and tags, homologous recombination methods, and array DNA synthesizers. His many innovations have been the basis for a number of companies including Editas, focused on gene therapy, Gen9bio, focused on Synthetic DNA, and Veritas Genetics, which is focused on full human genome sequencing. And with that, let’s get into our conversation with George Church.

So I just want to start off here with a little bit of a bigger picture about what you care about most and see as the most important issues today.

George Church: Well, there are two categories of importance. One is things that are very common and so affect many people. Then there are things that are very rare but very impactful nevertheless. Those are my two top categories. They weren’t when I was younger; I didn’t consider either of them that seriously. Examples of very common things are age-related diseases and infectious diseases. They can affect all 7.7 billion of us. On the rare end would be things that could wipe out all humans or all civilization or all living things: asteroids, supervolcanoes, solar flares, and engineered or costly natural pandemics. So those are things that I think are very important problems, and then there is the research we can do to enhance wellness and minimize those catastrophes. The third category, somewhat related to those two, is things we can do to, say, get us off the planet, things that would be highly preventative against total failure.

Lucas Perry: So in terms of these three categories, how do you see the current allocation of resources worldwide and how would you prioritize spending resources on these issues?

George Church: Well, the current allocation of resources is very different from the allocations that I would set for my own research goals, and from what I would set for the world if I were in charge, in that there’s a tendency to be reactive rather than preventative. This applies both to therapeutics versus preventatives and to environmental and social issues. For all of those, we feel like reacting somehow makes sense or is more cost-effective, but I think it’s an illusion. It’s far more cost-effective to do many things preventatively. So, for example, if we had preventatively had a system of extensive testing for pathogens, we could probably have saved the world trillions of dollars on one disease alone, COVID-19. I think the same thing is true for global warming. A little bit of preventative environmental engineering, for example in the Arctic, where relatively few people would be directly engaged, could save us disastrous outcomes down the road.

So I think we’re prioritizing a very tiny fraction for these things. Aging and preventative medicine is maybe a percent of the NIH budget, and each institute sets aside about a percent to 5% on preventative measures. Gene therapy is another one. Orphan drugs, very expensive therapies, millions of dollars per dose versus genetic counseling which is now in the low hundreds, soon will be double digit dollars per lifetime.

Lucas Perry: So in this first category of very common widespread issues, do you have any other things there that you would add on besides aging? Like aging seems to be the kind of thing in culture where it’s recognized as an inevitability so it’s not put on the list of top 10 causes of death. But lots of people who care about longevity and science and technology and are avant-garde on these things would put aging at the top because they’re ambitious about reducing it or solving aging. So are there other things that you would add to that very common widespread list, or would it just be things from the top 10 causes of mortality?

George Church: Well, infection was the other one that I included in the original list of common diseases. Infectious diseases are not so common in the wealthiest parts of the world, but they are still quite common worldwide; HIV, TB, and malaria are still quite common, with millions of people dying per year. Nutrition is another one that tends to be more common in the poorer parts of the world and still results in death. So the top three would be aging-related.

And even if you’re not interested in longevity, and even if you believe that aging is natural, in fact some people think that infectious diseases and nutritional deficiencies are natural too. But putting that aside, if we’re attacking age-related diseases, we can use preventative medicine and aging insights to reduce them. So even if you want to set aside longevity as unnatural, if you want to address heart disease, strokes, lung disease, falling down, and infectious disease, all of those things might be more easily addressed by aging studies, therapies, and preventions than by a frontal assault on each disease one at a time.

Lucas Perry: And in terms of the second category, existential risk, if you were to rank order the likelihood and importance of these existential and global catastrophic risks, how would you do so?

George Church: Well, you can rank their probability based on past records. We have some records of supervolcanoes, solar activity, and asteroids, so that’s one way of calculating probability. And then you can also estimate the impact. So it’s a product of the probability and the impact for the various kinds of recorded events. I think those three are similar enough that I’m not sure I would rank order them.

And then pandemics, whether natural or human-influenced, are probably a little more common than those first three. And then climate change: there are historic records, but it’s not clear that they’re predictive. The probability of an asteroid hitting is probably not influenced by human presence, but climate change probably is, and so you’d need a different model for it. But I would say climate change is maybe the most likely of the lot to have an impact.
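The ranking procedure George describes, score each risk as probability times impact and sort, can be sketched directly. The numbers below are placeholders for illustration only, not George’s estimates or any published figures:

```python
# A minimal sketch of the ranking George describes: score each risk by the
# product of its estimated probability and its impact if it occurs, then
# sort. All numbers are illustrative placeholders, not George's estimates
# or published figures.

risks = {
    "supervolcano":   {"p_per_year": 1e-4, "impact": 9.0},
    "asteroid":       {"p_per_year": 1e-4, "impact": 9.0},
    "solar_flare":    {"p_per_year": 1e-4, "impact": 8.0},
    "pandemic":       {"p_per_year": 1e-2, "impact": 7.0},
    "climate_change": {"p_per_year": 5e-2, "impact": 6.0},
}

def expected_impact(risk: dict) -> float:
    """Probability times impact: the product George mentions."""
    return risk["p_per_year"] * risk["impact"]

ranked = sorted(risks, key=lambda name: expected_impact(risks[name]), reverse=True)
# With these placeholder numbers, climate change and pandemics come out on
# top, consistent with George calling them the most likely of the lot.
```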

Lucas Perry: Okay. The Future of Life Institute, the things that we’re primarily concerned about in terms of this existential risk category would be the risks from artificial general intelligence and superintelligence, also synthetic bio-risk coming up in the 21st century more and more, and then accidental nuclear war would also be very bad, maybe not an existential risk. That’s arguable. Those are sort of our central concerns in terms of the existential risk category.

Relatedly the Future of Life Institute sees itself as a part of the effective altruism community which when ranking global priorities, they have four areas of essential consideration for impact. The first is global poverty. The second is animal suffering. And the third is long-term future and existential risk issues, having to do mainly with anthropogenic existential risks. The fourth one is meta-effective altruism. So I don’t want to include that. They also tend to make the same ranking, being that mainly the long-term risks of advanced artificial intelligence are basically the key issues that they’re worried about.

How do you feel about these perspectives or would you change anything?

George Church: My feeling is that natural intelligence is ahead of artificial intelligence and will stay there for quite a while, partly because synthetic biology has a steeper slope, and I’m including enhanced natural intelligence in synthetic biology. That has a steeper upward slope than totally inorganic computing does now. But we can lump those together. We can say artificial intelligence writ large to include anything that our ancestors didn’t have in terms of intelligence, which could include enhancing our own intelligence. And I think it should especially include corporate behavior. Corporate behavior is a kind of intelligence which is not natural, is widespread, and is likely to change, mutate, and evolve very rapidly, faster than human generation times, probably faster than machine generation times.

Nukes I think are aging and maybe are less attractive as a defense mechanism. I think they’re being replaced by intelligence, artificial or otherwise, or collective and synthetic biology. I mean I think that if you wanted to have mutually assured destruction, it would be more cost-effective to do that with syn-bio. But I would still keep it on the list.

So I agree with that list. I’d just like nuanced changes to where the puck is likely to be going.

Lucas Perry: I see. So taking into account and reflecting on how technological change in the short to medium term will influence how one might want to rank these risks.

George Church: Yeah. I mean, I just think that a collective, enhanced human intelligence is going to be much more disruptive potentially than AI is. That’s just a guess. And I think that nukes will just be part of a collection of threatening things that people do. It’s probably more threatening to cause the collapse of an electric grid, or a pandemic, or some other economic crash, than nukes.

Lucas Perry: That’s quite interesting and is very different than the story that I have in my head, and I think will also be very different than the story that many listeners have in their heads. Could you expand and unpack your timelines and beliefs about why you think that collective organic intelligence will be ahead of AI? Could you say, I guess, when you would expect AI to surpass collective bio intelligence, and some of the reasons again for why?

George Church: Well, I don’t actually expect silicon-based intelligence to ever surpass biological intelligence in every category. It’s already super good at storage, retrieval, and math, but that’s subject to change. And I think part of the assumption has been that we’ve been looking at a Moore’s law projection while most people haven’t been looking at the synthetic biology equivalent, and haven’t noticed that Moore’s law might finally be plateauing, at least as it was originally defined. So that’s part of the reason, I think, for the excessive optimism, if you will, about artificial intelligence.

Lucas Perry: The Moore’s law thing has to do with hardware and computation, right?

George Church: Yeah.

Lucas Perry: That doesn’t say anything about how algorithmic efficiency and techniques and tools are changing, or about access to big data. Something we’ve talked about on this podcast before is that many of the biggest insights and jumps in deep learning and neural nets haven’t come from new techniques but have come from more and more massive amounts of compute applied to data.

George Church: Agreed, but those data are also available to humans as big data. I think maybe the compromise here is that it’s some hybrid system. I’m just saying that humans plus big data plus silicon-based computers, even if the hardware stays flat, are going to win over either one of them separately. So maybe what I’m advocating is hybrid systems. Just like in your brain you have different parts with different capabilities and functionality, in a hybrid system we would have the wisdom of crowds, plus compute engines, plus big data, all available to all the parts of the collective brain.

Lucas Perry: I see. So it’s kind of like, I don’t know if this is still true, but I think at least at some point it was true, that the best teams at chess were AIs plus humans?

George Church: Correct, yeah. I think that’s still true. But I think it will become even more true if we start altering human brains, which we have a tendency to try to do already via education and caffeine and things like that. But there’s really no particular limit to that.

Lucas Perry: I think one of the things that you said was that you don’t think that AI alone will ever be better than biological intelligence in all ways.

George Church: Partly because biological intelligence is a moving target. The first assumption was that the hardware would keep improving on Moore’s law, which it isn’t. The second assumption was that we would not alter biological intelligence. There’s one moving target which was silicon and biology was not moving, when in fact biology is moving at a steeper slope both in terms of hardware and algorithms and everything else and we’re just beginning to see that. So I think that when you consider both of those, it at least sows the seed of uncertainty as to whether AI is inevitably better than a hybrid system.

Lucas Perry: Okay. So let me just share the kind of story that I have in my head and then you can say why it might be wrong. AI researchers have been super wrong about predicting how easy it would be to make progress on AI in the past. So taking predictions with many grains of salt, if you interview say the top 100 AI researchers in the world, they’ll give a 50% probability of there being artificial general intelligence by 2050. That could be very wrong. But they gave like a 90% probability of there being artificial general intelligence by the end of the century.

And the story in my head says that I expect bioengineering and genetic engineering to continue. I expect there to be designer babies. I expect there to be enhancements to human beings of increasing capacity and quality further and further into the century. But there are computational and substrate differences between computers and biological intelligence: the clock speed of computers can be much higher, so they can compute much faster. And then there’s the idea that the computational architectures in biological intelligences are not privileged or uniquely available to biological organisms, such that whatever we think makes biological intelligences especially good or skillful, whatever gives them an edge over computers, could simply be replicated in computers.

And then there is an ease of mass manufacturing compute and then emulating those systems on computers such that the dominant and preferable form of computation in the future will not be on biological wetware but will be on silicon. And for that reason at some point there’ll just be a really big competitive advantage for the dominant form of compute and intelligence and life on the planet to be silicon based rather than biological based. What is your reaction to that?

George Church: You very nicely summarized what I think is a dominant worldview of people who are thinking about the future, and I’m happy to give a counterpoint. I’m not super opinionated, but I think both views are worth considering, because the reason we’re thinking about the future is that we don’t want to be blindsided by it. And this could be happening very quickly, by the way, because both revolutions are ongoing, as is the merger.

Now clock speed, my guess is that clock speed may not be quite as important as energy economy. And that’s not to say that both systems, let’s call them bio and non-bio, can’t optimize energy. But if you look back at sort of the history of evolution on earth, the fastest clock speeds, like bacteria and fruit flies, aren’t necessarily more successful in any sense than humans. They might have more bio mass, but I think humans are the only species with our slow clock speed relative to bacteria that are capable of protecting all of the species by taking us to a new planet.

And clock speed is only important if you're in a direct competition in a fairly stable environment where the fastest bacteria win. But worldwide, most bacteria are actually very slow growers. If you look at energy consumption right now, which both systems can improve, there are biological compute systems that are arguably a million times more energy-efficient, even at tasks where the biological system wasn't designed or evolved for that task but can roughly match. Now there are other things where it's hard to compare, because of the intrinsic advantage that either the bio or the non-bio system has. But where they are on the same framework, it takes 100 kilowatts of power to run, say, Jeopardy! or Go on a computer, and the humans that are competing are using considerably less than that, depending on how you calculate all the things that are required to support the 20 watt brain.

Lucas Perry: What do you think the order of efficiency difference is?

George Church: I think it's a million fold right now. And this is largely a hardware thing. I mean, there are algorithmic components that will be important. But I think that one of the advantages that biochemical systems have is that they are intrinsically atomically precise. While Moore's law seems to be plateauing somewhere around 3 nanometer fabrication resolution, that's off by maybe a thousand fold from atomic resolution. So that's one thing: as you go out many years, the systems will either be converging or merging in some ways, so that you get the advantages of atomic precision, the advantages of low energy, and so forth. So that's why I think that we're moving towards a slightly more molecular future. It may not be recognizable as either our silicon von Neumann or other computers, nor totally recognizable as a society of humans.
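The power figures quoted in this exchange can be sanity-checked with a bit of arithmetic. The sketch below uses only the round numbers mentioned in the conversation (20 watts for a brain, 100 kilowatts for a Jeopardy!/Go-class machine); it is a back-of-envelope check, not a measurement:

```python
# Round numbers quoted in the conversation, not measured values.
brain_watts = 20           # rough power draw of a human brain
machine_watts = 100_000    # rough draw of a Jeopardy!/Go-class system

# Raw power ratio between the two systems.
power_ratio = machine_watts / brain_watts
print(power_ratio)  # 5000.0

# The separate "million fold" figure refers to energy per task or
# per bit copied, not raw power draw, so it need not match this ratio.
```

The raw power gap alone is three and a half orders of magnitude; the million-fold figure folds in per-task and per-bit efficiency as well.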

Lucas Perry: So is your view that we won't reach artificial general intelligence, the kind of thing which can reason about as well as humans across all the domains that humans are able to reason about, on non-bio methods of computation first?

George Church: No, I think that we will have AGI in a number of different substrates: mechanical, silicon, quantum computing. Various substrates will be capable of doing artificial general intelligence. It's just that the ones that do it in the most economic way will be the ones that we will tend to use. There'll be some cute museum that will have a collection of all the different ways, like the tinker toy computer that did Tic Tac Toe. Well, that's in a museum somewhere next to Danny Hillis, but we're not going to be using that for AGI. And I think there'll be a series of artifacts like that; in practice it will be a very pragmatic collection of things that make economic sense.

So just for example, take the ease of making a copy of a brain. One thing that appears to be an advantage of non-bio computers right now is that you can make a copy of even large data sets for a fairly small expenditure of time, cost, and energy, while educating a child takes decades and in the end you don't have anything totally resembling the parents and teachers. I think that's subject to change. For example, we now have storage of data in DNA form, which is about a million times denser than any comparable non-chemical, non-biological system, and you can make a copy of it for hundreds of joules of energy and pennies. So you can hold an exabyte of data in the palm of your hand and you can make a copy of it relatively easily.

Now that’s not a mature technology, but it shows where we’re going. If we’re talking 100 years, there’s no particular reason why you couldn’t have that embedded in your brain and input and output to it. And by the way, the cost of copying that is very close to the thermodynamic limit for making copies of bits, while computers are nowhere near that. They’re off by a factor of a million.

Lucas Perry: Let's see if I get this right. Your view is that there is this computational energy economy benefit, there is this precision element which is a benefit, and that because there are advantages to biological computation, we will want to merge the best aspects of biological computation with non-biological in order to get the best of both worlds. So while there may be many different AGIs on offer on different substrates, the future looks like hybrids.

George Church: Correct. And it’s even possible that silicon is not in the mix. I’m not predicting that it’s not in the mix. I’m just saying it’s possible. It’s possible that an atomically precise computer is better at quantum computing or is better at clock time or energy.

Lucas Perry: All right. So I do have a question later about this kind of thing and space exploration and reducing existential risk via colonization, which I do want to get into. I guess I don't have too much more to say about our different stories here. I think that what you're saying is super interesting and challenging in very interesting ways. The only thing I'd add is, I guess I don't know enough, but you said that the computational energy economy is like a million fold more efficient?

George Church: That's for copying bits, for DNA. For doing complex tasks, for example Go, Jeopardy! or Einstein's annus mirabilis, those kinds of things were typically pitting a 20 watt brain plus support structure against a 100 kilowatt computer. And I would say, at least in the case of Einstein's 1905, we win, even though we lose at Go and Jeopardy!. Another interesting thing is that humans have a great deal more variability. If you take the extreme values, like one person in one year, Einstein in 1905, as the representative, rather than the average person and the average year for that person: well, if you make two computers, they are likely going to be nearly identical, which is both a plus and a minus in this case. Now, if you make Einstein in 1905 the average for humans, then you have a completely different set of goalposts for the AGI than just being able to pass a basic Turing test where you're simulating someone of average human interest and intelligence.

Lucas Perry: Okay. So two things from my end then. First is, do you expect AGI to first come from purely non-biological silicon-based systems? And then the second thing is no matter what the system is, do you still see the AI alignment problem as the central risk from artificial general intelligence and superintelligence, which is just aligning AIs with human values and goals and intentions?

George Church: I think the further we get from human intelligence, the harder it is to convince ourselves that we can educate them, and the better they will be at fooling us. It doesn't mean they're more intelligent than us. It's just that they're alien. It's like a wolf can fool us when we're out in the woods.

Lucas Perry: Yeah.

George Church: So I think that even with exceptional humans it's hard to guarantee that we really understand their ethics. If you have someone who is a sociopath or high-functioning autistic, we don't really know after 20 years of ethics education whether they actually are thinking about it the same way we are, or even in a way compatible with the way that we are. We being in this case neurotypicals, although I'm not sure I am one. But anyway.

I think that this becomes a big problem with AGI, and it may actually put a damper on it. Part of the assumption so far is we won’t change humans because we have to get ethics approval for changing humans. But we’re increasingly getting ethics approval for changing humans. I mean gene therapies are now approved and increasing rapidly, all kinds of neuro-interfaces and so forth. So I think that that will change.

Meanwhile, the silicon-based AGI, as we approach it, will change in the opposite direction. It will be harder and harder to get approval to do manipulations in those systems, partly because there's risk, and partly because there's sympathy for the systems. Right now there's very little sympathy for them. But once computers have an AGI-level IQ of say 70, like a severely mentally disabled person who can nonetheless pass the Turing test, then they should start getting the rights of a disabled person. And once they have the rights of a disabled person, that would include the right to not be unplugged and the right to vote. And then that creates a whole bunch of problems that we won't want to address, except as academic exercises or museum specimens, so that we can say, hey, 50 years ago we created this artificial general intelligence, just like we went to the Moon once. They'd be stunts more than practical demonstrations, because they will have rights and because they will represent risks that will not be true for enhanced human societies.

So I think more and more we’re going to be investing in enhanced human societies and less and less in the uncertain silicon-based. That’s just a guess. It’s based not on technology but on social criteria.

Lucas Perry: I think that it depends on what kind of ethics and wisdom we'll have at that point in time. Generally I think that we may not want to take conventional human notions of personhood and apply them to things where they might not make sense. Like, if you have a system that doesn't mind being shut off, and it can be restarted, why is it so unethical to shut it off? Or if shutting it off doesn't make it suffer; suffering may be some sort of high-level criterion.

George Church: By the same token you can make human beings that don’t mind being shut off. That won’t change our ethics much I don’t think. And you could also make computers that do mind being shut off, so you’ll have this continuum on both sides. And I think we will have sympathetic rules, but combined with the risk, which is the risk that they can hurt you, the risk that if you don’t treat them with respect, they will be more likely to hurt you, the risk that you’re hurting them without knowing it. For example, if you have somebody with locked-in syndrome, you could say, “Oh, they’re just a vegetable,” or you could say, “They’re actually feeling more pain than I am because they have no agency, they have no ability to control their situation.”

So I think creating computers that could have the moral equivalent of locked-in syndrome or some other pain without the ability to announce their pain could be very troubling to us. And we would only overcome it if that were a solution to an existential problem or had some gigantic economic benefit. I’ve already called that into question.

Lucas Perry: So then, in terms of the first AGI, do you have a particular substrate that you imagine that coming online on?

George Church: My guess is it will probably be very close to what we have right now. As you said, it's going to be algorithms and databases and things like that. And it will probably at first be a stunt, in the same sense that Go and Jeopardy! are stunts. It's not clear that those are economically important. A computer that could pass the Turing test would make nice chatbots and phone-answering machines and things like that. But beyond that it may not change our world, unless we solve energy issues and so on. So to answer your question, we're so close to it now that it might be based on an extrapolation of current systems.

Quantum computing I think is maybe a more special case. It's good at encryption, and encryption has great societal utility, but I haven't yet seen encryption described as something that's mission-critical for space flight or curing diseases, other than the social components of those. And quantum simulation may be beaten by building actual quantum systems. For example, atomically precise systems that you can build with synthetic biology are quantum systems that are extraordinarily hard to predict, but they're very easy to synthesize and measure.

Lucas Perry: Is your view here that the first AGI will be on the economic and computational scale of a supercomputer? That is, imagine we're still just leveraging really, really big amounts of data and we haven't made extreme advances in algorithmic efficiency, but rather current trends continue, with more and more data and maybe some algorithmic improvements, so that the first system is really big and clunky and expensive. Then that thing can self-recursively try to make itself cheaper, and the direction it would move in would be increasingly creating hardware which has synthetic bio components?

George Church: Yeah, I’d think that that already exists in a certain sense. We have a hybrid system that is self-correcting, self-improving at an alarming rate. But it is a hybrid system. In fact, it’s such a complex hybrid system that you can’t point to a room where it can make a copy of itself. You can’t even point to a building, possibly not even a state where you can make a copy of this self-modifying system because it involves humans, it involves all kinds of fab labs scattered around the globe.

We could set a goal to be able to do that, but I would argue we're much closer to achieving that goal with a human being. You can have a single room in which you can make a copy of a human, and if that human is augmentable, that human can also make computers. Admittedly it would be a very primitive computer if you restricted that human to primitive supplies and a single room. But anyway, I think that's the direction we're going. And we're going to have to get good at doing things in confined spaces, because we're not going to be able to easily duplicate planet Earth. We're probably going to have to make a smaller version of it and send it off; how big that is we can discuss later.

Lucas Perry: All right. Cool. This is quite perspective shifting and interesting, and I will want to think about this more in general going forward. I want to spend just a few minutes on this next question. I think it’ll just help give listeners a bit of overview. You’ve talked about it in other places. But I’m generally interested in getting a sense of where we currently stand with the science of genetics in terms of reading and interpreting human genomes, and what we can expect on the short to medium term horizon in human genetic and biological sciences for health and longevity?

George Church: Right. The short version is that we have gotten many factors of 10 improvement in speed, cost, accuracy, and interpretability: a 10-million-fold reduction in price from $3 billion for a poor-quality, non-clinical sort of half a genome (each of us has two genomes, one from each parent). So we've gone from $3 billion to $300. It will probably be $100 by the middle of the year, and then it will keep dropping. There's no particular second law of thermodynamics or Heisenberg stopping us, at least for another million fold. That's where we are in terms of technically being able to read, and for that matter write, DNA.
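The cost trajectory quoted here works out cleanly; a minimal sketch using only the dollar figures from the conversation:

```python
# Sequencing cost figures quoted in the conversation.
cost_then = 3_000_000_000   # dollars, Human Genome Project era
cost_now = 300              # dollars, roughly today

fold_reduction = cost_then / cost_now
print(fold_reduction)  # 10000000.0 -- the "10 million fold" quoted

# "At least another million fold" of physical headroom would imply
# a floor of a small fraction of a cent per genome.
cost_floor = cost_now / 1_000_000
print(cost_floor)  # 0.0003
```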

But on the interpretation side, certainly there are genes where we don't know what they do, and there are diseases where we don't know what causes them. There's a vast amount of ignorance. But that ignorance may not be as impactful as we sometimes think. It's often said that common diseases, the so-called complex multigenic diseases, are off in the future. But I would reframe that slightly for everyone's consideration: many of these common diseases are diseases of aging. Not all of them, but many, many of the ones that we care about. And it could be that attacking aging as a specific research program may be more effective than trying to list all the millions of small genetic changes that have small phenotypic effects on these complex diseases.

So that's another aspect of the interpretation where we don't necessarily have to get super good at so-called polygenic risk scores. We will; we are getting better at them. But in the end, precision medicine, and I've been one of the champions of precision medicine since before it was called that, has a potential flaw in it, which is its tendency to work on reactive cures for specific cancers and inherited diseases and so forth, when the preventative form of it, which could be quite generic and less personalized, might be more cost-effective and humane.

So for example, taking inherited diseases, we spend a million to multiple millions of dollars per individual on people with inherited diseases, when a $100 genetic diagnosis could be used to prevent that. And generic solutions like aging reversal or aging prevention might stop cancer more effectively than trying to stop it once it gets to the metastatic stage, into which a great deal of resources is put. That's my update on where genomics is. There's a lot more that could be said.

Lucas Perry: Yeah. As a complete lay person in terms of biological sciences, stopping aging to me sounds like repairing and cleaning up human DNA and the human genome such that information that is lost over time is repaired. Correct me if I'm wrong, or explain a little bit about what the solution to aging might look like.

George Church: I think there are two closely related schools of thought. One is that there's damage that you need to go in and fix, the way you would fix a pothole. The other is that there's regulation that informs the system how to fix itself. I believe in both. I tend to focus on the second one.

If you take a very young cell, say a fetal cell, it has a tendency to repair much better than an 80-year-old adult cell. The immune system of a toddler is much more capable than that of a 90-year-old. This isn't necessarily due to damage. This is due to the epigenetic, so-called, regulation of the system: one cell is convinced that it's young (I'm going to use some anthropomorphic terms here). So you can take an 80-year-old cell, actually up to 100 years has now been done, and reprogram it into an embryo-like state, for example through Yamanaka factors, named after Shinya Yamanaka. And that reprogramming resets many, though not all, of the features such that it now behaves like a young, non-senescent cell, whereas the 100-year-old fibroblast you took it from would only replicate a few times before it senesced and died.

Things like that seem to convince us that aging is reversible and you don’t have to micromanage it. You don’t have to go in there and sequence the genome and find every bit of damage and repair it. The cell will repair itself.

Now there are some things like if you delete a gene it’s gone unless you have a copy of it, in which case you could copy it over. But those cells will probably die off. And the same thing happens in the germline when you’re passing from parent to kid, those sorts of things that can happen and the process of weeding them out is not terribly humane right now.

Lucas Perry: Do you have a sense of timelines for progress on aging throughout the century?

George Church: There's been a lot of wishful thinking for centuries on this topic. But I think we have a wildly different scenario now, partly because of this exponential improvement in technologies: reading and writing DNA, and the list goes on and on in cell biology and so forth. So I think we suddenly have a great deal of knowledge of the causes of aging and ways to manipulate those to reverse it. And I think these are all exponentials and we're going to act on them very shortly.

We already are seeing some aging drugs, small molecules that are in clinical trials. My lab just published a combination gene therapy that will hit five different diseases of aging in mice and now it’s in clinical trials in dogs and then hopefully in a couple of years it will be in clinical trials in humans.

We're not talking about centuries here. We're talking about the sort of time that it takes to get things through clinical trials, which is about a decade. And there's a lot of stuff going on in parallel, which after one decade of parallel trials would merge into combined trials. So a couple of decades.

Lucas Perry: All right. So I'm going to get in trouble here if I don't talk to you about synthetic bio risk. So, let's pivot into that. What are your views and perspectives on the dangers to human civilization that an increasingly widespread and more advanced science of synthetic biology will pose?

George Church: I think it’s a significant risk. Getting back to the very beginning of our conversation, I think it’s probably one of the most significant existential risks. And I think that preventing it is not as easy as nukes. Not that nukes are easy, but it’s harder. Partly because it’s becoming cheaper and the information is becoming more widespread.

But it is possible. Part of it depends on having many more positive, societally altruistic do-gooders than people doing bad. It would be helpful if we could also make a big impact on poverty, diseases associated with poverty, and psychiatric disorders. The kind of thing that causes unrest and dissatisfaction is what tips the balance, where one rare individual or a small team will do something that would otherwise be unthinkable even for them. But if they're sociopaths, or they feel they represent a disadvantaged category of people, then they feel justified.

So we have to get at some of those core things. It would also be helpful if we were more isolated. Right now we are a very well mixed pot, which puts us at risk for natural as well as engineered diseases. If some of us lived in sealed environments on Earth, very similar to the sealed environments that we would need in space, that would prepare us for going into space, and some of them would actually be in space. The further we are from the mayhem of our wonderful current society, the better. If a significant fraction of the population were isolated, either on Earth or elsewhere, it would lower the risk of all of us dying.

Lucas Perry: That makes sense. What are your intuitions about the offense/defense balance on synthetic bio risk? Like if we have 95% to 98% synthetic bio do gooders and a small percentage of malevolent actors or actors who want more power, how do you see the relative strength and weakness of offense versus defense?

George Church: I think as usual it's a little easier to do offense. It can go back and forth. Certainly it seems easier to defend yourself from an ICBM than from something that could be spread in a cough, and we're seeing that in spades right now. I think the fraction of white hats versus black hats is much better than 98%, and it has to be. It has to be more like a billion to one. And even then it's very risky. But yeah, it's not easy to protect.

Now, you can do surveillance so that you can restrict research as best you can, but it's a numbers game. It's a combination of removing incentives, adding strong surveillance, and whistleblowers that are not fearful of false positives. The suspicious package in the airport should be something you look at, even though most of them are not actually bombs. We should tolerate a very high rate of false positives. But surveillance is not something we're super good at. It falls in the category of preventative medicine, and we would far prefer to be reactive: to wait until somebody releases some pathogen and then say, "Oh, yeah, yeah, we can prevent that from happening again in the future."

Lucas Perry: Is there an opportunity for boosting or beefing up the human immune system, or for public early-warning detection systems for powerful and deadly synthetic bio agents?

George Church: Well, yes is the simple answer. If we boost our immune systems in a public way, which it almost would have to be (there'd be much discussion about how to do that), then pathogens that get around those boosts might become more common. In terms of surveillance, I proposed in 2004 that we had an opportunity, and still do, of doing surveillance on all synthetic DNA. I think that really should be 100% worldwide. Right now it's 80% or so. It is relatively inexpensive to fully implement, and the fact that we've done 80% already gets us closer to that.

Lucas Perry: Yeah. So, funny enough I was actually just about to ask you about that paper that I think you’re referencing. So in 2004 you wrote A Synthetic Biohazard Non-proliferation Proposal, in anticipation of a growing dual use risk of synthetic biology, which proposed in part the sale and registry of certain synthesis machines to verified researchers. If you were to write a similar proposal today, are there some base elements of it you would consider including, especially since the ability to conduct synthetic biology research has vastly proliferated since then? And just generally, are you comfortable with the current governance of dual use research?

George Church: I probably would not change that 2004 white paper very much. Amazingly, the world has not changed that much. There still are a very limited number of chemistries and devices and companies, so that's a bottleneck which you can regulate, and it is being regulated by the International Gene Synthesis Consortium, IGSC. I did advocate back then, and I'm still advocating, that we get closer to an international agreement. Two Secretaries General of the United Nations have said casually that they would be in favor of that, but we need essentially every level, from the UN all the way down to local governments.

There’s really very little pushback today. There was some pushback back in 2004 where the company’s lawyers felt that they would be responsible or there would be an invasion of privacy of their customers. But I think eventually the rationale of high risk avoidance won out, so now it’s just a matter of getting full compliance.

It's one of those unfortunate things: the better you are at avoiding an existential risk, the fewer people know about it. In fact, we did so well on Y2K that it's now uncertain whether we needed to do anything about Y2K at all, and hopefully the same will be true for a number of disasters that we avoid without most of the population even knowing how close we were.

Lucas Perry: So the main surveillance intervention here would be heavy monitoring, regulation, and tracking of the synthesis machines? And then also a watchdog organization which would inspect the products of said machines?

George Church: Correct.

Lucas Perry: Okay.

George Church: Right now most of the DNA is ordered. You'll send your order over the internet, and they'll send back the DNA. Those same principles have to apply to desktop devices: there has to be some kind of approval showing that you are qualified to make a particular DNA before the machine will make it. And it has to be protected against hardware and software hacking, which is a challenge. But again, it's a numbers game.

Lucas Perry: So on the topic of biological risk, we’re currently in the context of the COVID-19 pandemic. What do you think humanity should take as lessons from COVID-19?

George Church: Well, I think the big one is testing. Testing is probably the fastest way out of it right now. The geographical locations that have pulled out of it fastest were the ones that were best at testing and isolation. If your testing is good enough, you don't even have to have very good contact tracing, but that's also valuable. The longer shots are cures and vaccines; those are not strictly necessary, and they are long-term and uncertain. There's no guarantee that we will come up with a cure or a vaccine. For example, HIV, TB, and malaria do not have great vaccines, and most of them don't have great stable cures. HIV has had a full series of treatments over time, but they're not even cures; they're more maintenance, management.

I sincerely hope that coronavirus is not in that category of HIV, TB, and malaria. But we can't do public health based on hopes alone. So: testing. I've been requesting a bio weather map, and working towards improving the technology to do it, since around 2002, before SARS in 2003. Part of the inspiration for the Personal Genome Project was this bold idea of a bio weather map. We should be at least as interested in what biology is doing geographically as we are in what the low pressure fronts are doing geographically. It could be extremely inexpensive, certainly relative to the multi-trillion-dollar cost of one disease.

Lucas Perry: So given the ongoing pandemic, what has COVID-19 demonstrated about human global systems in relation to existential and global catastrophic risk?

George Church: I think it's a dramatic demonstration that we're more fragile than we would like to believe. It's a demonstration that we tend to be more reactive than proactive or preventative. And it's a demonstration that we're heterogeneous: there are geographical regions and political systems that are better prepared. I would say at this point the United States is probably among the least prepared, and that was predictable by people who thought about this in advance. Hopefully we will be adequately prepared that we will not emerge from this as a third world nation. But that is still a possibility.

I think it’s extremely important to make our human systems, especially global systems more resilient. It would be nice to take as examples the countries that did the best or even towns that did the best. For example, the towns of Vo, Italy and I think Bolinas, California, and try to spread that out to the regions that did the worst. Just by isolation and testing, you can eliminate it. That sort of thing is something that we should have worldwide. To make the human systems more resilient we can alter our bodies, but I think very effective is altering our social structures so that we are testing more frequently, we’re constantly monitoring both zoonotic sources and testing bushmeat and all the places where we’re getting too close to the animals. But also testing our cities and all the environments that humans are in so that we have a higher probability of seeing patient zero before they become a patient.

Lucas Perry: The last category that you brought up at the very beginning of this podcast was preventative measures and part of that was not having all of our eggs in the same basket. That has to do with say Mars colonization or colonization of other moons which are perhaps more habitable and then eventually to Alpha Centauri and beyond. So with advanced biology and advanced artificial intelligence, we’ll have better tools and information for successful space colonization. What do you see as the main obstacles to overcome for colonizing the solar system and beyond?

George Church: So we'll start with the solar system. Most of the solar system is not pleasant compared to Earth. It's a vacuum and it's cold, including Mars and many of the moons. There are moons that have more liquid water than Earth, but it typically requires some drilling to get down to it. There's radiation. There's low gravity. And we're not adapted to any of that.

So we might have to do some biological changes. They aren't necessarily germline, but they'll be the equivalent. There are things that you could do: you can simulate gravity with centrifuges, and you can simulate the radiation protection we have on Earth with magnetic fields and thick shielding, the equivalent of 10 meters of water or dirt. But there will be a tendency to try to solve those problems. There'll be issues of infectious disease: which ones we want to bring with us and which ones we want to quarantine away from. That's an opportunity more than a uniquely space-related problem.

A lot of the barriers, I think, are biological. We need to practice building colonies. Right now we have never had a completely recycled human system. We have completely recycled plant and animal systems, but none that include humans, and that is partly to do with social issues, hygiene and eating practices and so forth. I think it can be done, but it should be tested on Earth, because the consequences of failure on a moon or non-Earth planet are much more severe than if you test it out on Earth. We should have thousands, possibly millions of little space colonies on Earth; one of my pet projects is making that economically feasible on Earth. Only by heavy testing at that scale will we find the real gotchas and failure modes.

And then the final barrier, which is more in the category people usually think about, is the economics. If you do the physics calculation of how much energy it takes to raise a kilogram into orbit or out of orbit, it’s orders of magnitude less than the cost per kilogram of what we currently do. So there’s some opportunity for improvement there. So that’s the solar system.
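Church’s point about launch economics can be checked with a back-of-the-envelope calculation: the physical minimum energy to put a kilogram into low Earth orbit is a few tens of megajoules, which at grid electricity prices costs on the order of a dollar, while actual launch prices run to thousands of dollars per kilogram. The electricity and launch prices below are illustrative assumptions, not figures from the episode.

```python
# Back-of-the-envelope: minimum energy to reach low Earth orbit vs. what
# launches actually cost. All prices are illustrative assumptions.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth's mass, kg
R_EARTH = 6.371e6    # Earth's mean radius, m

def leo_energy_per_kg(altitude_m=400e3):
    """Minimum mechanical energy (J) to put 1 kg into a circular orbit."""
    r = R_EARTH + altitude_m
    potential = G * M_EARTH * (1 / R_EARTH - 1 / r)  # climb out of the well
    kinetic = G * M_EARTH / (2 * r)                  # v^2/2 for circular orbit
    return potential + kinetic

energy_kwh = leo_energy_per_kg() / 3.6e6    # roughly 9 kWh per kg
electricity_cost = energy_kwh * 0.10        # assume $0.10 per kWh: about $1/kg
launch_cost = 2500.0                        # assumed current $/kg to LEO

print(f"Physics floor: ~${electricity_cost:.2f} per kg")
print(f"Actual launch: ~${launch_cost:.0f} per kg, "
      f"{launch_cost / electricity_cost:.0f}x the raw energy cost")
```

The gap of roughly three orders of magnitude between the energy floor and current launch prices is the “opportunity for improvement” Church alludes to.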

Outside of the solar system, let’s say Proxima b, Alpha Centauri, and things in that range, there’s nothing particularly interesting between here and there, although there’s nothing to stop us from occupying the vacuum of space. Getting four and a half light years out requires either a revolution in propulsion and sustainability in a very small container, or a revolution in the size of the container we’re sending.

So, one pet project I’m working on is trying to make a nanogram-sized object that would contain the information sufficient for building a civilization, or at least for building a communication device, since it’s much easier to accelerate and decelerate a nanogram than any space probe at the scale we currently use.

Lucas Perry: Machines or synthetic computation as they exist today seem more robust against many of the issues that human beings will face within the solar system and beyond. Then again, there are the things you’ve already talked about, like the computational efficiency and precision for self-repair that modern computers may not have. So I think a little more perspective here would be useful: why we might not expect machines to take the place of humans in many of these endeavors.

George Church: Well, so for example, we would be hard pressed even to estimate (I haven’t seen a good estimate yet) the requirements of a self-contained device that could make a copy of itself from dirt, or whatever chemicals are available to it on a new planet. But we do know how to do that with humans or hybrid systems.

Here’s a perfect example of a hybrid system: a human can’t just go out into space; it needs a spaceship. A spaceship can’t go out into space either; it needs a human. So making a replicating system seems like a good idea, both because we are replicating systems and because it lowers the size of the package you need to send. So if you want to have a million people in the Alpha Centauri system, it might be easier to send just a few people and a bunch of frozen embryos or something like that.

Sending an artificial general intelligence is not sufficient. It also has to be able to make a copy of itself, which I think is a much higher hurdle than AGI alone. I think we will achieve AGI before we achieve AGI plus replication. It may not be much before, but it will probably be before.

In principle, a lot of organisms, including humans, start from single cells and mammals tend to need more support structure than most other vertebrates. But in principle if you land a vertebrate fertilized egg in an aquatic environment, it will develop and make copies of itself and maybe even structures.

So my speculation is that there exists a design for a nanogram-scale cell, something like a very small vertebrate egg, that would be capable of dealing with a wide variety of harsh environments. We have organisms that thrive everywhere between the freezing point of water and the boiling point, or 100-plus degrees at high pressure. So you have this nanogram that is adapted to a variety of different environments and can reproduce, make copies of itself, and has a great deal of know-how about building things built into it. The same way that building a nest is built into a bird’s DNA, you could program into it the ability to build computers, or a radio, or laser transmitters, so it could communicate and get more information.

So a nanogram could travel at close to the speed of light and then communicate at close to the speed of light once it replicates. I think that illustrates the value of hybrid systems, with, in this particular case, a high emphasis on the biochemical, biological component that’s capable of replicating as the core thing you need for efficient transport.

Lucas Perry: If your claim about hybrid systems is true, and we extrapolate to, say, the deep future, then if there are any other civilizations out there, the form in which we meet them will likely also be hybrid systems.

And this point brings me to reflect on something Nick Bostrom talks about: great filters, hypothesized points in the evolution and genesis of life throughout the cosmos that are very difficult for life to make it through, such that almost nothing passes the filter. This is hypothesized as a way of explaining the Fermi paradox: why is it that there are hundreds of billions of galaxies, yet we don’t see any alien superstructures and haven’t met anyone?

So, I’m curious to know if you have any thoughts or opinions on what the main great filters to reaching interstellar civilization might be?

George Church: Of all the questions you’ve asked, this is the one where I’m most uncertain. I study, among other things, how life originated, in particular how we make complex biopolymers: ribosomes making proteins, for example, and the genetic code. That strikes me as a pretty difficult thing to have arisen. That’s one filter, maybe much earlier than many people would think.

Another one might be lack of interest: once you get to a certain level of sophistication, you’re happy with your life and your civilization, and then typically you’re overrun by someone or something that is more primitive from your perspective. Then they become complacent, and the cycle repeats itself.

Or the misunderstanding of resources. I mean, we’ve seen a number of island civilizations that have gone extinct because they didn’t have a sustainable ecosystem, or they turned inward. Like Easter Island: they got very interested in making statues and tearing down trees in order to do that, and so they ended up with an island that didn’t have any trees. They didn’t use those trees to build ships so they could populate the rest of the planet. They just miscalculated.

So all of those could be barriers. I don’t know which of them it is. There probably are many planets and moons where if we transplanted life, it would thrive there. But it could be that just making life in the first place is hard and then making intelligence and civilizations that care to grow outside of their planet. It might be hard to detect them if they’re growing in a subtle way.

Lucas Perry: I think the first thing you brought up might be earlier than some people expect, though for many people thinking about great filters, abiogenesis, if that’s the right word, does seem really hard: getting the first self-replicating things in the ancient oceans going. There seem to be lots of potential filters from there to multicellular organisms, and then to general intelligences like people and beyond.

George Church: But many empires have just become complacent, and they’ve been overtaken by perfectly obvious technology that they could at least have kept up with by spying, if not by invention. But they became complacent. They seem to plateau at roughly the same place. We’re plateauing at more or less the same place the Easter Islanders and the Roman Empire plateaued. Today, I mean, the slight difference is that we are maybe a spacefaring civilization now.

Lucas Perry: Barely.

George Church: Yeah.

Lucas Perry: So, climate change has been something that you’ve been thinking about a bunch it seems. You have the Woolly Mammoth Project which we don’t need to necessarily get into here. But are you considering or are you optimistic about other methods of using genetic engineering for combating climate change?

George Church: Yeah, I think genetic engineering has potential. Most of the other things we talk about, putting in LEDs or slightly more efficient car engines, solar power and so forth, are slowing down the inevitable rather than reversing it. To reverse it we need to take carbon out of the air, and a really great way to do that is with photosynthesis, partly because it builds itself. So if we just allow the Arctic to do photosynthesis the way it used to, we could get a net loss of carbon dioxide from the atmosphere, putting it into the ground rather than releasing a lot.

That’s part of the reason I’m obsessed with Arctic solutions, and the Arctic Ocean is similar. It’s a place where you get upwelling of nutrients, so you get a naturally very high rate of carbon fixation. It’s just that you also have a high rate of carbon consumption back into carbon dioxide. So you could change that cycle a little bit. I think both Arctic land and ocean are very good places to reverse carbon accumulation in the atmosphere, and I think that is best done with synthetic biology.

Now the barrier has historically been the release of recombinant DNA into the wild. We now have engineered salmon that are essentially in the wild, and golden rice, after more than a decade of tussle, is now finally being used in the Philippines.

So I think we’re going to see more and more of that. To some extent even agricultural plants are in the wild. This was one of the things that was controversial: the pollen was going all over the place. But I think there are essentially zero examples of recombinant DNA causing human damage, so we just need to be cautious about our environmental decision-making.

Lucas Perry: All right. Now taking kind of a sharp pivot here. In the philosophy of consciousness there is a distinction between the hard problem of consciousness and the easy problems. The hard problem is: why is it that computational systems have something that it is like to be that system? Why is there a first-person phenomenal, experiential perspective filled with what one might call qualia? Some people reject the hard problem as an actual problem and prefer to say that consciousness is an illusion or is not real. Other people are realists about consciousness: they believe phenomenal consciousness is substantially real and on the same ontological or metaphysical footing as other fundamental forces of nature, or that perhaps consciousness discloses the intrinsic nature of the physical.

And then the easy problems are: how is it that we see, how is it that light enters the eyes and gets computed, how is it that certain things are computationally related to consciousness?

David Chalmers also poses what he calls the meta-problem of consciousness: why is it that we make reports about consciousness? Why is it that we even talk about consciousness, particularly if it’s an illusion? Maybe it’s performing some kind of weird computational efficiency. And if consciousness is real, there seems to be some tension between the standard model of physics being pretty complete and the question of how we could make reports about something with no real causal efficacy, if there’s nothing real to add to the standard model.

Now you have the Human Connectome Project which would seem to help a lot with the easy problems of consciousness and maybe might have something to say about the meta problem. So I’m curious to know if you have particular views on consciousness or how the Human Connectome Project might relate to that interest?

George Church: Okay. So I think that consciousness is real and it has selective advantage. Part of reality to a biologist is evolution, and I think it’s somewhat coupled to free will. Even though they are real and hard to think about, they may be easier than we often let on, when you think about them from an evolutionary standpoint or from a simulation standpoint.

I can really only evaluate consciousness and qualia by observation. I can only infer that you have something similar to what I feel from what you do. And from that standpoint, it wouldn’t be that hard to make a synthetic system that displayed consciousness in a way that would be nearly impossible to refute. And as that system replicated and took on a life of its own (let’s say some hybrid biological, non-biological system that displays consciousness), to really convincingly display consciousness it would also have to have some general intelligence, or at least pass the Turing test.

But it would have an evolutionary advantage, in that it could think or reason about itself. It recognizes the difference between itself and something else. And this has been demonstrated already in robots, admittedly in proof-of-concept demos: robots that can tell their own reflection in a mirror from other people, that can operate on their own body by removing dirt from their face (something demonstrated in only a handful of animal species), and that can recognize their own voice.

So you can see how these would have evolutionary advantages, and they could be simulated to whatever level of fidelity is necessary to convince an objective observer that they are conscious, as far as you know, to the same extent that I know that you are.

So I think the hard problem is a worthy one. I think it is real, and it has evolutionary consequences. And free will is related, in that free will, I think, is game theory: if you behave in a completely deterministic, predictable way, all the organisms around you have an advantage over you. They know what you are going to do, so they can anticipate it; they can steal your food, they can bite you, they can do whatever they want. But if you’re unpredictable, which is essentially free will (in this case it can be a random number generator or dice), you now have a selective advantage. And to some extent you could have more free will than the average human, since the average human is constrained by all sorts of social mores and rules and laws that something with more free will might not be.
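Church’s game-theoretic framing of free will, that a perfectly predictable agent can be exploited by anyone who best-responds to it, matches a standard result: in zero-sum games like matching pennies, only a randomizing strategy is unexploitable. Here is a minimal sketch; the simple “predict the last move” opponent is an illustrative assumption.

```python
import random

# Matching pennies: the opponent scores when it matches the player's
# choice; the player scores when the choices differ. The opponent here
# is a simple exploiter that guesses the player will repeat their last
# move (an illustrative assumption).

def play(player_strategy, rounds=10_000, seed=0):
    rng = random.Random(seed)
    history = []
    score = 0
    for _ in range(rounds):
        guess = history[-1] if history else "H"   # opponent's prediction
        move = player_strategy(history, rng)
        score += 1 if move != guess else -1       # player wins on a mismatch
        history.append(move)
    return score / rounds   # average player payoff per round

always_heads = lambda hist, rng: "H"                     # deterministic
coin_flip    = lambda hist, rng: rng.choice(["H", "T"])  # randomizing

print(play(always_heads))  # -1.0: fully exploited every round
print(play(coin_flip))     # close to 0.0: unexploitable in expectation
```

The deterministic player loses every round to a best-responding opponent, while the randomizing player breaks even in expectation, which is the selective advantage of unpredictability Church describes.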

Lucas Perry: I guess I would just want to tease apart self-consciousness from consciousness in general. I think one can have a first-person perspective without having a sense of self or being able to reflect on one’s own existence as a subject in the world. I also feel a little confused about why consciousness, understood as the ability to experience things, would provide an evolutionary advantage. I have some intuitions about it not having causal efficacy, because the standard model doesn’t seem to be missing anything essential.

And then your point on free will makes sense. I think that people mean very different things here. I think within common discourse, there is a much more spooky version of free will which we can call libertarian free will, which says that you could’ve done otherwise and it’s more closely related to religion and spirituality, which I reject and I think most people listening to this would reject. I just wanted to point that out. Your take on free will makes sense and is the more scientific and rational version.

George Church: Well actually, I could say they could’ve done otherwise. If you consider that religious, it is totally compatible with flipping a coin: that helps you do otherwise. Given the same scenario, you could do something differently. And that ability to do otherwise is of selective advantage, as indeed religions can be of great selective advantage in certain circumstances.

So back to consciousness versus self-consciousness: I think they’re much more intertwined. I’d be cautious about trying to disentangle them too much. I think your ability to reason about your own existence as separate from other beings is very helpful for, say, self-grooming and self-protection. And I think consciousness that is not about oneself may be a byproduct of that.

The greater your ability to reason about yourself versus others, your hand versus the piece of wood in your hand, the more successful you are. Even if you’re not super intelligent, just the fact that you’re aware that you’re different from the entity you’re competing with is an advantage. So I don’t find it terribly useful to make a giant rift between consciousness and self-consciousness.

Lucas Perry: Okay. So I’m becoming increasingly mindful of your time. We have five minutes left here so I’ve just got one last question for you and I need just a little bit to set it up. You’re vegan as far as I understand.

George Church: Yes.

Lucas Perry: And the effective altruism movement is particularly concerned with animal suffering. We’ve talked a lot about genetic engineering and its possibilities. David Pearce has written something called The Hedonistic Imperative which outlines a methodology and philosophy for using genetic engineering for voluntarily editing out suffering. So that can be done both for wild animals and it could be done for the human species and our descendants.

So I’m curious to know what your view is on animal suffering generally in the world, and do you think about or have thoughts on genetic engineering for wild animal suffering in places outside of human civilization? And then finally, do you view a role for genetic engineering and phasing out human suffering, making it biologically impossible by re-engineering people to operate on gradients of intelligent bliss?

George Church: So for this kind of difficult problem, a technique I employ is to imagine what this would be like on another planet and in the future, and whether, given that imagined future, we would be willing to come back to where we are now. Rather than asking whether we’re willing to go forward, ask whether we’d be willing to come back, because there’s a great deal of appropriate respect for inertia, for the way things have been. Sometimes it’s called natural, but I think natural includes the future and everything that’s manmade as well; we’re all part of nature. So it’s really about the way things were. Going to the future and asking whether we’d be willing to come back is a different way of looking at it.

I think in going to another planet, we might want to take a limited set of organisms with us, and we might be tempted to make them so that they don’t suffer, including the humans. There is a certain amount of, let’s say, pain which could be like a little red light going off on your dashboard. But the point of pain is to get your attention, and you could reframe that. People born with congenital insensitivity to pain (CIPA), a genetic condition, tend to get into problems: they will chew their lips and other body parts and get infected, or they will jump from high places, because it doesn’t hurt, and break things they shouldn’t break.

So you need some kind of alarm system that gets your attention that cannot be ignored. But I think it could be something that people would complain about less. It might even be more effective because you could prioritize it.

I think there’s a lot of potential there. By studying people who have congenital insensitivity to pain, you could even make that something you could turn on and off. SCN9A, for example, encodes a channel in the human nervous system, and targeting it doesn’t cause the dopey effects of opioids: you can be pain-free without being compromised intellectually. So I think that’s a very promising direction for thinking about this problem.

Lucas Perry: Just summing that up: you do feel that it is technically feasible to replace pain with some other kind of informationally sensitive signal that could serve the same function of reducing and mitigating risk and signaling damage?

George Church: We can even do better. Right now we’re unaware of certain physiological states that can be quite hazardous, and we’re blind, for example, to all the pathogens in the air around us. These could be new signals. It wouldn’t occur to me to make every one of those painful; it would be better just to see the pathogens and have little alarms that go off. It’s much more intelligent.

Lucas Perry: That makes sense. So wrapping up here, if people want to follow your work, or follow you on say Twitter or other social media, where is the best place to check out your work and to follow what you do?

George Church: My Twitter is @geochurch, and my website is easy to find just by Googling, but it’s arep.med.harvard.edu. Those are the two best places.

Lucas Perry: All right. Thank you so much for this. I think that a lot of the information you provided about the skillfulness and advantages of biology and synthetic computation will challenge many of the intuitions of our usual listeners and people in general. I found this very interesting and valuable, and yeah, thanks so much for coming on.

George Church: Okay. Great. Thank you.

FLI Podcast: On Superforecasting with Robert de Neufville

Essential to our assessment of risk and ability to plan for the future is our understanding of the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effectively. Predicting the future is, obviously, far from easy — and yet a community of “superforecasters” are attempting to do just that. Not only are they trying, but these superforecasters are also reliably outperforming subject matter experts at making predictions in their own fields. Robert de Neufville joins us on this episode of the FLI Podcast to explain what superforecasting is, how it’s done, and the ways it can help us with crucial decision making. 

Topics discussed in this episode include:

  • What superforecasting is and what the community looks like
  • How superforecasting is done and its potential use in decision making
  • The challenges of making predictions
  • Predictions about and lessons from COVID-19

You can take a survey about the podcast here

Submit a nominee for the Future of Life Award here

 

Timestamps: 

0:00 Intro

5:00 What is superforecasting?

7:22 Who are superforecasters and where did they come from?

10:43 How is superforecasting done and what are the relevant skills?

15:12 Developing a better understanding of probabilities

18:42 How is it that superforecasters are better at making predictions than subject matter experts?

21:43 COVID-19 and a failure to understand exponentials

24:27 What organizations and platforms exist in the space of superforecasting?

27:31 What’s up for consideration in an actual forecast

28:55 How are forecasts aggregated? Are they used?

31:37 How accurate are superforecasters?

34:34 How is superforecasting complementary to global catastrophic risk research and efforts?

39:15 The kinds of superforecasting platforms that exist

43:00 How accurate can we get around global catastrophic and existential risks?

46:20 How to deal with extremely rare risk and how to evaluate your prediction after the fact

53:33 Superforecasting, expected value calculations, and their use in decision making

56:46 Failure to prepare for COVID-19 and if superforecasting will be increasingly applied to critical decision making

01:01:55 What can we do to improve the use of superforecasting?

01:02:54 Forecasts about COVID-19

01:11:43 How do you convince others of your ability as a superforecaster?

01:13:55 Expanding the kinds of questions we do forecasting on

01:15:49 How to utilize subject experts and superforecasters

01:17:54 Where to find and follow Robert

 

Citations: 

The Global Catastrophic Risk Institute

NonProphets podcast

Robert’s Twitter and his blog Anthropocene

If you want to try making predictions, you can try Good Judgement Open or Metaculus

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today we have a conversation with Robert de Neufville about superforecasting. But, before I get more into the episode I have two items I’d like to discuss. The first is that the Future of Life Institute is looking for the 2020 recipient of the Future of Life Award. For those not familiar, the Future of Life Award is a $50,000 prize that we give out to an individual who, without having received much recognition at the time of their actions, has helped to make today dramatically better than it may have been otherwise. The first two recipients were Vasili Arkhipov and Stanislav Petrov, two heroes of the nuclear age. Both took actions at great personal risk to possibly prevent an all-out nuclear war. The third recipient was Dr. Matthew Meselson, who spearheaded the international ban on bioweapons. Right now, we’re not sure who to give the 2020 Future of Life Award to. That’s where you come in. If you know of an unsung hero who has helped to avoid global catastrophic disaster, or who has done incredible work to ensure a beneficial future of life, please head over to the Future of Life Award page and submit a candidate for consideration. The link for that page is on the page for this podcast or the description of wherever you might be listening. You can also just search for it directly. If your candidate is chosen, you will receive $3,000 as a token of our appreciation. We’re also incentivizing the search via MIT’s successful red balloon strategy, where the first to nominate the winner gets $3,000 as mentioned, but there are also tiered payouts to the person who invited the nomination winner, and so on. You can find details about that on the page.

The second item is that there is a new survey that I wrote about the Future of Life Institute and AI Alignment Podcasts. It’s been a year since our last survey, and that one was super helpful for my understanding of what’s going well, what’s not, and how to improve. I have some new questions this time around and would love to hear from everyone about possible changes to the introductions, editing, content, and topics covered. So, if you have any feedback, good or bad, you can head over to the SurveyMonkey poll in the description of wherever you might find this podcast or on the page for this podcast. You can answer as many or as few of the questions as you’d like, and it goes a long way toward helping me gain perspective about the podcast, which is often hard to do from my end because I’m so close to it.

And if you find the content and subject matter of this podcast to be important and beneficial, consider sharing it with friends, subscribing on Apple Podcasts, Spotify, or whatever your preferred listening platform, and leaving us a review. It’s really helpful for getting information on technological risk and the future of life to more people.

Regarding today’s episode, I just want to provide a little bit of context. The foundation of risk analysis has to do with probabilities. We use these probabilities and the predicted value lost if certain risks occur to calculate or estimate expected value. This in turn helps us to prioritize risk mitigation efforts to where it’s truly needed. So, it’s important that we’re able to make accurate predictions about the likelihood of future events and risk so that we can take the appropriate action to mitigate them. This is where superforecasting comes in.
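The expected-value reasoning described above can be sketched as ranking risks by probability times loss. All the numbers here are made up for illustration, not estimates from the episode.

```python
# Toy risk prioritization by expected loss. Probabilities and losses are
# made-up numbers for illustration only.

risks = {
    "pandemic":        {"p": 0.03,  "loss": 1000},  # annual probability, loss in arbitrary units
    "nuclear war":     {"p": 0.005, "loss": 5000},
    "regional famine": {"p": 0.10,  "loss": 100},
}

def expected_loss(risk):
    """Expected value lost: probability of the event times the loss if it occurs."""
    return risk["p"] * risk["loss"]

# Rank risks by expected loss to decide where mitigation resources go first.
for name in sorted(risks, key=lambda k: -expected_loss(risks[k])):
    print(f"{name}: expected loss {expected_loss(risks[name]):.1f}")
# With these made-up numbers, pandemic ranks highest despite nuclear
# war's larger loss, because its assumed probability is higher.
```

This is why the accuracy of the probability estimates matters so much: a bad probability shifts the whole ranking, and with it where mitigation effort goes.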

Robert de Neufville is a researcher, forecaster, and futurist with degrees in government and political science from Harvard and Berkeley. He works particularly on the risk of catastrophes that might threaten human civilization. He is also a “superforecaster”, since he was among the top 2% of participants in IARPA’s Good Judgment forecasting tournament. He has taught international relations, comparative politics, and political theory at Berkeley and San Francisco State. He has written about politics for The Economist, The New Republic, The Washington Monthly, and Big Think. 

And with that, here’s my conversation with Robert de Neufville on superforecasting. 

All right. Robert, thanks so much for coming on the podcast.

Robert de Neufville: It’s great to be here.

Lucas Perry: Let’s just start off real simply here. What is superforecasting? Say if you meet someone, a friend or family member of yours asks you what you do for work. How do you explain what superforecasting is?

Robert de Neufville: I just say that I do some forecasting. People understand what forecasting is. They may not understand specifically the way I do it. I don’t love using “superforecasting” as a noun. There’s the book Superforecasting. It’s a good book and it’s kind of great branding for Good Judgment, the company, but it’s just forecasting, right, and hopefully I’m good at it and there are other people that are good at it. We have used different techniques, but it’s a little bit like an NBA player saying that they play super basketball. It’s still basketball.

But what I tell people for background is that the US intelligence community had this forecasting competition basically just to see if anyone could meaningfully forecast the future because it turns out one of the things that we’ve seen in the past is that people who supposedly have expertise in subjects don’t tend to be very good at estimating probabilities that things will happen.

So the question was, can anyone do that? And it turns out that for the most part people can’t, but a small subset of people in the tournament were consistently more accurate than the rest. And just using open-source information, we were able to decisively beat subject matter experts, which actually is not a high bar; they don’t do very well. And we were also able to beat intelligence community analysts. We didn’t originally know we were going up against them, but we’re talking about forecasters in the intelligence community who had access to classified information we didn’t have. We were basically just using Google.

And one of the stats we got later was that, as a group, we were more accurate 300 days ahead of a question being resolved than others were just a hundred days ahead. As far as what makes the technique of superforecasting fundamentally distinct, I think one thing is that we have a system for scoring our accuracy. A lot of times when people think about forecasting, people just make pronouncements: this thing will happen, or it won’t happen. Then there’s no great way of checking whether they were right, and they can often explain away their forecast after the fact. But we make probabilistic predictions, and we use a mathematical formula that weather forecasters have used to score them. Then we can see whether we’re doing well or not. We can evaluate and say, “Hey look, we actually outperformed these other people in this way.” And when we don’t do well, we can ask ourselves why and try to improve.
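The scoring formula Robert refers to, borrowed from weather forecasting, is the Brier score: the mean squared difference between the forecast probability and the 0-or-1 outcome, where lower is better. (The Good Judgment tournament used a variant of this; the simple binary form is sketched here.)

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts and 0/1 outcomes.
    0.0 is a perfect score; always saying 50% scores 0.25; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1]   # three questions; the first and third happened

confident = brier_score([0.9, 0.1, 0.8], outcomes)  # sharp and right
hedged    = brier_score([0.5, 0.5, 0.5], outcomes)  # always 50/50
wrong     = brier_score([0.2, 0.9, 0.3], outcomes)  # sharp and wrong

print(f"{confident:.3f} {hedged:.3f} {wrong:.3f}")  # 0.020 0.250 0.647
```

Because it is a proper scoring rule, the Brier score rewards being both calibrated and sharp: you cannot improve your expected score by hedging toward 50% when you actually believe something more extreme.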

Lucas Perry: All right, so can you give me a better understanding here about who “we” is? You’re saying that the key point and where this started was this military competition basically attempting to make predictions about the future or the outcome of certain events. What are the academic and intellectual foundations of superforecasting? What subject areas would one study or did superforecasters come from? How was this all germinated and seeded prior to this competition?

Robert de Neufville: It actually was the intelligence community, although I think military intelligence participated in this. But I mean I didn't study to be a forecaster and I think most of us didn't. I don't know if there really has been a formal course of study that would lead you to be a forecaster. People just learn subject matter and then apply that in some way. There must be some training that people had gotten in the past, but I don't know about it.

There was a famous study by Phil Tetlock. I think in the 90s it came out as a book called Expert Political Judgment, and he found essentially that experts were not good at this. But what he did find, he made a distinction between foxes and hedgehogs you might’ve heard. Hedgehogs are people that have one way of thinking about things, one system, one ideology, and they apply it to every question, just like the hedgehog has one trick and it’s its spines. Hedgehogs didn’t do well. If you were a Marxist or equally a dyed in the wool Milton Friedman capitalist and you applied that way of thinking to every problem, you tended not to do as well at forecasting.

But there's this other group of people that he found did a little bit better, and he called them foxes. Foxes are tricky. They have all sorts of different approaches. They don't just come in with some dogmatic ideology. They look at things from a lot of different angles. So that was sort of the initial research that inspired him. And there were other people talking about this, but it was ultimately Phil Tetlock and Barbara Mellers's group that outperformed everyone else. They looked for people who were good at forecasting, put them together in teams, and aggregated their scores with algorithmic magic.

We had a variety of different backgrounds. If you saw any of the press initially, the big story that came out was that we were just regular people. There was a lot of talk about how so-and-so was a housewife, and that's true. We weren't people who had a reputation for being great pundits or anything. That's totally true. I think that was a little bit overblown, though, because it made it sound like so-and-so was a housewife and no one knew she had this skill; otherwise she was completely unremarkable. In fact, superforecasters as a group tended to be highly educated, with advanced degrees. They tended to have varied backgrounds and they lived in a bunch of different countries.

The thing that correlates most with forecasting ability seems to be basically intelligence, performing well on intelligence tests. I should also say that a lot of very smart people aren't good forecasters. Just being smart isn't enough, but it's one of the strongest predictors of forecasting ability, and that's not as good a story for journalists.

Lucas Perry: So it wasn’t crystals.

Robert de Neufville: If you do surveys of the way superforecasters think about the world, they tend not to do what you would call magical thinking. Some of us are religious. I’m not. But for the most part the divine isn’t an explanation in their forecast. They don’t use God to explain it. They don’t use things that you might consider a superstition. Maybe that seems obvious, but it’s a very rational group.

Lucas Perry: How’s superforecasting done and what kinds of models are generated and brought to bear?

Robert de Neufville: As a group, we tend to be very numerate. That's one thing that correlates pretty well with forecasting ability. And when I say we come from a lot of backgrounds, I mean there are doctors, pharmacists, engineers. I'm a political scientist. There are actually a fair number of political scientists. Some people are in finance or economics, but they all tend to be people who could make at least a simple spreadsheet model. We're not all statisticians, but we have at least an intuitive familiarity with statistical thinking and an intuitive concept of Bayesian updating.
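The Bayesian updating he mentions has a simple mechanical core. As a hedged illustration, not anything from the interview itself, here is a minimal sketch; the election scenario and every number in it are invented for the example:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing a piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical: start at a 30% chance the incumbent loses, then see a poll
# that is twice as likely to appear if the incumbent is losing as if not.
posterior = bayes_update(0.30, 0.60, 0.30)
print(round(posterior, 3))  # 0.462
```

The forecaster's move is exactly this shape: a base-rate prior, nudged by how diagnostic each new piece of news actually is.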

As far as what the approach is, we make a lot of simple models, often not very complicated models, I think because when you make a complicated model you end up overfitting the data and drawing falsely precise conclusions, at least when we're talking about complex, real-world, political science-y kinds of situations. But I would say the best guide for predicting the future, and this probably sounds obvious, the best guide for what's going to happen is what's happened in similar situations in the past. One of the key things you do, if somebody asks you, "Will so-and-so win an election?" is look back and say, "Well, what's happened in similar elections in the past? What's the base rate of the incumbent, say from this party or that party, winning an election, given this economy and so on?"

Now it is often very hard to beat simple algorithms that try to do the same thing, but that's not a thing you can just do by rote. It requires an element of judgment about what situations in the past count as similar to the situation you're asking about. In some ways that's a big part of the trick: figuring out what's relevant to the situation, what past events are relevant. And that's something that's hard to teach, I think, because you could make a case for all sorts of things being relevant, and there's an intuitive feel that's hard to explain to someone else.

Lucas Perry: The things that seem to be brought to bear here would be like these formal mathematical models and then the other thing would be what I think comes from Daniel Kahneman and is borrowed by the rationalist community, this idea of system one and system two thinking.

Robert de Neufville: Right.

Lucas Perry: Where system one’s, the intuitive, the emotional. We catch balls using system one. System one says the sun will come out tomorrow.

Robert de Neufville: Well hopefully the system two does too.

Lucas Perry: Yeah. System two does too. So I imagine some questions are just limited to sort of pen and paper system one, system two thinking, and some are questions that are more suitable for mathematical modeling.

Robert de Neufville: Yeah, I mean some questions are more suitable for mathematical modeling for sure. I would say though the main system we use is system two. And this is, as you say, we catch balls with some sort of intuitive reflex. It's sort of maybe not in our prefrontal cortex. If I were trying to calculate the trajectory of a ball in order to catch it, that would not work very well. But I think most of what we're doing when we forecast is trying to calculate something. Often the models are really simple. It might be as simple as saying, "This thing has happened seven times in the last 50 years, so let's start from the idea there's a 14% chance of that thing happening again." It's analytical. We don't necessarily just go with the gut and say this feels like a one in three chance.
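That seven-in-50 starting point is a base rate. A minimal sketch of the arithmetic (the counts are the hypothetical ones from the conversation; the Laplace variant is a standard small-sample refinement that the interview does not mention):

```python
def base_rate(occurrences, periods):
    """Raw historical frequency: the usual starting point for a forecast."""
    return occurrences / periods

def laplace_rate(occurrences, periods):
    """Laplace's rule of succession, a common smoothing for small samples.
    Not from the interview; just a standard refinement of the raw rate."""
    return (occurrences + 1) / (periods + 2)

print(base_rate(7, 50))               # 0.14
print(round(laplace_rate(7, 50), 3))  # 0.154
```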

Now that said, I think that it helps a lot, and this is a problem with applying the results of our work, it helps a lot to have a good intuitive feel for probability, for what one in three feels like, just a sense of how often that is. And superforecasters tend to be people who are able to distinguish between smaller gradations of probability.

I think in general people who don't think about this stuff very much have kind of three probabilities: definitely going to happen, might happen, and will never happen. And there's no finer-grained distinction there. Whereas I think superforecasters often feel like they can distinguish between 1% or 2% probabilities, the difference between 50% and 52%.

The sense of what that means I think is a big thing. If we're going to tell a policymaker there's a 52% chance of something happening, a big part of the problem is that policymakers have no idea what that means. They're like, "Well, will it happen or won't it? What do I do with a number?" Right? How is that different from 50%?

Lucas Perry: All right, so a few things I’m interested in here. The first is I’m interested in what you have to say about what it means and how one learns how probabilities work. If you were to explain to policymakers or other persons who are interested who are not familiar with working with probabilities a ton, how one can get a better understanding of them and what that looks like. I feel like that would be interesting and helpful. And then the other thing that I’m sort of interested in getting a better understanding of is most of what is going on here seems like a lot of system two thinking, but I also would suspect and guess that many of the top superforecasters have very excellent, finely tuned system ones.

Robert de Neufville: Yeah.

Lucas Perry: Curious if you have any thoughts about these two things.

Robert de Neufville: I think that's true. I mean, I don't know exactly what counts as system one in the cognitive psych sense, but I do think that there is a feel that you get. It's like practicing a jump shot or something. Not that I'm the Steph Curry of forecasting, but I'm sure Steph Curry, when he takes a shot, isn't thinking about it at the time. He's just practiced a lot. And by the same token, if you've done a lot of forecasting and thought about it and have a good feel for it, you may be able to look at something and think, "Oh, here's a reasonable forecast. Here's not a reasonable forecast." I had that sense recently when looking at FiveThirtyEight's tracking of COVID predictions for a bunch of subject matter experts, and they're honestly kind of doing terribly. And part of it is that some of the probabilities are just not plausible. That's immediately obvious to me, and I think to other forecasters who have spent a lot of time thinking about it.

So I do think that without even having to do a lot of calculations or a lot of analysis, often I have a sense of what’s plausible, what’s in the right range just because of practice. When I’m watching a sporting event and I’m stressed about my team winning, for years before I started doing this, I would habitually calculate the probability of winning. It’s a neurotic thing. It’s like imposing some kind of control. I think I’m doing the same thing with COVID, right? I’m calculating probabilities all the time to make myself feel more in control. But that actually was pretty good practice for getting a sense of it.

I don't really have the answer for how to teach that to other people, except potentially the practice of forecasting and seeing what happens, when you're right and when you're wrong. Good Judgment does have some training materials, validated by research, that improve people's forecasting. They involve things like thinking about the base rate of things happening in the past, essentially going through sort of system two approaches, and I think that kind of thing can also really help people get a sense for it. But like anything else, there's an element of practice. You can get better or worse at it. Well, hopefully you get better.

Lucas Perry: So a risk that is 2% likely is two times more likely than a 1% chance risk. How do those feel differently to you than to me or a policymaker who doesn’t work with probabilities a ton?

Robert de Neufville: Well I don't entirely know. I don't entirely know what they feel like to someone else. I think a lot in terms of "one time in 50, that's what 2% is" and "one time in a hundred, that's what 1% is." The forecasting platform we use only works in integer probabilities, so if something goes below a half percent chance, I'd round down to zero. And honestly I think it's tricky to get accurate forecasting with low probability events for a bunch of reasons, or even to know if you're doing a good job, because you have to do so many of them. I think about fractions often and have a sense of what something happening two times in seven might feel like.

Lucas Perry: So you've made this point that superforecasters are often better at making predictions than subject matter experts. Can you unpack this a little bit more and explain how big the difference is? You just mentioned the COVID-19 virologists.

Robert de Neufville: Virologists, infectious disease experts, I don't know all of them, but people whose expertise I really admire, who know the most about what's going on and to whom I would turn in trying to make a forecast about some of these questions. And it's not really fair, because these are often people who have talked to FiveThirtyEight for 10 minutes and produced a forecast. They're very busy doing other things, although some of them are doing modeling, and you would think they would have thought about some of these probabilities in advance. But one thing that really stands out when you look at those is they'll give a 5% or 10% chance to something happening which to me is virtually impossible. And I don't think it's their better knowledge of virology that makes them think it's more likely. I think it's a matter of having thought a lot about what 5% or 10% means. They think it's not very likely, so they assign it a number that sounds low. That's my guess. I don't really know what they're doing.

Lucas Perry: What’s an example of that?

Robert de Neufville: Recently there were questions about how many tests would be positive by a certain date, and they assigned a real chance, like 5% or 10%, I don't remember the exact numbers, but way higher than I thought it should be, to there being below a certain number of positive tests. And the problem with that was it would have meant essentially that all of a sudden the number of tests coming back positive every day would drop off a cliff. Go from, I don't know exactly how many positive tests there are a day, 27,000 in the US, all of a sudden down to like 2,000 or 3,000. And we're talking about forecasting like a week ahead, so a really short timeline. It just was never plausible to me that all of a sudden tests would stop turning positive. There's no indication that that's about to happen. There's no reason why that would suddenly shift.

I mean maybe I can always say maybe there’s something that a virologist knows that I don’t, but I have been reading what they’re saying. So how would they think that it would go from 25,000 a day to 2000 a day over the next six days? I’m going to assign that basically a 0% chance.

Another thing that's really striking, and I think this is generally true, and it's true to some extent of superforecasters too, so we've had a little bit of an argument on our superforecasting platform: people are terrible at thinking about exponential growth. They really are. They really underpredicted the number of cases and deaths, even just a week or two in advance, because it was orders of magnitude higher than the number at the beginning of the week. But a computer with an algorithm to fit an exponential curve would have had no problem doing it. Basically, I think that's what the good forecasters did: we fit an exponential curve and said, "I don't even need to know many of the details; over the course of a week, the progression of the disease and vaccines or whatever isn't going to make much difference."
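Fitting an exponential curve like that reduces to a straight-line fit on the logarithm of the counts. A rough sketch, assuming daily counts and equal spacing; the case numbers are made up for illustration (roughly doubling every three days), not real data from the interview:

```python
import math

def fit_exponential(counts):
    """Least-squares fit of log(counts) vs. time: counts ~ a * growth**t."""
    n = len(counts)
    xs = range(n)
    ys = [math.log(c) for c in counts]
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return math.exp(intercept), math.exp(slope)  # initial level, daily growth factor

def project(counts, days_ahead):
    """Extrapolate the fitted curve days_ahead past the last observation."""
    a, g = fit_exponential(counts)
    return a * g ** (len(counts) - 1 + days_ahead)

# Hypothetical daily case counts, doubling roughly every three days
cases = [100, 126, 159, 200, 252, 318, 400]
a, g = fit_exponential(cases)
print(round(g, 2))  # about 1.26, i.e. ~26% daily growth
```

Projecting three days past the last observation gives roughly 800 cases, which is the kind of "just extend the curve" step he says people fail to take.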

And like I said, it's often hard to beat a simple algorithm, but the virologists and infectious disease experts weren't applying that simple algorithm. And it's fair to say maybe some public health intervention will change the curve, or something like that, but I think they were assigning way too high a probability to the exponential trend stopping. I just think it's a failure of imagination. You know, maybe for the Trump administration it's motivated reasoning on this score; they kept saying it's fine, there aren't very many deaths yet. But it's easy for someone to project the trajectory a little bit further into the future and say, "Wow, there are going to be a lot." So I think that's actually been a major policy issue too: people can't believe the exponential growth.

Lucas Perry: There's this tension between not wanting to panic everyone in the country, being unsure whether this is the kind of thing that's exponential, or just not really intuiting how exponentials work. For the longest time, our federal government was like, "Oh, it's just a person. There are just one or two people. They're just going to get better and it will go away or something." What's your perspective on that? Is that just trying to assuage the populace while they try to figure out what to do, or do you think they actually just don't understand how exponentials work?

Robert de Neufville: I'm not confident in my theory of mind for people in power. I think one element is this idea that we need to avoid panic, and I think they probably believe in good faith that that's something we need to do. I'm not necessarily an expert on the role of panic in crises, but I personally think that's overblown. We have this image from the movies that if there's a disaster, all of a sudden everyone's looting and killing each other, and we think that's what's going to happen. But actually, in disasters people often really pull together, and if anything have a stronger sense of community and help their neighbors rather than immediately going to steal their supplies. We did see some people fighting over toilet paper on the news, and there are always people like that. But even this idea that people were hoarding toilet paper, I don't think that's the explanation for why it was out of the stores.

If you tell everyone in the country they need two to three weeks of toilet paper right now, today, then yeah, of course they're going to buy it off the shelf. That's actually just what they need to buy. I haven't seen a lot of panic. And honestly, if I had been an advisor to the administration, I would have said something along the lines of, "It's better to give people accurate information so we can face it squarely than to try to sugarcoat it."

But I also think there was a hope that if we pretended things weren't about to happen, maybe they would just go away, and I think that was misguided. There seems to be some idea that you could reopen the economy and people would just die but the economy would end up being fine. I don't think that would be worth it anyway. Even if you don't shut down, the economy's going to be disrupted by what's happening. So I think there were a bunch of different motivations for why governments weren't honest or weren't dealing squarely with this. It's hard to know what's dishonesty and what's just genuine confusion.

Lucas Perry: So what organizations exist that are focused on superforecasting? Where or what are the community hubs and prediction aggregation mechanisms for superforecasters?

Robert de Neufville: So originally, in the IARPA forecasting tournament, there were a bunch of different competing teams, and one of them was run by a group called Good Judgment. That team ended up doing so well that it basically took over the later years of the tournament, and it became the Good Judgment Project. There was then a spinoff: Phil Tetlock and others who were involved with that spun off something called Good Judgment Inc. That's the group I work with, and a lot of the superforecasters who were identified in that original tournament continue to work with Good Judgment.

We do some public forecasting, and I try to find private clients interested in our forecasts. It's really a side gig for me, and part of the reason I do it is that it's really interesting. It gives me an opportunity to think about things, and I feel like I'm much better up on certain issues because I've thought about them as forecasting questions. So there's Good Judgment Inc., and they also have something called Good Judgment Open, an open platform where you can forecast the kinds of questions we do. I should say that on our forecasting platform, they come up with forecastable questions, where forecastable means there are relatively clear resolution criteria.

But also you would be interested in knowing the answer. It wouldn’t be just some picky trivial answer. They’ll have a set resolution date so you know that if you’re forecasting something happening, it has to happen by a certain date. So it’s all very well-defined. And coming up with those questions is a little bit of its own skill. It’s pretty hard to do. So Good Judgment will do that. And they put it on a platform where then as a group we discuss the questions and give our probability estimates.

We operate to some extent in teams, and they found there's some evidence that teams of forecasters, at least good forecasters, can do a little bit better than people on their own. I find it very valuable because other forecasters do a lot of research and critique my ideas. There are concerns about groupthink, but I think we're able to avoid those; I can talk about why if you want. Then there's also the public platform, Good Judgment Open, where they use the same kinds of questions and anyone can participate. They've actually identified some new superforecasters who participated on that public platform, people who did exceptionally well, and then invited them to work with the company as well. There are others. I know a couple of superforecasters who are spinning off their own group. They made an app, I think it's called Maybe, where you can do your own forecasting and maybe come up with your own questions. That's a neat app. There is Metaculus, which certainly tries to apply the same principles, and I know some superforecasters who forecast on Metaculus. I've looked at it a little bit, but I just haven't had time, because forecasting takes a fair amount of time. And then there are always prediction markets and things like that. There are a number of other things, I think, that try to apply the same principles. I don't know the space well enough to know all of the other platforms and markets that exist.

Lucas Perry: For some more information on the actual act of forecasting that will be put onto these websites, can you take us through something which you have forecasted recently that ended up being true? And tell us how much time it took you to think about it? And what your actual thinking was on it? And how many variables and things you considered?

Robert de Neufville: Yeah, I mean it varies widely, and to some extent it varies on the basis of how many times I've forecasted something similar. Sometimes we'll forecast the change in interest rates, the Fed's moves. That's something that's obviously of a lot of interest to people in finance. And at this point, I've looked at that kind of thing enough times that I have set ideas about what would make it likely or not likely to happen.

But some questions are much harder. We've had questions about mortality in certain age groups in different districts in England, and I didn't know anything about that. All sorts of things come into play. Is the flu season likely to be bad? What's the chance the flu season will be bad? Is there a general trend among people dying of complications from diabetes? Does poverty matter? How much would Brexit affect mortality? Although a lot of what I did was just look at past data and project trends. Just basically projecting trends, you can get a long way towards an accurate forecast in a lot of circumstances.

Lucas Perry: When such a forecast is made and added to these websites and the question for the thing which is being predicted resolves, what are the ways in which the websites aggregate these predictions? Or are we at the stage of them often being put to use? Or is the utility of these websites currently primarily honing the epistemic acuity of the forecasters?

Robert de Neufville: There are a couple of things. I hope that my own personal forecasts are pretty accurate. But when we work together on a platform, we essentially produce an aggregate, which is, roughly speaking, the median prediction. There are some proprietary elements to it. They extremize it a little bit, I think, because aggregation tends to blur things towards the middle, and they may weight certain forecasts, such as more recent ones, differently. I don't know the details of it. But you can improve accuracy beyond just taking the median of our forecasts or running a prediction market; they found they can improve accuracy a little bit with some algorithmic tweaking. That's sort of what happens with our output.
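The median-plus-extremizing idea he sketches can be illustrated in a few lines. To be clear, Good Judgment's actual weighting scheme is proprietary; the log-odds-style extremizing below and its exponent are illustrative assumptions, not the real formula:

```python
import statistics

def aggregate(forecasts, extremize_a=2.5):
    """Median of individual probabilities, pushed away from 0.5.
    The exponent is a made-up illustrative value, not Good Judgment's."""
    p = statistics.median(forecasts)
    # Raising the odds to a power > 1 sharpens the blurred-to-the-middle median.
    odds = (p / (1 - p)) ** extremize_a
    return odds / (1 + odds)

team = [0.60, 0.70, 0.65, 0.75, 0.55]
print(round(aggregate(team), 2))  # 0.82, versus a raw median of 0.65
```

Note the sanity property: a team that unanimously says 50% stays at 50%, since even odds raised to any power are still even.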

And then as far as how people use it, I’m afraid not very well. There are people who are interested in Good Judgement’s forecasts and who pay them to produce forecasts. But it’s not clear to me what decision makers do with it or if they know what to do.

I think a big problem selling forecasting is that people don’t know what to do with a 78% chance of this, or let’s say a 2% chance of a pandemic in a given year, I’m just making that up. But somewhere in that ballpark, what does that mean about how you should prepare? I think that people don’t know how to work with that. So it’s not clear to me that our forecasts are necessarily affecting policy. Although it’s the kind of thing that gets written up in the news and who knows how much that affects people’s opinions, or they talk about it at Davos and maybe those people go back and they change what they’re doing.

Certain areas, I think people in finance know how to work with probabilities a little bit better. But they also have models that are fairly good at projecting certain types of things, so they’re already doing a reasonable job, I think.

I wish it were used better. If I were an advisor to a president, I would say you should create a predictive intelligence unit using superforecasters. Maybe give them access to some classified information, but even using open source information, have them predict probabilities of certain kinds of events, and then develop a system for using that in your decision making. But I think we're a fair ways away from that. I don't know of any interest in that in the current administration.

Lucas Perry: One obvious leverage point for that would be if you really trusted this group of superforecasters. And the key point for that is just simply how accurate they are. So just generally, how accurate is superforecasting currently? If we took the top 100 superforecasters in the world, how accurate are they over history?

Robert de Neufville: We do keep score, right? But it depends a lot on the difficulty of the question that you're asking. If you ask me whether the sun will come up tomorrow, yeah, I'm very accurate. If you ask me to predict the output of a random number generator picking a number from one to 100, I'm not very accurate. And it's often hard to know, for a given question, how hard it is to forecast.

I have what's called a Brier score, essentially a mathematical way of comparing your forecasts, the probabilities you give, with the outcomes. A lower Brier score essentially means a better fit. I can tell you what my Brier score was on the questions I forecasted in the last year, and I can tell you that it's better than a lot of other people's Brier scores. That's how you know I'm doing a good job. But it's hard to say how accurate that is in some absolute sense.
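For binary questions, the Brier score he's describing is just the mean squared error between stated probabilities and 0/1 outcomes (there is also a two-sided variant that doubles the scale; this sketch uses the common binary form, and the track records are invented):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes.
    0 is perfect; always saying 50% scores 0.25 on binary questions."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical record: confident and mostly right beats hedging everything at 50%.
sharp = brier_score([0.9, 0.8, 0.1, 0.95], [1, 1, 0, 1])
hedged = brier_score([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 1])
print(round(sharp, 3), hedged)  # 0.016 0.25
```

This is why lower is better: confident wrong answers are punished quadratically, so the score rewards both calibration and decisiveness.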

It's like asking how good NBA players are at taking jump shots. It depends where they're shooting from. That said, I think broadly speaking we are the most accurate. So far superforecasters have faced a number of challenges, and, I mean, I'm proud of this: we pretty much crushed all comers. They've tried to bring artificial intelligence into it. We're still, as far as I know, the gold standard of forecasting. But we're not prophets by any means. Accuracy for us is saying there's a 15% chance of this thing in politics happening, and then, when we do that over a bunch of things, having 15% of them end up happening. It is not saying this specific scenario will definitely come to pass. We're not prophets. Well-calibrated probabilities over a large number of forecasts are the best we can do, I think, right now and probably in the near future, for these complex political and social questions.
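That notion of calibration, that roughly 15% of your "15%" calls should come true, can be checked mechanically by bucketing forecasts and comparing each bucket's average probability with its hit rate. A minimal sketch with an invented track record:

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, bucket_width=0.1):
    """Group forecasts into probability buckets and compare each bucket's
    average stated probability with how often those events happened."""
    buckets = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        buckets[int(f / bucket_width)].append((f, o))
    table = {}
    for _, pairs in sorted(buckets.items()):
        mean_f = sum(f for f, _ in pairs) / len(pairs)
        hit_rate = sum(o for _, o in pairs) / len(pairs)
        table[round(mean_f, 2)] = round(hit_rate, 2)
    return table

# Hypothetical record: twenty 15% calls with 3 hits, ten 80% calls with 8 hits.
forecasts = [0.15] * 20 + [0.8] * 10
outcomes = [1, 0, 0, 1, 0, 0, 0, 1] + [0] * 12 + [1] * 8 + [0] * 2
print(calibration_table(forecasts, outcomes))  # {0.15: 0.15, 0.8: 0.8}
```

A perfectly calibrated forecaster's table sits on the diagonal, as in this contrived example; real records need many forecasts per bucket before the comparison means much.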

Lucas Perry: Would it be skillful to have some sort of standardized group of expert forecasters rank the difficulty of questions, which then you would be able to better evaluate and construct a Brier score for persons?

Robert de Neufville: It’s an interesting question. I think I could probably tell you, I’m sure other forecasters could tell you which questions are relatively easier or harder to predict. Things where there’s a clear trend and there’s no good reason for it changing are relatively easy to predict. Things where small differences could make it tip into a lot of different end states are hard to predict. And I can sort of have a sense initially what those would be.

I don't know what the advantage would be of ranking questions like that and then trying to do some weighted adjustment. I mean, maybe you could. But the best way that I know of to evaluate forecasting skill is to compare it with other forecasters. That's kind of the baseline: what do other good forecasters come up with, what do average forecasters come up with, and can you beat prediction markets? I think that's the best way of evaluating relative forecasting ability. But it's possible that some kind of weighting would be useful in some context. I hadn't really thought about it.

Lucas Perry: All right, so you work both as a superforecaster, as we've been talking about, and you also have a position at the Global Catastrophic Risk Institute. Can you provide a little bit of explanation of how superforecasting and existential and global catastrophic risk analysis are complementary?

Robert de Neufville: A big part of what we produce at GCRI is academic research. And there are a lot of differences. If I say there's a 10% chance of something happening on a forecasting platform, I have an argument for that. I can try to convince you that my rationale is good. But it's not the kind of argument you would make in an academic paper; it wouldn't convince people it was 100% right. My warrant for saying it on the forecasting platform is that I have a track record. I'm good at figuring out what the correct argument is, or have been in the past. But producing an academic paper is a whole different thing.

There’s some of the same skills, but we’re trying to produce a somewhat different output. What superforecasters say is an input in writing papers about catastrophic risk or existential risk. We’ll use what superforecasters think as a piece of data. That said, superforecasters are validated at doing well at certain category of political, social economic questions. And over a certain timeline, we know that we outperform others up to like maybe two years.

We don’t really know if we can do meaningful forecasting 10 years out. That hasn’t been validated. You can see why that would be difficult to do. You would have to have a long experiment to even figure that out. And it’s often hard to figure out what the right questions to ask about 2030 would be. I generally think that the same techniques we use would be useful for forecasting 10 years out, but we don’t even know that. And so a lot of the things that I would look at in terms of global catastrophic risk would be things that might happen at some distant point in the future. Now what’s the risk that there will be a nuclear war in 2020, but also over the next 50 years? It’s a somewhat different thing to do.

They’re complementary. They both involve some estimation of risk and they use some of the same techniques. But the longer term aspect is different. As I think I said, one of the ways superforecasters do well is that they use the past as a guide to the future. A good rule of thumb is that the status quo is likely to be the same. There’s a certain inertia. Things are likely to be similar in a lot of ways to the past. I don’t know if that’s necessarily very useful for predicting rare and unprecedented events. There is no precedent for an artificial intelligence catastrophe, so what’s the base rate of that happening? It’s never happened. I can use some of the same techniques, but it’s a little bit of a different kind of thing.

Lucas Perry: Two people are coming to my mind of late. One is Ray Kurzweil, who has made a lot of longterm technological predictions about things that have not happened in the past. And then also curious to know if you’ve read The Precipice: Existential Risk and the Future of Humanity by Toby Ord. Toby makes specific predictions about the likelihood of existential and global catastrophic risks in that book. I’m curious if you have any perspective or opinion or anything to add on either of these two predictors or their predictions?

Robert de Neufville: Yeah, I’ve read some good papers by Toby Ord. I haven’t had a chance to read the book yet, so I can’t really comment on that. I really appreciate Ray Kurzweil. And one of the things he does that I like is that he holds himself accountable. He’s looked back and said, how accurate are my predictions? Did this come true or did that not come true? I think that is a basic hygiene point of forecasting. You have to hold yourself accountable; you can’t just go back and say, “Look, I was right,” and rationalize away whatever somewhat-off forecasts you’ve made.

That said, when I read Kurzweil, I’m skeptical, though maybe that’s my own inability to handle exponential change. When I look at his predictions for certain years (I think he does a different set of predictions for seven-year periods), I thought, “Well, he’s actually running seven years ahead.” That’s pretty good actually, if you’re predicting what things are going to be like in 2020 but you just think it’s going to happen by 2013. Maybe he gets some credit for that. But I think that he is too aggressive and optimistic about the pace of change. Obviously exponential change can happen quickly.

But I also think another rule of thumb is that things take a long time to go through beta. There’s the planning fallacy: people always think that projects are going to take less time than they actually do. And even when you try to compensate for the planning fallacy and double the amount of time, it still takes twice as much time as you come up with. I tend to think Kurzweil sees things happening sooner than they will. He’s a little bit of a techno-optimist, obviously. But I haven’t gone back and looked at all of his self-evaluation. He scores himself pretty well.

Lucas Perry: So we’ve spoken a bit about the different websites, and what are they technically called? What is the difference between a prediction market and … I think Metaculus calls itself a massive online prediction solicitation and aggregation engine, which is not a prediction market. What are the differences here, and how is the language around these platforms used?

Robert de Neufville: Yeah, so I don’t necessarily know all the different distinctions someone would make. I think a prediction market particularly is where you have some set of funds, some kind of real or fantasy money. We used one market in the Good Judgement project. Our money was called Inkles and we could spend that money. And essentially, they traded probabilities like you would trade a share. So if there was a 30% chance of something happening on the market, that’s like a price of 30 cents. And you would buy that for 30 cents, and then if people’s opinions about how likely that was changed and a lot of people bought it, it could get bid up to a 50% chance of happening, and that would be worth 50 cents.

So if the market says something has a 30% chance of happening, and I correctly realize that it’s more likely than that, I would buy shares of it. And then eventually either other people would realize it too, or it would happen. I should say that when the thing happens, you get a dollar: suddenly it’s a 100% chance of happening.

So if you recognize that something had a higher percent chance of happening than the market was valuing it at, you could buy a share of that and then you would make money. That basically functions like a stock market, except literally what you’re trading is directly the probability that a question will resolve yes or no.
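
The share mechanics Robert describes can be sketched in a few lines of Python. The prices and the trader's belief here are made-up, illustrative numbers, not from any real market:

```python
# A prediction-market share pays $1.00 if the event happens, $0 if not,
# so its price is the market's implied probability of the event.

def profit_if_resolves_yes(price: float, shares: int = 1) -> float:
    """Profit from buying at `price` if the event occurs (each share pays $1)."""
    return shares * (1.00 - price)

def expected_profit(price: float, my_probability: float, shares: int = 1) -> float:
    """Expected profit per your own probability estimate, not the market's."""
    return shares * (my_probability * 1.00 - price)

# Market price implies 30%; suppose I believe the true chance is 50%.
payoff = profit_if_resolves_yes(0.30)       # $0.70 per share if it happens
edge = expected_profit(0.30, 0.50)          # positive expectation, so buy
print(payoff, edge)
```

Buying pushes the price up, which is how the market price comes to track the crowd's probability estimate.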

The stock market’s supposed to be really efficient, and I think in some ways it is. I think prediction markets are somewhat useful. A big problem with prediction markets is that they’re not liquid enough. On a stock market, there’s so much money going around, and people are really just on it to make money, that it’s hard to manipulate the prices.

There’s never plenty of liquidity on the prediction markets that I’ve been a part of. On the one in the Good Judgement project, for example, sometimes there’d be something the market said had like a 95% chance of happening when, in fact, there was like a 99.9% chance of it happening. But I wouldn’t buy that share, even though I knew it was undervalued, because the return on investment wasn’t as high as it was on some other questions. So it would languish at this inaccurate probability, because there just wasn’t enough money to chase all the good investments.
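
The arithmetic behind that languishing price is just return on capital. Using the 95-cent share from the transcript and a hypothetical rival question with made-up numbers:

```python
def expected_return_on_investment(price: float, true_prob: float) -> float:
    """Expected profit per dollar tied up buying a $1-payoff share at `price`."""
    return (true_prob * 1.00 - price) / price

# The underpriced near-certainty: priced at 95 cents, really a 99.9% chance.
roi_sure_thing = expected_return_on_investment(0.95, 0.999)   # about 5%

# A hypothetical alternative question: priced at 30 cents, really 45% likely.
roi_other = expected_return_on_investment(0.30, 0.45)         # about 50%

# With limited play money, capital chases the higher return, so the
# 95-cent share stays mispriced.
print(roi_sure_thing, roi_other)
```

This is why thin markets can leave known mispricings uncorrected even when every trader sees them.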

So that’s one problem you can have in a prediction market. Another problem you can have … I see it happen with PredictIt, I think. There used to be the Iowa Electronic Markets prediction market. People would try to manipulate the market for some advertising reason, basically.

Say you were working on a candidate’s campaign and you wanted to make it look like they were a serious contender. It’s a cheap investment: you put a lot of money into the prediction market and you boost their apparent chances, but that’s not really boosting their chances. That’s just market manipulation. You can’t really do that with the whole stock market, but because prediction markets aren’t well capitalized, you can do that.

And then I really enjoy PredictIt. PredictIt’s one of the prediction markets that exists for political questions. They have some dispensation so that it doesn’t count as gambling in the U.S. And it’s for research purposes: there is some research involved with PredictIt. But they have a lot of fees, and they use their fees to pay for the people who run the market. And it’s expensive. The fees mean that the prices are very sticky and it’s actually pretty hard to make money. Probabilities have to be really out of whack before you can make enough money to cover your fees.

So things like that make these markets not as accurate. I also think that although we’ve all heard about the wisdom of crowds, and broadly speaking crowds might do better than just a random person, they can also show a lot of herding behavior that good forecasters wouldn’t. And sometimes crowds overreact to things. I don’t always think the probabilities that prediction markets come up with are very good.

Lucas Perry: All right, moving along here a bit, and continuing the relationship of superforecasting with global catastrophic and existential risk: how narrowly do you think that we can reduce the error range for superforecasts on low probability events like global catastrophic risks and existential risks? If a group of forecasters settled on a point estimate of a 2% chance for some kind of global catastrophic or existential risk, but with an error range of like 1%, that dramatically changes how useful the prediction is, because of its major effects on the estimated risk. How accurate do you think we can get, and how much do you think we can squish the probability range?

Robert de Neufville: That’s a really hard question. When we produce forecasts, I don’t think there are necessarily clear error bars built in. One thing that Good Judgement will do is show where forecasters all agreed the probability is 2%, and then show if there’s actually a wide variation: some thinking 0%, some thinking it’s 4%, or something like that. And that maybe tells you something. And if we had a lot of very similar forecasts, maybe you could look back and say, we tend to have an error of this much. But for the kinds of questions we look at with catastrophic risk, it might really be hard to have a large enough “n”; hopefully it stays hard to have a large “n” where you could really compute an error range. If our aggregate spits out a probability of 2%, it’s difficult to know in advance, for a somewhat unique question, how far off we could be.
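
The aggregate-versus-spread distinction Robert mentions is easy to make concrete. The individual forecasts below are invented for illustration, standing in for a panel whose aggregate lands at 2%:

```python
import statistics

# Hypothetical individual forecasts behind a "2%" aggregate (illustrative only).
forecasts = [0.00, 0.01, 0.01, 0.02, 0.02, 0.03, 0.04, 0.04]

aggregate = statistics.median(forecasts)   # a simple robust aggregate
spread = (min(forecasts), max(forecasts))  # disagreement among forecasters

print(f"aggregate: {aggregate:.0%}, spread: {spread[0]:.0%} to {spread[1]:.0%}")
```

Reporting the spread alongside the point estimate conveys how much the forecasters actually disagree, which a bare 2% hides.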

I don’t spend a lot of time thinking about frequentist or Bayesian interpretations of probability or counterfactuals or whatever. But at some point, if I say there’s a 2% probability of something and then it happens, it’s hard to know what my probability meant. Maybe we live in a deterministic universe and that was 100% going to happen and I simply failed to see the signs of it. I think that to some extent, what kind of probabilities you assign things depends on the amount of information you can get.

Often we might say that was a reasonable probability to assign to something because we couldn’t get much better information. Given the information we had, that was our best estimate of the probability. But it might always be possible to know with more confidence if we got better information. So I guess one thing I would say is if you want to reduce the error on our forecasts, it would help to have better information about the world.

And that’s to some extent where what I do with GCRI comes in. We’re trying to figure out how to produce better estimates. And that requires research. It requires thinking about these problems in a systematic way, to try to decompose them into different parts and figure out what we can look at in the past and use to inform our probabilities. You can always get better information and produce more accurate probabilities, I think.

The best thing to do would be to think about these issues more carefully. Obviously, it’s a field. Catastrophic risk is something that people study, but it’s not the most mainstream field. There’s a lot of research that needs to be done. There’s a lot of low hanging fruit, work that could easily be done applying research from other fields to catastrophic risk issues. But there just aren’t enough researchers, and there isn’t enough funding to do all the work that we should do.

So my answer would be, we need to do better research. We need to study these questions more closely. That’s how we get to better probability estimates.

Lucas Perry: So say we have something like a global catastrophic or existential risk, and a forecaster says that there’s a less than 1% chance that it will occur. If this less-than-1%-likely thing then happens in the world, how does that update our thinking about what the actual likelihood of that risk was? Given this more meta point that you glossed over: if the universe is deterministic, then the probability of that thing was actually more like 100%, and the information existed somewhere, we just didn’t have access to it. Can you add a little bit of commentary here about what these risks mean?

Robert de Neufville: I guess I don’t think it’s that important when forecasting to have a strong opinion about whether or not we live in a single deterministic universe where outcomes are in some sense all baked in, where if only we could know everything, then we would know with 100% chance everything that was going to happen. Or whether there is some fundamental randomness. Or maybe we live in a multiverse where all these different outcomes are happening, and you could say that in 30% of the universes in this multiverse, this outcome comes true. I don’t think that really matters for the most part. I do think as a practical question, we make forecasts on the basis of the best information we have; that’s all you can do. But there are some times you look back and say, “Well, I missed this. I should’ve seen this thing.” I didn’t think that Donald Trump would win the 2016 election. That’s literally my worst Brier score ever. I’m not alone in that. And I comfort myself by saying that genuinely small differences actually made a huge impact.
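
The Brier score Robert mentions is, in its simplest binary form, just the mean squared error between forecast probabilities and outcomes, with 0 being perfect. The record below is an invented illustration (platforms like the Good Judgement project use scoring variants of this):

```python
def brier_score(forecasts):
    """Mean squared error between probabilities and outcomes; 0 is perfect.

    `forecasts` is a list of (probability_assigned, outcome) pairs,
    where outcome is 1 if the event happened and 0 if it didn't.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical record: three confident hits and one big miss
# (like assigning 97% to an outcome that didn't happen).
record = [(0.90, 1), (0.80, 1), (0.97, 0), (0.70, 1)]
print(brier_score(record))
```

One confident miss dominates the score, which is why a 97% forecast that fails feels like, in his words, a worst-ever Brier score.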

But there are other forecasters who saw it better than I did. Nate Silver didn’t think that Trump would win, but he thought it was more likely than most of us did, and he thought it was more likely for the right reasons: that you could get this correlated polling error in a certain set of states that would hand Trump the electoral college. So in retrospect, I think, in that case I should’ve seen something like what Nate Silver did. Now I don’t think in practice it’s possible to know enough about an election to know in advance who’s going to win.

I think we still have to use the tools that we have, which are things like polling. In complex situations, there’s always stuff that I missed when I make a mistake, and I can look back and say I should have done a better job figuring that stuff out. I do think though, with the kinds of questions we forecast, there’s a certain irreducible … I don’t want to say randomness, because I’m not taking a position on whether the universe is deterministic, but irreducible uncertainty about what we’re realistically able to know, and we have to base our forecasts on the information that’s possible to get. I don’t think metaphysical interpretation is that important to figuring out these questions. Maybe it comes up a little bit more with unprecedented one-off events. Even then I think you’re still trying to use the same information to estimate probabilities.

Lucas Perry: Yeah, that makes sense. There’s only the set of information that you have access to.

Robert de Neufville: Something actually occurs to me. One of the things that superforecasters are proud of is that we beat intelligence analysts who had access to classified information. We’re doing our research on Google, right? Or maybe occasionally we’ll write a government official or file a FOIA request or something, but we’re using open source intelligence. I think it would probably help if we had access to more information that would inform our forecasts, but sometimes more information actually hurts you.

People have talked about a classified information bias: if you have secret information that other people don’t have, you are likely to think it is more valuable and useful than it actually is, and you overweight the classified information. I don’t know if it’s an ego thing; having a forecast based on information other people don’t have access to makes you special. You have to be a little bit careful. More information isn’t always better. Sometimes the easy-to-find information is actually really dispositive and is enough, and if you search for more information, you can find stuff that is irrelevant to your forecast but seems relevant.

Lucas Perry: So if there’s some sort of risk and the risk occurs, after the fact, how does one update one’s estimate of what the probability was?

Robert de Neufville: It depends a little bit on the context, if you want to evaluate my prediction. Say I thought there was a 30% chance the original Brexit vote would be to leave the EU. That actually was more accurate than some other people, but I didn’t think it was likely. Now in hindsight, should I have said 100%? Somebody might argue that I should have, that if you’d really been paying attention, you would have known 100%.

Lucas Perry: But like how do we know it wasn’t 5% and we live in a rare world?

Robert de Neufville: We don’t. You basically can infer almost nothing from an n of 1. If I say there’s a 1% chance of something happening and it happens, you can be suspicious that I don’t know what I’m talking about, even from that n of 1. But there’s also a chance that there was a 1% chance that it happened, and that was the 1 time in 100. To some extent that could be my defense of my prediction that Hillary was going to win. I should talk about my failures. The night before, I thought there was a 97% chance that Hillary would win the election, and that’s terrible. I think that was a bad forecast in hindsight. But I will say that typically when I’ve said there’s a 97% chance of something happening, those things have happened.

I’ve made 30-some predictions that things are 97% likely, and that’s the only one that’s been wrong. So maybe I’m actually well calibrated. Maybe that was the 3% thing that happened. You can only really judge over a body of predictions, and if somebody is always saying there’s a 1% chance of things happening and they always happen, then that’s not a good forecaster. But that’s a little bit of a problem when you’re looking at really rare, unprecedented events. It’s hard to know how well someone does at that, because you don’t have an n of (hopefully) more than 1. It is difficult to assess those things.
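
His "maybe that was the 3% thing" defense can actually be checked with a quick binomial calculation. Treating the 30-some forecasts as roughly 30 independent events, each truly 97% likely:

```python
# If 30 independent events are each truly 97% likely, how surprising is it
# that exactly one of them failed to happen?
n = 30
p_hit = 0.97

p_all_hit = p_hit ** n                 # chance every single one happens
p_at_least_one_miss = 1 - p_all_hit   # chance of one or more misses

print(round(p_at_least_one_miss, 2))  # roughly 0.6
```

So for a well-calibrated forecaster, at least one miss in 30 forecasts at 97% is actually more likely than not; a single miss is consistent with good calibration, just as he says.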

Now we’re in the middle of a pandemic, and I think the fact that this pandemic happened maybe should update our beliefs about how likely pandemics will be in the future. There was the Spanish flu and the Asian flu and this. And so now we have a little bit more information about the base rate at which these things happen. It’s a little bit difficult because 1918 is very different from 2020. The background rate of risk may be very different from what it was in 1918, so you want to try to take those factors into account, but each event does give us some information that we can use for estimating the risk in the future. You can do other things too. A lot of what we do as good forecasters is inductive, right? But you can use deductive reasoning. You can, for example, with rare risks, decompose them into the steps that would have to happen for them to occur.
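
One simple, standard way to turn a handful of historical events into a base rate is Laplace's rule of succession. The counts below are rough, illustrative numbers in the spirit of the flus Robert names, not a serious estimate, and they ignore exactly the problem he flags: 1918 and 2020 are very different worlds.

```python
def laplace_rule(events: int, trials: int) -> float:
    """Laplace's rule of succession: estimate probability as (k + 1) / (n + 2)."""
    return (events + 1) / (trials + 2)

# Illustrative only: treat each year from 1918 to 2020 as a trial and count
# three severe pandemics (1918, 1957, 2020) in that window.
annual_rate = laplace_rule(events=3, trials=102)
print(f"{annual_rate:.1%} per year")
```

The rule's "+1/+2" smoothing is what lets you assign a nonzero probability even to events you have never observed, which matters for the unprecedented risks discussed next.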

What systems have to fail for a nuclear war to start? Or what are the steps along the way to a potential artificial intelligence catastrophe? And I might be able to estimate the probability of some of those steps more accurately than I can estimate the whole thing. So that gives us some analytic methods to estimate probabilities even without a real base rate for the thing itself happening.
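
The decomposition Robert describes works by multiplying step probabilities, treating each one as conditional on the previous steps having occurred. The steps and numbers below are made-up placeholders, loosely echoing his nuclear-war example, not estimates from the transcript:

```python
# Decomposing a rare risk into steps that must all occur lets each piece be
# estimated separately. Numbers are hypothetical; each probability should be
# read as conditional on the earlier steps having happened.
steps = {
    "early-warning system gives a false alarm": 0.01,
    "alarm is not recognized as false": 0.1,
    "launch decision is made under time pressure": 0.2,
}

p_total = 1.0
for description, p in steps.items():
    p_total *= p

print(f"combined probability: {p_total:.4f}")
```

Because the combined probability is a product, it is necessarily no larger than the least likely step, and errors in any single step estimate propagate multiplicatively into the total.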

Lucas Perry: So, related to actual policy work and doing things in the world: the thing that becomes skillful here seems to be to use these probabilities to do expected value calculations, to try and estimate how many resources should be fed into mitigating certain kinds of risks.

Robert de Neufville: Yeah.

Lucas Perry: The probability of the thing happening requires a kind of forecasting and then also the value that is lost requires another kind of forecasting. What are your perspectives or opinions on superforecasting and expected value calculations and their use in decision making and hopefully someday more substantially in government decision making around risk?

Robert de Neufville: We were talking earlier about the inability of policymakers to understand probabilities. I think one issue is that a lot of times when people make decisions, they want to just say, “What’s going to happen? I’m going to plan for the single thing that’s going to happen.” But as a forecaster, I don’t know what’s going to happen. If I’m doing a good job, I might know there’s a certain percent chance that this will happen and a certain percent chance that that will happen. And in general, I think that policymakers need to make decisions over the space of possible outcomes, planning for contingencies. And I think that is a more complicated exercise than a lot of policymakers want to do. I mean, I think it does happen, but it requires being able to hold in your mind all these contingencies and plan for them simultaneously. And with expected value calculations, to some extent, that’s what you have to do.

That gets very complicated very quickly. When we forecast questions, we might forecast some discrete fact about the world, like how many COVID deaths there will be by a certain date. And it’s neat that I’m good at that, but there’s a lot that doesn’t tell you about the state of the world at that time. There’s a lot of information that would be valuable for making decisions. I don’t want to say an infinite amount, because that may be technically wrong, but there is an essentially uncountable amount of things you might want to know, and you might not even know the relevant questions to ask about a certain space. So it’s always going to be somewhat difficult to do an expected value calculation, because you can’t possibly forecast all the things that might determine the value of something.

This is a little bit of a philosophical critique of consequentialist analyses of things, too. If you ask whether something is good or bad, it may have an endless chain of consequences rippling throughout future history. Maybe it’s really a disaster now, but maybe it means that a future Hitler isn’t born. How do you evaluate that? It might seem like a silly, trivial point, but the fact is it may be really difficult to know enough about the consequences of your action to do an expected value calculation. So your expected value calculation may have to be an approximation in a certain sense: given the broad things we know, these are the outcomes that are likely to happen. I still think expected value calculations are good. I just think there’s a lot of uncertainty in them, and to some extent it’s probably irreducible. I think it’s always better to think about things clearly if you can. It’s not the only approach. You have to get buy-in from people, and that makes a difference. But the more you can do accurate analysis about things, the better your decisions are likely to be.
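
The expected-value reasoning being discussed can be sketched with a toy comparison. Every number here is an invented placeholder (the probability, loss, cost, and effectiveness are assumptions for illustration, not figures from the conversation):

```python
# Toy expected-value comparison: fund a mitigation program or not?
# All numbers are hypothetical placeholders.
p_catastrophe = 0.01                 # assumed annual probability of catastrophe
loss = 1_000_000_000_000             # assumed loss if it happens ($1 trillion)
mitigation_cost = 2_000_000_000      # assumed cost of a preparedness program ($2B)
risk_reduction = 0.5                 # assume the program halves the probability

ev_do_nothing = -p_catastrophe * loss
ev_mitigate = -(p_catastrophe * (1 - risk_reduction)) * loss - mitigation_cost

# Mitigation wins despite its price tag, under these assumptions.
print(ev_do_nothing, ev_mitigate)
```

Of course, this is exactly where the uncertainty he describes bites: small changes to the assumed probability or effectiveness can flip the comparison, so the inputs matter as much as the arithmetic.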

Lucas Perry: How much faith or confidence do you have that the benefits of superforecasting and this kind of thought will increasingly be applied to critical government or non-governmental decision-making processes around risk?

Robert de Neufville: Not as much as I’d like. I think now that we know that people can do a better or worse job of predicting the future, we can use that information, and it will eventually begin to be integrated into our governance. I think that will help. But in general, you know, my background’s in political science, and political science is, I want to say, kind of discouraging. You learn that even under the best circumstances, the outcomes of political struggles over decisions are not optimal. And you could imagine some kind of technocratic decision-making system, but even that ends up having its problems, or the technocrats end up just lining their own pockets without even realizing they’re doing it, or something. So I’m a little bit skeptical about it. And right now, with what we’re seeing with the pandemic, I think we systematically underprepare for certain kinds of things; there are reasons why it doesn’t help leaders very much to prepare for things that may never happen.

And with something like a public health crisis, the deliverable is for nothing to happen, and if you succeed, it looks like all your money was wasted, when in fact you’ve actually prevented anything from happening, and that’s great. The problem is that that creates an underincentive for leaders. They don’t get credit for preventing the pandemic that no one even knew could have happened, and they don’t necessarily win the next election; business leaders may not improve their quarterly profits much by preparing for rare risks, for that and other reasons too. I think we probably have a hard time believing, cognitively, that certain kinds of things that seem crazy, like this, could happen, though I’m somewhat skeptical about that. Now I think in this case we had institutions that did prepare for this, but for whatever reason a lot of governments failed to do what was necessary.

They failed to respond quickly enough, or minimized what was happening. Some actors were worse than others, right, but this isn’t a problem that’s just about the US government. This was a problem in Italy, in China. And it’s disheartening, because COVID-19 is pretty much exactly one of the major scenarios that infectious disease experts have been warning about: a novel coronavirus that jumps from animals to humans, spreads through some kind of respiratory pathway, is highly infectious, and spreads asymptomatically. This is something that people worried about and knew about. In a sense it was probably only a matter of time before this happened, even if there was only a small risk in any given year, and yet we weren’t ready for it. We didn’t take the steps; we lost time that could have been used saving lives. That’s really disheartening.

I would like to see us learn a lesson from this and I think to some extent, once this is all over, whenever that is, we will probably create some institutional structures, but then we have to maintain them. We tend to forget a generation later about these kinds of things. We need to create governance systems that have more incentive to prepare for rare risks. It’s not the only thing we should be doing necessarily, but we are underprepared. That’s my view.

Lucas Perry: Yeah, and I mean the sample size of historic pandemics is quite good, right?

Robert de Neufville: Yeah. It’s not like we were invaded by aliens. Something like this happens in just about every person’s lifetime. It’s historically not that rare and this is a really bad one, but the Spanish flu and the Asian flu were also pretty bad. We should have known this was coming.

Lucas Perry: What I’m also reminded of here, with some of these biases you’re talking about: we have climate change on the other hand, which is destabilizing and kind of global catastrophic risky, depending on your definition. And for people who deny climate change, there seems to be A) a lack of trust in science and B) not wanting to invest in expensive technologies or something that seems wasteful. I’m just reflecting here on all of the biases that fed into our inability to prepare for COVID.

Robert de Neufville: Well, I don’t think the distrust of science is just sort of a thing that’s out there. I mean, maybe to some extent it is, but it’s also a deliberate strategy: people with interests in continuing, for example, the fossil fuel economy have deliberately tried to cloud the issue, to create distrust in science, to create phony studies that make it seem that climate change isn’t real. We’ve thought a little bit about this at GCRI, about how this might happen with artificial intelligence. You can imagine that somebody with a financial interest might try to discredit the risks and make it seem safer than it is, and maybe they even believe that to some extent; nobody really wants to believe that the thing that’s getting them a lot of money is actually evil. So I think distrust in science really isn’t an accident, it’s a deliberate strategy, and it’s difficult to know how to combat it. There are strategies you can take, but it’s a struggle, right? There are people who have an interest in keeping scientific results quiet.

Lucas Perry: Yeah. Do you have any thoughts then about how we could increase the uptake of using forecasting methodologies for all manner of decision making? It seems like generally you’re pessimistic about it right now.

Robert de Neufville: Yeah, I am a little pessimistic about it. I mean, one thing is that we’ve tried to get people interested in our forecasts, and a lot of people just don’t know what to do with them. One thing I think is interesting is that often people aren’t interested in my saying, “There’s a 78% chance of something happening.” What they want to know is, how did I get there? What are my arguments? That’s not unreasonable. I really like thinking in terms of probabilities, but I think it often helps people to understand what the mechanism is, because it tells them something about the world that might help them make a decision. So I think one thing that maybe can be done is not to treat it as a black-box probability, but to have some kind of algorithmic transparency about our thinking, because that actually helps people and might be more useful in terms of making decisions than just a number.

Lucas Perry: So is there anything else here that you want to add about COVID-19 in particular? General information or intuitions that you have about how things will go? What will the next year look like? There is tension in the federal government about reopening. There’s an eagerness to do that, to restart the economy. The US federal government and the state governments seem totally unequipped to do the kind of testing and contact tracing that is being done in successful areas like South Korea. Sometime in the short to medium term we’ll be open, and there might be a second wave, and it’s going to take a year or so for a vaccine. What are your intuitions and feelings or forecasts about what the next year will look like?

Robert de Neufville: Again, with the caveat that I’m not a virologist or an expert in vaccine development and things like that, I have thought about this a lot. I think there was a fantasy, and still is a fantasy, that we’re going to have what they call a V-shaped recovery. Everything crashed really quickly; everyone started filing for unemployment as all the businesses shut down. This virus economics is very different from other types of financial crises. But there was this fantasy that we would sort of put everything on pause, put the economy into some cryogenic freeze, and somehow keep people able to pay their bills for a certain amount of time. And then after a few months, we’d get some kind of therapy or vaccine, or it would die down and we’d suppress the disease somehow. And then we would just give it a jolt of adrenaline and we’d be back, and everyone would be back in their old jobs and things would go back to normal. I really don’t think that is what’s going to happen. I think it is almost thermodynamically harder to put things back together than it is to break them. There are things about the US economy in particular, like the fact that in order to keep getting paid, you actually need to lose your job and go on unemployment, in many cases. It’s not seamless. It’s hard to even get through on the phone lines or to get the funding.

I think that even after a few months, the US economy is going to look like a town that’s been hit by a hurricane and we’re going to have to rebuild a lot of things. And maybe unemployment will go down faster than it did in previous recessions where it was more about a bubble popping or something, but I just don’t think that we go back to normal.

I also just don’t think we go back to normal in a broader sense. There’s this idea that we’re going to have some kind of cure. Again, I’m not a virologist, but I don’t think we typically have therapies that cure viruses the way antibiotics might be super efficacious against bacteria. Typically, viral diseases, I think, are things we have to try to mitigate. Some cocktail may improve treatments, and we may figure out better things to do with ventilators. You might get the fatality rate down, but it’s still going to be pretty bad.

And then there is this idea that maybe we’ll have a vaccine. I’ve heard people who know more than I do say maybe it’s possible to get a vaccine by November. But the problem is, until you can simulate with a supercomputer what happens in the human body, you can’t really speed up biological trials. You have to culture things in people, and that takes time.

You might say, well, let’s not do all the trials, this is an emergency. But the fact is, if you don’t demonstrate that a vaccine is safe and efficacious, you could end up giving something to people that has serious adverse effects, or even makes you more susceptible to the disease. That was a problem with one of the SARS vaccines they originally tried to come up with: it made people more susceptible. So you don’t want to hand out millions and millions of doses of something that’s going to actually hurt people, and that’s the danger if you skip these clinical trials. So it’s really hard to imagine a vaccine in the near future.

I don’t want to sell short human ingenuity because we’re really adaptable, smart creatures, and we’re throwing all our resources at this. But, there is a chance that there is really no great vaccine for this virus. We haven’t had great luck with finding vaccines for coronaviruses. It seems to do weird things to the human immune system and maybe there is evidence that immunity doesn’t stick around that long. It’s possible that we come up with a vaccine that only provides partial immunity and doesn’t last that long. And I think there is a good chance that essentially we have to keep social distancing well into 2021 and that this could be a disease that remains dangerous and we have to continue to keep fighting for years potentially.

I think that we’re going to open up, and it is important to open up as soon as we can, because what’s happening with the economy will literally kill people and cause famines. But on the other hand, we’re going to get outbreaks that come back up again. It’s going to be like fanning coals if we open up too quickly, and in some places we’re not going to get it right, and that doesn’t save anyone’s life if it starts up again and the virus disrupts the economy again. So I think this is going to be a thing we struggle to find a balance on and mitigate, and we’re not going to go back to December 2019 for a while, not this year. Literally, it may be years.

And I think that although humans have an amazing capacity to forget things and go back to normal life, we’re going to see permanent changes in the way we live. I don’t know exactly what they are. And I don’t know if I’m ever shaking anyone’s hands again. We’ll see about that. A whole generation of people are going to be much better at washing their hands.

Lucas Perry: Yeah. I’ve already gotten a lot better at washing my hands watching tutorials.

Robert de Neufville: I was terrible at it. I had no idea how bad I was.

Lucas Perry: Yeah, same. I hope people who have shaken my hand in the past aren’t listening. So the things that will stop this are sufficient herd immunity to some extent, or a vaccine that is efficacious. Those seem like the “okay, it’s about time to go back to normal” points, right?

Robert de Neufville: Yeah.

Lucas Perry: A vaccine is not a given thing given the class of coronavirus diseases and how they behave?

Robert de Neufville: Yeah. Eventually now this is where I really feel like I’m not a virologist, but eventually diseases evolve and we co-evolve with them. Whatever the Spanish Flu was, it didn’t continue to kill as many people years down the line. I think that’s because people did develop immunity.

But also, viruses don’t get any evolutionary advantage from killing their hosts. They want to use us to reproduce. Well, they don’t want anything, but that advantages them. If they kill us and make us use mitigation strategies, that hurts their ability to reproduce. So in the long run, and I don’t know how long that run is, but eventually we co-evolve with it and it becomes endemic instead of epidemic and it’s presumably not as lethal. But, I think that it is something that we could be fighting for a while.

There are chances of additional disasters happening on top of it. We could get another disease popping out of some animal population while our immune systems are weak, or something like that. So we should probably be rethinking the way we interact with caves full of bats and live pangolins.

Lucas Perry: All right. We just need to be prepared for the long haul here.

Robert de Neufville: Yeah, I think so.

Lucas Perry: I’m not sure that most people understand that.

Robert de Neufville: I don’t think they do. I mean, I guess I don’t have my finger on the pulse and I’m not interacting with people anymore, but I don’t think people want to understand it. It’s hard. I had plans. I did not intend to be staying in my apartment. Your health and the health of others are more important, but it’s hard to face that we may be dealing with a very different new reality.

This thing, the opening up in Georgia, it’s just completely insane to me. Their cases have been slowing, but if the outbreak is shrinking, it seems to be only by a little bit. To me, when they talk about opening up, it sounds like they’re saying, well, we reduced the extent of this forest fire by 15%, so we can stop fighting it now. Well, it’s just going to keep growing. You have to actually stamp it out, or get really close to it, before you can stop fighting it. I think people want to stop fighting the disease sooner than we should because it sucks. I don’t want to be doing this.

Lucas Perry: Yeah, it’s a new sad fact and there is a lot of suffering going on right now.

Robert de Neufville: Yeah. I feel really lucky to be in a place where there aren’t a lot of cases, but I worry about family members in other places and I can’t imagine what it’s like in places where it’s bad.

I mean, in Hawaii, people in the hospitality industry and tourism industry have all lost their jobs all at once, and they still have to pay their super expensive rent. Maybe that’ll be waived and they won’t be evicted. But, that doesn’t mean they can necessarily get medications and feed their families. And all of this is super challenging for a lot of people.

Never mind that other people are in the position of being lucky to have jobs, but maybe risking getting an infection by going to work, so they have to make this horrible choice. And maybe they have someone with comorbidities or who is elderly living at home. This is awful. So I understand why people really want to get past this part of it soon.

Was it Dr. Fauci that said, “The virus has its own timeline?”

One of the things I think that this may be teaching us, it’s certainly reminding me that humans are not in charge of nature, not the way we think we are. We really dominate the planet in a lot of ways, but it’s still bigger than us. It’s like the ocean or something. You know? You may think you’re a good swimmer, but if you get a big wave, you’re not in control anymore and this is a big wave.

Lucas Perry: Yeah. So back to the point of general superforecasting. Suppose you’re a really good superforecaster and you’re finding well-defined things to make predictions about, which is, as you said, sort of hard to do and you have carefully and honestly compared your predictions to reality and you feel like you’re doing really well.

How do you convince other people that you’re a great predictor when almost everyone else is making lots of vague predictions and cherry-picking their successes, or when interest groups are biasing and obscuring things to try to have a seat at the table? Or for example, if you want to compare yourself to someone else who has been keeping careful track as well, how do you do that technically?

Robert de Neufville: I wish I knew the answer to that question. I think it is probably a long process of building confidence and communicating reasonable forecasts and having people see that they were pretty accurate. People trust something like FiveThirtyEight, Nate Silver’s site, or Nick Cohen, or someone like that because they have been communicating for a while and people can now see it. They have this track record and they also are explaining how it happens, how they get to those answers. And at least a lot of people started to trust what Nate Silver says. So I think something like that really is the long-term strategy.

But, I think it’s hard because a lot of times there is always someone who is saying every different thing at any given time. And if somebody says there is definitely a pandemic going to happen, and they do it in November 2019, then a lot of people may think, “Wow, that person’s a prophet and we should listen to them.”

To my mind, if you were saying that in November of 2019, that wasn’t a great prediction. I mean, you turned out to be right, but you didn’t have good reasons for it. At that point, it was still really uncertain unless you had access to way more information than as far as I know anyone had access to.

But, you know sometimes those magic tricks where somebody throws a dart at something and happens to hit the bullseye might be more convincing than an accurate probabilistic forecast. I think that in order to sell the accurate probabilistic forecasts, you really need to build a track record of communication and build confidence slowly.

Lucas Perry: All right, that makes sense.

So on prediction markets and prediction aggregators, they’re pretty well set up to treat questions like will X happen by Y date, where X is some super well-defined thing. But lots of things we’d like to know are not really of this form. So what are other useful forms of questions about the future that you come across in your work, and what do you think are the prospects for training and aggregating skilled human predictors to tackle them?

Robert de Neufville: What are the other forms of questions? There is always a trade-off in designing questions between the rigor of the question, how easy it is to say whether it turned out to be true or not, and how relevant it is to things you might actually want to know. That’s often difficult to balance.

I think that in general we need to be thinking more about questions, so I wouldn’t say here is the different type of question that we should be answering. But rather, let’s really try to spend a lot of time thinking about the questions. What questions could be useful to answer? I think just that exercise is important.

I think things like science fiction are important where they brainstorm a possible scenario and they often fill it out with a lot of detail. But, I often think in forecasting, coming up with very specific scenarios is kind of the enemy. If you come up with a lot of things that could plausibly happen and you build it into one scenario and you think this is the thing that’s going to happen, well the more specific you’ve made that scenario, the less likely it is to actually be the exact right one.

We need to do more thinking about spaces of possible things that could happen, ranges of things, different alternatives rather than just coming up with scenarios and anchoring on them as the thing that happens. So I guess I’d say more questions and realize that at least as far as we’re able to know, I don’t know if the universe is deterministic, but at least as far as we are able to know, a lot of different things are possible and we need to think about those possibilities and potentially plan for them.
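Robert’s warning about overly specific scenarios can be made concrete with a toy conjunction calculation; the events and probabilities below are invented purely for illustration:

```python
# Each plausible-sounding detail added to a scenario can only shrink its
# probability: P(A and B) <= P(A). With independent details, probabilities
# multiply, so a vivid four-part scenario ends up far less likely than any
# single part of it.
details = [
    ("a serious pandemic emerges this decade", 0.5),
    ("it begins in one particular country", 0.2),
    ("it comes from one particular virus family", 0.3),
    ("one particular policy response follows", 0.4),
]

p = 1.0
for description, prob in details:
    p *= prob  # assuming independence, for simplicity
    print(f"... and {description}: joint probability now {p:.3f}")
```

Each added detail drops the joint probability further, which is the sense in which highly specific scenarios are “the enemy” of good forecasting, even when every individual detail sounds plausible.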

Lucas Perry: All right. And so, let’s say you had 100 professors with deep subject matter expertise in say, 10 different subjects and you had 10 superforecasters, how would you make use of all of them and on what sorts of topics would you consult, what group or combination of groups?

Robert de Neufville: That’s a good question. I think we bash on subject matter experts because they’re bad at producing probabilistic forecasts. But the fact is that I completely depend on subject matter experts. When I try to forecast what’s going to happen on the pandemic, I am reading all the virologists and infectious disease experts because I don’t know anything about this. I mean, I know I get some stuff wrong. Although, I’m in a position where I can actually ask people, hey what is this, and get their explanations for it.

But, I would like to see them working together, and to some extent have some of the subject matter experts recognize that we may know some things about estimating probabilities that they don’t. But also, the more I can communicate with people who know specific facts about things, the better the forecasts I can produce are. I don’t know what the best system for that is. I’d like to see more communication. But, I also think you could get something where you put them in a room or on a team together to produce forecasts.

When I’m forecasting, typically, I come up with my own forecast and then I see what other people have said. But, I do that so as not to anchor on somebody else’s opinion and to avoid groupthink. You’re more likely to get groupthink if you have a leader and a team that everyone defers to and then they all anchor on whatever the leader’s opinion is. So, I try to form my own independent opinion.

But, I think some kind of a Delphi technique, where people come up with their own ideas and then share them and then revise their ideas, could be useful, and you could involve subject matter experts in that. I would love to be able to just sit and talk with epidemiologists about this stuff. I don’t know if they would love talking to me as much. But I think that would help us collectively produce better forecasts.
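The Delphi-style process Robert describes, independent estimates first, then revision after seeing the group, can be sketched in a few lines. The update rule here (each forecaster moving halfway toward the group median) and the numbers are simplifying assumptions, not a real elicitation protocol:

```python
from statistics import median

def delphi_round(estimates, weight=0.5):
    """One Delphi round: each forecaster revises partway toward the group median."""
    m = median(estimates)
    return [e + weight * (m - e) for e in estimates]

# Five forecasters' independent initial probability estimates for some event.
estimates = [0.10, 0.25, 0.30, 0.45, 0.80]

for round_number in range(1, 4):
    estimates = delphi_round(estimates)
    spread = max(estimates) - min(estimates)
    print(f"round {round_number}: median={median(estimates):.3f}, spread={spread:.3f}")
```

In this toy run the spread of opinions narrows each round while the median holds steady, mimicking how sharing and revising estimates can build consensus without everyone simply anchoring on a leader’s opinion.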

Lucas Perry: I am excited and hopeful for the top few percentage of superforecasters being integrated into more decision making about key issues. All right, so you have your own podcast.

Robert de Neufville: Yeah.

Lucas Perry: If people are interested in following you or looking into more of your work at the Global Catastrophic Risk Institute, for example, or following your podcast or following you on social media, where can they do that?

Robert de Neufville: Go to the Global Catastrophic Risk Institute’s website, gcrinstitute.org, so you can see and read about our work. It’s super interesting and I believe super important. We’re doing a lot of work now on artificial intelligence risk. There has been a lot of interest in that. But, we also talk about nuclear war risk, and there is going to be, I think, a new interest in pandemic risk. So these are things that we think about.

I also do have a podcast. I co-host it with two other superforecasters, and it sometimes becomes sort of a forecasting politics variety hour. But we have a good time and we do some interviews with other superforecasters, and we’ve also talked to people about existential risk and artificial intelligence. That’s called NonProphets. We have a blog, nonprophetspod.wordpress.org. It’s N-O-N-P-R-O-P-H-E-T-S, prophet as in someone who sees the future, because we are not prophets. However, there is also another podcast, which I’ve never listened to and feel like I should, that has the same name: an atheist podcast out of Texas with atheist comedians. I apologize for taking their name, but we’re not them, so if there is any confusion.

One of the things about forecasting is that it’s super interesting and a lot of fun, at least for people like me, to think about things in this way, and there are ways you can do it too, like Good Judgment Open. So we talk about that. It’s fun. And I recommend everyone get into forecasting.

Lucas Perry: All right. Thanks so much for coming on and I hope that more people take up forecasting. And it’s a pretty interesting lifelong thing that you can participate in and see how well you do over time and keep resolving over actual real world stuff. I hope that more people take this up and that it gets further and more deeply integrated into communities of decision makers on important issues.

Robert de Neufville: Yeah. Well, thanks for having me on. It’s a super interesting conversation. I really appreciate talking about this stuff.

FLI Podcast: Lessons from COVID-19 with Emilia Javorsky and Anthony Aguirre

The global spread of COVID-19 has put tremendous stress on humanity’s social, political, and economic systems. The breakdowns triggered by this sudden stress indicate areas where national and global systems are fragile, and where preventative and preparedness measures may be insufficient. The COVID-19 pandemic thus serves as an opportunity for reflecting on the strengths and weaknesses of human civilization and what we can do to help make humanity more resilient. The Future of Life Institute’s Emilia Javorsky and Anthony Aguirre join us on this special episode of the FLI Podcast to explore the lessons that might be learned from COVID-19 and the perspective this gives us for global catastrophic and existential risk.

Topics discussed in this episode include:

  • The importance of taking expected value calculations seriously
  • The need for making accurate predictions
  • The difficulty of taking probabilities seriously
  • Human psychological bias around estimating and acting on risk
  • The massive online prediction solicitation and aggregation engine, Metaculus
  • The risks and benefits of synthetic biology in the 21st Century

Timestamps: 

0:00 Intro 

2:35 How has COVID-19 demonstrated weakness in human systems and risk preparedness 

4:50 The importance of expected value calculations and considering risks over timescales 

10:50 The importance of being able to make accurate predictions 

14:15 The difficulty of trusting probabilities and acting on low probability high cost risks

21:22 Taking expected value calculations seriously 

24:03 The lack of transparency, explanation, and context around how probabilities are estimated and shared

28:00 Diffusion of responsibility and other human psychological weaknesses in thinking about risk

38:19 What Metaculus is and its relevance to COVID-19 

45:57 What is the accuracy of predictions on Metaculus and what has it said about COVID-19?

50:31 Lessons for existential risk from COVID-19 

58:42 The risk of synthetic bio enabled pandemics in the 21st century 

01:17:35 The extent to which COVID-19 poses challenges to democratic institutions

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is a special episode focused on lessons from COVID-19 with two members of the Future of Life Institute team, Anthony Aguirre and Emilia Javorsky. The ongoing coronavirus pandemic has helped to illustrate the frailty of human systems, the difficulty of international coordination on global issues, and our general underpreparedness for risk. This podcast is focused on what COVID-19 can teach us about being better prepared for future risk, from the perspective of global catastrophic and existential risk. The AI Alignment Podcast and the end of the month Future of Life Institute Podcast will be released as normally scheduled.

Anthony Aguirre has been on the podcast recently to discuss the ultimate nature of reality and problems of identity. He is a physicist who studies the formation, nature, and evolution of the universe, focusing primarily on the model of eternal inflation—the idea that inflation goes on forever in some regions of the universe—and what it may mean for the ultimate beginning of the universe and time. He is the co-founder and Associate Scientific Director of the Foundational Questions Institute and is also a Co-Founder of the Future of Life Institute. He also co-founded Metaculus, which is something we get into during the podcast, an effort to optimally aggregate predictions about scientific discoveries, technological breakthroughs, and other interesting issues.

Emilia Javorsky develops tools to improve human health and wellbeing and has a background in healthcare and research. She leads clinical research and works on the translation of science from academia to commercial settings at Arctic Fox, and is the Chief Scientific Officer and Co-Founder of Sundaily, as well as the Director of Scientists Against Inhumane Weapons. Emilia is an advocate for the safe and ethical deployment of technology, and is currently heavily focused on lethal autonomous weapons issues.

And with that, let’s get into our conversation with Anthony and Emilia on COVID-19. 

We’re here to try to get some perspective on COVID-19: how it is informative regarding issues of global catastrophic and existential risk, and the ways we can learn from this catastrophe to inform global catastrophic and existential risk thought. Just to start off then, what are ways in which COVID-19 has helped demonstrate weaknesses in human systems and preparedness for risk?

Anthony Aguirre: One of the most upsetting things, I think, to many people is how predictable it was and how preventable it was with sufficient care taken as a result of those predictions. It’s been known by epidemiologists for decades that this sort of thing was not only possible, but likely given enough time going by. We had SARS and MERS as kind of dry runs that almost were pandemics, but didn’t have quite the right characteristics. Everybody in the community of people thinking hard about this, and I would like to hear more of Emilia’s perspective on this, knew that something like this was coming eventually. That it might be a few percent probable each year, but after 10 or 20 or 30 years, you start to get a large probability of something like this happening. So it was known that it was coming eventually, and it was pretty well known what needed to happen to be well prepared for it.

And yet nonetheless, many countries have found themselves totally unprepared or largely unprepared and unclear on what exactly to do and making very poor decisions in response to things that they should be making high quality decisions on. So I think part of what I’m interested in doing is thinking about why has that happened, even though we scientifically understand what’s going on? We numerically model what could happen, we know many of the things that should happen in response. Nonetheless, as a civilization, we’re kind of being caught off guard in a way and making a bad situation much, much worse. So why is that happening and how can we do it better now and next time?

Lucas Perry: So in short, the ways in which this is frustrating is that it was very predictable and was likely to happen given computational models and then also, lived experience given historical cases like SARS and MERS.

Anthony Aguirre: Right. This was not some crazy thing out of the blue, this was just a slightly worse version of things that have happened before. Part of the problem, in my mind, is the sort of mismatch between the likely cost of something like this and how many resources society is willing to put into planning and preparing and preventing it. And so here, I think a really important concept is expected value. So, the basic idea that when you’re calculating the value of something that is unsure that you want to think about different probabilities for different values that that thing might have and combine them.

So for example, if I’m thinking I’m going to spend some money on something and there’s a 50% chance that it’s going to cost a dollar and a 50% chance that it’s going to cost $1,000, how much should I expect to pay for it? On one hand, I don’t know, it’s a 50/50 chance, it could be a dollar, it could be $1,000. But if I think I’m going to do this over and over again, you can ask how much I’m going to pay on average. And that’s 50% of a dollar plus 50% of $1,000, so about $500.50. The idea of thinking in terms of expected value is that when I have probabilities for something, I should think as if I’m going to do this thing many, many times, like I’m going to roll the dice many times, and I should reason in a way that makes sense if I’m going to do it a lot of times. So I’d want to expect that I’m going to spend something like $500.50 on this thing, even though that’s not either of the two possibilities.

So, if we’re thinking about a pandemic, if you imagine the cost just in dollars, let alone all the other things that are going to happen, but just purely in terms of dollars, we’re talking about trillions of dollars. So if this was something that is going to cost trillions and trillions of dollars and there was something like a 10% chance of this happening over a period of a decade, say, we should have been willing to pay hundreds and hundreds of billions of dollars to prevent this from happening or to dramatically decrease the cost when it does happen. And that is orders of magnitude more money than we have in fact spent on that.
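Anthony’s two calculations, the $1-or-$1,000 toy example and the pandemic-scale version, can be written out directly; the $5 trillion figure below is an assumed stand-in for his “trillions and trillions”:

```python
# Expected value: weight each possible cost by its probability and sum.
toy_outcomes = [
    (0.5, 1.0),     # 50% chance it costs $1
    (0.5, 1000.0),  # 50% chance it costs $1,000
]
expected_toy_cost = sum(p * cost for p, cost in toy_outcomes)
print(f"expected cost: ${expected_toy_cost:.2f}")  # $500.50

# The same logic at pandemic scale: a 10% chance per decade of an
# (assumed) $5 trillion loss implies a very large prevention budget.
p_pandemic_per_decade = 0.10
assumed_loss = 5e12  # $5 trillion, an illustrative figure
expected_loss = p_pandemic_per_decade * assumed_loss
print(f"expected loss per decade: ${expected_loss:,.0f}")
```

The point of the second calculation is that even a modest fraction of that expected loss, spent up front on preparedness, would have been a rational investment.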

So, part of the tricky thing is that people don’t generally think in these terms, they think of “What is the most likely thing?” And then they plan for that. But if the most likely thing is relatively cheap and a fairly unlikely thing is incredibly expensive, people don’t like to think about the incredibly expensive, unlikely thing, right? They think, “That’s scary. I don’t want to think about it. I’m going to think about the likely thing that’s cheap.” But of course, that’s terrible planning. You should put some amount of resources into planning for the unlikely incredibly expensive thing.

And it’s often the case, as it is here, that even a small fraction of the expected cost of this thing could have prevented the whole thing from happening, in the sense that there are going to be trillions and trillions of dollars of costs. It was anticipated as 10% likely, so it’s hundreds of billions of dollars that in principle society should have been willing to pay to prevent it from happening, but even a small fraction of that, in fact, could have really, really mitigated the problem. So it’s not even that we actually have to spend exactly the amount of money that we think we will lose in order to prevent something from happening.

Even a small fraction would have done. The problem is that we spend not even close to that. These sorts of situations where there’s a small probability of something extraordinarily costly happening, our reaction in society tends to be to just say, “It’s a small probability, so I don’t want to think about it.” Rather than “It’s a small probability, but the cost is huge, so I should be willing to pay some fraction of that small probability times that huge cost to prevent it from happening.” And I think if we could have that sort of calculation in mind a little bit more firmly, then we could prevent a lot of terrible things from happening at a relatively modest investment. But the tricky thing is that it’s very hard to take seriously those small probability, high cost things without really having a firm idea of what they are, what the probability of that happening is and what the cost will be.

Emilia Javorsky: I would add to that, in complete agreement with Anthony, that part of what is at issue here is needing to think over time scales. If something has a small probability at any given short-term horizon, but that probability rises to something more significant, with a tremendously high cost, over a longer time scale, you need to be willing to think on those longer time scales in order to act. From the perspective of medicine, this is something we’ve struggled with a lot, at the individual level, at the healthcare system level, and at the societal public health policy level: while we know it’s much cheaper to prevent a disease than to treat it, and the same goes for pandemic preparedness, a lot of the things we’re talking about were actually quite cheap mitigation measures to put in place. Right now, we’re seeing a crisis of personal protective equipment.

We’re talking about basic, cheap supplies like gloves and masks, and then national stockpiles of ventilators. These are very basic and very conserved across any pandemic type, right? We know that in all likelihood, when a pandemic arises, it is some sort of respiratory-borne illness. Things like masks and respirators are a very wise thing to stockpile and have on hand. Yet despite having several near misses, even in the very recent past, we’re talking about the past 20 years, there was not a critical will or a critical lobby or a critical voice that enabled us to take these very basic, relatively cheap measures to be prepared for something like this.

If you talk about something like vaccine development, that’s something you need to prepare pretty much in real time; that’s pathogen-specific. But where we’re fumbling to manage this epidemic today is on things that were totally basic, cheap and foreseeable. We really need to find ways in the here and now to motivate thinking on any sort of longer-term horizon. Not even 50 years or a hundred years down the line; even one to five years is something we struggle with.

Anthony Aguirre: To me, another surprising thing has been the sudden discovery of how important it is to be able to predict things. It’s of course always super important. This is what we do throughout our lives: we’re basically constantly predicting things, predicting the consequences of certain actions or choices we might make, and then making those choices dependent on which things we want to have happen. So we’re doing it all the time, and yet when confronted with this pandemic, we suddenly realize all the more how important it is to have good predictions, because what’s unusual about a situation like this is that all of the danger is in the future. If you look at any given time, you say, “Oh, there’s a couple of dozen cases here in my county, everything’s under control.” That’s unbelievably ineffective, wishful thinking, because of course the number of cases is growing exponentially, and by the time you notice that there’s any problem of significance at all, a day or a few days later it’s going to be twice as big.

So the fact that things are happening exponentially in a pandemic or an epidemic makes it incredibly vital that you have the ability to think about what’s going to happen in the future and how bad things can get quite quickly, even if at the moment everything seems fine. Everybody who thinks in this field, or who is just comfortable with how exponentials work, knows this intellectually, but it still isn’t always easy to get an intuitive feeling for it, because it seems like so not a big deal for so long, until suddenly it’s the biggest thing in the world.

This has been a particularly salient lesson that we really need to understand both exponential growth and how to do good projections and predictions about things, because there could be lots of things happening under the radar. Beyond the pandemic, there are lots of things growing exponentially, and if we don’t pay attention to the people pointing those things out, and just wait until they’re a problem, then it’s too late to do anything about them.

At the beginning stages, it’s quite easy to deal with. If we take ourselves back to sometime in late December or early January, there was a time when this pandemic could have easily been totally prevented by the actions of a few people, if they had just known exactly what the right things to do were. I don’t think you can totally blame people for that. It’s very hard to see what it would turn into, but there is a time at the beginning of the exponential where action is just so much easier, and every little bit of delay makes it incredibly harder to do anything about it. It really brings home how important it is to have good predictions about things, and how important it is to believe those predictions if you can and take decisive action early on to prevent exponentially growing things from really coming to bite you.
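The dynamic Anthony describes, where everything seems fine until it suddenly isn’t, falls out of even the simplest doubling model; the constant three-day doubling time and starting count here are assumptions chosen purely for illustration:

```python
# Toy exponential model: a couple dozen cases doubling every 3 days.
# For weeks the numbers look manageable, then they explode, which is why
# acting at the beginning of the exponential is so much cheaper.
initial_cases = 24
doubling_time_days = 3

for day in range(0, 31, 6):
    cases = initial_cases * 2 ** (day / doubling_time_days)
    print(f"day {day:2d}: ~{cases:,.0f} cases")
```

By day 30, the same process that produced a couple of dozen cases yields tens of thousands, even though nothing about the growth rate ever changed.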

Lucas Perry: I see a few central issues here and lessons from COVID-19 that we can draw on. The first is that this is something that was predictable and foreseeable, and that experts were saying had a high likelihood of happening, and the ways in which we failed were either that the global system lacks the kinds of incentives for private organizations or institutions to work towards mitigating these kinds of risks, or that people just aren’t willing to listen to experts making these kinds of predictions. The second thing seems to be that even when we do have these kinds of predictions, we don’t know how basic decision theory works and we’re not able to feel and intuit the reality of exponential growth sufficiently well. So what are very succinct ways of putting solutions to these problems?

Anthony Aguirre: The really hard part is having probabilities that you feel like you can trust. If you go to a policy maker and tell them there’s a danger of this thing happening, maybe it’s a natural pandemic, maybe it’s a human engineered pandemic or an AI-powered cyber attack, something that if it happens, is incredibly costly to society and you say, “I really think we should be devoting some resources to preventing this from happening, because I think there’s a 10% chance that this is going to happen in the next 10 years.” They’re going to ask you, “Where does that 10% chance come from?” And “Are you sure that it’s not a 1% chance or a 0.1% chance or a .00001% chance?” And that makes a huge difference, right? If something really is a tiny, tiny fraction of a percent likely, then that plays directly into how much effort you should put into preventing it if it has some fixed cost.

So I think the reaction that people often have to low probability, high cost things is to doubt exactly what the probability is and, having that doubt in their mind, just avoid thinking about the issue at all, because it’s so easy to not think about it if the probability is really small. A big part of it is really understanding what the probabilities are and taking them seriously. And that’s a hard thing to do, because it’s really, really hard to estimate what the probability of, say, a gigantic AI-powered cyber attack is. Where do you even start with that? It has all kinds of ingredients that there’s no model for, there’s no set quantitative assessment strategy for it. That’s part of the root of the conundrum: even for things like this pandemic that everybody knew was coming at some level, I would say nobody knew whether it was a 5% chance over 10 years or a 50% chance over 10 years.

It’s very hard to get firm numbers, so one thing I think we need are better ways of assessing probabilities of different sorts of low probability, high cost things. That’s something I’ve been working a lot on over the past few years in the form of Metaculus which maybe we can talk about, but I think in general, most people and policy makers can understand that if there’s some even relatively low chance of a hugely costly thing that we should do some planning for it. We do that all the time, we do it with insurance, we do it with planning for wars. There are all kinds of low probability things that we plan for, but if you can’t tell people what the probability is and it’s small and the thing is weird, then it’s very, very hard to get traction.

Emilia Javorsky: Part of this is how do we find the right people to make the right predictions and have the ingredients to model those out? But the other side of this is how do we get the policy makers and decision makers and leaders in society to listen to those predictions and to have trust and confidence in them? From that perspective, when you’re communicating something that is counterintuitive to how many people end up making decisions, there really has to be a foundation of trust there, where you’re telling me something that is counterintuitive to how I would think about decision making and planning in this particular problem space. And so, it has to be built on a foundation of trust. And I think one of the things that characterizes good models and good predictions is, exactly as you say, that they’re communicated with a lot of trepidation.

They explain what the different variables are that go into them, the uncertainty that bounds each of those variables, and an acknowledgement of what is known and unknown. And I think that’s very hard in today’s world, where information is always at maximum volume and it’s very polarized, and you’re competing against voices, whether they be in a policy maker’s ear or a CEO’s ear, that will speak in absolutes and in levels of certainty, overestimating or underestimating risk.

That is the element necessary for these predictions to have impact: how do you connect the ambiguous, qualified, and cautious language that characterizes these kinds of long term predictions with a foundation of trust, so that people can hear and appreciate them and you don’t get drowned out by the noise on either side, from voices that are likely to be much less well founded if they’re speaking in absolutes about problem spaces that we know have a tremendous amount of uncertainty.

Anthony Aguirre: That’s a very good point. Your mention of the unfamiliarity of these things is an important one in the sense that, as an individual, I can think of improbable things that might happen to me and they seem, well, that’s probably not going to happen to me, but I know intellectually it might, and I can look around the world and see that that improbable thing is happening to lots of people all the time. Even if there’s kind of a psychological barrier to my believing that it might happen to me, I can’t deny that it’s a thing and I can’t really deny what sort of probability it might have to happen to me, because I see it happening all around. Whereas when we’re talking about things that are happening to a country or a civilization, we don’t have a whole lot of statistics on them.

We can’t just say of all the different planets that are out there with civilizations like ours, 3% of them are undergoing pandemics right now. If we could do that then we could really count on those probabilities. We can’t do that. We can look historically at what happened in our world, but of course, since the world is really changing dramatically over the years, that’s not always such a great guide. So we’re left with reasoning by putting together scientific models, with all the uncertainties that you were mentioning that we have to feed into those sorts of models, or just other ways of making predictions about things through various means, and trying to figure out how we can have good confidence in those predictions. And this is an important point that you bring up: it’s not so much about certainty, because of all these complex things that we’re trying to predict about the possibility of good or bad things happening to our society as a whole, none can be predicted with certainty.

I mean, almost nothing in the world can be predicted with certainty, certainly not these things, and so it’s always a question of giving probabilities for things and both being confident in those probabilities and taking seriously what those probabilities mean. And as you say, people don’t like that. They want to be told what is going to happen or what isn’t going to happen and make a decision on that basis. That is unfortunately not information that’s available on most important things and so, we have to accept that they’re going to be probabilities, but then where do we get them from? How do we use them? There’s a science and an art to that I think, and a subtlety to it as you say, that we really have to get used to and get comfortable with.

Lucas Perry: There seem to be lots of psychological biases and problems around human beings understanding and fully integrating probabilistic estimations into our lives and decision making. I’m sure there’s literature that already exists on this, but it would be skillful I think to apply it to existential and global catastrophic risk. So, assuming that we’re able to sufficiently develop our ability to generate accurate and well-reasoned probabilistic estimations of risks, and Anthony, we’ll get into Metaculus shortly, you mentioned that the prudent and skillful thing to do would be to feed that into a proper decision theory. Could you explain a little bit more about the nerdy side of that if you feel it would be useful? In particular, you talked a little bit about expected value. Could you say a little bit more about how, if policy and government officials were able to get accurate probabilistic reasoning and then fed it into the correct decision theoretic models, it would produce better risk mitigation efforts?

Anthony Aguirre: I mean, there’s all kinds of complicated discussions and philosophical explorations of different versions of decision theory. We really don’t need to think about things in such complicated terms in the sense that what it really is about is just taking expected values seriously and thinking about actions we might take based on how much value we expect given each decision. When you’re gambling, this is exactly what you’re doing, you might say, “Here, I’ve got some cards in my hand. If I draw, there’s a 10% chance that I’ll get nothing and a 20% chance that I’ll get a pair and a tiny percent chance that I’ll fill out my flush or something.” And with each of those things, I want to think of, “What is the probable payoff when I have that given outcome?” And I want to make my decisions based on the expected value of things rather than just what is the most probable or something like that.

So it’s a willingness to quantitatively take into account, if I make decision A, here is the likely payoff of making decision A, if I make decision B, here’s the likely payoff that is the expected value of my payoff in decision B, looking at which one of those is higher and making that decision. So it’s not very complicated in that sense. There are all kinds of subtleties, but in practice it can be very complicated because usually you don’t know, if I make decision A, what’s going to happen? If I make decision B, what’s going to happen? And exactly what value can I associate with those things? But this is what we do all the time, when we weigh the pros and cons of things, we’re kind of thinking, “Well, if I do this, here are the things that I think are likely to happen. Here’s what I think I’m going to feel and experience and maybe gain in doing A, let me think through the same thing in my mind with B and then, which one of those feels better is the one that I do.”

So, this is what we do all the time on an intuitive level, but we can make a quantitative and systematic method of it, if we are more carefully thinking about what the actual numerical and quantitative implications of something are, and if we have actual probabilities that we can assign to the different outcomes in order to make our decision. All of this, I think, is quite well known to decision makers of all sorts. What’s hard is that often decision makers won’t really have those sorts of tools in front of them. They won’t have the ability to look at different possibilities, the ability to attribute probabilities and costs and payoffs to those things in order to make good decisions. So those are tools that we could put in people’s hands and I think would just allow people to make better decisions.
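The expected-value comparison described here can be made concrete in a short sketch; the two decisions, their probabilities, and the payoffs below are entirely made up, just to illustrate the arithmetic:

```python
# Minimal sketch of expected-value decision making. All probabilities
# and payoffs here are hypothetical, chosen only to illustrate the idea.
def expected_value(outcomes):
    """outcomes: (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# Decision A: do nothing; a 10% chance of a large loss remains.
decision_a = [(0.90, 0.0), (0.10, -1000.0)]
# Decision B: pay 50 up front for mitigation that cuts the risk to 1%.
decision_b = [(0.99, -50.0), (0.01, -1050.0)]

ev_a = expected_value(decision_a)  # expected loss of about 100
ev_b = expected_value(decision_b)  # expected loss of about 60
best = "B" if ev_b > ev_a else "A"
```

In this toy setup, mitigation wins even though its cost is certain and the catastrophe is unlikely, which is the basic logic behind spending now on low probability, high cost risks.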

Emilia Javorsky: And what I like about what you’re saying, Anthony, is that implicit in it is a standardized tool. The way you assign the probabilities and decide between different options is standardized. And I think one thing that can be difficult in the policy space is that different advocacy groups or different stakeholders will present data and assign probabilities based on different assumptions and vested interests, right? So, when a policy maker is making a decision, they’re using probabilities and estimates and outcomes that were developed using completely different models, with completely different assumptions and different biases and interests baked into them. What I think is so vital is to make sure, as best one can, knowing the inherent ambiguity that exists in modeling in general, that you’re making an apples to apples comparison when you’re assigning different probabilities and making decisions based off of them.

Anthony Aguirre: Yeah, that’s a great point that part of the problem is that people are just used to probabilities not meaning anything, because they’re often given without context, without explanation, and by groups that have a vested interest in them looking a certain way. If I ask someone what’s the probability that this thing is going to happen, and they tell me 17%, I don’t know what to do with that. Do I believe them? I mean, on what basis are they telling me 17%? In order for me to believe that, I have to either have an understanding of what exactly went into that 17% and really agree step-by-step with all their assumptions and modeling and so on, or maybe I have to believe them for some other reason.

Like they’ve provided probabilities for lots of things before, and they’ve given accurate probabilities for all these different things that they provided, so I kind of trust their ability to give accurate probabilities. But usually that’s not available. That’s part of the problem. Our general lesson has been if people are giving you probabilities, usually they don’t mean much, but that’s not always the case. There are probabilities we use all the time, like for the weather where we more or less know what they mean. You see that there’s a 15% chance of rain.

That’s a meaningful thing, and it’s meaningful both because you sort of trust that the weather people know what they’re doing, which they sort of do, and because it has a particular interpretation, which is that if I look at the weather forecast for a year and look at all the days where it said that there was a 15% chance of rain, on about 15% of all those days it will have been raining. There’s a real meaning to that, and those numbers come from a careful calibration of weather models for exactly that reason. When you get a 15% chance of rain from the weather forecast, what that generally means is that they’ve run a whole bunch of weather models with slightly different initial conditions and in 15% of them it’s raining today in your location.

They’re carefully calibrated, usually by the National Weather Service, so that it really is true that if you look at all the days with, whatever, a 15% chance, on about 15% of those days it was in fact raining. Those are probabilities that you can really use and you can say, “15% chance of rain, is it worth taking an umbrella? The umbrella is kind of annoying to carry around. Am I willing to take my chances at 15%? Yeah, maybe. If it was 30%, I’d probably take the umbrella. If it was 5%, I definitely wouldn’t.” That’s a number that you can fold into your decision theory because it means something. Whereas when somebody says, “There’s an 18% chance at this point that some political thing is going to happen, that some bill is going to pass,” maybe that’s true, but you have no idea where that 18% comes from. It’s really hard to make use of it.
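The calibration property described here, that 15% forecasts should verify about 15% of the time, can be checked with a simple script; the forecast history below is invented for illustration:

```python
# Sketch of a calibration check: among days forecast at a given
# probability of rain, how often did it actually rain? (made-up data)
def calibration(forecasts, outcomes, stated_prob, tol=0.05):
    """Return the empirical rain frequency among days forecast at
    `stated_prob`, and whether it falls within `tol` of that value."""
    rained = [o for f, o in zip(forecasts, outcomes) if f == stated_prob]
    freq = sum(rained) / len(rained)
    return freq, abs(freq - stated_prob) <= tol

# 20 hypothetical days all forecast at 15%; it rained on 3 of them.
forecasts = [0.15] * 20
outcomes = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
freq, well_calibrated = calibration(forecasts, outcomes, 0.15)
# freq == 0.15, so this toy forecaster is well calibrated at 15%
```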

Lucas Perry: Part of improving this, of getting prepared for risks, is better understanding and taking seriously the reasoning and reasons behind different risk estimations that experts or certain groups provide. You guys explained that there are many different vested interests or interest groups who may be biasing or framing percentages and risks in a certain way, so that policy and action can be directed towards things which may benefit them. Are there other facets to our failure to respond here, other than our inability to take risks seriously?

If we had a sufficiently good understanding of the probabilities and we were able to see all of the reasons behind the probabilities and take them all seriously, and then we took those and we fed them into a standardized and appropriate decision theory, which used expected value calculations and some agreed upon risk tolerance to determine how much resources should be put into mitigating risks, are there other psychological biases or weaknesses in human virtue that would still lead to us insufficiently acting on these risks? An example that comes to mind is maybe something like a diffusion of responsibility.

Emilia Javorsky: That’s very much what COVID-19 in many ways has played out to be, right? We kind of started this with the assumption that this was quite a foreseeable risk, and any which way you looked at the probabilities, it was a sufficiently high probability that basic levels of preparedness and a robustness of preparedness should have been employed. I think what you allude to in terms of diffusion of responsibility is certainly one aspect of it. It’s difficult to say where that decision-making fell apart, but we did hear very early on a lot of discussion that this was something that was a problem localized to China.

Anyone who has any familiarity with these models would have told you, “Based on the probabilities we already knew about, plus what we’re witnessing from this early data, which was publicly available in January, we had a pretty good idea of what was going on, that this would in all likelihood become global.” The next question becomes, why wasn’t anything done or acted on at that time? I think part of that comes with a lack of advocacy and a lack of having the ears of the key decision makers about what was actually coming. It is very, very easy when you have to make difficult decisions to listen to the vocal voices that tell you not to do something and provide reasons for inaction.

Then the voices of action are perhaps more muted, coming from a scientific community, spoken in language that’s not as definitive as the other voices in the room and the other stakeholders in the room that have a vested interest in policymaking. The societal incentives to act or not act aren’t just about what’s the best long-term course of action; they’re very, very much vested in what the loudest voices in the room are, what kind of clout and power they hold, and weighing those. I think there’s a very real political, social, and economic atmosphere that this happens in that dilutes some of the writing that was very clearly on the wall of what was coming.

Anthony Aguirre: I would add I think that it’s especially easy to ignore something that is predicted and quite understandable to experts who understand the dynamics of it, but unfamiliar or where historically you’ve seen it turn out the other way. Like on one hand, we had multiple warnings through near pandemics that this could happen, right? We had SARS and MERS and we had H1N1 and there was Ebola. All these things were clear indications of how possible it was for this to happen. But at the same time, you could easily take the opposite lesson, which is yes, an epidemic arises in some foreign country and people go and take care of it and it doesn’t really bother me.

You can easily take the lesson from that that the tendency of these things is to just go away on their own and the proper people will take care of them and I don’t have to worry about this. What’s tricky is understanding from the actual characteristics of the system and your understanding of the system what makes it different from those other previous examples. In this case, something that is more transmissible, transmissible when it’s not very symptomatic, yet has a relatively high fatality rate, not very high like some of these other things, which would have been catastrophic, but a couple of percent or whatever it turns out to be.

I think people who understood the dynamics of infectious disease and saw high transmissibility and potential asymptomatic transmission and a death rate that was much higher than the flu immediately put those three things together and saw, oh my god, this is a major problem and a little bit different from some of those previous ones that had a lower fatality rate or were very, very obviously symptomatic when they were transmissible, and so it was much easier to quarantine people and so on. Those characteristics you can understand if you’re trained for that sort of thing to look for it, and those people did, but if not, you just sort of see it as another far away disease in a far off land that people will take care of and it’s very easy to dismiss it.

I think it’s not really a failure of imagination, but a failure to take seriously something that could happen that is perfectly plausible just because something like it hasn’t really happened like that before. That’s a very dangerous one I think.

Emilia Javorsky: It comes back to human nature sometimes and the frailty of our biases and our virtue. It’s very easy to convince yourself of, and recall, examples where things did not come to pass, because dealing with the reality of the negative outcome that you’re looking at, even if it looks like it has a fairly high probability, is something that is innately aversive for people, right? We look at negative outcomes and we look for reasons that those negative outcomes will not come to pass.

It’s easy to say, “Well, yes, it’s only let’s say a 40% probability and we’ve had these before,” and it becomes very easy to identify reasons and not look at a situation completely objectively as to why the best course of action is not to take the kind of drastic measures that are necessary to avoid the probability of the negative outcome, even if you know that it’s likely to come to pass.

Anthony Aguirre: It’s even worse: when people do see something coming and take significant action and mitigate the problem, they rarely get the sort of credit that they should.

Emilia Javorsky: Oh, completely.

Anthony Aguirre: Because you never see the calamity unfold that they avoided.

Emilia Javorsky: Yes.

Anthony Aguirre: The tendency will be, “Oh, you overreacted,” or “oh, that was never a big problem in the first place.” It’s very hard to piece together after the fact. Take Y2K: I think it’s still unclear, at least to me, what exactly would have happened if we hadn’t made a huge effort to mitigate it. There are many similar other things where it could be that there really was a calamity there and we totally prevented it by just being on top of it and putting a bunch of effort in, or it could be that it wasn’t that big of a deal, and it’s very, very hard to tell in retrospect.

That’s another unfortunate bias that if we could see the counterfactual world in which we didn’t do anything about Y2K and saw all this terrible stuff unfold, then we could make heroes out of the people that put all that effort in and sounded the warning and did all the mitigation. But we don’t see that. It’s rather unrewarding in a literal sense. It’s just you don’t get much reward for preventing catastrophes and you get lots of blame if you don’t prevent them.

Emilia Javorsky: This is something we deal with all the time on the healthcare side of things. This is why preventative health and public health and basic primary care really struggle to get the funding and the attention that they need. It’s exactly this. Nobody cares about the disease that they didn’t get, the heart attack they didn’t have, the stroke that they didn’t have. For those of us that come from a public health background, it’s been kind of a collective banging our heads against the wall for a very long time, because we know looking at the data that this is the best way to take care of population level health.

Yet knowing that and having the data to back it up, it’s very difficult to get the attention across all levels of the healthcare system, from getting the individual patient on board all the way up to how we fund healthcare research in the US and abroad.

Lucas Perry: These are all excellent points. What I’m seeing from everything that you guys said is, to go back to what Anthony said quite a while ago, there is a kind of risk exceptionalism where we feel that our country or ourselves won’t be exposed to catastrophic risks. It’s other people’s families who lose someone in a car accident, but not mine, even though the risk of that is fairly high. A second kind of bias going on is that acting to mitigate risk based on pure reasoning alone seems to be very difficult, especially when the intervention to mitigate the risk is very expensive, because it requires a lot of trust in the experts and the reasoning that goes behind it, like spending billions of dollars to prevent the next pandemic.

It feels more tangible and intuitive now, but maybe for people of newer generations it felt a little bit more silly and would have had to be more of a rational cognitive decision. Then the last thing here seems to be that there’s an asymmetry between different kinds of risks. If someone prevents a pandemic from happening, it’s really hard to appreciate how good a thing that was to do, but that seems to not be true of all risks, for example risks where the danger actually just exists somewhere, like in a lab or a nuclear missile silo. People like Stanislav Petrov and Vasili Arkhipov are easy to appreciate, just because there was a concrete event, there was a big dangerous thing, and they stopped it from happening.

It seems also skillful here to at least distinguish which kinds of risks are the kinds where, if we prevent them, we can notice that we did, versus the kinds where, if we stop them from happening, we can’t even notice that we stopped them, and to adjust our attitude toward each kind accordingly. Let’s focus in then on making good predictions. Anthony, earlier you brought up Metaculus. Could you explain what Metaculus is, what it’s been doing, and how it’s been involved in COVID-19?

Anthony Aguirre: Metaculus is at some level an effort to deal with precisely the problem that we’ve been discussing: it’s difficult to make predictions and it’s difficult to have a reason to trust predictions, especially when they’re probabilistic ones about complicated things. The idea of Metaculus is sort of twofold or threefold, I would say. One part of it is that it’s been shown through the years, and this is work by Tetlock and The Good Judgment Project and a whole series of projects within IARPA, the Intelligence Advanced Research Projects Activity, that groups of people making predictions about things, with those predictions carefully combined, can often make better predictions than even small numbers of experts. There tend to be kinds of biases on different sides.

If you carefully aggregate people’s predictions, you can at some level wash out those biases. As well, making predictions is something that some people are just really good at. It’s a skill that varies person to person and can be trained. There are people who are just really good at making predictions across a wide range of domains. Sometimes in making a prediction, general prediction skill can trump actual subject matter expertise. Of course, it’s good to have both if you possibly can, but lots of times experts have a huge understanding of the subject matter.

But if they’re not actually practiced or trained or spend a lot of time making predictions, they may not make better predictions than someone who is really good at making predictions, but has less depth of understanding of the actual topic. That’s something that some of these studies made clear. The idea of combining those two is to create a system that solicits predictions from lots of different people on questions of interest, aggregates those predictions, and identifies which people are really good at making predictions and kind of counts their prediction and input more heavily than other people.

So that if someone has a years-long track record of over and over again making good predictions about things, they have a tremendous amount of credibility, and that gives you a reason to think that they’re going to make good predictions about things in the future. If you take lots of people, all of whom are good at making predictions in that way, and combine their predictions together, you’re going to get something that’s much, much more reliable than just someone off the street or even an expert making a prediction in a one-off way about something.
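One simple way to picture this kind of track-record-weighted combination is to pool forecasts in log-odds space, weighting each forecaster by a score reflecting past accuracy. To be clear, this is an illustrative sketch, not Metaculus’s actual aggregation algorithm, and the forecasts and weights below are hypothetical:

```python
import math

# Illustrative reputation-weighted pooling of probability forecasts.
# Not Metaculus's real algorithm; all numbers here are made up.
def logit(p):
    """Map a probability to log-odds."""
    return math.log(p / (1 - p))

def aggregate(predictions, weights):
    """Weighted mean of log-odds, mapped back to a probability."""
    z = sum(w * logit(p) for p, w in zip(predictions, weights)) / sum(weights)
    return 1 / (1 + math.exp(-z))

preds = [0.10, 0.25, 0.20]  # three forecasters' probabilities
weights = [3.0, 1.0, 2.0]   # better track record => larger weight
combined = aggregate(preds, weights)
# combined lands between the forecasts, pulled toward the heavier weights
```

Pooling in log-odds rather than averaging raw probabilities is a common design choice because it treats extreme, confident forecasts more symmetrically.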

That’s one aspect of it: identify good predictors, have them accrue a very objective track record of being right, and then have them in aggregate make predictions about things that are just going to be a lot more accurate than other methods you can come up with. Then the second thing, and it took me a long time to really see the importance of this, but I think our earlier conversation has kind of brought it out, is that you have a single system, a single consistent set of predictions and checks on those predictions. Metaculus is a system that has many, many questions that have had predictions made on them and have resolved, that is, been checked against what actually happened.

What you can do then is start to understand what it means when Metaculus as a system says that there’s a 10% chance of something happening. You can really say of all the things on Metaculus that have a 10% chance of happening, about 10% of those actually happen. There’s a meaning to the 10%, which you can understand quite well: if you were to go to Metaculus and make bets based on a whole bunch of predictions that were on it, you would know that the 10% predictions on Metaculus come true about 10% of the time, and you can use those numbers in actually making decisions. Whereas when you go to some random person and they say, “Oh, there’s a 10% chance,” as we discussed earlier, it’s really hard to know what exactly to make of that, especially if it’s a one-off event.

The idea of Metaculus was to both make a system that makes highly accurate predictions as best as possible, but also a kind of collection of events that have happened or not happened in the world that you can use to ground the probabilities and give meaning to them, so that there’s some operational meaning to saying that something on the system has a 90% chance of happening. This has been going on since about 2014 or ’15. It was born basically at the same time as the Future of Life Institute actually for very much the same reason, thinking about what can we do to positively affect the future.

In my mind, I went through exactly the reasoning of: if we want to positively affect the future, we have to understand what’s going to happen in probabilistic terms and how to think about what we can decide now and what sort of positive or negative effects that will have. To do that, you need predictions and you need probabilities. That got me thinking about how we could generate those. What kind of system could give us the sorts of predictions and probabilities that we want? It’s now grown pretty big. Metaculus now has 1,800 questions that are live on the site and 210,000 predictions on them, on the order of a hundred predictions per question.

The questions are all manner of things from who is going to be elected in some election to will we have a million residents on Mars by 2052, to what will the case fatality rate be for COVID-19. It spans all kinds of different things. The track record has been pretty good. Something that’s unusual in the world is that you can just go on the site and see every prediction that the system has made and how it’s turned out and you can score it in various ways, but you can get just a clear sense of how accurate the system has been over time. Each user also has a similar track record that you can see exactly how accurate each person has been over time. They get a reputation and then the system folds that reputation in when it’s making predictions about new things.
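A per-user track record like the one described here is typically scored with a proper scoring rule; the Brier score is a standard example. This is a generic illustration of such scoring, not a claim about how Metaculus computes its own scores, and the forecast history below is invented:

```python
# Scoring a forecaster's track record with the Brier score: the mean
# squared error between forecast probabilities and 0/1 outcomes.
# Lower is better; 0 is perfect, and always guessing 50% earns 0.25.
def brier_score(probabilities, outcomes):
    pairs = list(zip(probabilities, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# Hypothetical resolved questions: forecasts vs what actually happened.
forecasts = [0.9, 0.2, 0.7, 0.1]
outcomes = [1, 0, 1, 0]
score = brier_score(forecasts, outcomes)  # close to 0: a strong record
```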

With COVID-19, as I mentioned earlier, lots of people suddenly realized that they really wanted good predictions about things. We've had a huge influx of people and interest in the site focused on the pandemic. That suggested to us that this was something people were really looking for and that it was helpful to them, so we put a bunch of effort into creating a standalone subset of Metaculus called pandemic.metaculus.com that hosts just COVID-19 and pandemic related things. That has 120 questions or so live on it now, with 23,000 predictions on them: how many cases and deaths there will be, what sorts of medical interventions might turn out to be useful, when a lockdown in a certain place will be lifted. Of course, all these things are unknowable.

But again, the point here is to get a best estimate of the probabilities that can be folded into planning. I also find that even when it's not a predictive thing, it's quite useful as just an information aggregator. For example, one of the really frustratingly hard to pin down things in the COVID-19 pandemic is the infection or case fatality rate: the ratio of fatalities to the total number of identified cases, or symptomatic cases, or infections. Those estimates really are all over the place. There's a lot of controversy right now about whether that's more like 2% or more like 0.2% or even less; there are people advocating views like that. It's a little bit surprising that it's so hard to pin down, but that's all tied up in the prevalence of testing, asymptomatic cases, and all these sorts of things.

Even for something like that, which is less a prediction, since it's a quantity that exists now and has some value, having a central aggregation place for people to discuss, compare, and argue, and then make numerical estimates of this rate, with a specific way to put in their numbers and compare and combine them, I think is a really useful service.

Lucas Perry: Can you say a little bit more about the efficacy of the predictions? Like for example, I think that you mentioned that Metaculus predicted COVID-19 at a 10% probability?

Anthony Aguirre: Well, somewhat amusingly, somewhat tragically, I guess, there was a series of questions on Metaculus about pandemics in general long before this one happened. In December, one of those questions closed, that is, no more predictions could be made on it. That question was: will there be a naturally spawned pandemic leading to at least a hundred million reported infections or at least 10 million deaths in a 12 month period by the end of 2025? The probability given to that on Metaculus was 36%, a surprisingly high number. We now know that it was more like 100%, but of course we didn't know that at the time. Still, I think that was a much higher number than a fair number of people would have given it, and certainly a much higher number than we were taking into account in our decisions. If anyone in a position of power had really believed that there was a 36% chance of that happening, that would have led, as we discussed earlier, to very different actions being taken. So that's one particular question that I found interesting, but I think the more interesting thing really is to look across a very large number of questions at how accurate the system is overall, and then to have a way to say that there's a meaning to the probabilities generated by the system, even for things that are only going to happen once and never again.

Like there’s just one time that chloroquine is either going to work or not work. We’re going to discover that it does or that it doesn’t. Nonetheless, we can usefully take probabilities from the system predicting it, that are more useful than probabilities you’re going to get through almost any other way. If you ask most doctors what’s the probability that chloroquine is going to turn out to be useful? They’ll say, “Well we don’t know. Let’s do the clinical trials” and that’s a perfectly good answer. That’s true. We don’t know. But if you wanted to make a decision in terms of resource allocation say, you really want to know how is it looking, what’s the probability of that versus some other possible things that I might put resources into. Now in this case, I think we should just put resources into all of them if we possibly can because it’s so important that it makes sense to try everything.

But you can imagine lots of cases where there would be a finite set of resources and even in this case there is a finite set of resources. You might want to think about where are the highest probability things and you’d want numbers ideally associated with those things. And so that’s the hope is to help provide those numbers and more clarity of thinking about how to make decisions based on those numbers.

Lucas Perry: Are there things like Metaculus for experts?

Anthony Aguirre: Well, I would say that it is already for experts in that we certainly encourage people with subject matter expertise to be involved and often they are. There are lots of people who have training in infectious disease and so on that are on pandemic.metaculus and I think hopefully that expertise will manifest itself in being right. Though as I said, you could be very expert in something but pretty bad at making predictions on it and vice versa.

So I think there’s already a fairly high level of expertise, and I should plug this for the listeners. If you like making or reading predictions and having in depth discussions and getting into the weeds about the numbers. Definitely check this out. Metaculus could use more people making predictions and making discussion on it. And I would also say we’ve been working very hard to make it useful for people who want accurate predictions about things. So we really want this to be helpful and useful to people and if there are things that you’d like to see on it, questions you’d like to have answered, capabilities whatever. The system is there, ask for those, give us feedback and so on. So yeah, I think Metaculus is already aimed at being a system that experts in a given topic would use but it doesn’t base its weightings on expertise.

We might fold this in at some point if it proves useful, but at the moment it doesn't say, oh, you've got a PhD in this so I'm going to triple the weight that I give to your prediction. It doesn't do that. Your PhD should hopefully manifest itself in your being right, and then that would give you extra weight. That's less useful, though, for something that is brand new, like when we have lots of new people coming in and making predictions. There it might be useful to fold in some weighting according to their credentials or expertise, or to create some other system where they can exhibit that on the site: say, "Here I am, I'm such and such an expert. Here's my model, here are the details, here's the published paper. This is why you should believe me." That might influence other people to believe their prediction more and use it to inform their own prediction, and therefore it could end up having a lot of weight. We're thinking about systems like that, which could add to the pure reputation based system we have now.
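The reputation based weighting described above can be sketched very simply: weight each forecaster's number by how accurate they have been in the past. The inverse-error weighting below is invented for illustration; Metaculus's real model is more sophisticated and is not public in this form:

```python
def weighted_consensus(predictions) -> float:
    """predictions: list of (probability, past_avg_error) pairs,
    where past_avg_error is in (0, 1], lower meaning historically
    more accurate. Returns an error-weighted average probability."""
    weights = [1.0 / err for _, err in predictions]
    total = sum(weights)
    return sum(p * w for (p, _), w in zip(predictions, weights)) / total


# Two historically accurate forecasters saying 30% outweigh one
# historically poor forecaster saying 90%:
preds = [(0.3, 0.05), (0.3, 0.05), (0.9, 0.5)]
print(round(weighted_consensus(preds), 3))  # 0.329
```

The point of the design choice is visible in the example: credentials never enter, only demonstrated accuracy, so "your PhD should manifest itself as being right" falls out automatically.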

Lucas Perry: All right. Let's talk about this from a higher level: from the view of people who are interested and work in global catastrophic and existential risks, and the kinds of broader lessons that we're able to extract from COVID-19. For example, from the perspective of existential risk minded people, we can appreciate how disruptive COVID-19 is to human systems like the economy and the healthcare system, but it's not a tail risk and its severity is quite low. The case fatality rate is somewhere around a percent, plus or minus 0.8% or so, and it's still completely shutting down economies. So it almost makes one feel worse and more worried about something which is just a little bit more deadly or a little bit more contagious. One framing of this is the lesson of the fragility of human systems: the world is dangerous and we lack resilience.

Emilia Javorsky: I think it comes back to part of the conversation on how we make decisions, and how decisions are made as a society: one part being looking at information and assessing it, and the other part being experience. And past experience really does steer how we think about attacking certain problem spaces. We have had near misses, but we've gone through quite a long period of time where we haven't had anything like this, in the case of pandemics or in other categories of risk, sufficient to disturb society in this way. And I think that there is some silver lining here, in that people now acutely understand the fragility of the system that we live in, and how something like the COVID-19 pandemic can have such profound levels of disruption. On the spectrum of the types of risks that we're assessing and talking about, this would be on the milder end.

And so I do think that there is an opportunity potentially here, where people now unfortunately have had the experience of seeing how severely life can be disrupted, how quickly our systems break down, and the absence of fail-safes and resilience baked into them to deal with these sorts of things. From one perspective I can see how you would feel worse. From another perspective, I definitely think there's a conversation to have, and a chance to start taking seriously some of the other risks that fall into the category of being catastrophic on a global scale and not entirely remote in terms of their probabilities, now that people are really listening and paying attention.

Anthony Aguirre: The risk of a pandemic has probably been going up with population density and people pushing into animal habitats and so on, but maybe not dramatically increasing with time. Whereas there are other things, like a deliberately or accidentally human-caused pandemic, where people have deliberately taken a pathogen and made it more dangerous in one way or another. And there are risks, for example, in synthetic biology, where things that would never have occurred naturally can be designed by people. These are risks and possibilities that I think are growing very, very rapidly, because the technology is growing so rapidly, and they may therefore be very, very underestimated when we base our risk estimates on frequencies of things happening in the past. This really gets worse the more you think about it, because a naturally occurring thing can already be this devastating, and when you talk to people in infectious disease about what in principle could be made, there are all kinds of nasty properties of different pathogens that, if combined, would be something really, really terrible. Nature wouldn't necessarily combine them like that; there's no particular reason to. But humans could.

Then you really open up really, really terrifying scenarios. I think this does drive home, in an intuitive, very visceral way, that we're not somehow magically immune to those things happening, and that there isn't necessarily some amazing system in place that's just going to prevent or stop those things if they get out into the world. We've seen containment fail. We should ask what this lesson tells us about what we should be doing and what we should be paying more attention to, and I think it's something we really, really urgently need to discuss.

Emilia Javorsky: So much of the cultural psyche that we've had around these types of risks has focused primarily on bad actors. When we talk about the risks that arise from pandemics and tools like genetic engineering and synthetic biology, we hear a lot about bad actors and the risks of bioterrorism. But what you're discussing, and I think rightly highlighting, is that there doesn't have to be any sort of ill will baked into these kinds of risks for them to occur. There can just be sloppy science, or science with inadequate safety engineering. I think that's something people are starting to appreciate now that we're experiencing a naturally occurring pandemic where there's no actor to point to. There's no ill will, there's no enemy, so to speak, which is how so much of the pandemic conversation has happened up until this point, and other risks as well, where everyone assumes that it's some sort of ill will.

When we talk about nuclear risk, people generally think about the risk of a nuclear war starting. Well, we know that the risk of nuclear war and the risk of nuclear accident are two very different things, and it's accidental risk that is much more likely to be devastating than purposeful initiation of some global nuclear war. So I think that's important too: just getting an appreciation that these things can happen, whether naturally occurring or, when we think about emerging technologies, through a failure to understand, appreciate, and engage in the precautions and safety measures that are needed when dealing with largely unknown science.

Anthony Aguirre: I completely agree with you, while also worrying a little bit that our human tendency is to react more strongly against things that we see as deliberate. If you look at the numbers of people who have died in terrorist attacks, say, they're tiny compared to many, many other causes. And yet as a society we feel very threatened, and we have spent incredible amounts of energy and resources protecting ourselves against those sorts of attacks. So there's some way in which we tend, for some reason, to take much more seriously problems and attacks that are willful, where we can identify a wrongdoer, an enemy.

So I’m not sure what to think. I totally agree with you that there are lots of problems that won’t have an enemy to be fighting against. Maybe I’m agreeing with you that I worry that we’re not going to take them seriously for that reason. So I wonder in terms of pandemic preparedness, whether we shouldn’t keep emphasizing that there are bad actors that could cause these things just because people might pay more attention to that, whereas they seem to be awfully dismissive of the natural ones. I’m not sure how to think about that.

Emilia Javorsky: I actually think I’m in complete agreement with you, Anthony, that my point is coming from perhaps misplaced optimism that this could be an inflection point in that kind of thinking.

Anthony Aguirre: Fair enough.

Lucas Perry: I think that what we like to do is actually just declare war on everything, at least in America. So maybe we'll have to declare a war on pathogens or something, and then people will have an enemy to fight against. So, continuing here on trying to consider what lessons the coronavirus situation can teach us about global catastrophic and existential risks: we have an episode with Toby Ord coming out tomorrow, at the time of this recording. In that conversation, global catastrophic risk was defined as something which kills 10% of the global population. Coronavirus is definitely not going to do that, via either its direct or its indirect effects. But there is a real class of risks which are far more deadly and widely impacting than COVID-19, and one of these, which I'd like to pivot into now, is what you both just mentioned briefly: the risk of synthetic bio.

So that would be like AI enabled synthetic biology. So pathogens or viruses which are constructed and edited in labs via new kinds of biotechnology. Could you explain this risk and how it may be a much greater risk in the 21st century than naturally occurring pandemics?

Emilia Javorsky: I think what I would separate out is synthetic biology versus genetic engineering. There are definitely tools we can use to intervene in pathogens that we already know exist, and one can foresee, thinking down the bad actor train of thought, how you could intervene in those to increase their lethality or their transmissibility. The other, more unexplored side, which you alluded to as being AI enabled (it can be enabled by AI, or by human intelligence), is synthetic biology: creating life forms nucleotide by nucleotide. We now have the capacity to really design DNA, to design life, in ways we previously just could not. There's certainly a pathogen angle to that, but there's also a tremendously unknown element.

We could end up creating life forms that, as human designers of life, we would never intuitively anticipate. And so what are the risks posed by potentially entirely new classes of pathogens that we have not encountered before? Whether we're talking about tools for intervening in pathogens that already exist and changing their characteristics, or about creating designer ones from scratch, there's also the question of just how cheap and ubiquitous these technologies have become. They're far more accessible in terms of how cheap they are, how available they are, and the level of expertise required to work with them. That aspect of being a highly accessible, dangerous technology also changes how we think about the risk.

Anthony Aguirre: Unfortunately, it seems not hard for me, or I think anyone, and unfortunately not for the biologists that I've talked to either, to imagine pathogens that are just categorically worse than the sorts of things that have happened naturally. With HIV and AIDS, it took us decades and we still don't have a vaccine, and that's something that was able to spread quite widely before anyone even noticed that it existed. So you can imagine awful combinations: long asymptomatic transmission, terrible consequences, and difficulty of any kind of countermeasure, deliberately combined into something that would be really, really orders of magnitude more terrible than the things we've experienced. It's hard to imagine why someone would do that, but there are lots of things that are hard to imagine that people nonetheless do, unfortunately. I think everyone who has thought much about this agrees that it's just a huge problem: potentially the sort of super pathogen that could in principle wipe out a significant fraction of the world's population.

What is the cost associated with that? The value of the world is hard to even know how to calculate; it is just a vast number.

Lucas Perry: Plus the deep future.

Emilia Javorsky: Right.

Anthony Aguirre: I suppose there’s a 0.01% chance of someone developing something like that in the next 20 years and deploying it. That’s a really tiny chance, probably not going to happen, but when you multiply it by quadrillions of dollars, that still merits a fairly large response because it’s a huge expected cost. So we should not be putting thousands or hundreds of thousands or even millions of dollars into worrying about that. We really should be putting billions of dollars into worrying about that, if we were running the numbers even within an order of magnitude correctly. So I think that’s an example where our response to a low probability, high impact threat is utterly, utterly tiny compared to where it should be. And there are some other examples, but that’s one of those ones where I think it would be hard to find someone who would say that that isn’t 0.1 or even 1% likely over the next 20 years.

But if you really take that seriously, we should be doing a ton about this, and we're just not. It takes a fair amount of work to look at many such examples (there are not a huge number, but there are enough), and that's part of what the Future of Life Institute is here to do. I'm looking forward to hearing your interview with Toby Ord as well, along those lines. We really should be taking these things more seriously as a society. And we don't have to put in the "right" amount of money, in the sense that if something is 1% likely we don't have to put in 1% of a quadrillion dollars, because fortunately it's way, way cheaper to prevent these things than to actually deal with them. But at some level, money should be no object when it comes to making sure that our entire civilization doesn't get wiped out.
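The back-of-the-envelope arithmetic in that argument can be made explicit. The specific figures here are the speaker's illustrative guesses (a 0.01% chance, "quadrillions" read as $1 quadrillion), not estimates of ours:

```python
# Expected cost = probability of the catastrophe x value destroyed.
probability = 0.0001        # 0.01% chance over 20 years (illustrative)
value_at_risk = 1e15        # $1 quadrillion (illustrative)

expected_cost = probability * value_at_risk
print(f"${expected_cost:,.0f}")  # $100,000,000,000
```

So even a one-in-ten-thousand chance carries an expected cost of roughly $100 billion, which is why a billions-scale prevention effort is defensible, and a millions-scale one is plainly too small.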

We can take as a lesson from this current pandemic that terrible things do happen, even if nobody, or almost nobody, wants them to; that they can easily outstrip our ability to deal with them after they've happened, particularly if we haven't correctly planned for them; but also that we are at a place in world history where we can see them potentially coming and do something about it. When we're stuck at home thinking about how, in a terrible scenario, 1% or even a few percent of our citizens could be killed by this disease, I think back to what it must have been like in the Middle Ages, when a third of Europe was destroyed by the Black Death and they had no idea what was going on. Imagine how terrifying that was. As bad as it is now, we're not in that situation. We know, at some level, exactly what's going on. We know what we can do to prevent it, and there's no reason why we shouldn't be doing that.

Emilia Javorsky: Something that keeps me up at night about these scenarios is that prevention is really the only strategy that has a good shot at being effective, because we see how long it takes us to even begin to understand the consequences of a new pathogen on the human body, never mind to figure out how to intervene; I take your HIV example as being a great one. We are at the infancy of our understanding of human physiology, and even more so of how to intervene in it. And when you see the strategies that are happening today with vaccine development, we know approximately how long that takes. A lot of that's driven by the need for clinical studies. We don't have good models to predict how things perform in people. That's on the vaccine side, and it's also on the therapeutic side.

This is why clinical trials are long and expensive and still fail at quite a late stage. Even when we get to the point of knowing that something works in a Petri dish, and then a mouse, and then an early pilot study, that drug can still fail its efficacy endpoint in a phase three clinical study. That's quite common, and it's part of what drives up the cost of drug development. And so from my perspective, having come from the human biology side, it strikes me that medical knowledge is progressing quickly, but not at a revolutionary pace, and it's dwarfed by the rate of progress in some of these other domains, be it AI or synthetic biology. So I'm just not confident that our field will move fast enough to deal with an entirely novel pathogen if it comes 10, 20, even 50 years down the road. Personally, what motivates me and gets me really passionate is thinking about these issues and mitigation strategies today, because I think that is the best place for our efforts at the moment.

Anthony Aguirre: One thing that’s encouraging I would say about the COVID-19 pandemic is seeing how many people are working so quickly and so hard to do things about it. There are all kinds of components to that. There’s vaccine and antivirals and then all of the things that we’re seeing play out are inventions that we’ve devised to fight against this new pathogen. You can imagine a lot of those getting better and more effective and some of them much more effective so you can in principle, imagine really quick and easy vaccine development, that seems super hard.

But you can imagine testing: if there were little DNA sequencers all over the place that could just sequence whatever pathogens are around in the air or in a person and spit out a list of what's there, that would be an enormous extra tool in our toolkit. You can also imagine things that I suspect are coming in the current crisis, because they exist in other countries and probably will exist with us: something where, if I am tested and either have or don't have an infection, that result goes into a hopefully, but not necessarily, privacy preserving and encrypted database, and is then coordinated and shared in some way with other people, so that the system as a whole can assess that the risk of the people I've been in contact with has gone up, and they might be notified. They might be told, "Oh, you should get a test this week instead of next week," or something like that.

So you can imagine the huge amounts of data that are gathered on people now, as part of our modern, somewhat sketchy online ecosystem, being used for this purpose. I think they probably will be, if we can do so in a way that we actually feel comfortable with. If I had a system where I felt like I could share my personal health data and trust the system to respect my privacy and my interests, and to be a good fiduciary, like a doctor would, keeping my interests paramount, of course I'd be happy to share that information, and in return get useful information from the system.

So I think lots of people would want to buy into that, if they trusted the system. We’ve unfortunately gotten to this place where nobody trusts anything. They use it, even though they don’t trust it, but nobody actually trusts much of anything. But you can imagine having a trusted system like that, which would be incredibly useful for this sort of thing. So I’m curious what you see as the competition between these dangers and the new components of the human immune system.

Emilia Javorsky: I am largely in agreement that in the very short term, we have technologies available today, the system you just described being one of them, that can deal with this issue of data and of understanding who, what, when, and where these symptoms and infections are. We could make so much smarter decisions as a society, and could really have prevented a lot of what we're seeing today, if such a system were in place. That system could be enabled by the technology we have today; it doesn't require any kind of advance in science and technology to put in place. It requires, perhaps, advances in trust in society, but that's not a technology problem. I do think that's something there will be a will to do after the dust settles on this particular pandemic.

I think where I’m most concerned is actually our short term future, because some of the technologies we’re talking about, genetic engineering, synthetic biology, will ultimately also be able to be harnessed to be mitigation strategies for the kinds of things that we will face in the future. What I guess I’m worried about is this gap between when we’ve advanced these technologies to a place that we’re confident that they’re safe and effective in people, and we have the models and robust clinical data in place to feel comfortable using them, versus how quickly the threat is advancing.

So I think in my vision towards the longer term future, maybe on the 100 year horizon, which is still relatively very short, beyond that I think there could be a balance between the risks and the ability to harness these technologies to actually combat those risks. I think in the shorter term future, to me there’s a gap between the rate at which the risk is increasing because of the increased availability and ubiquity of these tools, versus our understanding of the human body and ability to harness these technologies against those risks.

So for me, I think there’s total agreement that there’s things we can do today based on data and tesingt, and rapid diagnostics. We talk a lot about wearables and how those could be used to monitor biometric data to detect these things before people become symptomatic, those are all strategies we can do today. I think there’s longer term strategies of how we harness these new tools in biology to be able to be risk mitigators. I think there’s a gap in between there where the risk is very high and the tools that we have that are scalable and ready to go are still quite limited.

Lucas Perry: Right, so there’s a duality here where AI and big data can both be applied to helping mitigate the current threats and risks of this pandemic, but also future pandemics. Yet, the same technology can also be applied for speeding up the development of potentially antagonistic synthetic biology, organisms which bad actors or people who are deeply misanthropic, or countries wish to gain power and hold the world hostage, may be able to use to realize a global catastrophic or existential risk.

Emilia Javorsky: Yeah, I mean, I think AI’s part of it, but I also think that there’s a whole category of risk here that’s probably even more likely in the short term, which is just the risks introduced by human level intelligence with these pathogens. That knowledge exists of how to make things more lethal and more transmissible with the technology available today. So I would say both.

Lucas Perry: Okay, thanks for that clarification. So there’s clearly a lot of risks in the 21st Century from synthetic bio gone wrong, or used for nefarious purposes. What are some ways in which synthetic bio might be able to help us with pandemic preparedness, or to help protect us against bad actors?

Emilia Javorsky: When we think about the tools available to us today within the realm of biotechnology, and I would include genetic engineering and synthetic biology in that category, the upside is actually tremendous. Where we see the future for these tools, the benefits have the potential to far outweigh the risks. These are very fundamental tools, able to solve many problems, much as we see when we think about developing more powerful AI systems. When you start to be able to intervene in really fundamental biology, that unlocks the potential to treat so many of the diseases that lack good treatments today and are largely incurable.

But we can take that a step further, to being able to increase our health spans and our life spans. Even more broadly, these tools are key to some of the things we think about as existential risks, and existential hope, for our species. Today we are talking in depth about pandemics and the role that biology can play as a risk factor. But those same tools can be harnessed for protection. We're seeing it now with more rapid vaccine development, but things like synthetic biology and genetic engineering are fundamental leaps forward in being able to protect ourselves against these threats with new mitigation strategies, and in making our own biology and immune systems more resilient to these types of threats.

That ability to really engineer and intervene in human biology, thinking towards the medium to long-term future, unlocks a lot of possibilities for us beyond just treating and curing diseases. Our own planet and climate are evolving, and we can use these same tools to evolve with them, to become more tolerant of some of the challenges that lie ahead. We all kind of know that eventually, whether that comes sooner or much later, the survival of our species is contingent on becoming multi-planetary. When we think about enduring the kinds of stressors that even near term space travel imposes, and living in and adapting to alien environments, these are the fundamental tools that will really enable us to do that.

Well today, we’re starting to see the downsides of biology and some of the limitations of the tools we have today to intervene, and understanding what some of the near term risks are that the science of today poses in terms of pandemics. But really the future here is very, very bright for how these tools can be used to mitigate risk in the future, but also take us forward.

Lucas Perry: You have me thinking here about a great Carl Sagan quote that I really like, where he says, “It will not be we who reach Alpha Centauri and the other nearby stars. It will be a species very like us, but with more of our strengths and fewer of our weaknesses.” So, yeah, that seems to be in line with the upsides of synthetic bio.

Emilia Javorsky: You could even see, in the tools that we have today, the foundations of how we could start to get to Proxima b. I think that quote could be realized in, hopefully, the not too distant future.

Lucas Perry: All right. So, taking another step back here, let’s get a little bit more perspective again on extracting some more lessons.

Anthony Aguirre: There were countries that were prepared for this and acted fairly quickly and efficaciously, partly because they maybe had more firsthand experience with previous pandemics, but also maybe because they just had a slightly differently constituted society and leadership structure. There’s a danger here, I think, of concluding that top-down and authoritarian governments have seemed to be potentially more effective in dealing with this, because they can just take quick action. They don’t have to deal with a bunch of red tape or worry about pesky citizens’ rights, and they can just do what they want and crush the virus.

I don’t think that’s entirely accurate, but to the degree that it is, or that people perceive it to be, that worries me a little bit, because I really do strongly favor open societies and western democratic institutions over more totalitarian ones. I do worry that when our society and system of government so abjectly fails in serving its people, that people will turn to something rather different, or become very tolerant of something rather different, and that’s really bad news for us, I think.

So that worries me at the level of competition between forms of government. I really would like to see a better version of ours making itself seen and being effective in something like this, proving that there isn’t necessarily a conflict between having a rights-conferring, open society with a strong voice of the people, and having something that is competent, serves its people well, and is capable in a crisis. They should not be mutually exclusive, and if we make them so, then we do so at great peril, I think.

Emilia Javorsky: That same worry keeps me up at night. I’ll try to offer an optimistic take on it.

Anthony Aguirre: Please.

Emilia Javorsky: Which is that authoritarian regimes are also the type that are not noted for their openness, their transparency, or their ability to share real-time data on what’s happening within their borders. And so when we think about this pandemic, or global catastrophic risk more broadly, the “we” is inherently the global community. That’s the nature of a global catastrophic risk. I think part of what has happened in this particular pandemic is that it hit at a time when the spirit of multilateralism and global cooperation is arguably, in modern memory, the weakest it’s been. And so I think the other way to look at it is: how do we cultivate systems of government that are capable of working together and acting on a global scale, understanding that pandemics and global catastrophic risks are not confined to national borders? And how do you develop the data sharing, the information sharing, and also the ability to respond to that data in real time at a global scale?

The strongest argument about forms of government that comes out of this is for a pivot towards ones that are much more open, transparent, and cooperative than perhaps we’ve been seeing as of late.

Anthony Aguirre: Well, I hope that is the lesson that’s taken. I really do.

Emilia Javorsky: I hope so, too. That’s the best perspective I can offer on it, because I too, am a fan of democracy and human rights. I believe these are generally good things.

Lucas Perry: So wrapping things up here, let’s try to get some perspective and synthesis of everything that we’ve learned from the COVID-19 crisis and what we can do in the future, what we’ve learned about humanity’s weaknesses and strengths. So, if you were to have a short pitch each to world leaders about lessons from COVID-19, what would that be? We can start with Anthony.

Anthony Aguirre: This crisis has thrust a lot of leaders and policy makers into the situation where they’re realizing that they have really high stakes decisions to make, and simply not the information that they need to make them well. They don’t have the expertise on hand. They don’t have solid predictions and modeling on hand. They don’t have the tools to fold those things together to understand what the results of their decisions will be and make the best decision.

So I would strongly suggest that policy makers put in place those sorts of systems: how am I going to get reliable information from experts in a way that lets me understand it, model what is going to happen given the different choices that I could make, and make really good decisions, so that when a crisis like this hits, we don’t find ourselves simply not having the tools at our disposal to handle it. And then I’d say, having put those things in place, don’t wait for a crisis to use them. Just use those things all the time and make good decisions for society based on the technology, expertise, and understanding that we are now able to put in place together as a society, rather than whatever decision making processes we’ve generated socially and historically. We actually can do a lot better and have a really, really well run society if we do so.

Lucas Perry: All right, and Emilia?

Emilia Javorsky: Yeah, I want to echo Anthony’s sentiment there on the need for evidence-based, real-time data at scale. That’s just so critical to be able to orchestrate any kind of meaningful response, and also to be able to act, as Anthony alludes to, before you get to the point of a crisis, because there were a lot of early indicators here that could have prevented the situation we’re in today. I would add that the next step in that process is also developing mechanisms to be able to respond in real time at a global scale. I think we are so caught up in moments of us-versus-them, whether that be on a domestic or international level, and the spirit of multilateralism is just at an all-time low.

I think we’ve been sorely reminded that global-level threats require a global-level response. No matter how much people want to be insular and think that their countries have borders, the fact of the matter is that these threats do not respect them. And we’re seeing the interdependency of our global system. So I think that in addition to building those data structures to get information to policy makers, there also need to be a supply chain, infrastructure, and decision making structure built to be able to respond to that information in real time.

Lucas Perry: You mentioned information here. One of the things that you did want to talk about on the podcast was information problems and how information is currently extremely partisan.

Emilia Javorsky: It’s less that it’s partisan, and more that it’s siloed and biased and personalized. I think one aspect that’s been very difficult in this current information environment is the ability to communicate accurate information to a large audience, because the way that we communicate information today is mainly through clickbait-style titles. People are mainly consuming information in a digital format, and it’s highly personalized, highly tailored to their preferences, both in terms of the news outlets that they innately turn to for information, and in terms of their own personal algorithms that know what kind of news to show them, whether it be in their social feeds or what have you.

I think when the structure of how we disseminate information is so personalized and partisan, it becomes very difficult to break through all of that noise and communicate accurate, balanced, measured information to people. Because even when you do, it’s human nature that those aren’t the types of things people are innately going to seek out. So the question is: in times like this, what mechanisms of disseminating information can we think about that supersede all of that individualized media, and really get through to say, “All right, everyone needs to be on the same page and be operating off the best state of information that we have at this point. And this is what that is.”

Lucas Perry: All right, wonderful. I think that helps to more fully unpack this data structure point that Anthony and you were making. So yeah, thank you both so much for your time, and for helping us to reflect on lessons from COVID-19.

FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity” has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. “The Precipice” thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time.

Topics discussed in this episode include:

  • An overview of Toby’s new book
  • What it means to be standing at the precipice and how we got here
  • Useful arguments for why existential risk matters
  • The risks themselves and their likelihoods
  • What we can do to safeguard humanity’s potential

Timestamps: 

0:00 Intro 

03:35 What the book is about 

05:17 What does it mean for us to be standing at the precipice? 

06:22 Historical cases of global catastrophic and existential risk in the real world

10:38 The development of humanity’s wisdom and power over time  

15:53 Reaching existential escape velocity and humanity’s continued evolution

22:30 On effective altruism and writing the book for a general audience 

25:53 Defining “existential risk” 

28:19 What is compelling or important about humanity’s potential or future persons?

32:43 Various and broadly appealing arguments for why existential risk matters

50:46 Short overview of natural existential risks

54:33 Anthropogenic risks

58:35 The risks of engineered pandemics 

01:02:43 Suggestions for working to mitigate x-risk and safeguard the potential of humanity 

01:09:43 How and where to follow Toby and pick up his book

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. This episode is with Toby Ord and covers his new book “The Precipice: Existential Risk and the Future of Humanity.” This is a new cornerstone piece in the field of existential risk and I highly recommend this book for all persons of our day and age. I feel this work is absolutely critical reading for living an informed, reflective, and engaged life in our time. And I think even those well acquainted with this topic area will find much that is both useful and new in this book. Toby offers a plethora of historical and academic context to the problem, tons of citations and endnotes, useful definitions, central arguments for why existential risk matters that can be really helpful for speaking to new people about this issue, and also novel quantitative analysis and risk estimations, as well as what we can actually do to help mitigate these risks. So, if you’re a regular listener to this podcast, I’d say this is a must-add to your science, technology, and existential risk bookshelf. 

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate. If you support any other content creators via services like Patreon, consider viewing a regular subscription to FLI in the same light. You can also follow us on your preferred listening platform, like on Apple Podcasts or Spotify, by searching for us directly or following the links on the page for this podcast found in the description.

Toby Ord is a Senior Research Fellow in Philosophy at Oxford University. His work focuses on the big picture questions facing humanity. What are the most important issues of our time? How can we best address them?

Toby’s earlier work explored the ethics of global health and global poverty, demonstrating that aid has been highly successful on average and has the potential to be even more successful if we were to improve our priority setting. This led him to create an international society called Giving What We Can, whose members have pledged over $1.5 billion to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement, encouraging thousands of people to use reason and evidence to help others as much as possible.

His current research is on the long-term future of humanity, and the risks which threaten to destroy our entire potential.

Finally, the Future of Life Institute podcasts have never had a central place for conversation and discussion about the episodes and related content. In order to facilitate such conversation, I’ll be posting the episodes to the LessWrong forum at Lesswrong.com where you’ll be able to comment and discuss the episodes if you so wish. The episodes more relevant to AI alignment will be crossposted from LessWrong to the Alignment Forum as well at alignmentforum.org.  

And so with that, I’m happy to present Toby Ord on his new book “The Precipice.”

Lucas Perry: We’re here today to discuss your new book, The Precipice: Existential Risk and the Future of Humanity. Tell us a little bit about what the book is about.

Toby Ord: The future of humanity, that’s the guiding idea, and I try to think about how good our future could be. That’s what really motivates me. I’m really optimistic about the future we could have if only we survive the risks that we face. There have been various natural risks that we have faced for as long as humanity’s been around, 200,000 years of Homo sapiens or you might include an even broader definition of humanity that’s even longer. That’s 2000 centuries and we know that those natural risks can’t be that high or else we wouldn’t have been able to survive so long. It’s quite easy to show that the risks should be lower than about 1 in 1000 per century.

But then, with humanity’s increasing power over that time, the exponential increases in technological power, we reached this point last century, with the development of nuclear weapons, where we pose a risk to our own survival, and I think that the risks have only increased since then. We’re in this new period where the risk is substantially higher than these background risks, and I call this time the precipice. I think that this is a really crucial time in the history and the future of humanity, perhaps the most crucial time: these few centuries around now. And I think that if we survive, and people in the future look back on the history of humanity, schoolchildren will be taught about this time. I think that this will be really more important than other times that you’ve heard of, such as the industrial revolution or even the agricultural revolution. I think this is a major turning point for humanity, and what we do now will define the whole future.

Lucas Perry: In the title of your book, and also in the contents of it, you developed this image of humanity to be standing at the precipice, could you unpack this a little bit more? What does it mean for us to be standing at the precipice?

Toby Ord: I sometimes think of humanity as on this grand journey through the wilderness, with dark times at various points, but also moments of sudden progress and heady views of the path ahead and what the future might hold. And I think that this point in time is the most dangerous time that we’ve ever encountered, and perhaps the most dangerous time that there will ever be. So I see it in this central metaphor of the book: humanity coming through this high mountain pass, where the only path onwards is this narrow ledge along a cliffside, with this steep and deep precipice at the side, and we’re kind of inching our way along. But we can see that if we can get past this point, there are ultimately almost no limits to what we could achieve. Even if we can’t precisely estimate the risks that we face, we know that this is the most dangerous time so far. There’s every chance that we don’t make it through.

Lucas Perry: Let’s talk a little bit then about how we got to this precipice and our part in this path. Can you provide some examples or a story of global catastrophic risks that have happened and near misses of possible existential risks that have occurred so far?

Toby Ord: It depends on your definition of global catastrophe. One of the definitions on offer is 10% or more of all people on the Earth at the time being killed in a single disaster. There is at least one time where it looks like we may have reached that threshold, which was the Black Death, which killed between a quarter and a half of the people in Europe, and may have killed many people in South Asia and East Asia as well, and the Middle East. It may have killed one in ten people across the whole world, although because our world was less connected than it is today, it didn’t reach every continent. In contrast, the Spanish Flu of 1918 reached almost everywhere across the globe, and killed a few percent of people.

But none of those really posed an existential risk. We saw, for example, that despite something like a third of the people in Europe dying, there wasn’t a collapse of civilization. It seems like we’re more robust than some give us credit for. There have also been times where there hasn’t been an actual catastrophe, but there’s been a near miss in terms of the chances.

There are many cases, actually, connected to the Cuban Missile Crisis, a time of immensely high tensions during the Cold War in 1962. I think that the closest we have come is perhaps the events on a submarine that, unknown to the U.S., was carrying a secret nuclear weapon. The U.S. patrol boats tried to force it to surface by dropping what they called practice depth charges, but the submarine crew thought these were real explosives aimed at hurting them. The submarine was made for the Arctic, and so it was overheating in the Caribbean. People were dropping unconscious from the heat and the lack of oxygen as they tried to hide deep down in the water. And during that time the captain, Captain Savitsky, ordered that this nuclear weapon be fired, and the political officer gave his consent as well.

On any of the other submarines in this flotilla, this would have been enough to launch the torpedo, which would then have exploded as a tactical nuclear weapon, destroying the fleet that was pursuing them. But on this one, it was lucky that the flotilla commander, Captain Vasili Arkhipov, was also on board, and so he overruled this and talked Savitsky down. So this was a situation, at the height of this tension, where a nuclear weapon would have been used. And we’re not quite sure: maybe Savitsky would have decided on his own not to do it, maybe he would have backed down. There’s a lot that’s not known about this particular case. It’s very dramatic.

But Kennedy had made it very clear that any use of nuclear weapons against U.S. armed forces would lead to an all-out, full-scale attack on the Soviet Union. They hadn’t anticipated that tactical weapons might be used; they assumed it would be a strategic weapon. But it was their policy to respond with full-scale nuclear retaliation, and it looks likely that that would have happened. So that’s a case where ultimately zero people were killed in the event. The submarine eventually surfaced and surrendered, and then returned to Moscow where people were disciplined, but it brought us very close to full-scale nuclear war.

I don’t mean to imply that that would have been the end of humanity. We don’t know whether humanity would survive the full scale nuclear war. My guess is that we would survive, but that’s its own story and it’s not clear.

Lucas Perry: Yeah. The story to me has always felt a little bit unreal. It’s hard to believe we came so close to something so bad. For listeners who are not aware, the Future of Life Institute gives out a $50,000 award each year, called the Future of Life Award to unsung heroes who have contributed greatly to the existential security of humanity. We actually have awarded Vasili Arkhipov’s family with the Future of Life Award, as well as Stanislav Petrov and Matthew Meselson. So if you’re interested, you can check those out on our website and see their particular contributions.

And related to nuclear weapons risk, we also have a webpage on nuclear close calls and near misses where there were accidents with nuclear weapons which could have led to escalation or some sort of catastrophe. Is there anything else here you’d like to add in terms of the relevant historical context and this story about the development of our wisdom and power over time?

Toby Ord: Yeah, that framing, which I used in the book, comes from Carl Sagan in the ’80s, when he was one of the people who developed the understanding of nuclear winter and realized that this could pose a risk to humanity as a whole. The way he thought about it is that we’ve had this massive development over the hundred billion human lives that have come before us: this succession of innovations that have accumulated, building up the modern world around us.

If I look around me, I can see almost nothing that wasn’t created by human hands, and this, as we all know, has been accelerating; when you try to measure it, you often find exponential improvements in technology over time, leading to the situation where we have the power to radically reshape the Earth’s surface, both, say, through our agriculture, and also, perhaps in a moment, through nuclear war. This increasing power has put us in a situation where we hold our entire future in the balance: a few people’s actions over a few minutes could potentially threaten that entire future.

In contrast, humanity’s wisdom has grown only falteringly, if at all; many people would suggest that it’s not even growing. And by wisdom here, I mean our ability to make wise decisions for the human future. I talk about this in the book under the idea of civilizational virtues. If you think of humanity as a group agent, in the same way that we think of nation states as group agents (we talk about whether it’s in America’s interest to promote this trade policy, or something like that), then we can think of what’s in humanity’s interests. And we find that if we think about it this way, humanity is crazily impatient and imprudent.

Think about the expected lifespan of humanity: a typical species lives for about a million years, and humanity is about 200,000 years old. We have something like 800,000, or a million, or more years ahead of us if we play our cards right and don’t bring about our own destruction. The analogy would be that we’re 20% of the way through our life, like an adolescent who’s just coming into his or her own power, but doesn’t have the wisdom or the patience to really pay any attention to the whole possible future ahead of them. They’re just powerful enough to get themselves in trouble, but not yet wise enough to avoid it.

If you continue this analogy: it is often hard for humanity at the moment to think more than a couple of election cycles ahead at best, but eight years would correspond to just the next eight hours within this person’s life. For short-term interests during the rest of the day, they put the whole rest of their future at risk. And so I think that helps to show what this lack of wisdom looks like. It’s not just a highfalutin term of some sort; you can see what’s going on: the person is incredibly imprudent and impatient. And I think that many other virtues or vices that we think of in an individual human’s life can be applied in this context, and are actually illuminating about where we’re going wrong.

Lucas Perry: Wonderful. Part of the dynamic in this wisdom versus power race seems to be that one of the solutions, slowing down power, seems untenable, or just wouldn’t work. So it seems more like we have to focus on amplifying wisdom. Is this also how you view the dynamic?

Toby Ord: Yeah, that is. I think that if humanity were more coordinated, if we were able to make decisions in a unified manner better than we actually can, so, if you imagine this was a single-player game, I don’t think it would be that hard. You could just be more careful with your development of power, and make sure that you invest a lot in institutions and in really thinking carefully about things. I mean, I think that the game is ours to lose. But unfortunately, we’re less coherent than that, and if one country decides to hold off on developing things, then other countries might run ahead and produce a similar amount of risk.

There’s this kind of tragedy of the commons at this higher level, and so I think that it’s extremely difficult in practice for humanity to go slow on the progress of technology. And I don’t recommend that we try. In particular, there is at the moment only a small number of people who really care about these issues and are really thinking about the long-term future and what we could do to protect it. And if those people were to spend their time arguing against the progress of technology, I think it would be a really poor use of their energies, and would probably just annoy and alienate the people they were trying to convince. And so instead, I think that the only real way forward is to focus on improving wisdom.

I don’t think that’s impossible. As you could see from my comment before about how we’re kind of disunified, humanity’s wisdom partly involves being able to think better about things as individuals, but it also involves being able to think better collectively. And so I think that institutions for overcoming some of these tragedies of the commons, or prisoner’s dilemmas at the international level, are an example of the type of thing that would help humanity make wiser decisions in our collective interest.

Lucas Perry: It seemed that you said, by analogy, that humanity’s lifespan would be something like a million years, as compared with other species.

Toby Ord: Mm-hmm (affirmative).

Lucas Perry: That is likely illustrative for most people. I think there are two facets of this that I wonder about, in your book and in general. The first is this idea of reaching existential escape velocity, where it would seem unlikely that we would have a reason to end within a million years should we get through the time of the precipice. And the second is that I’m wondering about your perspective on what Nick Bostrom says matters here in the existential condition: Earth-originating intelligent life. It would seem curious to suspect that even if humanity’s existential condition were secure, we would still be recognizable as humanity in some 10,000, 100,000, or 1 million years’ time, and not something else. So I’m curious how the framing here functions for a general audience, and also about being realistic about how evolution has not ceased to take place.

Toby Ord: Yeah, both good points. I think that the one million years is indicative of how long species last when they’re dealing with natural risks. It’s, I think, a useful number for showing that there are some very well-grounded scientific reasons for thinking that a million years is entirely in the ballpark of what we’d expect if we look at other species. And even if you look at mammals or other hominid species, a million years still seems fairly typical, so it’s useful in some sense for setting more of a lower bound. There are species which have survived relatively unchanged for much longer than that. One example is the horseshoe crab, which is about 450 million years old, whereas complex life is only about 540 million years old. So that’s a case where it really does seem possible to last for a very long period of time.

If you look beyond that, the Earth should remain habitable for complex life for something in the order of 500 million to a billion years, before it becomes too hot due to the continued brightening of our sun. If we took actions to limit that brightening, which look almost achievable with today’s technology, we would only need to shade the Earth from about 1% of the energy coming at it, increasing that by another 1% every billion years or so, and we would be able to survive as long as the sun itself, for about 7 billion more years. And I think that ultimately we could survive much longer than that if we could reach our nearest stars and set up some new self-sustaining settlement there. And if that settlement could then spread out to some of the nearest stars to it, and so on, then so long as we can reach about seven light years in one hop, we’d be able to settle the entire galaxy. There are stars in the galaxy that will still be burning about 10 trillion years from now, and there’ll be new stars for millions of times as long as that.

We could have this absolutely immense future in terms of duration with technologies that are beyond our current reach. And if you look at the energy requirements to reach nearby stars, they’re high, but they’re not that high compared to, say, the output of the sun over millions of years. And if we’re talking about a scenario where we’d last millions of years anyway, it’s unclear why it would be difficult to reach them with the technology we’d have by then. It seems like the biggest challenge would be lasting that long in the first place, not getting to the nearest star using technology from millions of years into the future, with millions of years of stored energy reserves.

So that’s the kind of big-picture question about the timing there, but then you also asked whether it would be humanity. One way to answer that is: unless we go to a lot of effort to preserve Homo sapiens as we are now, then it wouldn’t be Homo sapiens. We might go to that effort if we decide that it’s really important that it be Homo sapiens, and that we’d lose something absolutely terrible if we were to change; we could make that choice. But if we decide that it would be better to actually allow evolution to continue, or perhaps to direct it by changing who we are with genetic engineering and so forth, then we could make that choice as well. I think that is a really critically important choice for the future, and I hope that we make it in a very deliberate and careful manner, rather than just going gung-ho and letting people do whatever they want. But I do think that we will develop into something else.

But in the book, my focus is often on humanity in this kind of broad sense. Earth-originating intelligent life would kind of be a gloss on it, but that has the issue that, suppose humanity did go extinct and suppose we got lucky and some other intelligent life started off again: I don’t want to count that in what I’m talking about, even though it would technically fit into Earth-originating intelligent life. Sometimes I put it in the book as humanity or our rightful heirs, something like that. Maybe we would create digital beings to replace us, artificial intelligences of some sort. So long as they were the kinds of beings that could actually fulfill the potential that we have, that could realize one of the best trajectories that we could possibly reach, then I would count them. It could also be that we create something that succeeds us but has very little value; then I wouldn’t count it.

So yeah, I do think that we may be greatly changed in the future. I don’t want that to distract the reader, if they’re not used to thinking about things like that, because they might then think, “Well, who cares about that future, because it will be some other things having the future.” And I want to stress that there will only be some other things having the future if we want there to be, if we make that choice. If that is a catastrophic choice, then it’s another existential risk that we have to deal with in the future and which we could prevent. And if it is a good choice, and we’re like the caterpillar that really should become a butterfly in order to fulfill its potential, then we need to make that choice. So I think that is something that we can leave to future generations; what matters is that they make the right choice.

Lucas Perry: One of the things that I really appreciate about your book is that it tries to make this more accessible for a general audience. So, I actually do like it when you use lower bounds on humanity’s existential condition. I think talking about billions upon billions of years can seem a little bit far out there and maybe costs some weirdness points and as much as I like the concept of Earth-originating intelligent life, I also think it costs some weirdness points.

And it seems like you’ve taken some effort to make the language less ostracizing by decoupling it somewhat from effective altruism jargon and the kind of language that we might use in effective altruism circles. I appreciate that and find it to be an important step. The same thing, I feel, applies here in terms of talking about descendant scenarios. It seems like making things simple and leveraging human self-interest is maybe important here.

Toby Ord: Thanks. When I was writing the book, I tried really hard to think about these things, both in terms of communication, but also in terms of trying to understand what we have been talking about for all of these years when we’ve been talking about existential risk and similar ideas. Often in effective altruism, there’s a discussion about the different types of cause areas that effective altruists are interested in. There are people who really care about global poverty, because we can help others who are much poorer than ourselves so much more with our money, and also about helping animals, who are left out of the political calculus and the economic calculus; we can see why it is that their interests are typically neglected, and so we look at factory farms and we can see how we could do so much good.

And then there’s also this third group of people, and the conversation drifts off a bit: it’s like, they have this kind of idea about the future that’s kind of hard to describe and to wrap up together. So I’ve seen it as one of my missions over the last few years to really try to work out what it is that that third group of people are trying to do. My colleague, Will MacAskill, has been working on this a lot as well. And what we see is that this other group of effective altruists are this long-termist group.

The first group is thinking about this cosmopolitan aspect, as I am: it’s not just people in my country that matter, it’s people across the whole world, and some of those could be helped much more. And the second group is saying, it’s not just humans that could be helped. If we widen things up beyond the species boundary, then we can see that there’s so much more we could do for other conscious beings. And then this third group is saying, it’s not just our own time that we can help; there’s so much we can do to help people across this entire future of millions of years, or further into the future. And so the difference there, the point of leverage, is that our present generation is perhaps just a tiny fraction of the entire future. And if we can do something that will help that entire future, then that’s where this could be really key in terms of doing something amazing with our resources and our lives.

Lucas Perry: Interesting. I actually had never thought of it that way, and I think it puts the differences between the groups really succinctly: people focused on global poverty are reducing spatial or proximity bias in our ethics and in doing good; work against animal farming is a kind of anti-speciesism, broadening our moral circle of compassion to other species; and then long-termism is about reducing time-based ethical bias. I think that’s quite good.

Toby Ord: Yeah, that’s right. In all these cases, you have to confront additional questions. It’s not just enough to make this point and then it follows that things are really important. You need to know, for example, that there really are ways that people can help others in distant countries and that the money won’t be squandered. And in fact, for most of human history, there weren’t ways that we could easily help people in other countries just by writing out a check to the right place.

When it comes to animals, there are a whole lot of challenging questions there about what the effects are of changing your diet, or of donating to a group that prioritizes animals in campaigns against factory farming or similar. And when it comes to the long-term future, there’s this real question: “Well, why isn’t it that people in the future would be just as able to protect themselves as we are? Why wouldn’t they be even more well-situated to attend to their own needs?” Given the history of economic growth and this kind of increasing power of humanity, one would expect them to be more empowered than us, so it does require an explanation.

And I think that the strongest type of explanation is around existential risk. Existential risks are things that would be an irrevocable loss. As I define them, which is a simplification, an existential catastrophe is the destruction of humanity’s long-term potential. I think of our long-term potential in terms of the set of all possible futures that we could instantiate. If you think about all the different collective actions of humans that we could take across all time, this sets out a huge cloud of trajectories that humanity could go in, and I think that this is absolutely vast. I think that there are ways, if we play our cards right, of lasting for millions of years, or billions, or trillions, and affecting billions of different worlds across the cosmos, and then doing all kinds of amazing things with all of that future. So, we’ve got this huge range of possibilities at the moment, and I think that some of those possibilities are extraordinarily good.

If we were to go extinct, though, that would collapse this set of possibilities to a much smaller set, which contains much worse possibilities. If we went extinct, there would be just one future, whatever it is that would happen without humans, because there’d be no more choices that humans could make. If we had an irrevocable collapse of civilization, something from which we could never recover, then that would similarly reduce it to a very small set of very meager options. And it’s possible as well that we could end up locked into some dystopian future, perhaps through economic or political systems, where we end up stuck in some very bad corner of this possibility space. So that’s our potential. Our potential is currently the value of the best realistically realizable worlds available to us.

If we fail in an existential catastrophe, that’s the destruction of almost all of this value, and it’s something that we could never get back, because it’s our very potential that would be destroyed. That then gives an explanation as to why people in the future wouldn’t be better able to solve their own problems: because we’re talking about things that could fail now. That helps explain why there’s room for us to make such a contribution.

Lucas Perry: So if we were to very succinctly put the recommended definition or framing on existential risk that listeners might be interested in using in the future when explaining this to new people, what is the sentence that you would use?

Toby Ord: An existential catastrophe is the destruction of humanity’s long-term potential, and an existential risk is the risk of such a catastrophe.

Lucas Perry: Okay, so on this long-termism point, can you articulate a little bit more what is so compelling or important about humanity’s potential in the deep future, and which arguments are most compelling to you? With a little bit of framing here on the question of whether the long-termist perspective is compelling or motivating for the average person: why should I care about people who are far away from me in time?

Toby Ord: So, I think that a lot of people, if pressed and asked “does it matter equally much if a child suffers 100 years in the future as at some other point in time?”, would say, “Yeah, it matters just as much.” But that’s not how we normally think of things when we think about what charity to donate to or what policies to implement. Still, I do think that it’s not that foreign an idea. In fact, the weird thing would be why people, in virtue of the fact that they live at different times, would matter different amounts.

A simple example: suppose you do think that things further into the future matter less intrinsically. Economists sometimes represent this by a pure rate of time preference. It’s a component of a discount rate which is just to do with things mattering less in the future, whereas most of the discount rate is actually to do with the fact that money is more important to have earlier, which is actually a pretty solid reason, but that component doesn’t affect any of these arguments. It’s only this little extra aspect about things mattering less just because they’re in the future. Suppose you have a 1% discount rate of that form. That means that someone’s older brother matters more than their younger brother; that a life which is equally long and has the same kinds of experiences is fundamentally more important for the older child than for the younger child, and things like that. This just seems kind of crazy to most people, I think.

And similarly, if you have these exponential discount rates, which are typically the only kind that economists consider, it has these consequences: that what happens in 10,000 years is way more important than what happens in 11,000 years. People don’t have any intuition like that at all, really. Maybe we don’t think that much about what happens in 10,000 years, but 11,000 is pretty much the same as 10,000 to our intuition. These other views say, “Wow. No, it’s totally different. It’s just like the difference between what happens next year and what happens in a thousand years.”
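To make the point above concrete, here is a small back-of-the-envelope sketch (my own illustration, not something computed in the conversation) of what a 1% pure rate of time preference implies when compounded exponentially:

```python
# Illustration (not from the interview): exponential discounting with a
# 1% pure rate of time preference.
def discount_factor(years, rate=0.01):
    """Weight given to a fixed benefit occurring `years` from now."""
    return 1.0 / (1.0 + rate) ** years

# An older sibling, born two years earlier, "matters" about 2% more:
sibling_ratio = discount_factor(0) / discount_factor(2)
print(round(sibling_ratio, 4))  # ~1.0201

# A benefit in 10,000 years counts for roughly 21,000 times as much
# as the very same benefit in 11,000 years:
millennia_ratio = discount_factor(10_000) / discount_factor(11_000)
print(round(millennia_ratio))
```

Even though 10,000 and 11,000 years feel interchangeable to intuition, the exponential form makes the ratio between them exactly as extreme as the ratio between next year and a thousand years from now, which is the counterintuitive consequence being described.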

It generally just doesn’t capture our intuitions, and I think that what’s going on is not so much that we have an active intuition that things that happen further into the future matter less, and in fact much less, because they would have to matter a lot less to dampen the fact that we could have millions of years of future. Instead, what’s going on is that we just aren’t thinking about it. We’re not really considering that our actions could have irrevocable effects over the long distant future. And when we do think about that, such as within environmentalism, it’s a very powerful idea: the idea that we shouldn’t make irrevocable changes to the environment that could damage the entire future just for transient benefits to our own time. And people think, “Oh, yeah, that is a powerful idea.”

So I think it’s more that they’re just not aware that there are a lot of situations like this. It’s not just that a particular ecosystem could be an example of one of these important irrevocable losses; there could be irrevocable losses at a much grander scale, affecting everything that we could ever achieve and do. I should also explain that I do talk a lot about humanity in the book. The reason I do this is not because I think that non-human animals don’t count or don’t have intrinsic value; I think they do. It’s because only humanity is responsive to reasons and to thinking about this. It’s not the case that chimpanzees will choose to save other species from extinction and will go out and work out how to safeguard them from natural disasters that could threaten their ecosystems, or things like that.

We’re the only ones who are even in the game of considering moral choices. So in terms of instrumental value, humanity has this massive instrumental value, because what we do could affect, for better or for worse, the intrinsic value of all of the other species. Other species are going to go extinct in about a billion years, basically all of them, when the Earth becomes uninhabitable. Only humanity could actually extend that lifespan. So humanity ends up being key because we are the decision makers. We are the relevant agents, or any other relevant agents will spring from us: things that are our descendants, or things that we create and whose functioning we choose. So, that’s the kind of role that we’re playing.

Lucas Perry: So suppose there are people who simply care about the short term, who aren’t willing to buy into these arguments about the deep future or realizing the potential of humanity’s future: “I don’t care so much about that, because I won’t be alive for it.” There’s also an argument here that these risks may be realized within their lifetime or within their children’s lifetimes. Could you expand on that a little bit?

Toby Ord: Yeah. In The Precipice, when I try to think about why this matters, I think the most obvious reasons are rooted in the present: the fact that it would be terrible for all of the people who are alive when the catastrophe strikes. That needn’t be the case. You could imagine things that meet my definition of an existential catastrophe, in that they would cut off the future, but would not be bad for the people who were alive at that time; maybe we all painlessly disappear at the end of our natural lives or something. But in almost all realistic scenarios that we’re thinking about, it would be terrible for all of the people alive at that time: they would have their lives cut short and witness the downfall of everything that they’ve ever cared about and believed in.

That’s a very obvious, natural reason, but the reason that moves me the most is thinking about our long-term future, and just how important that is: this huge scale of everything that we could ever become. You could think of that in very numerical terms, or you could just think back over time and how far humanity has come over these 200,000 years, imagine that going forward, and see how small a slice of things our own lives are. You can come up with very intuitive arguments to see that as well; it doesn’t have to just be a multiply-things-out type of argument.

But then I also think that there are very strong arguments rooted in our past and in other things as well. Humanity has succeeded and has got to where we are because of this partnership of the generations; Edmund Burke had this phrase. If we couldn’t promulgate our ideas and innovations to the next generation, think what our technological level would be like: it would be like it was in Paleolithic times, and even a crude iron shovel would be forever beyond our reach. It was only through passing down these innovations and iteratively improving upon them that we could get billions of people working in cooperation over deep time to build this world around us.

Think about the wealth and prosperity that we have, the fact that we live as long as we do: this is all because this rich world was created by our ancestors and handed on to us, and we’re the trustees of this vast inheritance. If we were to fail, if we were the first of 10,000 generations to fail to pass this on to our heirs, we would be the worst of all of these generations. We’d have failed in these very important duties. These duties could be understood as some kind of reciprocal duty to people in the past, or we could consider them as duties to the future rooted in obligations to people in the past, because we can’t reciprocate to people who are no longer with us. The only way you can make this work is to pay it forward and have this system where we each help the next generation, with respect for the past generations.

So I think there’s another set of reasons, more deontological-type reasons, for it. And you could also have the reasons I mentioned in terms of civilizational virtues, an approach rooted in being a more virtuous civilization or species; I think that that is a powerful way of seeing it as well, to see that we’re very impatient and imprudent and so forth, and we need to become more wise. Alternatively, Max Tegmark has talked about this, and Martin Rees, Carl Sagan and others have seen it as something based on the cosmic significance of humanity: that perhaps in all of the stars and all of the galaxies of the universe, this is the only place where there is life at all, or the only place where there’s intelligent life or consciousness. There are different versions of this, and that could make this an exceptionally important place, this very rare thing that could be forever gone.

So I think that there are a whole lot of different reasons here, and I think that previously a lot of the discussion has been around a very technical version of the future-directed argument, where people have thought, well, even if there’s only a tiny chance of extinction, our future could have 10 to the power of 30 people in it, or something like that. There’s something about this argument such that some people find it compelling, but not very many. I personally always found it a bit like a trick. It is a little bit like an argument that zero equals one, where you don’t find it compelling, but if someone says point out the step where it goes wrong, you can’t, and yet you still think, “I’m not very convinced; there’s probably something wrong with this.”

And then people who are not from the sciences, people from the humanities, find it actively alarming that anyone would make moral decisions on the grounds of an argument like that. What I’m trying to do is to show that, actually, there’s this whole cluster of justifications rooted in all kinds of principles that many people find reasonable, and you don’t have to accept all of them by any means. The idea here is that if any one of these arguments works for you, then you can see why you have reasons to care about not letting our future be destroyed in our time.

Lucas Perry: Awesome. So, there’s first this deontological argument about transgenerational duties to continue propagating the species and the projects and value which previous generations have cultivated: we inherit culture and art and literature and technology, so there is a duties-based argument to continue the stewardship and development of that. There is this cosmic-significance-based argument that says that consciousness may be extremely precious and rare, and that there is great value held in the balance here at the precipice on planet Earth, and it’s important to guard it and do the proper stewardship of it.

There is this short-term argument that says that there is some reasonable likelihood of these risks being realized in the next century (you put total existential risk for the next century at one in six, which we can discuss a little more later), so that would also be very bad for us and our children and short-term descendants. Then there is this argument about the potential of humanity in deep time. I think we’ve talked a bit here about there potentially being large numbers of human beings in the future, or our descendants, or other things that we might find valuable, but I don’t think that we’ve touched on the change in quality.

There are these arguments about quantity, but there are also arguments about quality. I really like how David Pearce puts it when he says, “One day we may have thoughts as beautiful as sunsets.” So, could you expand a little on this argument about quality that I think also feeds in? And then also, with regards to the digitalization that may happen, there are arguments around subjective time dilation, which may lead to more and better experience in the deep future. That also seems to be another important aspect that’s motivating for some people.

Toby Ord: Yeah. Humanity has come a long way and various people have tried to catalog the improvements in our lives over time. Often in history, this is not talked about, partly because history is normally focused on something of the timescale of a human life and things don’t change that much on that timescale, but when people are thinking about much longer timescales, I think they really do. Sometimes this is written off in history as Whiggish history, but I think that that’s a mistake.

I think that if you were to summarize the history of humanity in say, one page, I think that the dramatic increases in our quality of life and our empowerment would have to be mentioned. It’s so important. You probably wouldn’t mention the Black Death, but you would mention this. Yet, it’s very rarely talked about within history, but there are people talking about it and there are people who have been measuring these improvements. And I think that you can see how, say in the last 200 years, lifespans have more than doubled and in fact, even in the poorest countries today, lifespans are longer than they were in the richest countries 200 years ago.

We can now almost all read whereas very few people could read 200 years ago. We’re vastly more wealthy. If you think about this threshold we currently use of extreme poverty, it used to be the case 200 years ago that almost everyone was below that threshold. People were desperately poor and now almost everyone is above that threshold. There’s still so much more that we could do, but there have been these really dramatic improvements.

Some people seem to think that that story, of well-being in our lives getting better, increasing freedoms, increasing empowerment through education and health, somehow runs counter to their concern about existential risk: that one is an optimistic story and one’s a gloomy story. Ultimately, what I’m thinking is that it’s because these trends seem to point towards very optimistic futures that it is all the more important to ensure that we survive to reach such futures. If all the trends suggested that the future was just going to inevitably move towards a very dreary state that had hardly any value in it, then I wouldn’t be that concerned about existential risk. So I think these things actually do go together.

And it’s not just in terms of our own lives that things have been getting better. We’ve been making major institutional reforms, so while there is regrettably still slavery in the world today, there is much less than there was in the past and we have been making progress in a lot of ways in terms of having a more representative and more just and fair world and there’s a lot of room to continue in both those things. And even then, a world that’s kind of like the best lives lived today, a world that has very little injustice or suffering, that’s still only a lower bound on what we could achieve.

I think one useful way to think about this is in terms of your peak experiences: these moments of luminous joy or beauty, the moments that you’ve been happiest, whatever they may be, and you think about how much better they are than the typical moments. My typical moments are by no means bad, but I would trade hundreds or maybe thousands of them for more of these peak experiences. And there’s no fundamental reason why we couldn’t spend much more of our lives at these peaks and have lives which are vastly better than our lives are today, and that’s assuming that we don’t find even higher peaks and new ways to have even better lives.

It’s not just about the well-being in people’s lives either. If you have any kind of conception about the types of value that humanity creates, so much of our lives will be in the future, so many of our achievements will be in the future, so many of our societies will be in the future. There’s every reason to expect that these greatest successes in all of these different ways will be in this long future as well. There’s also a host of other types of experiences that might become possible. We know that humanity only has some kind of very small sliver of the space of all possible experiences. We see in a set of colors, this three-dimensional color space.

We know that there are animals that see additional color pigments, that can see ultraviolet, can see parts of reality that we’re blind to. Animals with magnetic sense that can sense what direction north is and feel the magnetic fields. What’s it like to experience things like that? We could go so much further exploring this space. If we can guarantee our future and then we can start to use some of our peak experiences as signposts to what might be experienceable, I think that there’s so much further that we could go.

And then I guess you mentioned the possibilities of digital things as well. We don’t know exactly how consciousness works; in fact, we know very little about how it works. There are some suggestive reasons to think that minds, including consciousness, are computational things, such that we might be able to realize them digitally, and then there are all kinds of possibilities that would follow from that. You could slow yourself down, slow down the rate at which you’re computed, in order to see progress zoom past you and experience a dizzying rate of change in the things around you, fast-forwarding through the boring bits and skipping to the exciting bits of one’s life. If one were digital, one could potentially be immortal, have backup copies, and so forth.

You might even be able to branch into being two different people: have some choice come up, say, as to whether to stay on Earth or to go to a new settlement in the stars, and just split, with one copy going into this new life and one staying behind, or a whole lot of other possibilities. We don’t know if that stuff is really possible, but it gives a taste of how we might be seeing only a very tiny amount of what’s possible at the moment.

Lucas Perry: This is one of the most motivating arguments for me: the fact that the space of all possible minds is probably very large and deep, and that the kinds of qualia that we have access to are very limited. And then there’s the possibility of well-being not being contingent upon the state of the external world, which is always in flux and always impermanent. If we were able to have a science of well-being sufficiently well-developed that well-being was information- and decision-sensitive, but not contingent upon the state of the external world, that seems like a form of enlightenment, in my opinion.

Toby Ord: Yeah. Some of these questions are things that you don’t often see discussed in academia, partly because there isn’t really a proper discipline that says that that’s the kind of thing you’re allowed to talk about in your day job, but it is the kind of thing that people are allowed to talk about in science fiction. Many science fiction authors have something more like space opera or something like that where the future is just an interesting setting to play out the dramas that we recognize.

But other people use the setting to explore radical what-if questions, many of which are very philosophical and some of which are very well done. I think that if you’re interested in these types of questions, I would recommend reading Diaspora by Greg Egan, which I think is the best and most radical exploration of this. At the start of the book, it’s set in a particular digital system with digital minds, substantially in the future from where we are now, that have been running much faster than the external world. Their lives are lived thousands of times faster than those of the people who’ve remained flesh and blood, so culturally they’re vastly further on, and you get to witness what it might be like to undergo various of these events in one’s life. And the particular setting it’s in is a world where physical violence is against the laws of physics.

So rather than creating utopia by working out how to make people better behaved, the longstanding project of trying to make us all act nicely and decently to each other (that’s clearly part of what’s going on), there’s this extra possibility that most people hadn’t even thought about. Because it’s all digital, it’s kind of like being on a web forum or something like that, where if someone attempts to attack you, you can just make them disappear, so that they can no longer interfere with you at all. It explores what life might be like in this kind of world, where the laws of physics are consent-based and you can just make it so that people have no impact on you if you’re not enjoying the kind of impact that they’re having. It’s a fascinating setting to explore radically different ideas about the future, which very much may not come to pass.

But what I find exciting about these types of things is not so much that they’re projections of where the future will be, but that if you take a whole lot of examples like this, they span a space that’s much broader than you were initially considering for your probability distribution over where the future might go, and they help you realize that there are radically different ways that it could go. It’s this expansion of your understanding of the space of possibilities, which is where I think this kind of fiction is at its best, as opposed to being a direct prediction. I would strongly recommend some Greg Egan for anyone who wants to get really into that stuff.

Lucas Perry: You sold me. I’m interested in reading it now. I’m also becoming mindful of our time here and have a bunch more questions I would like to get through, but before we do that, I also want to throw something out here: I’ve had a bunch of conversations recently on the question of identity, and open individualism, closed individualism, and empty individualism are some of the views here.

For the long-termist perspective, I think it’s deeply informative for how much or how little one may care about the deep future, or digital minds, or our descendants a million years from now, or humans that are around a million years later. I think many people who aren’t motivated by these arguments will basically just feel like, it’s not me, so who cares? And so I feel like these questions on personal identity really help tug at and subvert many of our commonly held intuitions about identity. So, sort of going off of your point about the potential of the future and how it’s quite beautiful and motivating.

A little funny quip or thought there: I've sprung into Lucas consciousness, and I'm quite excited, whatever "I" means, for there to be something like awakening into Dyson sphere consciousness in Andromeda. Maybe that's a bit of a wacky or weird idea for most people, but thinking more and more about the nature of personal identity makes thoughts like these more easily entertainable.

Toby Ord: Yeah, that’s interesting. I haven’t done much research on personal identity. In fact, the types of questions I’ve been thinking about when it comes to the book are more on how radical change would be needed before it’s no longer humanity, so kind of like the identity of humanity across time as opposed to the identity for a particular individual across time. And because I’m already motivated by helping others and I’m kind of thinking more about the question of why just help others in our own time as opposed to helping others across time. How do you direct your altruism, your altruistic impulses?

But you’re right that they could also be possibilities to do with individuals lasting into the future. There’s various ideas about how long we can last with lifespans extending very rapidly. It might be that some of the people who are alive now actually do directly experience some of this long-term future. Maybe there are things that could happen where their identity wouldn’t be preserved, because it’d be too radical a break. You’d become two different kinds of being and you wouldn’t really be the same person, but if being the same person is important to you, then maybe you could make smaller changes. I’ve barely looked into this at all. I know Nick Bostrom has thought about it more. There’s probably lots of interesting questions there.

Lucas Perry: Awesome. So could you give a short overview of natural or non-anthropogenic risks over the next century and why they’re not so important?

Toby Ord: Yeah. Okay, so the main natural risks I think we're facing are probably asteroid or comet impacts and super volcanic eruptions. In the book, I also looked at stellar explosions like supernovae and gamma ray bursts, although since I estimate the chance of us being wiped out by one of those in the next 100 years at one in a billion, we don't really need to worry about them.

But asteroids, it does appear that the dinosaurs were destroyed 65 million years ago by a major asteroid impact. It’s something that’s been very well studied scientifically. I think the main reason to think about it is A, because it’s very scientifically understood and B, because humanity has actually done a pretty good job on it. We only worked out 40 years ago that the dinosaurs were destroyed by an asteroid and that they could be capable of causing such a mass extinction. In fact, it was only in 1960, 60 years ago that we even confirmed that craters on the Earth’s surface were caused by asteroids. So we knew very little about this until recently.

And then we’ve massively scaled up our scanning of the skies. We think that in order to cause a global catastrophe, the asteroid would probably need to be bigger than a kilometer across. We’ve found about 95% of the asteroids between 1 and 10 kilometers across, and we think we’ve found all of the ones bigger than 10 kilometers across. We therefore know that since none of the ones were found are on a trajectory to hit us within the next 100 years that it looks like we’re very safe from asteroids.

Whereas super volcanic eruptions are much less well understood. My estimate of the chance that one destroys us in the next 100 years is about one in 10,000. In the case of asteroids, we have looked into it so carefully that we've managed to check whether any are coming towards us right now, whereas for eruptions it can be hard to get the probabilities further down until we know more, and that's why my estimate for super volcanic eruptions is where it is. The Toba eruption was some kind of global catastrophe a very long time ago, though the early theories that it caused a population bottleneck and almost destroyed humanity don't seem to hold up anymore. It is still an illuminating example of continent-scale destruction and global cooling.

Lucas Perry: And so what is your total estimation of natural risk in the next century?

Toby Ord: About one in 10,000. All of these estimates are order-of-magnitude estimates, but I put the total at about the same level as the super volcanic eruptions, and the other known natural risks I would put as much smaller. One of the reasons we can give these low numbers is that humanity has survived for 2,000 centuries so far, and related species such as Homo erectus have survived for even longer. So we know that there can't be that many things that could destroy all humans on the whole planet through these natural risks.
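Toby's track-record argument can be made concrete with a back-of-the-envelope calculation (an illustrative sketch, not from the conversation; the function and figures here are my own): if the per-century extinction risk from natural causes were constant at some rate, the probability of surviving 2,000 consecutive centuries shrinks exponentially, so our survival so far is strong evidence that the rate is small.

```python
def survival_probability(risk_per_century: float, centuries: int = 2000) -> float:
    """Probability of surviving `centuries` consecutive centuries
    if the per-century extinction risk is constant at `risk_per_century`."""
    return (1 - risk_per_century) ** centuries

# A constant 1-in-10,000 per-century risk is compatible with 2,000
# centuries of survival; a 1-in-100 risk would make it vanishingly unlikely.
print(round(survival_probability(1e-4), 3))  # ≈ 0.819
print(survival_probability(1e-2))            # ≈ 1.9e-9
```

As Toby notes next, this style of argument only works if the risk has stayed roughly constant, which is why it cannot be applied to anthropogenic risks.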

Lucas Perry: Right, the natural conditions and environment hasn’t changed so much.

Toby Ord: Yeah, that’s right. I mean, this argument only works if the risk has either been constant or expectably constant, so it could be that it’s going up and down, but we don’t know which then it will also work. The problem is if we have some pretty good reasons to think that the risks could be going up over time, then our long track record is not so helpful. And that’s what happens when it comes to what you could think of as natural pandemics, such as the coronavirus.

This is something where it’s got into humanity through some kind of human action, so it’s not exactly natural how it actually got into humanity in the first place and then its spread through humanity through airplanes, traveling to different continents very quickly, is also not natural and is a faster spread than you would have had over this long-term history of humanity. And thus, these kind of safety arguments don’t count as well as they would for things like asteroid impacts.

Lucas Perry: This class of risks, then, is risky, but less risky than the human-made risks that result from technology; the fancy x-risk jargon for these is anthropogenic risks. Some of them are nuclear weapons, climate change, environmental damage, synthetic-bio-induced or AI-enabled pandemics, unaligned artificial intelligence, dystopian scenarios, and other risks. Could you say a little bit about each of these, and why you view unaligned artificial intelligence as the biggest risk?

Toby Ord: Sure. Some of these anthropogenic risks we already face; nuclear war is an example. What is particularly concerning is a very large scale nuclear war, such as between the U.S. and Russia. Nuclear winter models have suggested that the soot from burning buildings could get lifted into the stratosphere, which is high enough that it wouldn't get rained out, so it could stay in the upper atmosphere for a decade or more and cause widespread global cooling. That would cause massive crop failures, because there's not enough time between frosts to get a proper crop, and thus could lead to massive starvation and a global catastrophe.

Carl Sagan suggested it could potentially lead to our extinction, but the current people working on this, while they are very concerned about it, don’t suggest that it could lead to human extinction. That’s not really a scenario that they find very likely. And so even though I think that there is substantial risk of nuclear war over the next century, either an accidental nuclear war being triggered soon or perhaps a new Cold War, leading to a new nuclear war, I would put the chance that humanity’s potential is destroyed through nuclear war at about one in 1000 over the next 100 years, which is about where I’d put it for climate change as well.

There is debate as to whether climate change could really cause human extinction or a permanent collapse of civilization. I think the answer is that we don't know; similarly with nuclear war. But they're both such large changes to the world, such unprecedentedly rapid and severe changes, that it's hard to be more than 99% confident that we'd make it through if they happened, and so there is a difficult-to-eliminate risk that remains.

In the book, I look at the very worst climate outcomes. How much carbon is there in the methane clathrates under the ocean and in the permafrost? What would happen if it was released? How much warming would there be? And what would happen if you had very severe amounts of warming, such as 10 degrees? I try to sketch out what we know about those things, and it is difficult to find direct mechanisms suggesting that we would go extinct, or that we would collapse our civilization in a way from which it could never be restarted, especially given that civilization arose five times independently in different parts of the world already, so we know that getting it started is not a fluke. So it's difficult to see the direct reasons why it could happen, but we don't know enough to be sure that it can't happen. In my assessment, that makes it still an existential risk.

Then I also have a kind of catch-all for other types of environmental damage, all of these other pressures that we're putting on the planet. I think it would be too optimistic to be sure that none of those could cause a collapse from which we can never recover. Although when I look at the particular examples that are suggested, such as the collapse of pollinating insects, it's hard to see how they could cause this, so it's not that I'm just seeing problems everywhere. But I do think there's something to the general style of argument that unknown effects of the stressors we're putting on the planet could be the end for us.

So I’d put all of those kind of current types of risks at about one in 1,000 over the next 100 years, but then it’s the anthropogenic risks from technologies that are still on the horizon that scare me the most and this would be in keeping with this idea of humanity’s continued exponential growth in power where you’d expect the risks to be escalating every century. And I think that the ones that I’m most concerned about, in particular, engineered pandemics and the risk of unaligned artificial intelligence.

Lucas Perry: All right. I think listeners will be very familiar with many of the arguments around why unaligned artificial intelligence is dangerous, so I think that we could skip some of the crucial considerations there. Could you touch a little bit then on the risks of engineered pandemics, which may be more new and then give a little bit of your total risk estimate for this class of risks.

Toby Ord: Ultimately, we do have some kind of a safety argument in terms of the historical record when it comes to these naturally arising pandemics. There are ways that they could be more dangerous now than they could have been in the past, but there are also many ways in which they’re less dangerous. We have antibiotics. We have the ability to detect in real time these threats, sequence the DNA of the things that are attacking us, and then use our knowledge of quarantine and medicine in order to fight them. So we have reasons to look to our safety on that.

But there are cases of pandemics or pandemic pathogens being created to be even more spreadable or even more deadly than those that arise naturally because the natural ones are not being optimized to be deadly. The deadliness is only if that’s in service of them spreading and surviving and normally killing your host is a big problem for that. So there’s room there for people to try to engineer things to be worse than the natural ones.

One case is scientists looking to fight disease. Ron Fouchier, with the bird flu, deliberately made a more infectious version that could be transmitted directly from mammal to mammal. He did that because he was trying to help, but it was, I think, very risky and a very bad move, and most of the scientific community didn't think it was a good idea. He did it in a biosafety level 3 enhanced lab, which is not the highest level of biosecurity; that's BSL-4. And even at the highest level, there has been an escape of a pathogen from a BSL-4 facility. So these labs aren't safe enough, I think, to be working on newly enhanced pathogens that are more dangerous than anything nature can create, in a world where, so far, the biggest catastrophes we know of were caused by pandemics. I think it's pretty crazy to be working on such things until we have labs from which nothing has ever escaped.

But that’s not what really worries me. What worries me more is bio weapons programs and there’s been a lot of development of bio weapons in the 20th Century, in particular. The Soviet Union reportedly had 20 tons of smallpox that they had manufactured for example, and they had an accidental release of smallpox, which killed civilians in Russia. They had an accidental release of anthrax, blowing it out across the whole city and killing many people, so we know from cases like this, that they had a very large bioweapons program. And the Biological Weapons Convention, which is the leading institution at an international level to prohibit bio weapons is chronically underfunded and understaffed. The entire budget of the BWC is less than that of a typical McDonald’s.

So this is something where humanity doesn't have its priorities in order. Countries need to work together to step that up and give the BWC more responsibilities, to actually do inspections and make sure that none of them are developing bioweapons. And then I'm also really concerned by the dark side of the democratization of biotechnology: the rapid developments we've made with things like gene drives and CRISPR, two huge, perhaps Nobel Prize-worthy breakthroughs. In both cases, within two years, they were replicated by university students in science competitions.

So we now have a situation where two years earlier there's one person in the world who could do it, or no one, then one person, and then within a couple of years perhaps tens of thousands of people who could do it, soon millions. That pool of people will eventually include people like those in the Aum Shinrikyo cult, responsible for the sarin gas attack in the Tokyo subway, one of whose active goals was to destroy everyone in the world. Once enough people can do these things and make engineered pathogens, you'll get someone with this terrible but mercifully rare motivation, or perhaps even just a country like North Korea that wants a kind of blackmail policy to make sure no one ever invades. That's why I'm worried about this: these rapid advances are empowering us to make really terrible weapons.

Lucas Perry: All right, so wrapping things up here: how do we safeguard the potential of humanity and Earth-originating intelligent life? You give high-level strategy, policy, and individual-level advice, all contextualized within this grand plan for humanity: that we reach existential security by getting to a place where existential risk is decreasing every century; that we then enter a period of long reflection to contemplate and debate what is good and how we might explore the universe and optimize it to express that good; and that we then execute that and achieve our potential. So again, how do we achieve all this? How do we mitigate x-risk and safeguard the potential of humanity?

Toby Ord: That’s an easy question to end on. So what I tried to do in the book is to try to treat this at a whole lot of different levels. You kind of refer to the most abstract level to some extent, the point of that abstract level is to show that we don’t need to get ultimate success right now, we don’t need to solve everything, we don’t need to find out what the fundamental nature of goodness is, and what worlds would be the best. We just need to make sure we don’t end up in the ones which are clearly among the worst.

The point of looking further onwards with the strategy is just to see that we can set some things aside for later. Our task now is to reach what I call existential security, and that involves an idea that will be familiar to many people who think about existential risk: looking at particular risks, working out how to manage them, and avoiding falling victim to them. Perhaps by being more careful with technology development, perhaps by creating protective technologies: for example, better biosurveillance systems to detect whether bioweapons have been released into the environment, so that we could contain them much more quickly, or, say, better work on alignment in AI research.

But it also involves not just fighting fires, but trying to become the kind of society where we don’t keep lighting these fires. I don’t mean that we don’t develop the technologies, but that we build in the responsibility for making sure that they do not develop into existential risks as part of the cost of doing business. We want to get the fruits of all of these technologies, both for the long-term and also for the short-term, but we need to be aware that there’s this shadow cost when we develop new things, and we blaze forward with technology. There’s shadow cost in terms of risk, and that’s not normally priced in. We just kind of ignore that, but eventually it will come due. If we keep developing things that produce these risks, eventually, it’s going to get us.

So we need to develop our wisdom, both in terms of changing our common-sense conception of morality, to take the long-term future and our debts to our ancestors seriously, and in terms of the international institutions that help us avoid some of these tragedies of the commons: finding the cases where we'd all be prepared to pay the cost to get the security if everyone else were doing it too, but aren't prepared to do it unilaterally. We need to work out mechanisms where we can all go into it together.

There are questions there in terms of policy. We need more policy-minded people within the science and technology space, people with an eye to the governance of their own technologies. This can be done within professional societies. But we also need more technology-minded people in the policy space. We often bemoan the fact that a lot of people in government don't really know how the internet or various technologies work, but part of the problem is that the people who do know about these things don't go into government. It's not just that you can blame the people in government for not knowing about your field; some of the people who know about the field should actually work in policy.

So I think we need to build that bridge from both sides, and I suggest a lot of particular policy things we could do. A good example of how concrete and simple it can get is renewing the New START disarmament treaty. This is due to expire next year, and as far as I understand, the U.S. and Russian governments don't have plans to renew it, which is crazy, because it's one of the things most responsible for nuclear disarmament. Making sure we sign that treaty again is a very actionable point that people can motivate around.

And I think that there’s stuff for everyone to do. We may think that existential risk is too abstract and can’t really motivate people in the way that some other causes can, but I think that would be a mistake. I’m trying to sketch a vision of it in this book that I think can have a larger movement coalesce around it and I think that if we look back a bit when it came to nuclear war, the largest protest in America’s history at that time was against nuclear weapons in Central Park in New York and it was on the grounds that this could be the end of humanity. And that the largest movement at the moment, in terms of standing up for a cause is on climate change and it’s motivated by exactly these ideas about irrevocable destruction of our heritage. It really can motivate people if it’s expressed the right way. And so that actually fills me with hope that things can change.

And similarly, when I think about ethics: in the 1950s, there was almost no consideration of the environment within our conception of ethics. It was considered totally outside the domain of morality and not really considered much at all. The same with animal welfare; it was scarcely considered an ethical question at all. Now, these are both key things that people are taught in their moral education in school. And we have entire ministries for the environment; within 10 years of Silent Spring coming out, I think all but one English-speaking country had a cabinet-level position on the environment.

So I think we really can have big changes in our ethical perspective, but we need to start an expansive conversation about this and start unifying these things together. Not to be just the anti-nuclear movement or the climate movement, each fighting a particular fire, but to get out ahead of these things preemptively by expanding to this general conception of existential risk and safeguarding humanity's long-term potential. I'm optimistic that we can do that.

That’s why I think my best guess is that there’s a one in six chance that we don’t make it through this Century, but the other way around, I’m saying there’s a five in six chance that I think we do make it through. If we really played our cards right, we could make it a 99% chance that we make it through this Century. We’re not hostages to fortune. We humans get to decide what the future of humanity will be like. There’s not much risk from external forces that we can’t deal with such as the asteroids. Most of the risk is of our own doing and we can’t just sit here and bemoan the fact we’re in some difficult prisoner’s dilemma with ourselves. We need to get out and solve these things and I think we can.

Lucas Perry: Yeah. This point about moving from the particular motivation and excitement around climate change and nuclear weapons to a broader civilizational concern with existential risk seems to be a crucial step in developing the kind of wisdom we talked about earlier. So yeah, thank you so much for coming on, and thanks for your contribution to the field of existential risk with this book. It's really wonderful, and I recommend listeners read it. If listeners are interested, where's the best place to pick it up, and how can they follow you?

Toby Ord: You could check out my website at tobyord.com. You could follow me on Twitter @tobyordoxford or I think the best thing is probably to find out more about the book at theprecipice.com. On that website, we also have links as to where you can buy it in your country, including at independent bookstores and so forth.

Lucas Perry: All right, wonderful. Thanks again, for coming on and also for writing this book. I think that it’s really important for helping to shape the conversation in the world and understanding around this issue and I hope we can keep nailing down the right arguments and helping to motivate people to care about these things. So yeah, thanks again for coming on.

Toby Ord: Well, thank you. It’s been great to be here.

FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O’Keefe

As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, or will it become increasingly concentrated in the hands of the few who wield AI technologies? Cullen O’Keefe joins us on this episode of the FLI Podcast for a conversation about the Windfall Clause, a mechanism that attempts to ensure the abundance and wealth created by transformative AI benefits humanity globally.

Topics discussed in this episode include:

  • What the Windfall Clause is and how it might function
  • The need for such a mechanism given AGI generated economic windfall
  • Problems the Windfall Clause would help to remedy 
  • The mechanism for distributing windfall profit and the function for defining such profit
  • The legal permissibility of the Windfall Clause 
  • Objections and alternatives to the Windfall Clause

Timestamps: 

0:00 Intro

2:13 What is the Windfall Clause? 

4:51 Why do we need a Windfall Clause? 

06:01 When we might reach windfall profit and what that profit looks like

08:01 Motivations for the Windfall Clause and its ability to help with job loss

11:51 How the Windfall Clause improves allocation of economic windfall 

16:22 The Windfall Clause assisting in a smooth transition to advanced AI systems

18:45 The Windfall Clause as assisting with general norm setting

20:26 The Windfall Clause as serving AI firms by generating goodwill, improving employee relations, and reducing political risk

23:02 The mechanism for distributing windfall profit and desiderata for guiding its formation

25:03 The windfall function and desiderata for guiding its formation

26:56 How the Windfall Clause is different from being a new taxation scheme

30:20 Developing the mechanism for distributing the windfall 

32:56 The legal permissibility of the Windfall Clause in the United States

40:57 The legal permissibility of the Windfall Clause in China and the Cayman Islands

43:28 Historical precedents for the Windfall Clause

44:45 Objections to the Windfall Clause

57:54 Alternatives to the Windfall Clause

01:02:51 Final thoughts

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s conversation is with Cullen O’Keefe about a recent report he was the lead author on called The Windfall Clause: Distributing the Benefits of AI for the Common Good. For some quick background, the agricultural and industrial revolutions unlocked new degrees and kinds of abundance, and so too should the intelligence revolution currently underway. Developing powerful forms of AI will likely unlock levels of abundance never before seen, and this comes with the opportunity of using such wealth in service of the common good of all humanity and life on Earth but also with the risks of increasingly concentrated power and resources in the hands of the few who wield AI technologies. This conversation is about one possible mechanism, the Windfall Clause, which attempts to ensure that the abundance and wealth likely to be created by transformative AI systems benefits humanity globally.

For those not familiar with Cullen, Cullen is a policy researcher interested in improving the governance of artificial intelligence using the principles of Effective Altruism.  He currently works as a Research Scientist in Policy at OpenAI and is also a Research Affiliate with the Centre for the Governance of AI at the Future of Humanity Institute.

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate. You can also follow us on your preferred listening platform, like on Apple Podcasts or Spotify, by searching for us directly or following the links on the page for this podcast found in the description.

And with that, here is Cullen O’Keefe on the Windfall Clause.

We’re here today to discuss this recent paper, that you were the lead author on called the Windfall Clause: Distributing the Benefits of AI for the Common Good. Now, there’s a lot there in the title, so we can start of pretty simply here with, what is the Windfall Clause and how does it serve the mission of distributing the benefits of AI for the common good?

Cullen O’Keefe: So the Windfall Clause is a contractual commitment AI developers can make, that basically stipulates that if they achieve windfall profits from AI, that they will donate some percentage of that to causes that benefit everyone.

Lucas Perry: What does it mean to achieve windfall profits?

Cullen O’Keefe: The answer that we give is that when a firm’s profits grow in excess of 1% of gross world product, which is just the sum of all countries’ GDPs, then that firm has hit windfall profits. We use this slightly unusual measurement of profits as a percentage of gross world product to convey that what’s relevant here is not the absolute size of profits, but their size relative to the global economy.
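As a rough illustration of that definition (a hypothetical sketch; the function name and the gross-world-product figure are my own assumptions, not from the report), the windfall test compares a firm's profits to a fixed share of gross world product:

```python
def hit_windfall(firm_profit: float, gross_world_product: float,
                 threshold: float = 0.01) -> bool:
    """True if a firm's profits are in excess of the windfall threshold,
    defined as a fraction (default 1%) of gross world product."""
    return firm_profit > threshold * gross_world_product

# Illustrative figures only: take GWP as roughly $100 trillion.
gwp = 100e12
print(hit_windfall(firm_profit=500e9, gross_world_product=gwp))  # $0.5T: False
print(hit_windfall(firm_profit=2e12, gross_world_product=gwp))   # $2T: True
```

The point of tying the threshold to GWP rather than an absolute dollar figure is that the trigger scales with the global economy, so the clause stays meaningful regardless of when windfall profits arrive.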

Lucas Perry: Right. And so an important background framing and assumption here seems to be the credence one may have in transformative AI, artificial general intelligence, or superintelligence creating previously unattainable levels of wealth, value, and prosperity. I believe that Nick Bostrom’s Superintelligence in particular articulates the common good principle: that superintelligence or AGI should be created in the service and pursuit of the common good of all of humanity and life on Earth. Is there anything you could add about the background and inspiration behind developing the Windfall Clause?

Cullen O’Keefe: Yeah. That’s exactly right. The phrase Windfall Clause actually comes from Bostrom’s book. Basically, the idea was something that people inside of FHI had been excited about for a while, but really hadn’t done anything with because of some legal uncertainties; basically, the fiduciary duty question that I examine in the third section of the report. When I was an intern there in the summer of 2018, I was asked to do some legal research on this, and ran away with it from there. My legal research pretty convincingly showed that it should be legal, as a matter of corporate law, for a corporation to enter into such a contract. In fact, I don’t think it’s a particularly hard case; it looks like things that corporations do a lot already. Some of the bigger questions were around the implications and design of the Windfall Clause, which are also addressed in the report.

Lucas Perry: So, we have this common good principle, which serves as the moral and ethical foundation, and the Windfall Clause, it seems, is an attempt at a particular policy solution for AGI and superintelligence serving the common good. With this background, could you expand a little more on why we need a Windfall Clause?

Cullen O’Keefe: I guess I wouldn’t say that we need a Windfall Clause; the Windfall Clause is one mechanism that could solve some of these problems. The primary way in which cutting-edge AI is currently being developed is in private companies, and the way that private companies are structured is perhaps not maximally conducive to the common good principle. This is not due to corporate greed or anything like that; it’s more just a function of the role of corporations in our society, which is that they’re primarily vehicles for generating returns to investors. One might think that the tools we currently have for taking some of the returns generated for investors and making sure they’re distributed in a more equitable and fair way are inadequate in the face of AGI. That’s the motivation for the Windfall Clause.

Lucas Perry: Maybe if you could speak a little bit to the surveys of researchers’ credences and estimates about when we might get certain kinds of AI. And then what windfall in the context of an AGI world actually means.

Cullen O’Keefe: The surveys of AGI timelines, I think this is an area with high uncertainty. We cite Katja Grace’s survey of AI experts, which is a few years old at this point. I believe that the median timeline that AI experts gave in that was somewhere around 2060, of attaining AGI as defined in a specific way by that paper. I don’t have opinions on whether that timeline is realistic or unrealistic. We just take it as a baseline, as the best specific timeline that has at least some evidence behind it. And what was the second question?

Lucas Perry: What degrees of wealth might be brought about via transformative AI?

Cullen O’Keefe: The short and unsatisfying answer to this, is that we don’t really know. I think that the amount of economic literature really focusing on AGI in particular is pretty minimal. Some more research on this would be really valuable. A company earning profits that are defined as windfall via the report would be pretty unprecedented in history, so it’s a very hard situation to imagine. Forecasts about the way that AI will contribute to growth are pretty variable. I think we don’t really have a good idea of what that might mean. And I think especially because the interface between economists and people thinking about AGI has been pretty minimal. A lot of the thinking has been more focused on more mainstream issues. If the strongest version of AGI were to come, the economic gains could be pretty huge. There’s a lot on the line in that circumstance.

Part of what motivated the Windfall Clause, is trying to think of mechanisms that could withstand this uncertainty about what the actual economics of AGI will be like. And that’s kind of what the contingent commitment and progressively scaling commitment of the Windfall Clause is supposed to accomplish.

Lucas Perry: All right. So, now I’m going to explore here some of these other motivations that you’ve written in your report. There is the need to address loss of job opportunities. The need to improve the allocation of economic windfall, which if we didn’t do anything right now, there would actually be no way of doing that other than whatever system of taxes we would have around that time. There’s also this need to smooth the transition to advanced AI. And then there is this general norm setting strategy here, which I guess is an attempt to imbue and instantiate a kind of benevolent ethics based on the common good principle. Let’s start off by hitting on addressing the loss of job opportunities. How might transformative AI lead to the loss of job opportunities and how does the Windfall Clause help to remedy that?

Cullen O’Keefe: So I want to start off with a couple of caveats. So number one, I’m not an economist. Second is, I’m very wary of promoting Luddite views. It’s definitely true that in the past, technological innovation has been pretty universally positive in the long run, notwithstanding short term problems with transitions. So, it’s definitely by no means inevitable that advances in AI will lead to joblessness or decreased earnings. That said, I do find it pretty hard to imagine a scenario in which we achieve very general purpose AI systems, like AGI, and there are still bountiful opportunities for human employment. I think there might be some jobs which have human-only employment or something like that. It’s kind of unclear, in an economy with AGI or something else resembling it, why there would be a demand for humans. There might be jobs I guess, in which people are inherently uncomfortable having non-humans. Good examples of this would be priests or clergy, probably most religions will not want to automate their clergy.

I’m not a theologian, so I can’t speak to the proper theology of that, but that’s just my intuition. People also mentioned things like psychiatrists, counselors, teachers, child care, stuff like that. That doesn’t look as automatable. And then the human meaning aspect of this, the philosopher John Danaher recently released a book called Automation and Utopia, talking about how for most people work is the primary source of meaning. It’s certainly what they do with the great plurality of their waking hours. And I think for people like me and you, we’re lucky enough to like our jobs a lot, but for many people work is mostly a source of drudgery. Often unpleasant, unsafe, etcetera. But if we find ourselves in a world in which work is largely automated, not only will we have to deal with the economic issues relating to how people who can no longer offer skills for compensation will feed themselves and their families, but also how they’ll find meaning in life.

Lucas Perry: Right. If the category and meaning of jobs changes or is gone altogether, the Windfall Clause is also there to help meet fundamental universal basic human needs, and then also can potentially have some impact on this question of value and meaning. If the Windfall Clause allows you to have access to hobbies and nice vacations and other things that give human beings meaning.

Cullen O’Keefe: Yeah. I would hope so. It’s not a problem that we explicitly address in the paper. I think this is kind of in the broader category of what to actually do with the windfall, once it’s donated. You can think of this as like the bottom of the funnel. Whereas the Windfall Clause report is more focused at the top of the funnel, getting companies to actually commit to such a thing. And I think there’s a huge rich area of work to think about: what do we actually do with the surplus from AGI once it manifests, assuming that we can get it into the coffers of a public-minded organization? It’s something that I’m lucky enough to think about in my current job at OpenAI. So yeah, making sure that both material needs and psychological higher needs are taken care of. That’s not something I have great answers for yet.

Lucas Perry: So, moving on here to the second point. We also need a Windfall Clause or function or mechanism, in order to improve the allocation of economic windfall. So, could you explain that one?

Cullen O’Keefe: You can imagine a world in which employment kind of looks the same as it is today. Most people have jobs, but a lot of the gains are going to a very small group of people, namely shareholders. I think this is still a pretty sub-optimal world. There are diminishing returns on money for happiness. So all else equal and ignoring incentive effects, progressively distributing money seems better than not. Primarily firms looking to develop the AI are based in a small set of countries. In fact, within those countries, the group of people who are heavily invested in those companies is even smaller. And so in a world, even where employment opportunities for the masses are pretty normal, we could still expect to see pretty concentrated accrual of benefits, both within nations, but I think also very importantly, across nations. This seems pretty important to address and the Windfall Clause aims to do just that.

Lucas Perry: A bit of speculation here, but we could have had a kind of Windfall Clause for the industrial revolution, which probably would have made much of the world better off and there wouldn’t be such unequal concentrations of wealth in the present world.

Cullen O’Keefe: Yeah. I think that’s right. I think there’s sort of a Rawlsian or Harsanyian motivation there, that if we didn’t know whether we would be in an industrial country or a country that is later to develop, we would probably want to set up a system that has a more equal distribution of economic gains than the one that we have today.

Lucas Perry: Yeah. By Rawlsian, you meant the Rawls’ veil of ignorance, and then what was the other one you said?

Cullen O’Keefe: Harsanyi is another philosopher who is associated with the veil of ignorance idea and he argues, I think pretty forcefully, that actually the agreement that you would come to behind the veil of ignorance, is one that maximizes expected utility, just due to classic axioms of rationality. What you would actually want to do is maximize expected utility, whereas John Rawls has this idea that you would want to maximize the lot of the worst off, which Harsanyi argues doesn’t really follow from the veil of ignorance, and decision theoretic best practices.
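The contrast Cullen draws can be sketched with a toy calculation. The numbers below are invented purely for illustration; they come from neither Rawls, Harsanyi, nor the report:

```python
# Toy illustration (invented numbers): two candidate distribution
# schemes, each listing the payoff for every position you might
# occupy behind the veil of ignorance.
schemes = {
    "equal_split": [5, 5, 5, 5],     # everyone gets the same
    "growth_tilted": [2, 6, 8, 12],  # higher average, worse floor
}

# Rawls' maximin: pick the scheme whose worst-off position is best.
rawls_choice = max(schemes, key=lambda s: min(schemes[s]))

# Harsanyi: with an equal chance of occupying any position, pick the
# scheme that maximizes expected utility, i.e. the average payoff.
harsanyi_choice = max(schemes, key=lambda s: sum(schemes[s]) / len(schemes[s]))

print(rawls_choice)     # "equal_split": a guaranteed floor of 5 beats a possible 2
print(harsanyi_choice)  # "growth_tilted": an average of 7 beats an average of 5
```

The two rules agree whenever one scheme dominates outright; they come apart, as here, when a higher average payoff is bought at the expense of the worst-off position.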

Lucas Perry: I think that the veil of ignorance, for listeners who don’t know what that is, is the idea that you imagine yourself not knowing who you are going to be born as in the world. You should make ethical and political and moral and social systems with that view in mind. And if you do that, you will pretty honestly and wholesomely come up with something, to the best of your ability, that is good for everyone. From behind that veil of ignorance, not knowing who you might be in the world, you can produce good ethical systems. Now this is relevant to the Windfall Clause, because going through your paper, there’s the tension between arguing that this is actually something that is legally permissible and that institutions and companies would want to adopt, which is in clear tension with maximizing profits for shareholders and the people with wealth and power in those companies. And so there’s this fundamental tension behind the Windfall Clause, between the incentives of those with power to maintain and hold on to that power and wealth, and the very strong and important ethical and normative views and compunctions that say that this ought to be distributed to the welfare and wellbeing of all sentient beings across the planet.

Cullen O’Keefe: I think that’s exactly right. I think part of why I and others at the Future of Humanity Institute were interested in this project, is that we know a lot of people working in AI at all levels. And I think a lot of them do want to do the genuinely good thing. But feel the constraints of economics but also of fiduciary duties. We didn’t have any particular insights into that with this piece, but I think part of the motivation is just that we want to put resources out there for any socially conscious AI developers to say, “We want to make this commitment and we feel very legally safe doing so,” for the reasons that I lay out.

It’s a separate question whether it’s actually in their economic interest to do that or not. But at least we think they have the legal power to do so.

Lucas Perry: Okay. So maybe we can get into and explore the ethical aspect of this more. I think we’re very lucky to have people like you and your fellow colleagues who have the ethical compunction to follow through and be committed to something like this. But for the people that don’t have that, I’m interested in discussing more later about what to do with them. So, in terms of more of the motivations here, the Windfall Clause is also motivated by this need for a smooth transition to transformative AI or AGI or superintelligence or advanced AI. So what does that mean?

Cullen O’Keefe: As I mentioned, it looks like economic growth from AI will probably be a good thing if we manage to avoid existential and catastrophic risks. That’s almost tautological I suppose. But just as in the industrial revolution where you had a huge spur of economic growth, but also a lot of turbulence. So part of the idea of the Windfall Clause is basically to funnel some of that growth into a sort of insurance scheme that can help make that transition smoother. An un-smooth transition would be something like a lot of countries are worried they’re not going to see any appreciable benefit from AI and indeed, might lose out a lot because a lot of their industries would be offshored or re-shored and a lot of their people would no longer be economically competitive for jobs. So, that’s the kind of stability that I think we’re worried about. And the Windfall Clause is basically just a way of saying, you’re all going to gain significantly from this advance. Everyone has a stake in making this transition go well.

Lucas Perry: Right. So I mean there’s a spectrum here and on one end of the spectrum there is say a private AI lab or company or actor, who is able to reach AGI or transformative AI first and who can muster or occupy some significant portion of the world GDP. That could be anywhere from one to 99 percent. And there could or could not be mechanisms in place for distributing that to the citizens of the globe. And so one can imagine, as power is increasingly concentrated in the hands of the few, that there could be quite a massive amount of civil unrest and problems. It could create very significant turbulence in the world, right?

Cullen O’Keefe: Yeah. Exactly. And it’s our hypothesis that having credible mechanisms ex-ante to make sure that approximately everyone gains from this, will make people and countries less likely to take destabilizing actions. It’s also a public good of sorts. You would expect that it would be in everyone’s interest for this to happen, but it’s never individually rational to commit that much to making it happen. Which is why it’s a traditional role for governments and for philanthropy to provide those sort of public goods.

Lucas Perry: So that last point here then on the motivations for why we need a Windfall Clause, would be general norm setting. So what do you have to say about general norm setting?

Cullen O’Keefe: This one is definitely a little more vague than some of the others. But if you think about what type of organization you would like to see develop AGI, it seems like one that has some legal commitment to sharing those benefits broadly is probably correlated with good outcomes. And in that sense, it’s useful to be able to distinguish between organizations that are credibly committed to that sort of benefit, from ones that say they want that sort of broad benefit but are not necessarily committed to making it happen. And so in the Windfall Clause report, we are basically trying to say, it’s very important to take norms about the development of AI seriously. One of the norms that we’re trying to develop is the common good principle. And even better is when you can develop those norms through high cost or high signal value mechanisms. And if we’re right that a Windfall Clause can be made binding, then the Windfall Clause is exactly one of them. It’s a pretty credible way for an AI developer to demonstrate their commitment to the common good principle and also show that they’re worthy of taking on this huge task of developing AGI.

The Windfall Clause makes performance of, or adherence to, the common good principle a testable hypothesis. It sets kind of a baseline against which commitments to the common good principle can be measured.

Lucas Perry: Now there are also here in your paper, firm motivations. So, incentives for adopting a Windfall Clause from the perspective of AI labs or AI companies, or private institutions which may develop AGI or transformative AI. And your three points here for firm motivations are that it can generate general goodwill. It can improve employee relations and it could reduce political risk. Could you hit on each of these here for why firms might be willing to adopt the Windfall Clause?

Cullen O’Keefe: Yeah. So just as a general note, we do see private corporations giving money to charity and doing other pro-social actions that are beyond their legal obligations, so nothing here is particularly new. Instead, it’s just applying traditional explanations for why companies engage in, what’s sometimes called corporate social responsibility or CSR. And see whether that’s a plausible explanation for why they might be amenable to a Windfall Clause. The first one that we mentioned in the report, is just generating general goodwill, and I think it’s plausible that companies will want to sign a Windfall Clause because it brings some sort of reputational benefit with consumers or other intermediary businesses.

The second one we talk about is managing employee relationships. In general, we see that tech employees have had a lot of power to shape the behavior of their employers. Fellow FLI podcast guest Haydn Belfield just wrote a great paper on this in AI specifically. Tech talent is in very high demand and therefore they have a lot of bargaining power over what their firms do, and I think it’s potentially very promising for tech employees to lobby for commitments like the Windfall Clause.

The third is what’s termed, in a lot of legal and investment circles, political risk, so that’s basically the risk of governments or activists doing things that hurt you, such as tighter regulation or expropriation, taxation, things like that. And corporate social responsibility, including philanthropy, is just a very common way for firms to manage that. And could be the case for AI firms as well.

Lucas Perry: How strong do you think these motivations listed here are, and what do you think will be the main things that drive firms or institutions or organizations to adopt the Windfall Clause?

Cullen O’Keefe: I think it varies from firm to firm. I think a big one that’s not listed here is how management likes the idea of a Windfall Clause. Obviously, they’re the ones ultimately making the decisions, so that makes sense. I think employee buy-in and enthusiasm about the Windfall Clause or similar ideas will ultimately be a pretty big determinant of whether this actually gets implemented. That’s why I would love to hear and see engagement around this topic from people in the technology industry.

Lucas Perry: Something that we haven’t talked about yet is the distribution mechanism. And in your paper, you come up with desiderata and important considerations for an effective and successful distribution mechanism: philanthropic effectiveness, security from improper influences, political legitimacy and buy-in from AI labs. So, these are just guiding principles for helping to develop the mechanism for distribution. Could you comment on what the mechanism for distribution is or could be and how these desiderata will guide the formation of that mechanism?

Cullen O’Keefe: A lot of this thinking is guided by a few different things. One is just involvement in the effective altruism community. I, as a member of that community, spend a lot of time thinking about how to make philanthropy work well. That said, I think that the potential scale of the Windfall Clause requires thinking about factors other than effectiveness, in the way that effective altruists think of that. Just because the scale of potential resources that you’re dealing with here begins to look less and less like traditional philanthropy and more and more like a pseudo- or para-governmental institution. And so that’s why I think things like accountability and legitimacy become extra important in the Windfall Clause context. And then firm buy-in I mentioned, just because part of the actual process of negotiating an eventual Windfall Clause would presumably be coming up with a distribution mechanism that advances some of the firm’s objectives of getting positive publicity or goodwill from agreeing to the Windfall Clause, both with their consumers and also with employees and governments.

And so they’re key stakeholders in coming up with that process as well. This all happens in the backdrop of a lot of popular discussion about the role of philanthropy in society, such as recent criticism of mega-philanthropy. I take those criticisms pretty seriously and want to come up with a Windfall Clause distribution mechanism that manages those better than current philanthropy. It’s a big task in itself and one that needs to be taken pretty seriously.

Lucas Perry: Is the windfall function synonymous with the windfall distribution mechanism?

Cullen O’Keefe: No. So, the windfall function is the mathematical function that determines how much money signatories to the Windfall Clause are obligated to give.

Lucas Perry: So, the windfall function will be part of the windfall contract, and the windfall distribution mechanism is the vehicle or means or the institution by which that output of the function is distributed?

Cullen O’Keefe: Yeah. That’s exactly right. Again, I like to think of this as top of the funnel, bottom of the funnel. So the windfall function is kind of the top of the funnel. It defines how much money has to go in to the Windfall Clause system and then the bottom of the funnel is like the output, what actually gets done with the windfall, to advance the goals of the Windfall Clause.

Lucas Perry: Okay. And so here you have some desiderata for this function, in particular transparency, scale sensitivity, adequacy, pre-windfall commitment, incentive alignment and competitiveness. Are there any here that you want to comment on with regard to the windfall function?

Cullen O’Keefe: Sure. If you look at the windfall function, it looks kind of like a progressive tax system. You fall in to some bracket and the bracket that you’re in determines the marginal percentage of money that you owe. So, in a normal income tax scheme, the bracket is determined by your gross income. In the Windfall Clause scheme, the bracket is determined by a slightly modified thing, which is profits as a percent of gross world product, which we started off talking about.

We went back and forth for a few different ways that this could look, but we ultimately decided upon a simpler windfall function that looks much like an income tax scheme, because we thought it was pretty transparent and easy to understand. And for a project as potentially important as the Windfall Clause, we thought it was pretty important that people be able to understand the contract that’s being negotiated, not just the signatories.
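Mechanically, such a bracket scheme can be sketched in a few lines of code. The thresholds and marginal rates below are invented for illustration; they are not the ones proposed in the report:

```python
# Sketch of a progressive windfall function with hypothetical brackets.
# Profits are measured as a share of gross world product (GWP), and
# each bracket applies a marginal rate, like an income tax schedule.
BRACKETS = [
    (0.001, 0.00),  # below 0.1% of GWP: no obligation
    (0.01,  0.20),  # 0.1%-1% of GWP: 20% marginal rate
    (0.10,  0.40),  # 1%-10% of GWP: 40% marginal rate
    (1.00,  0.60),  # above 10% of GWP: 60% marginal rate
]

def windfall_obligation(profits: float, gwp: float) -> float:
    """Amount owed under the clause for one period, in dollars."""
    share = profits / gwp
    owed = 0.0
    lower = 0.0
    for upper, rate in BRACKETS:
        if share <= lower:
            break
        # Only the slice of profits falling inside this bracket is
        # charged at this bracket's rate.
        taxed_share = min(share, upper) - lower
        owed += taxed_share * gwp * rate
        lower = upper
    return owed

# A firm earning 2% of a $100T gross world product:
print(windfall_obligation(profits=2e12, gwp=1e14))  # about $580B at these made-up rates
```

Because the rates are marginal, a firm just over a threshold owes only slightly more than one just under it, which is part of what makes the scheme transparent and incentive-compatible in the way the report describes.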

Lucas Perry: Okay. And you’re bringing up this point about taxes. One thing that someone might ask is, “Why do we need a whole Windfall Clause when we could just have some kind of tax on benefits accrued from AI?” But the very important feature to be mindful of here, about the Windfall Clause, is that it does something that taxing cannot do, which is redistribute funding from tech-heavy first world countries to people around the world, rather than just to the government of the country able to tax them. So that also seems to be a very important consideration here for why the Windfall Clause is important, rather than just some new tax scheme.

Cullen O’Keefe: Yeah. Absolutely. And in talking to people about the Windfall Clause, this is one of the top concerns that comes up. So, you’re right to emphasize it. I agree that the potential for international distribution is one of the main reasons that I personally am more excited about the Windfall Clause than standard corporate taxation. Other reasons are just that it seems more tractable to negotiate this individually with firms. The number of firms potentially in a position of developing advanced AI is pretty small now and might continue to be small for the foreseeable future. So the number of potential entities that you have to persuade to agree to this might be pretty small.

There’s also the possibility that we mention, but don’t propose an exact mechanism for in the paper of allowing taxation to supersede the Windfall Clause. So, if a government came up with a better taxation scheme, you might either release the signatories from the Windfall Clause or just have the windfall function compensate for that by reducing or eliminating total obligation. Of course, it gets tricky because then you would have to decide which types of taxes would you do that for, if you want to maintain the international motivations of the Windfall Clause. And you would also have to kind of figure out what the optimal tax rate is, which is obviously no small task. So those are definitely complicated questions, but at least in theory, there’s the possibility for accommodating those sorts of ex-post taxation efforts in a way that doesn’t burden firms too much.

Lucas Perry: Do you have any more insights or positives or negatives to comment on here about the windfall function? It seems like in the paper it is, as you mention, open for a lot more research. Do you have directions for further investigation of the windfall function?

Cullen O’Keefe: Yeah. It’s one of the things that we lead out with, and it’s actually as you’re saying: this is primarily supposed to be illustrative and not the right windfall function. I’d be very surprised if this was ultimately the right way to do this. Just because the possibility space here is so big and we’ve explored so little of it. One of the ideas that I am particularly excited about, and think more and more might ultimately be the right thing to do, is instead of having a profits-based trigger for the windfall function, instead having a market-cap-based trigger. And there are just basic accounting reasons why I’m more excited about this. Tracking profits is not as straightforward as it seems, because firms can do stuff with their money. They can spend more of it and reallocate it in certain ways. Whereas it’s much harder and they have less incentive to downward manipulate their stock price or market capitalization. So I’d be interested in potentially coming up with more value-based approaches to the windfall function rather than our current one, which is based on profits.

That said, there are a ton of other variables that you could tweak here, and I would be very excited to work with people or see other proposals of what this could look like.

Lucas Perry: All right. So this is an open question about how the windfall function will exactly look. Can you provide any more clarity on the mechanism for distribution, keeping in mind here the difficulty of creating an effective way of distributing the windfall, which you list as the issues of effectiveness, accountability, legitimacy and firm buy-in?

Cullen O’Keefe: One concrete idea that I actually worked closely with FLI on, specifically with Anthony Aguirre and Jared Brown, was the windfall trust idea, which is basically to create a trust or kind of pseudo-trust that makes every person in the world, or as many people as we can reach, equal beneficiaries of a trust. So, in this structure, which is on page 41 of the report if people are interested in seeing it. It’s pretty simple. The idea is that the successful developer would satisfy their obligations by paying money to a body called the Windfall Trust. For people who don’t know what a trust is, it’s a specific type of legal entity. And then all individuals would be either actual or potential beneficiaries of the Windfall Trust, and would receive equal funding flows from that. And could even receive equal input in to how the trust is managed, depending on how the trust was set up.

Trusts are also exciting because they are very flexible mechanisms that you can arrange the governance of in many different ways. And then to make this more manageable, obviously a single trust with eight billion beneficiaries seems hard to manage, so you could have a single trust for every 100,000 people or whatever number you think is manageable. I’m kind of excited about that idea, I think it hits a lot of the desiderata pretty well and could be a way in which a lot of people could see benefit from the windfall.

Lucas Perry: Are there any ways of creating proto-windfall clauses or proto-windfall trusts to sort of test the idea before transformative AI comes on the scene?

Cullen O’Keefe: I would be very excited to do that. I guess one thing I should say is that OpenAI, where I currently work, has a structure called a capped-profit structure, which is similar in many ways to the Windfall Clause. Our structure is such that profits above a certain cap on what can be returned to investors go to a non-profit, which is the OpenAI non-profit, which then has to use those funds for charitable purposes. But I would be very excited to see new companies, and potentially companies aligned with the mission of the FLI podcast, experiment with structures like this. In the fourth section of the report, we talk all about different precedents that exist already, and some of these have different features that are close to the Windfall Clause. And I’d be interested in someone putting all those together for their start-up or their company and making a kind of pseudo-windfall clause.

Lucas Perry: Let’s get into the legal permissibility of the Windfall Clause. Now you said that this is actually one of the reasons why you first got into this: it got tabled because people were worried about the fiduciary responsibilities that companies would have. Let’s start by reflecting on whether or not this is legally permissible in America, and then think about China, because these are the two biggest AI players today.

Cullen O’Keefe: Yeah. There’s actually a slight wrinkle there that we might also have to talk about, the Cayman Islands. But we’ll get to that. I guess one interesting fact about the Windfall Clause report, is that it’s slightly weird that I’m the person that ended up writing this. You might think an economist should be the person writing this, since it deals so much with labor economics and inequality, etcetera, etcetera. And I’m not an economist by any means. The reason that I got swept up in this is because of the legal piece. So I’ll first give a quick crash course in corporate law, because I think it’s an area that not a lot of people understand and it’s also important for this.

Corporations are legal entities. They are managed by a board of directors for the benefit of the shareholders, who are the owners of the firm. And accordingly, since the directors have the responsibility of managing a thing which is owned in part by other people, they owe certain duties to the shareholders. These are known as fiduciary duties. The two primary ones are the duty of loyalty and the duty of care. The duty of loyalty, which we don’t really talk about a ton in this piece, is just the duty to manage the corporation for the benefit of the corporation itself, and not for the personal gain of the directors.

The duty of care is kind of what it sounds like, just the duty to take adequate care that the decisions made for the corporation by the board of directors will benefit the corporation. The reason that this is important for the purposes of a Windfall Clause and also for the endless speculation of corporate law professors and theorists, is when you engage in corporate philanthropy, it kind of looks like you’re doing something that is not for the benefit of the corporation. By definition, giving money to charity is primarily a philanthropic act or at least that’s kind of the prima facie case for why that might be a problem from the standpoint of corporate law. Because this is other people’s money largely, and the corporation is giving it away, seemingly not for the benefit of the corporation itself.

There actually hasn’t been that much case law (that is, actual court decisions) on this issue. I found some of them across the US. As a side note, we primarily talk about Delaware law, because Delaware is the state in which the plurality of American corporations are incorporated for historical reasons. Their corporate law is by far the most influential in the United States. So, even though you have this potential duty of care issue with making corporate donations, the standard by which directors are judged is the business judgment rule. Quoting from the American Law Institute, a summary of the business judgment rule is, “A director or officer who makes a business judgment in good faith, fulfills the duty of care if the director or officer, one, is not interested,” that means there is no conflict of interest, “In the subject of the business judgment. Two, is informed with respect to the business judgment to the extent that the director or officer reasonably believes to be appropriate under the circumstances. And three, rationally believes that the business judgment is in the best interests of the corporation.” So this is actually a pretty forgiving standard. It’s basically just a “use your best judgment” standard, which is why it’s very hard for shareholders to successfully make a case that a judgment was a violation of the business judgment rule. It’s very rare for such challenges to actually succeed.

So a number of cases have examined the relationship of the business judgment rule to corporate philanthropy. They have basically universally held that this is a permissible invocation, or permissible example, of the business judgment rule: there are all these potential benefits that philanthropy could give to the corporation, so corporate directors’ decisions to authorize corporate donations would generally be upheld under the business judgment rule, provided all these other conditions are met.

Lucas Perry: So these firm motivations that we touched on earlier were generating goodwill towards the company, improving employee relations and then reducing political risk I guess is also like having good faith with politicians who are, at the end of the day, hopefully being held accountable by their constituencies.

Cullen O’Keefe: Yeah, exactly. So these are all things that could plausibly, financially benefit the corporation in some form. So in this sense, corporate philanthropy looks less like a donation and more like an investment in the firm’s long term profitability, given all these soft factors like political support and employee relations. Another interesting wrinkle to this: if you read the case law of these corporate donation cases, they’re actually quite funny. The one case I’ll quote from would be Sullivan v. Hammer. A corporate director wanted to make a corporate donation to an art museum that had his name and basically served as his personal art collection, more or less. And the court said, this is still okay under the business judgment rule. So, that was a pretty shocking example of how lenient this standard is.

Lucas Perry: So then the synopsis version here is that the Windfall Clause is permissible in the United States, because philanthropy in the past has been seen as still being in line with fiduciary duties, and the Windfall Clause would do the same.

Cullen O’Keefe: Yeah, exactly. The one interesting wrinkle about the Windfall Clause that might distinguish it from most corporate philanthropy, though definitely not all, is that it has this potentially very high ex-post cost, even though its ex-ante cost might be quite low. So in a situation in which a firm actually has to pay out the Windfall Clause, it’s very, very costly to the firm. But the business judgment rule is actually supposed to protect these exact types of decisions, because the thing that courts don’t want to do is second-guess every single corporate decision with the benefit of hindsight. So instead, they just instruct people to look at the ex-ante cost benefit analysis, and defer to that, even if ex-post it turns out to have been a bad decision.

There’s an analogy that we draw to stock option compensation, which is very popular, where you give an employee a block of stock options that at the time is not very valuable, because it’s probably just in line with the current value of the stock. But ex-post it might be hugely valuable, and this is how a lot of early employees of companies get wildly rich, well beyond what they would have earned at fair market cash value ex-ante. That sort of ex-ante reasoning is really the important thing, not the fact that it could be worth a lot ex-post.

One of the interesting things about the Windfall Clause is that it is a contract through time, and potentially over a long time. A lot of contracts that we make are pretty short-term focused. But the Windfall Clause is an agreement now to do stuff if stuff happens in the future, potentially in the distant future, which is part of the way the windfall function is designed. It’s designed to be relevant over a long period of time, especially given the uncertainty that we started off talking about with AI timelines. The important thing that we talked about was the ex-ante cost, which means the cost to the firm in expected value right now. Which is basically the probability that this ever gets triggered, and if it does get triggered, how much will it be worth, all discounted by the time value of money, etcetera.
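To make that calculation concrete, here is a minimal sketch of the ex-ante reasoning described above. All of the numbers are hypothetical illustrations, not figures from the report; the point is just that a very large ex-post payout shrinks to a modest present-day expected cost once you multiply by the probability of the clause ever triggering and discount for the time value of money.

```python
# Illustrative ex-ante cost of a Windfall Clause commitment.
# ex-ante cost = P(trigger) * payout if triggered, discounted to present value.

def ex_ante_cost(p_trigger, payout, years_until_trigger, discount_rate):
    """Expected present value of the clause's obligation to the firm."""
    discount_factor = 1.0 / (1.0 + discount_rate) ** years_until_trigger
    return p_trigger * payout * discount_factor

# Hypothetical example: a 1% chance of owing $500B in 30 years,
# discounted at 5% per year.
cost_now = ex_ante_cost(0.01, 500e9, 30, 0.05)
print(round(cost_now / 1e9, 2))  # prints 1.16 (billions of dollars today)
```

So under these made-up assumptions, a half-trillion-dollar ex-post obligation costs the firm only on the order of a billion dollars in expectation today, which is the asymmetry the business judgment rule evaluates.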

One thing that I didn’t talk about is that there’s some language in some court cases about limiting the amount of permissible corporate philanthropy to a reasonable amount, which is obviously not a very helpful guide. But there’s a court case saying that this should be determined by looking to the charitable giving deduction, which is I believe about 10% right now.

Lucas Perry: So sorry, just to get the language correct: the ex-post cost is very high because after the fact you have to pay huge percentages of your profit?

Cullen O’Keefe: Yeah.

Lucas Perry: But it still remains feasible that a court might say that this violates fiduciary responsibilities, right?

Cullen O’Keefe: There’s always the possibility that a Delaware court would invent or apply new doctrine in application to this thing, that looks kind of weird from their perspective. I mean, this is a general question of how binding precedent is, which is an endless topic of conversation for lawyers. But if they were doing what I think they should do and just straight up applying precedent, I don’t see a particular reason why this would be decided differently than any of the other corporate philanthropy cases.

Lucas Perry: Okay. So, let’s talk a little bit now about the Cayman Islands and China.

Cullen O’Keefe: Yeah. So a number of significant Chinese tech companies are actually incorporated in the Cayman Islands. It’s not exactly clear to me why this is the case, but it is.

Lucas Perry: Isn’t it for hiding money off-shore?

Cullen O’Keefe: So I’m not sure if that’s why. I think even if taxation is a part of that, I think it also has to do with capital restrictions in China, and also they want to attract foreign investors which is hard if they’re incorporated in China. Investors might not trust Chinese corporate law very much. This is just my speculation right now, I don’t actually know the answer to that.

Lucas Perry: I guess the question then just is, what is the US and China relationship with the Cayman Islands? What is it used for? And then is the Windfall Clause permissible in China?

Cullen O’Keefe: Right. So, the Cayman Islands is where the big three Chinese tech firms, Alibaba, Baidu and Tencent, are incorporated. I’m not a Caymanian lawyer by any means, nor am I an expert in Chinese law, but basically from my outsider reading of this law, applying my general legal knowledge, it appears that similar principles of corporate law apply in the Cayman Islands, which is why it might be a popular spot for incorporation. They have a rule that looks like the business judgment rule. This is in footnote 120 if anyone wants to dig into it in the report. So, for the Caymanian corporations, it looks like it should be okay for the same reason. China, being a self-proclaimed socialist country, also has a pretty interesting corporate law that not only allows but appears to encourage firms to engage in corporate philanthropy. From the perspective of their law, at least, it looks potentially more friendly than even Delaware law, so a fortiori it should be permissible there.

That said, obviously there’s potential political reality to be considered there, especially also the influence of the Chinese government on state owned enterprises, so I don’t want to be naïve as to just thinking what the law says is what is actually politically feasible there. But all that caveating aside, as far as the law goes, the People’s Republic of China looks potentially promising for a Windfall Clause.

Lucas Perry: And that again matters, because China is currently second to the US in AI and is thus also potentially able to reach windfall via transformative AI in the future.

Cullen O’Keefe: Yeah. I think that’s the general consensus, is that after the United States, China seems to be the most likely place to develop AGI or transformative AI. You can listen to and read a lot of the work by my colleague Jeff Ding on this, who recently appeared on the 80,000 Hours podcast talking about China’s AI dream, and has a report by the same name from FHI that I would highly encourage everyone to read.

Lucas Perry: All right. Is it useful here to talk about historical precedents?

Cullen O’Keefe: Sure. I think one that’s potentially interesting is that a lot of sovereign nations have actually dealt with this problem of windfall governance before, typically natural resource rich states. So Norway is kind of the leading example of this. They had a ton of wealth from oil, and had to come up with a way of distributing that wealth in a fair way. They have a sovereign wealth fund as a result, as do a lot of countries, and it provides for all sorts of socially beneficial applications.

Google actually, when it IPO’d, gave one percent of its equity to its non-profit arm, the Google Foundation. So that’s actually significantly like the Windfall Clause in the sense that it gave a commitment that would grow in value as the firm’s prospects improved, and therefore had a low ex-ante cost but a potentially higher ex-post cost. Obviously, in personal philanthropy, a lot of people will be familiar with pledges like Founders Pledge or the Giving What We Can pledge, where people pledge a percentage of their personal income to charity. The Founders Pledge most resembles the Windfall Clause in this respect: people pledge a percentage of equity from their company upon exit or upon liquidity events, and in that sense, it looks a lot like a Windfall Clause.

Lucas Perry: All right. So let’s get in to objections, alternatives and limitations here. First objection to the Windfall Clause, would be that the Windfall Clause will never be triggered.

Cullen O’Keefe: That certainly might be true. There’s a lot of reasons why that might be true. So, one is that we could all just be very wrong about the promise of AI. Also AI development could unfold in some other ways. So it could be a non-profit or an academic institution or a government that develops windfall generating AI and no one else does. Or it could just be that the windfall from AI is spread out sufficiently over a large number of firms, such that no one firm earns windfall, but collectively the tech industry does or something. So, that’s all certainly true. I think that those are all scenarios worth investing in addressing. You could potentially modify the Windfall Clause to address some of those scenarios.

That said, I think there’s a significant, non-trivial possibility that such a windfall occurs in a way that would trigger a Windfall Clause, and if it does, it seems worth investing in solutions that could mitigate any potential downside to that or share the benefits equally. Part of the benefit of the Windfall Clause is that if nothing happens, it doesn’t impose any obligations. So, it’s quite low cost in that sense. From a philanthropic perspective, there’s a cost in setting this up and promoting the idea, etcetera, and those are definitely non-trivial costs. But the actual cost of signing the clause only manifests upon actually triggering it.

Lucas Perry: This next one is that firms will find a way to circumvent their commitments under the clause. So it could never trigger, because they could just keep moving money around in skillful ways such that the clause never ends up getting triggered. Some sub-points here are that firms will evade the clause by nominally assigning profits to subsidiary, parent or sibling corporations; that firms will evade the clause by paying out profits in dividends; and that firms will sell all windfall generating AI assets to a firm that is not bound by the clause. Any thoughts on these?

Cullen O’Keefe: First of all, a lot of these were raised by early commentators on the idea, and so I’m very thankful to those people for raising them. I think we probably haven’t exhausted the list of potential ways in which firms could evade their commitments, so in general I would want to come up with solutions that are not just patchwork solutions, but more like general incentive alignment solutions. That said, I think most of these problems are mitigable by careful contractual drafting, and then potentially also by switching to other forms of the Windfall Clause, like something based on firm share price. But still, I think there are probably a lot of ways to circumvent the clause in the kind of early form that we’ve proposed, and we would want to make sure that we’re pretty careful about drafting it and simulating potential ways that a signatory could try to wriggle out of its commitments.

Cullen O’Keefe: I think it’s also worth noting that a lot of those potential actions would be pretty clear violations of general legal obligations that signatories to a contract have. Or could be mitigated with pretty easy contractual clauses.

Lucas Perry: Right. The solution to these would be foreseeing them and beefing up the actual windfall contract to not allow for these methods of circumvention.

Cullen O’Keefe: Yeah.

Lucas Perry: So now this next one I think is quite interesting. No firm with a realistic chance of developing windfall generating AI would sign the clause. How would you respond to that?

Cullen O’Keefe: I mean, I think that’s certainly a possibility, and if that’s the case, then that’s the case. It seems like our ability to change that might be pretty limited. I would hope that most firms in a potential position to generate windfall would take that opportunity as also carrying with it a responsibility to follow the common good principle. And I think that a lot of people in those companies, both in leadership and rank and file employee positions, do take that seriously. We do also think that the Windfall Clause could bring non-trivial benefits, as we spent a lot of time talking about.

Lucas Perry: All right. The next one here is that, quote, “If the public benefits of the Windfall Clause are supposed to be large, that is inconsistent with stating that the cost to firms will be small enough that they would be willing to sign the clause.” This has a lot to do with the distinction between ex-ante and ex-post costs, and also with how there are probabilities and time involved here. So, your response to this objection?

Cullen O’Keefe: I think there are some asymmetries between the costs and benefits. Some of the costs are things that would happen in the future. So from a firm’s perspective, they should probably discount the costs of the Windfall Clause, because if they earn windfall, it would be in the future. From a public policy perspective, a lot of those benefits might not be as time sensitive. So you might not care much when exactly those benefits arrive, and therefore not really discount them from a present value standpoint.

Lucas Perry: You also probably wouldn’t want to live in the world in which there was no distribution mechanism or windfall function for allocating the windfall profits from one of your competitors.

Cullen O’Keefe: That’s an interesting question though, because a lot of corporate law principles suggest that firms should want to behave in a risk neutral sense, and then allow investors to spread their bets according to their own risk tolerances. So, I’m not sure that this risk-spreading between firms argument works that well.

Lucas Perry: I see. Okay. The next is that the Windfall Clause reduces incentives to innovate.

Cullen O’Keefe: So, I think it’s definitely true that it will probably have some effect on the incentive to innovate. That almost seems necessary. That said, I think people in our community are of the opinion that there are significant externalities to innovation, and that not all innovation towards AGI is strictly beneficial in that sense. So, making sure that those externalities are balanced seems important, and the Windfall Clause is one way to do that. In general, I think that the disincentive is probably just outweighed by the benefits of the Windfall Clause, but I would be open to a reanalysis of that exact calculus.

Lucas Perry: Next objection is, the Windfall Clause will shift investment to competitive non-signatory firms.

Cullen O’Keefe: This was another particularly interesting comment, and it points to a potential perverse effect. Suppose you have two types of firms: nice firms and less nice firms. All the nice firms sign the Windfall Clause, and therefore their future profit streams are taxed more heavily than the bad firms’. And this is bad, because now investors will probably want to go to the bad firms, because they offer a potentially more attractive return on investment. Like the previous objection, this is probably true to some extent. It kind of depends on the empirical case about how many firms you think are good and bad, and also on the exact calculus regarding how much this disincentivizes investors from giving to good firms versus how much it causes the good firms to act better.

We do talk a little bit about different ways in which you could potentially mitigate this with careful mechanism design. So you could have the Windfall Clause consist in subordinated obligations, and the firm could raise equity or debt senior to the Windfall Clause, such that new investors would not be disadvantaged by investing in a firm that has signed the Windfall Clause. Those are kind of complicated mechanisms, and again, this is another point where thinking this through from a careful microeconomic perspective and modeling this type of development dynamic would be very valuable.

Lucas Perry: All right. So we’re starting to get to the end here of objections or at least objections in the paper. The next is, the Windfall Clause draws attention to signatories in an undesirable way.

Cullen O’Keefe: I think the motivation for this objection is something like, imagine that tomorrow Boeing came out and said, “If we built a Death Star, we’ll only use it for good.” What are you talking about, building a Death Star? Why do you even have to talk about this? I think that’s kind of the motivation, is talking about earning windfall is itself drawing attention to the firm in potentially undesirable ways. So, that could potentially be the case. I guess the fact that we’re having this conversation suggests that this is not a super-taboo subject. I think a lot of people are generally aware of the promise of artificial intelligence. So the idea that the gains could be huge and concentrated in one firm, doesn’t seem that worrying to me. Also, if a firm was super close to AGI or something, it would actually be much harder for them to sign on to the Windfall Clause, because the costs would be so great to them in expectation, that they probably couldn’t justify it from a fiduciary duty standpoint.

So in that sense, signing on to the Windfall Clause at least from a purely rational standpoint, is kind of negative evidence that a firm is close to AGI. That said, there is certainly psychological elements that complicate that. It’s very cheap for me to just make a commitment that says, oh sure if I get a trillion dollars, I’ll give 75% of it some charity. Sure, why not? I’ll make that commitment right now in fact.

Lucas Perry: It’s kind of more efficacious if we get firms to adopt this sooner rather than later, because as time goes on, their credences about who will hit AI windfall will increase.

Cullen O’Keefe: Yeah. That’s exactly right. Assuming timelines are constant, the clock is ticking on stuff like this. Every year that goes by, committing to this gets more expensive to firms, and therefore rationally, less likely.

Lucas Perry: All right. I’m not sure that I understand this next one, but it is, the Windfall Clause will lead to moral licensing. What does that mean?

Cullen O’Keefe: So moral licensing is a psychological concept, that if you do certain actions that either are good or appear to be good, you’re more likely to do bad things later. So you have a license to act immorally because of the times that you acted morally. I think a lot of times this is a common objection to corporate philanthropy. People call this ethics washing, or greenwashing in the context of environmental stuff specifically. I think you should, again, do pretty careful cost benefit analysis here to see whether the Windfall Clause is actually worth the potential licensing effect that it has. But of course, one could raise this objection to pretty much any pro-social act. Given that we think the Windfall Clause could actually have legally enforceable teeth, it seems less likely, unless you think that the licensing effects would be so great that they’d overcome the benefits of actually having an enforceable Windfall Clause. It seems kind of intuitively implausible to me.

Lucas Perry: Here’s another interesting one. The rule of law might not hold if windfall profits are achieved. Human greed and power really kick in, and the power structures which are meant to enforce the rule of law are no longer able to, in relation to someone with AGI or superintelligence. How do you feel about this objection?

Cullen O’Keefe: I think it’s a very serious one. I think it’s something that perhaps the AI safety community should be investing more in. I’m also having an interesting asynchronous discussion on this with Rohin Shah on the EA Forum. I do think there’s a significant chance that an actor as powerful as a corporation with AGI, and all the benefits that come with that at its disposal, could be such that it would be very hard to enforce the Windfall Clause against it. That said, we do kind of see Davids beating Goliaths in the law. People do win lawsuits against the United States government or very large corporations. So it’s certainly not the case that size is everything, though it would be naïve to suppose that it’s not correlated with the probability of winning.

Other things to worry about, are the fact that this corporation will have very powerful AI that could potentially influence the outcome of cases in some way or perhaps hide ways in which it was evading the Windfall Clause. So, I think that’s worth taking seriously. I guess just in general, I think this issue is worth a lot of investment from the AI safety and AI policy communities, for reasons well beyond the Windfall Clause. And it seems like a problem that we’ll have to figure out how to address.

Lucas Perry: Yeah. That makes sense. You brought up the rule of law not holding up because of AGI’s power to win court cases. But the kind of power that AGI would give would also potentially extend far beyond just winning court cases, right? Into the ability to simply not be bound by the law.

Cullen O’Keefe: Yeah. You could just act as a thug and be beyond the law, for sure.

Lucas Perry: It definitely seems like a neglected point, in terms of trying to have a good future with beneficial AI.

Cullen O’Keefe: I’m of the opinion that this is pretty important. It just seems like this is also a thing in general that you’re going to want in a post-AGI world. You want the actor with AGI to be accountable to something other than its own will.

Lucas Perry: Yeah.

Cullen O’Keefe: You want agreements you make before AGI to still have meaning post-AGI and not just depend on the beneficence of the person with AGI.

Lucas Perry: All right. So the last objection here is, the Windfall Clause undesirably leaves control of advanced AI in private hands.

Cullen O’Keefe: I’m somewhat sympathetic to the argument that AGI is just such an important technology that it ought to be governed in a pro-social way. Basically, this project doesn’t have a good solution to that, other than to the extent that you could use Windfall Clause funds to perhaps purchase shares of stock from the company, or have a commitment in shares of stock rather than in money. On the other hand, private companies are doing a lot of very important work right now in developing AI technologies and are kind of the current leading developers of advanced AI. It seems to me like they’re behaving pretty responsibly overall. I’m just not sure what the ultimate ideal arrangement of ownership of AI will look like, and want to leave that open for other discussion.

Lucas Perry: All right. So we’ve hit on all of these objections. Surely there are more objections, but this gives a lot for listeners and others to consider and think about. In terms of alternatives to the Windfall Clause, you list four things here: windfall profits should just be taxed; we should rely on anti-trust enforcement instead; we should establish a sovereign wealth fund for AI; and we should implement a universal basic income instead. So could you just go through each of these sequentially and give us some thoughts and analysis on your end?

Cullen O’Keefe: Yeah. We talked about taxes already, so is it okay if I just skip that?

Lucas Perry: Yeah. I’m happy to skip taxes. The point there being that they will end up only serving the country in which they are being taxed, unless that country has some other mechanism for distributing certain kinds of taxes to the world.

Cullen O’Keefe: Yeah. And it also just seems much more tractable right now to work on private commitments like the Windfall Clause rather than lobbying for a pretty robust tax code.

Lucas Perry: Sure. Okay, so number two.

Cullen O’Keefe: So number two is about anti-trust enforcement. This was largely spurred by a conversation with Haydn Belfield. The idea here is that in this world, the AI developer will probably be a monopoly or at least extremely powerful in its market, and therefore we should consider anti-trust enforcement against it. I guess my points are two-fold. Number one is that just under American law, it is pretty clear that merely possessing monopoly power is not itself a reason to take anti-trust action. You have to have acquired that monopoly power in some illegal way. And if some of the stronger hypotheses about AI are right, AI could be a natural monopoly, and so it seems pretty plausible that an AI monopoly could develop without any illegal actions taken to gain that monopoly.

I guess second, the Windfall Clause addresses some of the harms from monopoly, though not all of them, by transferring some wealth from shareholders to everyone and therefore transferring some wealth from shareholders to consumers.

Lucas Perry: Okay. Could focusing on anti-trust enforcement alongside the Windfall Clause be beneficial?

Cullen O’Keefe: Yeah. It certainly could be. I don’t want to suggest that we ought not to consider anti-trust, especially if there’s a natural reason to break up firms or if there’s a clear violation of anti-trust law going on. I guess I’m pretty sympathetic to the anti-trust orthodoxy that monopoly is not in itself a reason to break up a firm. But I certainly think that we should continue to think about anti-trust as a potential response to these situations.

Lucas Perry: All right. And number three is we should establish a sovereign wealth fund for AI.

Cullen O’Keefe: So this is an idea that actually came out of FLI. Anthony Aguirre has been thinking about this. The idea is to set up something that looks like the sovereign wealth funds that I alluded to earlier, that places like Norway and other resource rich countries have. Some better and some worse governed, I should say. And I think Anthony’s suggestion was to set this up as a fund that held shares of stock of the corporation, and redistributed wealth in that way. I am sympathetic to this idea overall. As I mentioned, I think a stock based Windfall Clause could potentially be an improvement over the cash based one that we suggest. That said, I think there are significant legal problems here that kind of make this harder to imagine working. For one thing, it’s hard to imagine the government buying up all these shares of stock: just to acquire a significant portion of them, so that you have a good probability of capturing a decent percentage of future windfall, you would have to spend a ton of money.

Secondly, the government could expropriate the shares of stock, but that would require just compensation under the US Constitution. Third, there are ways that a corporation can prevent someone from accumulating a huge share of its stock if it doesn’t want them to, the poison pill being the classic example. So if the firms didn’t want a sovereign automation fund to buy up significant shares of their stock, which they might not want since it might not govern in the best interest of other shareholders, they could just prevent it from acquiring a controlling stake. So all those seem like pretty powerful reasons why contractual mechanisms might be preferable to that kind of sovereign automation fund.

Lucas Perry: All right. And the last one here is, we should implement a universal basic income instead.

Cullen O’Keefe: Saving one of the most popular suggestions for last. This isn’t even really an alternative to the Windfall Clause, it’s just one way that the Windfall Clause could look. And ultimately I think UBI is a really promising idea that’s been pretty well studied. It seems to be pretty effective, it’s obviously quite simple, and it has widespread appeal. I would probably be pretty sympathetic to a Windfall Clause that ultimately implements a UBI. That said, I think there are some reasons that you might prefer other forms of windfall distribution. One is just that UBI doesn’t seem to target people particularly harmed by AI. For example, if we’re worried about a future with a lot of automation of jobs, UBI might not be the best way to compensate those people that are harmed.

Others point out that it might not be the best mechanism for providing public goods, if you thought that that’s something the Windfall Clause should do. But I think it could be a very promising part of the Windfall Clause distribution mechanism.

Lucas Perry: All right. That makes sense. And so wrapping up here, are there any last thoughts you’d like to share with anyone particularly interested in the Windfall Clause or people in policy in government who may be listening or anyone who might find themselves at a leading technology company or AI lab?

Cullen O’Keefe: Yeah. I would encourage them to get in touch with me if they’d like. My email address is listed in the report. I think just in general, this is going to be a major challenge for society in the next century. At least it could be. As I said, I think there’s substantial uncertainty about a lot of this, so I think there are a lot of potential opportunities to do research, not just in economics and law, but also in political science, thinking about how we can govern the windfall that artificial intelligence brings in a way that’s universally beneficial. So I hope that other people will be interested in exploring that question. I’ll be working with the Partnership on AI to help think through this as well, and if you’re interested in those efforts and have expertise to contribute, I would very much appreciate people getting in touch, so they can get involved in that.

Lucas Perry: All right. Wonderful. Thank you, and everyone else who helped work on this paper. It’s very encouraging, and hopefully we’ll see widespread adoption and maybe even implementation of the Windfall Clause in our lifetime.

Cullen O’Keefe: I hope so too, thank you so much Lucas.

FLI Podcast: Identity, Information & the Nature of Reality with Anthony Aguirre

Our perceptions of reality are based on the physics of interactions ranging from millimeters to miles in scale. But when it comes to the very small and the very massive, our intuitions often fail us. Given the extent to which modern physics challenges our understanding of the world around us, how wrong could we be about the fundamental nature of reality? And given our failure to anticipate the counterintuitive nature of the universe, how accurate are our intuitions about metaphysical and personal identity? Just how seriously should we take our everyday experiences of the world? Anthony Aguirre, cosmologist and FLI co-founder, returns for a second episode to offer his perspective on these complex questions. This conversation explores the view that reality fundamentally consists of information and examines its implications for our understandings of existence and identity.

Topics discussed in this episode include:

  • Views on the nature of reality
  • Quantum mechanics and the implications of quantum uncertainty
  • Identity, information and description
  • Continuum of objectivity/subjectivity

Timestamps: 

3:35 – General history of views on fundamental reality

9:45 – Quantum uncertainty and observation as interaction

24:43 – The universe as constituted of information

29:26 – What is information and what does the view of reality as information have to say about objects and identity

37:14 – Identity as on a continuum of objectivity and subjectivity

46:09 – What makes something more or less objective?

58:25 – Emergence in physical reality and identity

1:15:35 – Questions about the philosophy of identity in the 21st century

1:27:13 – Differing views on identity changing human desires

1:33:28 – How the reality as information perspective informs questions of identity

1:39:25 – Concluding thoughts

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Recently we had a conversation between Max Tegmark and Yuval Noah Harari where, in consideration of 21st century technological issues, Yuval recommended “Get to know yourself better. It’s maybe the most important thing in life. We haven’t really progressed much in the last thousands of years and the reason is that yes, we keep getting this advice but we don’t really want to do it…. I mean, especially as technology will give us all, at least some of us, more and more power, the temptations of naive utopias are going to be more and more irresistible and I think the really most powerful check on these naive utopias is really getting to know yourself better.”

Drawing inspiration from this, our following podcast was with Andres Gomez Emilsson and David Pearce on different views of identity, like open, closed, and empty individualism, and their importance in the world. Our conversation today with Anthony Aguirre follows up on and further explores the importance of questions of self and identity in the 21st century.

This episode focuses on exploring this question from a physics perspective where we discuss the view of reality as fundamentally consisting of information. This helps us to ground what actually exists, how we come to know that, and how this challenges our commonly held intuitions about there existing a concrete reality out there populated by conventionally accepted objects and things, like cups and people, that we often take for granted without challenging or looking into much. This conversation subverted many of my assumptions about science, physics, and the nature of reality, and if that sounds interesting to you, I think you’ll find it valuable as well. 

For those of you not familiar with Anthony Aguirre, he is a physicist who studies the formation, nature, and evolution of the universe, focusing primarily on the model of eternal inflation—the idea that inflation goes on forever in some regions of the universe—and what it may mean for the ultimate beginning of the universe and time. He is the co-founder and associate scientific director of the Foundational Questions Institute and is also a co-founder of the Future of Life Institute. He also co-founded Metaculus, an effort to optimally aggregate predictions about scientific discoveries, technological breakthroughs, and other interesting issues.

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate. These contributions make it possible for us to bring you conversations like these and to develop the podcast further. You can also follow us on your preferred listening platform by searching for us directly or following the links on the page for this podcast found in the description.

And with that, let’s get into our conversation with Anthony Aguirre.

Lucas Perry: So the last time we had you on, we had a conversation on information. Could you take us through the history of how people have viewed fundamental reality and fundamental ontology over time, from a kind of idealism, to materialism, to this new shift, informed by quantum mechanics, toward seeing things as being constituted of information?

Anthony Aguirre: So, without being a historian of science, I can only give you the general impression that I have. And of course through history, many different people have viewed things very different ways. So, I would say in the history of humanity, there have obviously been many, many ways to think about the ultimate nature of reality, if you will, starting with a sense that the fundamental nature of external reality is one that’s based on different substances and tendencies and some level of regularity in those things, but without a sense that there are firm or certainly not mathematical regularities and things. And that there are causes of events, but without a sense that those causes can be described in some mathematical way.

So that changed obviously in terms of Western science with the advent of mechanics by Galileo and Newton and others showing that there are not just regularities in the sense that the same result will happen from the same causes over and over again, that was appreciated for a long time, but that those could be accessed not just experimentally but modeled mathematically and that there could be a relatively small set of mathematical laws that could then be used to explain a very wide range of different physical phenomena. I think that sense was not there before, it was clear that things caused other things and events caused other events, but I suspect the thinking was that it was more in a one off way, like, “That’s a complicated thing. It’s caused by a whole bunch of other complicated things. In principle, those things are connected.” But there wasn’t a sense that you could get in there and understand what that connection was analytically or intellectually and certainly not in a way that had some dramatic economy in the sense that we now appreciate from Galileo and Newton and subsequent physics.

Once we had that change to mathematical laws, then there was a question of, what are those mathematical laws describing? And the answer there was essentially that those mathematical laws are describing particles and forces between particles. And at some level, a couple of other auxiliary things like space and time are sort of there in the backdrop, but essentially the nature of reality is a bunch of little bits of stuff that are moving around under mathematically specified forces.

That is a sort of complete-ish description. I mean certainly Newton would have and have not said that that’s a complete description in the sense that, in Newton’s view, there were particles and those particles made up things and the forces told them exactly what to do, but at the same time there were lots of other things in Newton’s conception of reality like God and presumably other entities. So it’s not exactly clear how materialist Newton or Galileo for example were, but as time went on that became a more entrenched idea among hardcore theoretical physicists at least, or physicists, that there was ultimately this truest, most fundamental, most base description of reality that was lots of particles moving around under mathematical forces.

Now, that I think is a conception that is very much still with us in many senses but has taken on a much deeper level of subtlety given the advent of modern physics, including particularly quantum mechanics, and also, I think, a modern recognition of, or a higher level of sophistication in, thinking about the relation between different descriptions of natural phenomena. So, let’s talk about quantum mechanics first. Quantum mechanics does say that there are particles in a sense, like you can say that there are particles but particles aren’t really the thing. You can ask questions of reality that entail that reality is made of particles and you will get answers that look like answers about particles. But you can also ask questions about the same physical system about how it is as a wave and you will get answers about how it is as a wave.

And in general in quantum mechanics, there are all sorts of questions that you can ask and you will get answers about the physical system in the terms that you asked those questions about. So as long as it is a sort of well-defined physical experiment that you can do and that you can translate into a kind of mathematical form, what does it mean to do that experiment? Quantum mechanics gives you a way to compute predictions for how that experiment will turn out without really taking a particular view on what that physical system is, is it a particle? Is it a wave? Or is it something else? And I think this is important to note, it’s not just that quantum mechanics says that things are particles and waves at the same time, it’s that they’re all sorts of things at the same time.

So you can ask how much of my phone is an elephant in quantum mechanics. A phone is totally not the same thing as an elephant, but a phone has a wave function, so if I knew the wave function of the phone and I knew a procedure for asking, “Is something an elephant?”, then I could apply that procedure to the phone and the answer would not be, “No, the phone is definitely not an elephant.” The answer would be, “The phone is a tiny, tiny, tiny, tiny, tiny bit an elephant.” So this is very exaggerated because we’re talking phones and elephants, all these numbers are so tiny. But the point is that I can interrogate reality in quantum mechanics in many different ways. I can formulate whatever questions I want and it will give me answers in terms of those questions.

And generally if my questions totally mismatched with what the system is, I’ll get, “No, it’s not really that.” But the no is always a, “No, the probability is incredibly tiny that it’s that.” But in quantum mechanics, there’s always some chance that if you look at your phone, you’ll notice that it’s an elephant. It’s just that that number is so tiny that it never matters, but when you’re talking about individual particles, you might find that that probability is significant, that the particle is somewhat different than you thought it was and that’s part of the quantum uncertainty and weirdness.

Lucas Perry: Can you unpack a little bit that quantum uncertainty and weirdness that explains, when you ask questions to quantum mechanics, you don’t ever get definite answers? Is that right?

Anthony Aguirre: Almost never. So there are occasions where you get definite answers. If you ask a question of a quantum system and it gives you an answer and then you ask that question immediately again, you’ll get the same answer for sure.

Lucas Perry: What does immediately mean?

Anthony Aguirre: Really immediately. So formally, like immediately, immediately. If time goes by between the two measurements then the system can evolve a little bit and then you won’t definitely get the same answer. That is if you have a quantum system, there is a particular set of questions that you can ask it that you will get definite answers to and the quantum state essentially is that set of questions. When you say an electron is here and it has this spin that is, it’s rotating around this direction, what you really mean is that there are a particular set of questions like, “Where are you? And what is your spin?” That if you asked them of this electron, you would get a definite answer.

Now if you take that same electron that I was going to ask those questions to and I would get a definite answer because that’s the state the electron is in, but you come along and ask a different question than one of the ones that is in that list, you will get an answer but it won’t be a definite answer. So that’s kind of the fundamental hallmark of quantum mechanics is that the list of questions you can ask to which you will get a definite answer is a finite one. And for a little particle it’s a very short list, like an electron is a very short list.

Lucas Perry: Is this because the act of observation includes interaction with the particle in such a way that it is changed by the interaction?

Anthony Aguirre: I think that’s a useful way to look at it in a sense, but it’s slightly misleading in the sense that as I said, if you ask exactly the right question, then you will get a definite answer. So you haven’t interfered with the system at all if you ask exactly the right question.

Lucas Perry: That means performing the kind of experiment that doesn’t change what the particle will be doing or its nature? Is that what that means?

Anthony Aguirre: Yes. It’s sort of like you’ve got a very, very particularly shaped net and you can cast it on something and if the thing happens to have exactly the right shape, your net just falls right over it and it doesn’t affect the thing at all and you say, “Oh, it has that property.” But if it has any other shape, then your net kind of messes it up, it gets perturbed and you catch something in your net. The net is your experiment, but you mess up the system while you’re doing it, but it’s not that you necessarily mess up the system, it’s that you’re asking it a question that it isn’t ready to answer definitively, but rather some other question.

So this is always true, but the crucial thing about quantum mechanics is that that list is finite. We’re used to asking any question we like… I’ve got a mug, I can ask, “Is it brown? Is it here? Is it there? How heavy is it?” Whatever question I think of, I feel like I can ask it and there will be an answer to it, because whatever question I ask, if it’s a well-defined question before I ask it, the mug either has that property or it doesn’t. But quantum mechanics tells us that isn’t quite true: there’s only a finite number of answers built into the object. I can ask other questions, but I just can’t expect the answer to already be there, in the sense that I’ll get a definite answer to it.

So this is a very subtle way that there’s this interactive process between the observer and the thing that’s observed. If we’re talking about something that is maximally specified that it has a particular quantum state, there is some way that it is in a sense, but you can’t ever find that out because as soon as you start asking questions of it, you change the thing unless you happen to ask exactly the right questions. But in order to ask exactly the right questions, you would already have to know what state it’s in. And the only way you can do that is by actually creating the system effectively.

So if I create an electron in a particular state in my lab, then I know what state it’s in and I know exactly what questions to ask it in order to get answers that are certain. But if I just come across an electron in the wild, I don’t know exactly what questions to ask. And so I just have to ask whatever questions I will and chances are it won’t be the right questions for that electron. And I won’t ever know whether they were or not because I’ll just get some set of answers and I won’t know whether those were the properties that the electron actually had already or if they were the ones that it fell into by chance upon my asking those questions.
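The asymmetry Anthony describes, where the “right question” returns a definite answer while any other question returns only probabilities, can be sketched with a single qubit. This is a toy illustration, not anything from the episode; the state and basis names (zero, one, plus, minus) are conventions assumed for the example.

```python
import math

# Toy sketch: a qubit state is a 2-component unit vector, and a
# "question" is an orthonormal basis to measure in.

def inner(a, b):
    """Inner product <a|b> of two 2-component states."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

def measure_probs(state, basis):
    """Born rule: probability of each outcome of the 'question'
    defined by an orthonormal basis (rounded to hide float noise)."""
    return [round(abs(inner(b, state)) ** 2, 12) for b in basis]

s = 1 / math.sqrt(2)
zero, one = [1.0, 0.0], [0.0, 1.0]
plus, minus = [s, s], [s, -s]

# Prepare the system in the |+> state.
psi = plus

# Ask the "right question" (a basis containing the prepared state):
# the answer is definite, and asking again would not disturb the state.
print(measure_probs(psi, [plus, minus]))   # [1.0, 0.0]

# Ask a mismatched question: the answer is only probabilistic.
print(measure_probs(psi, [zero, one]))     # [0.5, 0.5]
```

Knowing which basis gives certainty requires having prepared the state yourself, which mirrors Anthony’s contrast between an electron created in the lab and one found in the wild.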

Lucas Perry: How much of this is actual properties and features about the particles in and of themselves and how much is it about the fact that we’re like observers or agents that have to interact with the particles in some ways in order to get information about them? Such that we can’t ask too many questions without perturbing the thing in and of itself and then not being able to get definitive answers to other questions?

Anthony Aguirre: Well, I’m not sure how to answer that because I think it’s just that is the structure of quantum mechanics, which is the structure of reality. So it’s explicitly posed in terms of quantum states of things and a structure of observations that can be made or observables that can be measured so you can see whether the system has a particular value of that observable or not. If you take out the observation part or the measurement part, you just have a quantum state which evolves according to some equation and that’s fine, but that’s not something you can actually compare in any sense to reality or to observation or use in any way. You need something that will connect that quantum state and evolution equation to something that you can actually do or observe.

And I think that is something that’s a little bit different. You can say in Newtonian mechanics or classical physics, there’s something arguably reasonable about saying, “Here is the system, it’s these particles and they’re moving around in this way.” And that’s saying something. I think you can argue about whether that’s actually true, that that’s saying something. But you can talk about the particles themselves in a fairly meaningful way without talking about the observer or the person who’s measuring it or something like that. Whereas in quantum mechanics, it’s really fairly useless to talk about the wave function of something without talking about the way that you measure things or the basis that you operate it on and so on.

That was a long sort of digression in a sense, but I think it’s crucial, because it marks a major underlying change in the way that we think about reality: not as something that is purely out there, but understanding that, even to the extent that there is something out there, any sense of our experiencing it is unavoidably interactive, in a way where you cannot ignore the interaction. You might have this idea that there’s an external objective reality that, although inconvenient to know, and although on an everyday basis you might mess with it a little bit when you interact with it, is in principle out there, and that if you could just be careful enough, you could avoid that input from the observer. Quantum mechanics says, “No. That’s a fundamental part of it. There’s no avoiding that. It’s a basic part of the theory that reality is made up of this combination of the measurer and the state.”

I also think that once you admit, because you have to in this case that there is more to a useful or complete description of reality than just the kind of objective state of the physical system, then you notice that there are a bunch of other things that actually are there as well that you have to admit are part of reality. So, if you ask some quantum mechanical question, like if I ask, “Is my mug brown? And is it spinning? Where is it?” Those kinds of questions, you have to ask, what is the reality status of those questions or the categories that I’m defining and asking those questions? Like brownness, what is that? That’s obviously something that I invented, not me personally, but I invented in this particular case. Brownness is something that biological creatures and humans and so on invented. The sensation of brown is something that biological creatures maybe devised, the calling something brown and the word brown are obviously human and English creations.

So those are things that are created through this process and are not there certainly in the quantum state. And yet if we say that the quantum state on its own is not a meaningful or useful description of reality, but we have to augment it with the sorts of questions that we ask and the sort of procedure of asking and getting questions answered, then those extra things that we have to put into the description entail a whole lot of different things. So there’s not just the wave function. So in that simple example, there’s a set of questions and possible answers to those questions that the mug could give me. And there are different ways of talking about how mathematically to define those questions.

One way is to call them course grained states or macro states, that is, there are lots of ways that reality can be, but I want to extract out certain features of reality. So if I take the set of possible ways that a mug can be, there’s some tiny subset of all those different ways that the atoms in my mug could be that I would actually call a mug and a smaller subset of those that I would call a brown mug and a smaller subset of those that I would call a brown mug that’s sitting still and so on. So they’re kind of subsets of the set of all possible ways that a physical system with that many atoms and that mass and so on could be and when I’m asking questions about the mug, like are you brown? I’m asking, “Is the system in that particular subset of possibilities that I call a brown mug sitting on a table?”

I would say that at some level, almost all of what we do in interacting with reality is like that process. There’s this huge set of possible realities that we could inhabit. What we do is divvy up that reality into many, many possibilities, corresponding to questions that we might ask and the answers we might get, and then we go and ask those questions of reality and get sort of yes or no answers to them. And quantum mechanics is sort of the enactment of that process with full exactness, applying even to the smallest systems. But we can think of that process just on a day-to-day level: what are all the possible ways that the system could be? And then ask certain questions. Is it this? Is it that?

So this is a conception of reality that’s kind of like a big game of 20 questions. Every time we look out at reality, we’re just asking different questions of it. Normally we’re narrowing down the possibility space of how reality is by asking those questions and getting answers to them. To me a really interesting question is, what is the ontological reality status of all those big sets of questions that we’re asking? Your tendency as a theoretical physicist is to say, “Oh, the wave function is the thing that’s real and that’s what actually exists, and all these extra things are just extra things that we made up and are globbed onto the wave function.” But I think that’s kind of a very impoverished view of reality, not just impoverished, but completely useless and empty of any utility or meaning, because quantum mechanics by its nature requires both parts. The questions and the state. If you cut out all the questions, you’re just left with this very empty thing that has no applicability or meaning.
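The “big game of 20 questions” picture can be made concrete with a toy possibility space; the microstates and yes/no predicates below are invented for illustration. Coarse-grained questions are just subsets of the possibilities, and each answer narrows the remaining set:

```python
from itertools import product

# Microstates: all 2^4 = 16 configurations of a toy 4-bit "system".
microstates = set(product([0, 1], repeat=4))

# Coarse-grained "questions" are subsets of the possibility space,
# encoded here as predicates (illustrative stand-ins for "is it brown?").
questions = {
    "first bit is 1":      lambda s: s[0] == 1,
    "even number of 1s":   lambda s: sum(s) % 2 == 0,
    "last two bits equal": lambda s: s[2] == s[3],
}

# Each "yes" answer restricts the remaining possibilities,
# like one round of 20 questions.
possible = set(microstates)
for name, pred in questions.items():
    possible = {s for s in possible if pred(s)}
    print(f"after '{name}': {len(possible)} states remain")
# Prints 8, then 4, then 2 remaining states.
```

A macrostate like “brown mug sitting on a table” is, in this picture, just a much smaller subset of a vastly larger possibility space.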

Lucas Perry: But doesn’t that tell us how reality is in and of itself?

Anthony Aguirre: I don’t think it tells you anything, honestly. It’s almost impossible to even say what the wave function is except in some terms. Like if I just write down, “Okay, the wave function of the universe is psi.” What did that tell me? Nothing. There’s nothing there. There’s no way that I could even communicate to you what the wave function is without reference to some set of questions because remember the wave function is a definite set of answers to a particular set of questions. So, I have to communicate to you the set of questions to which the wave function is the definite answer and those questions are things that have to do with macroscopic reality.

There’s no way that I can tell you what the wave function is if I were to try to communicate it to you without reference to those questions. Like if I say, “Okay, I’ve got a thingie here and it’s got a wave function,” and you ask me, “Okay, what is the wave function?” I don’t know how to tell you. I could tell you its mass, but now what I’m really saying is, here’s a set of energy measuring things that I might do and the amplitude for getting those different possible outcomes in that energy measuring thing is 0.1 for that one and 0.2 for that one and so on. But I have to tell you what those energy measuring things are in order to be able to tell you what the wave function is.

Lucas Perry: If you go back to the starting conditions of the universe, that initial state is a definite thing, right? Prior to any observers and defined coherently and exactly in and of itself. Right?

Anthony Aguirre: I don’t know if I would say that.

Lucas Perry: I understand that for us to know anything we have to ask questions. I’m asking you about something that I know that has no utility because we’re always going to be the observer standing in reference, right? But just to think about reality in and of itself.

Anthony Aguirre: Right. But you’re assuming that there is such a thing, and that’s not entirely clear to me. I recognize that there’s a desire to feel like there is a sort of objective reality that is out there and that there’s meaning to saying what that reality is, but it’s not entirely clear to me that that’s a safe assumption to make. It is true that we can go back in time and attribute all kinds of pretty objective properties to the universe, and it certainly can’t be that we needed people and observers and things back at the beginning in order to be able to talk about those things. But it’s a very thorny question to me whether it’s meaningful to say that there was a quantum state that the universe had at the very beginning, when I don’t know what that means operationally. I wouldn’t know how to describe that quantum state or make it meaningful other than in terms of measurable things, which requires adding a whole bunch of ingredients to the description of what the universe is.

To say that the universe started in this quantum state, to make that meaningful requires these extra ingredients. But we also recognize that those extra ingredients are themselves parts of the universe. So, either you have to take this view that there is a quantum state and somehow we’re going to get out of that in this kind of circular self-consistent way, a bunch of measuring apparatuses that are hidden in that quantum state and make certain measurements and then define the quantum state in this bootstrapping way. Or you have to say that the quantum state, and I’m not sure how different these things are, that the quantum state is part of reality, but in order to actually specify what reality is, there’s a whole bunch of extra ingredients that we have to define and we have to put in there.

And that’s kind of the view that I take nowadays, that there is reality and then there’s our description of reality. And as we describe reality, one of the things that we need to describe reality are quantum states and one of the things that we need to describe reality are coarse grainings or systems of measurement or bases and so on. There are all these extra things that we need to put in. And the quantum states are one of them and a very important one. And evolution equations are one of them in a very important one. But to identify reality with the state plus the fundamental laws that evolve that state, I just don’t think is quite the right way to think about it.

Lucas Perry: Okay, so this is all very illuminating for this perspective here that we’re trying to explore, which is the universe being simply constituted of information.

Anthony Aguirre: Yeah, so let’s talk about that. Once you let go, I think of the idea that there is matter that is made of particles and then there are arrangements of that matter and there are things that that matter does, but the matter is this intrinsically existing stuff. Once you start to think of there being the state, which is a set of answers to questions, that set of answers to questions is a very informative thing. It’s a kind of maximally informative thing, but it isn’t a different kind of thing to other sets of answers to questions.

That is to say, that I’ve got information about something kind of means that I’ve asked a bunch of questions and gotten answers about it, so I know about it. If I keep asking incredibly detailed questions, then eventually maybe I’ve maximally specified the state of the cup and I have as much information as I can have about the cup. But in that process, as I gather more and more information, as I more and more specify what the cup is like, there’s no particular place in which the cup changes its nature. So I start out asking questions and I get more and more and more information until I get the most information that I can. And then I call that, that’s the most information I can get, and now I’ve specified the quantum state of the cup.

But in that sense then a quantum state is like the sort of end state of a process of interrogating a physical system to get more and more information about it. So to me that suggests this interpretation that the nature of something like the quantum state of something is an informational thing. It’s identified with a maximal set of information that you can have about something. But that’s kind of one end of the spectrum, the maximal knowing about that thing end of the spectrum. But if we don’t go that far, then we just have less information about the thing. And once you start to think that way, well what then isn’t information? If the nature of things is to be a state and a set of questions and the state gives me answers to those questions, that’s a set of information. But as I said, that sort of applies to all physical systems that’s kind of what they are according to quantum mechanics.

So there used to be a sense, I think, that there was a thing, that it was a bunch of particles, and that when I asked questions I could learn about that thing. The lesson to me of quantum mechanics is that there’s no space between the answers to questions that I get when I ask questions of a thing and the thing itself. The thing is, in a sense, the set of answers to the questions that I have or could ask of it. It becomes much less a kind of physical, tangible thing made of stuff and much more a thing made out of information, and it’s information that I can get by interacting with that thing, but there isn’t a thing there that the information is about. That notion seems to be sort of absent. There’s no need to think that there is a thing that the information is about. All we know is the information.

Lucas Perry: Is that true of the particles arranged cup wise or the cup thing that is there? Is it true of that thing in and of itself or is that basically just the truth of being an epistemic agent who’s trying to interrogate the cup thing?

Anthony Aguirre: Suppose the fundamental nature of reality was a bunch of particles, then what I said is still true. I can imagine if things like observers exist, then they can ask questions and they can get answers and those will be answers about the physical system that kind of has this intrinsic nature of bits of stuff. And it would still, I think, be true that most of reality is made of everything but the little bits of stuff, the little bits of stuff are only there at the very end. If you ask the very most precise questions you get more and more a sense of, “Oh they’re little bits of stuff.” But I think what’s interesting is that what quantum mechanics tells us is we keep getting more and more fine grained information about something, but then at the very end rather than little bits of stuff, it sort of disappears before our eyes. There aren’t any little bits of stuff there, there’s just the answers to the most refined sets of questions that we can ask.

So that’s where I think there’s sort of a difference is that there’s this sense in classical physics that underlying all these questions and answers and information is this other thing of a different nature, that is matter and it has a different fundamental quality to it than the information. And in quantum mechanics it seems to me like there’s no need to think that there is such a thing, that there is no need to think that there is some other different stuff that is non-informational that’s out there that the information is about because the informational description is complete.

Lucas Perry: So I guess there’s two questions here that come out of this. It’d be good if you could define and unpack what information exactly is and then if you could explore and get further into the idea of how this challenges our notion of what a macroscopic thing is or what a microscopic or what a quantum thing is, something that we believe to have identity. And then also how this impacts identity like cup identity or particle identity, what it means for people and galaxies and the universe to be constituted of information. So those two things.

Anthony Aguirre: Okay. So there are lots of ways to talk about information. There are also qualitative and quantitative ways to talk about it. So let me talk about the quantitative way first. You can say that if I have a whole possibility space, many different possibilities for the way something can be, and then I restrict those possibilities to a smaller set of possibilities in some way. Either I say it’s definitely in one of these, or maybe there’s a higher probability that it’s in one of these than in one of those. I in some way restrict: rather than every possibility being the same, I say that some possibilities are more likely than others, or it’s restricted to some subset. Then I have information about that system, and that information is precisely the gap between every possibility being equally likely and equally good, and knowing that some of them are more likely or valid than others.

So, information is that gap that says it’s more this than some of those other things. So, that’s a super general way of talking about it but that can be made very mathematically precise. So if I say there are four bits of information stored in my computer, exactly what I mean is that there are a bunch of registers and if I don’t know whether they’re ones or zeros, I say I have no information. If I know that these four are 1101, then I’ve restricted my full set of possibilities to this subset in which those are 1101 and I have those four bits of information. So I can be very mathematically precise about this. And I can even say, if the first bit, well, I don’t know whether it’s 0 or 1, but there’s a 75% chance that it’s zero and a 25% chance that it’s one, that’s still information. It’s less than one bit of information.

People think of bits as being very discrete things, but you can have fractions of bits of information. There’s nothing wrong with that. The very general definition is restriction away from every possibility being equally likely, to some being more likely than others. And that can be made mathematically precise and is exactly the sort of information we talk about when we say, “My hard drive is 80 gigabytes in size or I have 20 megabits per second of internet speed.” It’s exactly that sort of information that we’re quantifying.
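The quantitative picture sketched here, information as a restriction of a possibility space, including fractional bits, can be made concrete in a few lines of standard-library Python. This is an editorial illustration, not part of the conversation:

```python
import math

def info_bits(prior_count, posterior_count):
    """Information gained by restricting a uniform possibility space."""
    return math.log2(prior_count / posterior_count)

# Four binary registers: 16 equally likely states. Learning they read
# 1101 restricts us to exactly one state: four bits of information.
print(info_bits(2**4, 1))  # 4.0

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A bit known to be 0 with probability 0.75: the remaining uncertainty
# is less than one bit, so we hold only a fraction of a bit of information.
uncertainty = entropy([0.75, 0.25])
information = 1.0 - uncertainty  # max entropy minus remaining entropy
print(round(information, 3))  # 0.189
```

The 75/25 case yields about 0.19 bits, matching the point that information need not come in whole bits.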

Now, when I think about a cup, I can think about the system in some way like, there are some number of atoms like 10 to the 25th or whatever, atoms or electrons and protons and neutrons or whatever, and there are then some huge, huge possible set of ways that those things can be and some tiny, tiny, tiny, tiny, tiny, tiny, almost infinitesimally tiny subset of those ways that can be are something that I would label a cup. So if I say, “Oh look, I have a cup”, I’m actually specifying a vast amount of information by saying, “Look, I have a cup.”

Now if I say, “Look, I have a cup and inside it is some dregs of coffee.” I’ve got a huge amount more information. Now, it doesn’t feel like a huge amount more of information. It’s just like, “Yeah, what did I expect? Dregs of coffee.” It’s not that big of a deal but physically speaking, it’s a huge amount of information that I’ve specified just by noticing that there are dregs of coffee in the cup instead of dregs of all kinds of other liquids and all kinds of other states and so on.

So that’s the quantitative aspect, I can quantify how much information is in a description of a system and the description of it is important because you might come along and you can’t see this cup. So I can tell you, there’s some stuff on my desk. You know a lot less about what’s on my desk than I do. So we have different descriptions of this same system and I’ve got a whole lot more information than you do about what’s on my desk. So the information, and this is an important thing, is associated with somebody’s description of the system, not necessarily a person’s, but any way of specifying probabilities of the system being in a subset of all of its possibilities. Whether that’s somebody describing it or whatever else, anything that defines probabilities over the states that the system could be in, that’s defining an amount of information associated with those probabilities.

So there’s that quantity. But there’s also, when I say, what is a mug? So you can say that the mug is made of protons, electrons, and neutrons, but of course pretty much anything in our world is made of protons, neutrons, and electrons. So what makes this a mug rather than a phone or a little bit of an elephant or whatever, is the particular arrangement that those atoms have. To say that a mug is just protons, neutrons, and electrons, I think is totally misleading in the sense that the protons, neutrons, and electrons are the least informative part of what makes it a mug. So there’s a quantity associated with that, the mug part of possibility space is very small compared to all of the possibilities. So that means that there’s a lot of information in saying that it’s a mug.

But there’s also the quality of what that particular subset is, and that that particular subset is connected in various ways with things in my description, like solidity and mass and brownness and hardness and hollowness. It is at the intersection of a whole bunch of other properties that a system might have. So each of those properties I can also think of as subsets of possibility space. Suppose I take all things that are a kilogram; that fixes how many protons, neutrons, and electrons they have. So, that’s my system. There’s a gazillion different ways that a kilogram of protons and neutrons and electrons can be, and we could write down the very exponential number that it is.

Now, if I then say, “Okay, let me take a subset of that possibility space that are solid,” that’s a very small subset. There are lots of ways things can be gases and liquids. Okay, so I’ve made a small subset. Now let me take another property, which is hardness. So, that’s another subset of all possibilities. And where hardness intersects solidity, I have hard, solid things and so on. So I can keep adding properties on and when I’ve specified enough properties, it’s something that I would give the label of a mug. So when I ask, what is a mug made of? In some sense it’s made of protons, neutrons, and electrons, but I think in a more meaningful sense, it’s made of the properties that make up it being a mug rather than some other thing. And those properties are these subsets or these ways of breaking up the state space of the mug into different possibilities.
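The idea of a thing as the intersection of property subsets can be sketched with a toy possibility space. This is an editorial illustration; the property names and the tiny state space are invented, and a real object’s state space is astronomically larger:

```python
import math
from itertools import product

# Toy possibility space: every combination of a few binary properties.
properties = ["solid", "hard", "hollow", "brown"]
space = set(product([False, True], repeat=len(properties)))  # 16 states

def having(name):
    """Subset of the possibility space where a given property holds."""
    i = properties.index(name)
    return {state for state in space if state[i]}

# "Mug-like" states sit at the intersection of several property subsets.
mug_like = having("solid") & having("hard") & having("hollow")

# Each independent binary property halves the subset, adding one bit of
# information relative to the full space.
bits = math.log2(len(space) / len(mug_like))
print(bits)  # 3.0
```

Saying “it’s a mug” picks out a tiny subset of possibilities, which is exactly why the label carries so much information.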

In that sense, I kind of think of the mug as more made of properties with an associated amount of information with them and the sort of fundamental nature of the mug is that set of properties. And your reaction to that might be like, “Yes it has those properties but it is made of stuff.” But then if you go back and ask, what is that stuff? Again, the stuff is a particular set of properties. As deep as you go, it’s properties all the way down until you get to the properties of electrons, protons, and neutrons, which are just particular ways that those are and answers to those questions that you get by asking the right questions of those things.

And so that’s what it means to me to take the view that everything is made up of information in some way, it’s to take a view that there isn’t a separation between the properties that we intersect to say that it is something and the thing itself that has those properties.

Lucas Perry: So in terms of identity here, there was a question about the identity status of the cup. From hearing your talks previously, I think you propose a spectrum of subjectivity and objectivity rather than it being a kind of binary thing, because the cup is a set of questions and properties. Can you expand a little bit on the identity of the cup and what the meaning of the cup is, given that, from this quantum mechanical perspective, it is constituted of just information about the kinds of questions and properties we may ask of cup-like objects?

Anthony Aguirre: I think there are different ways in which the description of a system or what it is that we mean when we say it is this kind of thing. “It is a cup” or the laws of physics or like, “There is this theorem of mathematics” or “I feel itchy”, are three fairly different statements. But my view is that we should not try to sort them into objective facts of the world and individual subjective or personal perspective kind of things.

But there’s really this continuum in between them. So when I say that there’s this thing on my desk that is a cup, there’s my particular point of view that sees the cup and that has a whole bunch of personal associations with the cup. Like I really like this one. I like that it’s made out of clay. I’ve had a lot of nice coffee out of it. And so I’m like … So that’s very personal stuff.

There’s cupness which is obviously not there in the fires of the Big Bang. It’s something that has evolved socially and via biological utility and all the processes that have led to our technological society and our culture having things that we store stuff in and liquids and-

Lucas Perry: That cupness though is kind of like the platonic idealism that we experience imbued upon the object, right? Because of our conventional experience of reality. We can forget the cupness experience is there like that and we identify it and like reify it, right? And then we’re like, “Oh, there’s just cupness there.”

Anthony Aguirre: We get this sense that there is an objectively speaking cup out there, but we forget the level of creation and formulation that has gone on historically and socially and so on to create this notion, this shared collective notion of cupness that is a creation of humanity and that we all carry around with us as part of our mental apparatus.

And then we say, “Oh, cupness is an objective thing and we all agree that this is a cup and the cup is out there.” But really it’s not. It’s somewhere in this spectrum, in the sense that there will certainly be objects for which it’s ambiguous whether they’re cups or not. There will be people who don’t know what a cup is and so on.

It’s not like every possible person will agree even whether this is a brown cup. Some people may say, “Well actually I’d call that grayish.” It feels fairly objective, but obviously there’s this intersubjective component to it of all these ingredients that we invented going into making that a cup.

Now there are other things that feel more objective than that in a sense, like the laws of physics or some things about mathematics where you say like, “Oh, the ratio of the circumference to the diameter of a circle.” We didn’t make that up. That was there at the beginning of the universe. And that’s a longer conversation, but certainly that feels more objective than the cup.

Once it’s understood what the terms are, there’s sort of no disagreeing with that statement as long as we’re in flat space and so on. And there’s no sense in which we feel like that statement has a large human input. We certainly feel like that ratio was what it was and that we can express it as this series of fractions and so on. Long before there were people, that was true. So there’s a feeling that that is a much more objective thing. And I think that’s fair to say. It has more of that objectivity than a cup. But what I disagree with and find kind of not useful is the notion that there is a demarcation between things that are and aren’t objective.

I sort of feel like you will never find that bright line between an actually objective thing and a not actually objective thing. It will always be somewhere on this continuum and it’s probably not even a one dimensional continuum, but somewhere in this spectrum between things that are quite objective and things that are very, very subjective will be somewhere in that region, kind of everything that makes up our world that we experience.

Lucas Perry: Right. So I guess you could just boil that down by saying that that is true because all of the things are just constituted of the kinds of properties and questions that you’re interested in asking about the thing, and the questions about the mathematical properties feel and seem more objective because they’re derived from primitive, self-evident axioms. And then it’s just question wormholes from there, you know? Standing upon a bedrock of slightly more and more dubious and relativistic and subjective questions and properties that one may or may not be interested in.

Anthony Aguirre: Yeah. So there are a couple of things I would say to that. One is that there’s a tendency among some people to feel like more objective is more true or more real or something like that. Whereas I think it’s different. And with more true and more real tends to come a normative sense of better. Like more true things are better things. There are two steps there from more objective to more true and from more true to better, both of which are kind of ones that we shouldn’t necessarily just swallow because I think it’s more complicated than that.

So more objective is different and might be more useful for certain purposes. Like it’s really great that the laws of physics are in the very objective side of the spectrum in that we feel like once we’ve found some, lots of different people can use them for all kinds of different things without having to refigure them out. And we can kind of agree on them. And we can also feel like they were true a long time ago and use them for all kinds of things that happened long ago and far away. So there are all these great things about the fact that they are on this sort of objective side of things.

At the same time, the things that actually matter to us and that are the most important things in the world to us are totally subjective things.

Lucas Perry: Love and human rights and the fact that other humans exist.

Anthony Aguirre: Right. Like all value at some level … I certainly see value as very connected with the subjective experience of things that are experiencing things and that’s purely subjective. Nobody would tell you that the subjective experience of beings is unimportant, I think.

Lucas Perry: But there’s the objectivity of the subjectivity, right? One might argue that the valence of the conscious experience is objective and that that is the objective ground.

Anthony Aguirre: So this was just to say that it’s not that objective is better or more valuable or something like that. It’s just different. And important in different ways. The laws of physics are super important and useful in certain ways, but if someone only knew and applied the laws of physics and held no regard or importance for the subjective experience of beings, I would be very worried about the sorts of things that they would do.

I think there’s some way in which people think dismissively of things that are less objective or that are subjective, like, “Oh, that’s just a subjective feeling of something.” Or, “That’s not like the true objective reality. Like I’m superior because I’m talking about the true objective reality” and I just don’t think that’s a useful way to think about it.

Lucas Perry: Yeah. These deflationary memes or jokes or arguments that love is an absurd reduction of a bunch of chemicals or whatever, that’s this kind of reduction of the supposed value of something which is subjective. But all of the things that we care about most in life, we talked about this last time that like hold together the fabric of reality and provide a ton of meaning, are subjective things. What are these kinds of things? I guess from the perspective of this conversation, it’s like they’re the kinds of questions that you can ask about systems and like how they will interact with each other and the kinds of properties that they have. Right?

Why are these particular questions and properties important? Well, historically and evolutionarily speaking, they have particular functions, right? So it seems clear, and I would agree with you, that there’s the space of all possible questions and properties we can ask about things. And because of historical reasons, we care about a particular, somewhat arbitrary subset of those questions and properties that have functional use. And that is constituted of all of these subjective things like cups and houses and love and marriage and rights.

Anthony Aguirre: I’m only, I think, objecting to the notion that those are somehow less real or sort of derivative of a description in terms of particles or fields or mathematics.

Lucas Perry: So the sense in which they’re less real is the sense in which we’ll get confused by the cupness being like a thing in the world. So that’s why I wanted to highlight that phenomenological sense of cupness before where the platonic idealism we see of the cupness is there in and of itself.

Anthony Aguirre: Yeah, I think I agree with that.

Lucas Perry: So what is it that defines whether or not something falls more on the objective side or more on the subjective side? Aren’t all the questions that we ask about macroscopic and fuzzy concepts like love and human rights and cups and houses and human beings … Don’t all those questions have definitive answers as long as the categories are coherent and properly defined?

Anthony Aguirre: I guess the way I see it is that there’s a sense of how broadly shared, across agents and through space and time, those categorizations or those sets of properties are. Cupness is pretty widespread. It doesn’t go further back in time than humanity; protozoa don’t use cups. So cupness is fairly objective in that sense. It’s tricky because there is a subjectivity-objectivity axis of how widely shared the sets of properties are, and then there’s a different subjective-objective axis of experience: my individual phenomenological experience of subjectivity versus an objective view of the world. And I think those are connected, but they’re not quite the same sense of the subjective and objective.

Lucas Perry: I think that to put it on that axis is actually a little bit confusing. I understand that the more functional a meme or an idea or a concept is, the more widely shared it’s going to be. But I don’t think that just because more and more agents are agreeing to use some kind of concept like money, that that is becoming more objective. I think it’s just becoming more shared.

Anthony Aguirre: Yeah, that’s fine. I guess I would ask you what does more and less objective mean, if it’s not that?

Lucas Perry: Yeah, I mean I don’t know.

Anthony Aguirre: I’m not sure how to say something is more or less objective without referring to some sense like that, that it is more widespread in some way or that there are more sort of subjective views of the world that share that set of descriptions.

If we go back to the thinking about the probabilities in whatever sense you’re defining the probabilities and the properties, the more perspectives are using a shared set of properties, the more objectively defined are the things that are defined by those properties. Now, how to say that precisely like is this objectivity level 12 because 12 people share that set of properties and 50 people share these, so it’s objectivity level … I wouldn’t want to quantify it that way necessarily.

But I think there is some sort of sense of that, that the more different perspectives on the world use that same set of descriptions in order to interact with the world, the more kind of objective that set of descriptions is. Again, I don’t think that captures everything. Like I still think there was a sense in which the laws of physics were objective before anyone was talking about them and using them. It’s quite difficult. I mean when you think about mathematics-

Lucas Perry: Yeah, I was going to bring that up.

Anthony Aguirre: You know, if you think of mathematics as you’ve got a set of axioms and a set of rules for generating true statements out of those axioms. Even if you pick a particular set of rules, there are a huge number of sets of possible axioms and then each set of axioms, if you just grind those rules on those axioms, will produce just an infinite number of true statements. But grinding axioms into true statements is not doing mathematics, I would say.

So it is true that every true mathematical statement should have a sequence of steps that goes from the axioms to that true mathematical statement. But for every thing that we read in a math textbook, there’s an exponentially large number of other consequences of axioms that just nobody cares about because they’re totally uninteresting.

Lucas Perry: Yeah, there’s no utility to them. So this is again finding spaces of mathematics that have utility.

Anthony Aguirre: What makes certain ones more useful than others? So it seems like you know, e, Euler’s number is a very special number. It’s useful for all kinds of stuff. Obviously there are a continuous infinity of other numbers that are just as valid as that one. Right? But there’s something very special about that one because it shows up all the time, it’s really useful for all these different things.

So we’ve picked out that particular number as being special. And I would say there’s a lot of information associated with that pointing to e and saying, “Oh look, this number”, we’ve done something by that pointing. There’s a whole bunch of information and interesting stuff associated with pointing out that that number is special. So that pointing is something that we humans have done at some level. There wasn’t a symbol e or the notion of e or anything like that before humans were around.

Nonetheless, there’s some sense in which once we find e and see how cool it is and how useful it is, we say, “It was always true that e^(ix) = cos(x) + i sin(x).” Like that was always true even though we just proved it a couple of centuries ago, and so on. How could that have not been true? And it was always true, but it wasn’t always true that we knew that it was interesting.
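The identity quoted here, e^(ix) = cos(x) + i sin(x), can be checked numerically with Python’s standard library. This is an illustrative aside, not part of the transcript:

```python
import cmath
import math

# Euler's formula: e^(ix) = cos(x) + i*sin(x), checked at sample angles.
for x in [0.0, 1.0, math.pi / 3, math.pi]:
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert cmath.isclose(lhs, rhs)

# The special case x = pi gives Euler's identity: e^(i*pi) + 1 = 0.
print(abs(cmath.exp(1j * math.pi) + 1) < 1e-12)  # True
```

The check passing at arbitrary angles is, in miniature, the sense in which the relation “was always true” regardless of when anyone found it interesting.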

So it’s kind of the interesting-ness, the pointing to that particular theorem as being an interesting one out of all the possible consequences that you could grind out of a set of axioms, that was created by humanity. Now, the process by which we noticed that it was an interesting thing, much more interesting than many other things, and how much objectivity there is to that, is an interesting question.

Surely some other species that we encountered, almost surely, they would have noticed that that was a particularly interesting mathematical fact like we did. Why? That’s a really hard question to answer. So there is a subjective or non-objective part of it and that we as a species developed that thing. The interesting-ness of it wasn’t always there. We kind of created that interesting-ness of it, but we probably noticed its interesting-ness for some reason and that reason seems to go above and beyond the sort of human processes that noticed it. So there’s no easy answer to this, I think.

Lucas Perry: My layman’s easy answer would be just that it helps you describe and make the formalization and development of mathematical fields, right?

Anthony Aguirre: Sure. But is that helpfulness a fact of the world or a contingent thing that we’ve noticed as we’ve developed mathematics? How, among all species that ever could be imagined that exist, would almost all of them identify that as being useful and interesting or would only some of them and other ones have a very different concept of what’s useful and interesting? That’s really hard to know. And is it more or less objective in that sort of sense?

Lucas Perry: I guess part of my intuition here is just that it has to do with the way that our universe is constituted. Calculus is useful for modeling and following velocities and accelerations of objects in Newtonian physics. So this calculus thing has utility because of that.

Anthony Aguirre: Right. But that which makes it useful, that feels like it’s something more objective, right? Like calculus is inheriting its objectiveness from the objective nature of the universe that makes calculus useful.

Lucas Perry: So the objectiveness is born of its relationship to the real world?

Anthony Aguirre: Yes, but again, what does that mean? It’s hard to put your finger at all on what that thing is that the real world has that makes calculus useful for describing it other than saying the real world is well-described by calculus, right? It feels very circular to say that.

Lucas Perry: Okay, so I’m thoroughly confused then about subjectivity and objectivity, so this is good.

Anthony Aguirre: I think we all have this intense desire to feel like we understand what’s going on. We don’t really understand how reality works or is constituted. We can nonetheless learn more about how it’s constituted and sitting on that razor’s edge between feeling pride and like, “Yes, we figured a bunch of stuff out and we really can predict the world and we can do technology and all these things”, all of which is true, while also feeling the humility that when we really go into it, reality is fundamentally very mysterious, I think is right, but difficult.

My frustration is when I see people purporting to fully understand things like, “Oh, I get it. This is the way that the world is.” And taking a very dismissive attitude toward thinking the world is not the way that they particularly see it. And that’s not as uncommon an attitude as one would like. Right? That is a lot of people’s tendency because there’s a great desire and safety in feeling like you understand this is the way that the world is and if only these poor benighted other souls could see it the way I do, they would be better off. That’s hard because we genuinely do understand much, much, much more about the world than we ever did.

So much so that there is a temptation to feel like we really understand it, and I think at some level that’s the notion I feel it’s important to push back against: the notion that we get it all, that we more or less understand how the world is and how it works and how it fundamentally operates. In some circles there’s more of the hubristic danger of falling into that than of falling into the “we don’t know anything” end. Although there are other parts of society where there’s the other end too, the anti-intellectual stance that my conception of reality, which I just made up yesterday, is just as good as yours, and that we’re all equally good at understanding what the world is really like. Also quite dangerous.

Lucas Perry: The core takeaway here for me is just this essential confusion about how to navigate this space of what it means for something to be more subjective and objective, and the perspective of analyzing it through the kinds of questions and properties we would ask or be interested in. What you were just saying also had me reflecting a lot on people whose identity is extremely caught up in nationalism or a team sport. These would seem to be trivial questions or properties you could ask: where did you happen to be born? Which city do you particularly have fondness towards? With the identity of really being an American or really being a fan of the Patriots, people become just completely enthralled and engrossed. Your consciousness and ego just get obliterated into identification with “I am an American Patriots fan,” and there’s just no perspective. There is no context. That’s when one goes way too far towards the objective, mistaking the nature of things.

Anthony Aguirre: Yeah, there are all sorts of mistakes that we all make all the time and it’s interesting to see pathologies in all directions in terms of how we think about the world and our relation to it. And there are certain cases where you feel like if we could just all take a little bit more of an objective view of this, everyone would be so much better off and kind of vice versa. It takes a lot of very difficult skill to approach our complex world and reality in a way that we’re thinking about it in a useful way in this wide variety of different circumstances where sometimes it’s more useful to think about it more objectively and sometimes more subjectively or along all sorts of other different axes.

It’s a real challenge. I mean that’s part of what it is to be human and to engage in a worthy way with other people and with the world and so on, is to have to understand the more and less useful and skillful ways and lenses through which to look at those things.

At some level, almost everything we do is in error, but you also have to be forgiven because almost everything that you could do would be an error in some way from some standpoint. And sometimes thinking that the cup is objectively real is an error. Thinking that you made up the cup and invented it all on your own is also an error. So the cup is real and isn’t real, and is made up and isn’t made up. Any way you think about it is kind of wrong, but it’s also all kind of okay because you can still pick up the cup and take a drink.

So it’s very tricky. It’s a tricky reality we reside in, but that’s good. I think if everything was straightforward and obvious, that would be a boring world.

Lucas Perry: If everything were straightforward and obvious, then I would reprogram everyone to not find straightforward and obvious things boring and then we would not have this requirement to be in a complicated, un-understandable world.

Anthony Aguirre: I think there’s a Douglas Adams line that, “If you figure it all out, then immediately it all stops and starts again in a more complicated way that becomes more and more difficult. And of course this is something that’s happened many, many times before.”

Lucas Perry: I don’t know how useful it is now, but is talking about emergence here, is that something that’s useful, you think, for talking about identity?

Anthony Aguirre: Maybe. There’s a question of identity of what makes something one thing rather than another and then there’s another question of personal identity and sort of my particular perspective or view of the world, like what I identify as my awareness, my consciousness, my phenomenal experience of the world and that identity and how it persists through time. That identity and how it does or doesn’t connect with other ones. Like, is it truly its own island or should I take a more expansive view of it and is it something that persists over time?

Is there a core thing that persists over time, or is it a succession of things that are loosely identified or tightly identified with each other? I’m not sure whether all of the stuff that we’ve been talking about in terms of properties and questions and answers and states and things applies to that, but I’m not sure that it doesn’t either.

Lucas Perry: I think it does. Wouldn’t the self, or questions of personal identity, be somewhat arbitrary questions about a very large state, particular questions we would be interested in asking about what constitutes the person? Is there a self? The self is a squishy, fuzzy concept, like love. Does the self exist? Does love exist? Where do they fall on the subjective-objective scale?

Anthony Aguirre: Well, there are many different questions we could think about, but if I think of my identity through time, I could maybe talk about how similar some physical system is to the physical system I identify as me right now. And I could say I sort of identify through time with the physical system that is really much like me, and physics makes that easy because physical systems are very stable and this body evolves slowly. But once you get to the really hard questions, like suppose I duplicate this physical system in some way: is my identity one of those or two of those, and what happens if you destroy the original one? Those are genuinely confusing questions, and I’m not sure how much the niceties of understanding emergence and the properties and so on have to say about them. I’m not sure that they don’t, but having thought a lot about the earlier identity questions, I feel no less confused.

Lucas Perry: The way in which emergence is helpful or interesting to me is in the levels of reality at which human beings conceptualize: quantum mechanics, then atomic science, then chemistry, then biology, and so on.

We imagine them as being sort of stacked up on each other, and if reductionism is attractive to you, you would think that all the top layers supervene upon the nature of the very bottom layer, quantum mechanics. That is true in some sense, and you would want to say that there are fundamental brute identity facts about that very base layer.

So you could say that there are such things as irreducible quantum atoms, or maybe they reduce into other things, but that’s an open question for now. And if we are confident about the identity of those things, there’s at least a starting place from which we would have true answers about identity. Does that make sense?

Anthony Aguirre: Well, the sentences make sense, but I just largely don’t agree with them, for all the reasons that we’ve talked about. I think there needs to be a word that is the opposite of emergence, like distillation or something, because I think it’s useful to think in both directions.

I think it is certainly useful to be able to think: I have a whole bunch of particles that do these things, and then I have another description of them that glosses over, say, the individual actions of the particles but captures some very reliable regularity that I can call a law, like thermodynamics or some chemical laws and so on.

So I think that is true, but it’s also useful to think in the other direction: we have complicated physical systems, and by making very particular simplifications and carving away a lot of the complexity, we create systems that are simple enough for very simple laws to describe them. I would call that a sort of distillation process. So we go through this process when we encounter new phenomena: we look for ways to cut away lots of the complexity, cut away a lot of the properties, and try to create a system that’s simple enough to describe in some mathematical way, using some simple, attenuated set of concepts.
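The distillation process Anthony describes can be sketched in code. A minimal, hypothetical Python illustration (an editor’s sketch, not anything from the conversation itself): a gas described microscopically by per-particle velocities, distilled down to one macroscopic regularity, the mean kinetic energy.

```python
import random

# A toy "microscopic" description: one velocity per particle
# (hypothetical, dimensionless units).
random.seed(0)
velocities = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# "Distillation": throw away the per-particle detail and keep a single
# macroscopic regularity -- the mean kinetic energy, playing the role
# of a temperature-like variable.
mean_ke = sum(0.5 * v * v for v in velocities) / len(velocities)
print(f"macro variable (mean kinetic energy): {mean_ke:.3f}")

# Going back "up" from the distilled description is underdetermined:
# enormously many microstates share this one macrostate, so recovering
# the original velocities would require adding back information the
# distillation discarded.
```

The last comment is the point of the sketch: the macro variable is a reliable regularity, but it does not contain the microstate it was distilled from.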

And then often we take that set and try to work our way back up, using those laws and having things emerge from that lower-level description. But I think both processes are quite important, and it’s a little bit intellectually dangerous to think of what I’d call the distillation process as a truth-finding process: “I’m finding these laws that were already there,” rather than “I’m finding some regularities that are left when I remove all this extra stuff.” You then forget that you’ve removed the extra stuff, and when you go back from the so-called more fundamental description to the emerged description, you’re secretly sticking a lot of that stuff back in without noticing that you’re doing it.

So that’s sort of my point of view, that the notion that we can go from this description in terms of particles and fields and that we could derive all these emerged layers from it, I think it’s just not true in practice for sure, but also not really true in principle. There’s stuff that we have to add to the system in order to describe those other levels that we sort of pretend that we’re not adding. We say, “Oh, I’m just assuming this extra little thing” but really you’re adding concepts and quantities and all kinds of other apparatus to the thing that you started with.

Lucas Perry: Do the emergent levels actually describe reality, then, or do they just give you an approximation?

Anthony Aguirre: Sure. It just gives you answers to different questions than the particle and field level does.

Lucas Perry: But given that the particle and field level stuff is still there, doesn’t that higher-order thing still have the capacity for strange quantum things to happen? Those wouldn’t be accounted for in the emergent-level understanding, so it would not always be true if there were some entanglement or quantum tunneling business going on.

Anthony Aguirre: Yeah, I think there’s more latitude perhaps. The laws in statistical mechanics are statistical laws. They’re totally exact, but what they give you are statistical descriptions of the world that are approximate in some way. So they’re approximate, but approximate in a very, very well-defined way. It’s certainly true that the different descriptions should not contradict each other: if you have a description of a macroscopic phenomenon that doesn’t conserve energy, then that’s a wrongheaded way to look at that system.

Lucas Perry: But what if that macroscopic system does something quantum? Then the macroscopic description fails. So then it’s like not true or it’s not predictive.

Anthony Aguirre: Yeah, “not true” I think is not quite the right way to put it. That description let you down in that circumstance. Everything will let you down sometimes.

Lucas Perry: I understand what you’re saying. The things are functional at the perspectives and scales you’re interested in. And this goes back to the more epistemological, agent-centered view of science and of engaging with the world that we were talking about earlier. For a very long time, the way I viewed science was as explaining the intrinsic nature of the physical, but really it’s not doing that, because all of these descriptions are going to fail at different times. They just have strong predictive power. Maybe it was wrong of me early on to ever think that science was describing the intrinsic nature of the physical.

Anthony Aguirre: I don’t think it’s entirely wrong. You do get something by distilling more and going more toward the particle and field level: once you specify something that quantum mechanics and the standard model of particle physics give a well-defined answer to, then you feel really sure that you’re going to get that result. You do get a dramatically higher level of confidence from doing that distilling process and idealizing a system enough that you can actually do the mathematics to figure out what should happen according to the fundamental physical laws, as we describe them in terms of particles and fields and so on.

So I think that’s the sense in which they’re extra true or real or fundamental: you get that higher level of confidence. But at the cost that you had to shoehorn your physical system, either adding in assumptions or cutting away things, in order to make it something that is describable using that level of description.

You know, not everyone will agree with the way that I’m characterizing this. I think you’ll talk to other physicists and they would say, “Yes they are approximations, but really there’s this objective description and you know, there’s this fundamental description in terms of particles and fields and we’re just making different approximations to it when we talk about these other levels.”

I don’t think there’s much of a difference operationally in terms of that way of talking about it and mine. But I think this is a more true-to-life description of reality, I guess.

Lucas Perry: Right. So there are the fundamental forces, and the fundamental forces are what evolve everything. And you’re saying that the emergent things have to do with adding and cutting away things so that you can simplify the whole process and extract out these other rules and laws, which are still highly predictive. Is that all true to say so far?

Anthony Aguirre: Somewhat. I think it’s just that we don’t actually do any of that. We very, very, very, very rarely take a more fundamental set of rules and derive-

Lucas Perry: Yeah, yeah, yeah. That’s not how science works.

Anthony Aguirre: Right. We think that there is such a process in principle.

Lucas Perry: Right.

Anthony Aguirre: But not in practice.

Lucas Perry: But yeah, understanding it in principle would give us more information about how reality is.

Anthony Aguirre: I don’t believe that there is, in principle, that process. I think going from the more fundamental level to the “emerged” one can’t be done without taking input that comes from the emerged level. You’re not going to find the emerged level in the fundamental description in and of itself, without unavoidably taking information from the emerged level.

Lucas Perry: Yeah. To modify the-

Anthony Aguirre: Not modifying but augmenting. Augmenting in the sense that you’re adding things like brownness. As far as you will ever look, you will never find brownness in the wave function. It just isn’t there.

Lucas Perry: It’s like you wouldn’t find some kind of chemical law or property in the wave function.

Anthony Aguirre: Any more than you’ll find here or now in the state of the universe. They’re just not there. Those are incredibly useful, important things; here and now are pretty central to my description of the world. I’m not going to do much without those, but they’re not in the wave function and they’re not in the boundary conditions of the universe, and it’s okay that I have to add them. There’s nothing evil in doing that.

I can just accept that I have to have some input from the reality that I’m trying to describe in order to use that fundamental description. It’s fine. There’s nothing to be worried about, nothing anti-scientific about that. But the idea that someone’s going to hand you the wave function and you’ll derive that the cup is brown here and now is crazy. It just doesn’t work that way. It’s not in there. That’s my view anyway.

Lucas Perry: But the cup being brown here and now is a consequence of the wave function evolving an agent who then specifies that information, right?

Anthony Aguirre: Again, I don’t know what that would look like. Here’s the wave function, here’s Schrodinger’s equation and the Hamiltonian. Now tell me: is the brown cup in front of or in back of the tape measure? It’s not in there. There are cups of all colors and tape measures of all colors and all kinds of configurations. They’re all there in the wave function. To get an answer to that question, you have to put in more information, like which cup and where and when.

That’s just information you have to put in, in order to get an answer. The answer is not there to begin with and that’s okay. It doesn’t mean that there’s something wrong with the wave function description or that you’ve got the wrong Hamiltonian or the wrong Schrodinger’s equation. It just means that to call that a complete description of reality, I think that’s just very misleading. I understand what people intend by saying that everything is just the wave function and the Schrodinger equation. I just think that’s not the right way to look at it.

Lucas Perry: I understand what you’re saying. The question only makes sense if, say, the wave function has evolved to a point where it has created human beings who would specify that information, right?

Anthony Aguirre: None of those things are in there.

Lucas Perry: They’re not in the primordial state but they’re born later.

Anthony Aguirre: Later is no different from the beginning. It’s just a wave function. There’s really no difference in quality between the wave function now and at the beginning. It’s exactly the same sort of entity. There’s no more, no less in it than there was then. Everything that we ascribe to being in the universe now that wasn’t there at the beginning is an additional ingredient that we have to specify from our position, things like now and here and all those properties of things.

Lucas Perry: Does the wave function just evolve the initial conditions? Are the initial conditions contained within the wave function?

Anthony Aguirre: Well, both, in the sense that if there’s such a thing as the wave function of the universe, and that’s a whole other topic, whether that’s a right-minded thing to say, but say that there is, then there’s exactly the same information content in that wave function at any time, and given the wave function at a time, plus the Schrodinger equation, we can say what the wave function is at any other time. There’s nothing added or subtracted.

One is just as good as the other. In that sense, there’s no more stuff in the wave function “now” than there was at the beginning. It’s just the same. All of the sense in which there’s more in the universe now than there was at the Big Bang has to do with things that we specify in addition to the wave function, I would say, that constitute the other levels of reality that we interact with. They’re extra information that we’ve added to the wave function from our actual experience of reality.
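The claim that the wave function carries the same information at every time can be illustrated with a toy example. A rough editor’s sketch under simple assumptions (a two-state system, with a Hadamard matrix standing in for one step of unitary Schrodinger evolution): the norm is preserved, and the evolution is invertible, so the state at either time determines the other.

```python
import numpy as np

# Toy two-state system: a Hadamard matrix as a stand-in for one step
# of unitary time evolution.
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

psi_start = np.array([1.0, 0.0], dtype=complex)  # wave function "at the beginning"
psi_later = U @ psi_start                        # wave function "now"

# Unitarity: the norm (total probability) is unchanged by the evolution...
assert np.isclose(np.linalg.norm(psi_later), 1.0)

# ...and the evolution is invertible: the later state determines the
# earlier one exactly, so neither time's wave function holds more
# information than the other.
psi_recovered = U.conj().T @ psi_later
assert np.allclose(psi_recovered, psi_start)
print("norm preserved; earlier state recovered from later state")
```

This is only the formal half of Anthony’s point; the other half, that everything “extra” in the universe now is information we specify in addition to the wave function, is not something a two-line matrix example can capture.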

If you take a timeline of all possible times, without pointing to any particular one, there’s no time information in that system. But when I say, “Oh look, I declare that I’m now 13.8 billion years from the big bang,” I’m pointing to a particular time by associating it with my experience now. By doing that pointing, I’m creating information, in just the same way that we’ve described before. I’m making information by picking out a particular time. That’s something new that I’ve added to what was a barren timeline; now I’ve added something.

There’s more information than there was before by the fact of my pointing to it. I think most of the world is of that nature: it is made of information created by our pointing to it from our particular perspective here and now in the universe, seeing this and that and having measured this and that and the other thing. Most of the universe, I contend, is made of that sort of stuff: information that comes from our pointing to it by seeing it, not information that was there intrinsically in the universe. That is, I think, radical in a sense, but I think it’s just the way reality is, and none of that stuff is there in the wave function.

Lucas Perry: At least the capacity is there for it because the wave function will produce us to then specify that information.

Anthony Aguirre: Right, but it produces all kinds of other stuff. It’s like if I create a random number generator, and it just generates a whole list of random numbers, if I look at that list and find, “Oh look, there’s one, one, one, one, one, one, one, one, one,” that’s interesting. I didn’t see that before. By pointing to that, you’ve now created information. The information wasn’t there before. That’s largely what I see the universe as, and in large part, it’s low information in a sense.

I’m hemming and hawing because there are ways in which it’s very high information too, but I think most of the information that we see about the world is information of that type that exists because we very collectively as beings that have evolved and had culture and all the stuff that we’ve gone through historically we are pointing to it.

Lucas Perry: So connecting this back to the spectrum of objectivity and subjectivity: we were talking for a long time about cups, and on the last podcast we talked about human rights, for example, as being a myth, kinds of properties we’re interested in ascribing to all people but which people actually intrinsically lack. People are numerically distinct over time. They’re qualitatively distinct, very often. There’s nothing in the heart of physics which gives us the kinds of properties that human rights, for example, are supposed to instantiate in us.

Rather, it’s a functional convention that is very useful for producing value. We’ve specified this information, that all human beings share unalienable rights. But as we enter the 21st century, things are changing: the numerical and qualitative facts about being a human being that have held for thousands of years are going to begin to be perturbed.

Anthony Aguirre: Yes.

Lucas Perry: You brought this up by saying that you could duplicate yourself arbitrarily, whether you do that physically, via scans and instantiating actual molecular duplicates of yourself, or by being mind uploaded and then having that duplicated arbitrarily. For hundreds of thousands of years, your atoms would cycle out every seven years or so, and that’s how you would be numerically distinct; qualitatively, you would just change over your whole lifetime until you became thermodynamically very uninteresting and spread out and died.

Now, there’s this duplication stuff. There is your ability to qualitatively change yourself very arbitrarily. So at first, it will be through bioengineering like designer babies. There’s all these interesting things and lots of thought experiments that go along with it. What about people who have their corpus callosum cut? You have the sense of phenomenological self, which is associated with that. You feel like you’re a unitary subject of experience.

What happens to your first-person phenomenological perspective if you do something like that? What about if you create a corpus callosum bridge to another person’s brain? What happens to the phenomenological self or identity then? Science and AI and increasing intelligence and power over the universe will increasingly give us the power to radically change and subvert our commonly held intuitions about identity, which are constituted by the kinds of questions and properties we’re interested in.

Then also the phenomenological experience: whether you have a strong sense of self, whether you are empty of a sense of self, or whether you feel identified with all of consciousness and the whole world. There are spectrums and degrees and all kinds of things here. That is an introduction to the kind of problem this is.

Anthony Aguirre: I agree with everything you said, but you’re very unhelpfully asking all the super interesting questions-

Lucas Perry: At once.

Anthony Aguirre: … which are all totally impossible to solve. No, I totally agree. We’ve had this enviable situation of one mind equals one self equals one brain equals one body, and that has made it much easier to accord to that whole set of things, all of which are identified with each other, a set of rights and moral values and things like that.

Lucas Perry: Which all rest on these intuitions, right? That are all going to change.

Anthony Aguirre: Right.

Lucas Perry: Property and rights and value and relationships and phenomenological self, et cetera.

Anthony Aguirre: Right, so we either have a choice of trying to maintain that identity, and remove any possibility of breaking some of those identities because it’s really important to keep all those things identified, or we have to understand some other way to accord value and rights and all those things given that the one-to-one correspondence can break. Both of those are going to be very hard, I think. As a practical matter, it’s simply going to happen that those identifications are going to get broken sooner or later.

As you say, if we have a sufficient communication bandwidth between two different brains, for example, one can easily imagine that they’ll start to have a single identity just as the two hemispheres of our brain are connected enough that they generally have what feels like a single identity. Even though if you cut it, it seems fairly clear that there are in some sense two different identities. At minimum, technologically, we ought to be able to do that.

It seems very likely that we’ll have machine intelligence systems whose phenomenological awareness of the world is unclear but at least have a concept of self and a history and agency and will be easily duplicatable. They at least will have to face the question of what it means when they get duplicated because that’s going to happen to them, and they’re going to have to have a way of dealing with that reality because it’s going to be their everyday reality that they can be copied, ad infinitum, and reset and so on.

If their functioning is at all like a current digital computer’s, there are also going to be even bigger gulfs than there are now between levels of capability and awareness and knowledge and perhaps consciousness. We already have those gulfs, and we gloss over them, and I think that’s a good thing in according people fundamental human rights. We don’t give people, at least explicitly and legally, more rights when they’re better educated and wealthier and so on, even if in practice they do get more.

Legally, we don’t, even though that range is pretty big, but if it gets dramatically bigger, it may get harder and harder to maintain even that principle. I find it both exciting and incredibly daunting because the questions are so hard to think of how we’re going to deal with that set of ethical questions and identity questions, and yet we’re going to have to somehow. I don’t think we can avoid them. One possibility is t