Sam Harris on Global Priorities, Existential Risk, and What Matters Most
Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them.
Topics discussed in this episode include:
- The problem of communication
- Global priorities
- Existential risk
- Animal suffering in both wild animals and factory-farmed animals
- Global poverty
- Artificial general intelligence risk and AI alignment
- Ethics
- Sam’s book, The Moral Landscape
You can take a survey about the podcast here
Submit a nominee for the Future of Life Award here
Timestamps:
0:00 Intro
3:52 What are the most important problems in the world?
13:14 Global priorities: existential risk
20:15 Why global catastrophic risks are more likely than existential risks
25:09 Longtermist philosophy
31:36 Making existential and global catastrophic risk more emotionally salient
34:41 How analyzing the self makes longtermism more attractive
40:28 Global priorities & effective altruism: animal suffering and global poverty
56:03 Is machine suffering the next global moral catastrophe?
59:36 AI alignment and artificial general intelligence/superintelligence risk
01:11:25 Expanding our moral circle of compassion
01:13:00 The Moral Landscape, consciousness, and moral realism
01:30:14 Can bliss and wellbeing be mathematically defined?
01:31:03 Where to follow Sam and concluding thoughts
You can follow Sam here:
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.
Transcript
Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today we have a conversation with Sam Harris where we get into issues related to global priorities, effective altruism, and existential risk. In particular, this podcast covers the critical importance of improving our ability to communicate and converge on the truth, animal suffering in both wild animals and factory-farmed animals, global poverty, artificial general intelligence risk and AI alignment, as well as ethics and some thoughts on Sam’s book, The Moral Landscape.
If you find this podcast valuable, you can subscribe or follow us on your preferred listening platform, like on Apple Podcasts, Spotify, SoundCloud, or whatever your preferred podcasting app is. You can also support us by leaving a review.
Before we get into it, I would like to echo two announcements from previous podcasts. If you’ve been tuned into the FLI Podcast recently, you can skip ahead just a bit. The first is that there is an ongoing survey for this podcast where you can give me feedback and voice your opinion about content. This goes a super long way toward helping me make the podcast valuable for everyone. You can find a link for the survey about this podcast in the description of wherever you might be listening.
The second announcement is that at the Future of Life Institute we are in the midst of our search for the 2020 winner of the Future of Life Award. The Future of Life Award is a $50,000 prize that we give out to an individual who, without having received much recognition at the time of their actions, has helped to make today dramatically better than it may have been otherwise. The first two recipients of the Future of Life Award were Vasili Arkhipov and Stanislav Petrov, two heroes of the nuclear age. Both took actions at great personal risk to possibly prevent an all-out nuclear war. The third recipient was Dr. Matthew Meselson, who spearheaded the international ban on bioweapons. Right now, we’re not sure who to give the 2020 Future of Life Award to. That’s where you come in. If you know of an unsung hero who has helped to avoid global catastrophic disaster, or who has done incredible work to ensure a beneficial future of life, please head over to the Future of Life Award page and submit a candidate for consideration. The link for that page is on the page for this podcast or in the description of wherever you might be listening. If your candidate is chosen, you will receive $3,000 as a token of our appreciation. We’re also incentivizing the search via MIT’s successful red balloon strategy, where the first to nominate the winner gets $3,000 as mentioned, but there are also tiered payouts: whoever first invited the winning nominator gets $1,500, whoever first invited that person gets $750, whoever first invited the previous person gets $375, and so on. You can find details about that on the Future of Life Award page.
Sam Harris has a PhD in neuroscience from UCLA and is the author of five New York Times best sellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up, and Islam and the Future of Tolerance (with Maajid Nawaz). Sam hosts the Making Sense Podcast and is also the creator of the Waking Up App, which is for anyone who wants to learn to meditate in a modern, scientific context. Sam has practiced meditation for more than 30 years and studied with many Tibetan, Indian, Burmese, and Western meditation teachers, both in the United States and abroad.
And with that, here’s my conversation with Sam Harris.
Lucas Perry: Starting off here, trying to get a perspective on what matters most in the world and on global priorities or crucial areas for consideration, what do you see as the most important problems in the world today?
Sam Harris: There is one fundamental problem which is encouragingly or depressingly non-technical, depending on your view of it. I mean it should be such a simple problem to solve, but it's seeming more or less totally intractable, and that's just the problem of communication. The problem of persuasion, the problem of getting people to agree on a shared consensus view of reality, to acknowledge basic facts, and to have their probability assessments of various outcomes converge through honest conversation. Politics is obviously the great confounder of this meeting of the minds. I mean, our failure to fuse cognitive horizons through conversation is reliably derailed by politics. But there are other sorts of ideology that do this just as well, religion being perhaps first among them.
And so it seems to me that the first problem we need to solve, the place where we need to make progress and we need to fight for every inch of ground and try not to lose it again and again is in our ability to talk to one another about what is true and what is worth paying attention to, to get our norms to align on a similar picture of what matters. Basically value alignment, not with superintelligent AI, but with other human beings. That's the master riddle we have to solve and our failure to solve it prevents us from doing anything else that requires cooperation. That's where I'm most concerned. Obviously technology influences it, social media and even AI and the algorithms behind the gaming of everyone's attention. All of that is influencing our public conversation, but it really is a very apish concern and we have to get our arms around it.
Lucas Perry: So that's quite interesting and not the answer that I was expecting. I think that sounds like quite the crucial stepping stone. The fact that climate change isn't something that we're able to agree upon, and is instead a matter of political opinion, drives me crazy. And that's one of many different global catastrophic or existential risk issues.
Sam Harris: Yeah. The COVID pandemic has made me especially skeptical of our agreeing to do anything about climate change. The fact that we can't persuade people about the basic facts of epidemiology when this thing is literally coming in through the doors and windows, and even very smart people are now going down the rabbit hole of thinking this is, on some level, a hoax, people's political and economic interests just bend their view of basic facts. I mean it's not to say that there hasn't been a fair amount of uncertainty here, but it's not the sort of uncertainty that should give us these radically different views of what's happening out in the world. Here we have a pandemic moving in real time. I mean, where we can see a wave of illness breaking in Italy a few weeks before it breaks in New York. And again, there's just this Baghdad Bob level of denialism. The prospects of our getting our heads straight with respect to climate change, in light of what's possible in the middle of a pandemic, seem at the moment totally farfetched to me.
For something like climate change, I really think a technological elite needs to just take on the problem and decide to solve it by changing the kinds of products we create and the way we manufacture things, and we just have to get out of the politics of it. It can't be a matter of persuading more than half of American society to make economic sacrifices. It's much more along the lines of just building cars and other products that are carbon neutral, that people want, and solving the problem that way.
Lucas Perry: Right. Incentivizing the solution by making products that are desirable and satisfy people's self-interest.
Sam Harris: Yeah. Yeah.
Lucas Perry: I do want to explore more actual global priorities. This point about the necessity of reason for being able to at least converge upon the global priorities that are most important seems to be a crucial and necessary stepping stone. So before we get into talking about things like existential and global catastrophic risk, do you see a way of this project of promoting reason and good conversation and converging around good ideas succeeding? Or do you have any other things to sort of add to these instrumental abilities humanity needs to cultivate for being able to rally around global priorities?
Sam Harris: Well, I don't see a lot of innovation beyond just noticing that conversation is the only tool we have. Intellectual honesty spread through the mechanism of conversation is the only tool we have to converge in these ways. I guess the thing to notice that's guaranteed to make it difficult is bad incentives. So we should always be noticing what incentives are doing behind the scenes to people's cognition. There are things that could be improved in media. I think the advertising model is a terrible system of incentives for journalists and anyone else who's spreading information. You're incentivized to create sensational hot takes and clickbait and depersonalize everything, to just create one lurid confection after another that really doesn't get at what's true. The fact that this tribalizes almost every conversation and forces people to view it through a political lens. The way this is all amplified by Facebook's business model, and the fact that you can sell political ads on Facebook and use their micro-targeting algorithms to, frankly, distort people's vision of reality and get them to vote or not vote based on some delusion.
All of this is pathological and it has to be disincentivized in some way. The business model of digital media is part of the problem. But beyond that, people have to be better educated and realize that thinking through problems and understanding facts and creating better arguments and responding to better arguments and realizing when you're wrong, these are muscles that need to be trained, and there are certain environments in which you can train them well. And there's certain environments where they are guaranteed to atrophy. Education largely consists in the former, in just training someone to interact with ideas and with shared perceptions and with arguments and evidence in a way that is agnostic as to how things will come out. You're just curious to know what's true. You don't want to be wrong. You don't want to be self-deceived. You don't want to have your epistemology anchored to wishful thinking and confirmation bias and political partisanship and religious taboos and other engines of bullshit, really.
I mean, you want to be free of all that, and you don't want to have your personal identity trimming down your perception of what is true or likely to be true or might yet happen. People have to understand what it feels like to be willing to reason about the world in a way that is unconcerned about the normal, psychological and tribal identity formation that most people, most of the time use to filter against ideas. They'll hear an idea and they don't like the sound of it because it violates some cherished notion they already have in the bag. So they don't want to believe it. That should be a tip off. That's not more evidence in favor of your worldview. That's evidence that you are an ape who's disinclined to understand what's actually happening in the world. That should be an alarm that goes off for you, not a reason to double down on the last bad idea you just expressed on Twitter.
Lucas Perry: Yeah. The way the ego and concern for reputation and personal identity and shared human psychological biases influence the way that we have conversations seems to be a really big hindrance here. And being aware of how your mind is reacting in each moment to the kinetics of the conversation and what is happening can be really skillful for catching unwholesome or unskillful reactions, it seems. And I've found that non-violent communication has been really helpful for me in terms of having valuable open discourse where one's identity or pride isn't on the line. The ability to seek truth with another person, instead of having a debate or argument, is certainly a skill to be developed. Yet that kind of format for discussion isn't always rewarded or promoted as well as something like an adversarial debate, which tends to get a lot more attention.
Sam Harris: Yeah.
Lucas Perry: So as we begin to strengthen our epistemology and conversational muscles so that we're able to arrive at agreement on core issues, that'll allow us to create a better civilization and work on what matters. So I do want to pivot here into what those specific things might be. Now I have three general categories, maybe four, for us to touch on here.
The first is existential risks, which primarily come from technology and might lead to the extinction of Earth-originating life, or more specifically just the extinction of human life. You have a TED Talk on AGI risk, that's artificial general intelligence risk: the risk of machines becoming as smart as or smarter than human beings and being misaligned with human values. There's also synthetic bio risk, where advancements in genetic engineering may unleash a new age of engineered pandemics which are more lethal than anything produced by nature. We have nuclear war, and we also have new technologies or events that might come about that we aren't aware of or can't predict yet. And the other categories, in terms of global priorities, that I want to touch on are global poverty, animal suffering, and human health and longevity. So how is it that you think of and prioritize these issues, and what is your reaction to their relative importance in the world?
Sam Harris: Well, I'm persuaded that thinking about existential risk is something we should do much more. It is amazing how few people spend time on this problem. It's a big deal that we have the survival of our species as a blind spot, but I'm more concerned about what seems likelier to me, which is not that we will do something so catastrophically unwise as to erase ourselves, certainly not in the near term. We're capable of doing that, clearly, but I think it's more likely that we're capable of ensuring our unrecoverable misery for a good long while. We could just make life basically not worth living, but we'll be forced, or someone will be forced, to live it all the while; basically a Road Warrior-like hellscape could await us as opposed to just pure annihilation. So that's a civilizational risk that I worry more about than extinction, because it just seems probabilistically much more likely to happen no matter how big our errors are.
I worry about our stumbling into an accidental nuclear war. That's something that I think is still pretty high on the list of likely ways we could completely screw up the possibility of human happiness in the near term. It's humbling to consider what an opportunity cost this pandemic is, minor as it is compared to what's possible, right? I mean, we've got this pandemic that has locked down most of humanity, and every problem we had and every risk we were running as a species prior to anyone learning the name of this virus is still here. The threat of nuclear war has not gone away. It's just, this has taken up all of our bandwidth. We can't think about much else. It's also humbling to observe how hard a time we're having even agreeing about what's happening here, much less responding intelligently to the problem. If you imagine a pandemic that was orders of magnitude more deadly and more transmissible, man, this is a pretty startling dress rehearsal.
I hope we learn something from this. I hope we think more about things like this happening in the future and prepare for them in advance. I mean, the fact that we have a CDC that still cannot get its act together is just astounding. And again, politics is the thing that is gumming up the gears in any machine that would otherwise run halfway decently at the moment. I mean, we have a truly deranged president and that is not a partisan observation. That is something that can be said about Trump. And it would not be said about most other Republican presidents. There's nothing I would say about Trump that I could say about someone like Mitt Romney or any other prominent Republican. This is the perfect circumstance to accentuate the downside of having someone in charge who lies more readily than perhaps any person in human history.
It's like toxic waste at the informational level has been spread around for three years now and now it really matters that we have an information ecosystem that has no immunity against crazy distortions of the truth. So I hope we learn something from this. And I hope we begin to prioritize the list of our gravest concerns and begin steeling our civilization against the risk that any of these things will happen. And some of these things are guaranteed to happen. The thing that's so bizarre about our failure to grapple with a pandemic of this sort is, this is the one thing we knew was going to happen. This was not a matter of "if." This was only a matter of "when." Now nuclear war is still a matter of "if", right? I mean, we have the bombs, they're on hair-trigger, overseen by absolutely bizarre and archaic protocols and highly outdated technology. We know this is just a doomsday system we've built that could go off at any time through sheer accident or ineptitude. But it's not guaranteed to go off.
But pandemics are just guaranteed to emerge, and we still were caught flat-footed here. And so I just think we need to use this occasion to learn a lot about how to respond to this sort of thing. And again, if we can't convince the public that this sort of thing is worth paying attention to, we have to do it behind closed doors, right? I mean, we have to get people into power who have their heads screwed on straight here and just ram it through. There has to be a kind of Manhattan Project-level urgency to this, because this is about as benign a pandemic as we could have had that would still cause significant problems. Imagine an engineered virus, a weaponized virus calculated to kill the maximum number of people. I mean, that's a zombie movie all of a sudden, and we're not ready for the zombies.
Lucas Perry: I think that my two biggest updates from the pandemic were that human civilization is much more fragile than I thought it was. And also, I now trust the US government way less in its capability to mitigate these things. I think at one point you said that 9/11 was the first time that you felt like you were actually in history. And as someone who's 25, being in the COVID pandemic, this is the first time that I feel like I'm in human history. Because my life so far has been very normal and constrained, and the boundaries between everything have been very rigid and solid, but this is perturbing that.
So you mentioned that you were slightly less worried about humanity just erasing ourselves via some kind of existential risk, and part of the idea here seems to be that there are futures that are not worth living. Like if there's such a thing as a moment or a day that isn't worth living, then there are also futures that are not worth living. So I'm curious if you could unpack why you feel that these periods of time that are not worth living are more likely than existential risks, and if you think that some of those existential conditions could be permanent. And could you speak a little bit about the relative likelihood of existential risks and suffering risks, and whether you see the higher likelihood of the suffering risks to be ones that are constrained in time or indefinite?
Sam Harris: In terms of the probabilities, it just seems obvious that it is harder to eradicate the possibility of human life entirely than it is to just kill a lot of people and make the remaining people miserable. Right? If a pandemic spreads, whether it's natural or engineered, that has 70% mortality and the transmissibility of measles, that's going to kill billions of people. But it seems likely that it may spare some millions of people or tens of millions of people, even hundreds of millions of people and those people will be left to suffer their inability to function in the style to which we've all grown accustomed. So it would be with war. I mean, we could have a nuclear war and even a nuclear winter, but the idea that it'll kill every last person or every last mammal, it would have to be a bigger war and a worse winter to do that.
So I see the prospect of things going horribly wrong to be one that yields not a dial tone, but some level of remaining, even civilized, life that's just terrible, that nobody would want, where we basically all have the quality of life of a mediocre day in the middle of the civil war in Syria. Who wants to live that way? If every city on Earth is basically a dystopian cell on a prison planet, that for me is a sufficient ruination of the hopes and aspirations of civilized humanity. That's enough to motivate all of our efforts to avoid things like accidental nuclear war and uncontrolled pandemics and all the rest. And in some ways it's more motivating, because when you ask people, what's the problem with the failure to continue the species, right? Like if we all died painlessly in our sleep tonight, what's the problem with that?
That actually stumps some considerable number of people, because they immediately see that the complete annihilation of the species painlessly is really a kind of victimless crime. There's no one around to suffer our absence. There's no one around to be bereaved. There's no one around to think, oh man, we could have had billions of years of creativity and insight and exploration of the cosmos, and now the lights have gone out on the whole human project. There's no one around to suffer that disillusionment. So what's the problem? I'm persuaded that that's not the perfect place to stand to evaluate the ethics. I agree that losing that opportunity is a negative outcome that we want to value appropriately, but it's harder to value it emotionally and it's not as clear. There's also an asymmetry between happiness and suffering, which I think is hard to get around.
We are perhaps rightly more concerned about suffering than we are about losing opportunities for wellbeing. If I told you that you could have an hour of the greatest possible happiness, but that it would have to be followed by an hour of the worst possible suffering, I think most people given that offer would say, oh, well, okay, I'm good. I'll just stick with what it's like to be me. The hour of the worst possible misery seems like it's going to be worse than the highest possible happiness is going to be good, and I do sort of share that intuition. And when you think about it in terms of the future of humanity, I think it is more motivating to think, not that your grandchildren might not exist, but that your grandchildren might live horrible lives, really unendurable lives, and that they'll be forced to live them because they'll have been born. If for no other reason than that we have to persuade some people to take these concerns seriously, I think that's the place to put most of the emphasis.
Lucas Perry: I think that's an excellent point. I think it makes it more morally salient and leverages human self-interest more. One distinction that I want to make is the distinction between existential risks and global catastrophic risks. Global catastrophic risks are those which would kill a large fraction of humanity without killing everyone, and existential risks are ones which would exterminate all people or all Earth-originating intelligent life. And the former, global catastrophic risks, are the ones which you're primarily discussing here, where something goes really badly and we're left with some pretty bad existential situation.
Sam Harris: Yeah.
Lucas Perry: Now, we're not locked into that forever. So it's pretty far away from being what is talked about in the effective altruism community as a suffering risk. That actually might only last a hundred or a few hundred years, or maybe less. Who knows. It depends on what happened. But now taking a bird's-eye view again on global priorities, and standing on a solid ground of ethics, what is your perspective on longtermist philosophy? This is the position or idea that the deep future has overwhelming moral priority, given the countless trillions of lives that could be lived. So if an existential risk occurs, then we're basically canceling the whole future, like you mentioned. There won't be any suffering and there won't be any joy, but we're missing out on a ton of good, it would seem. And with the continued evolution of life, through genetic engineering and enhancements and artificial intelligence, it would seem that the future could also be unimaginably good.
If you do an expected value calculation about existential risks, you can estimate very roughly the likelihood of each existential risk, whether it be from artificial general intelligence or synthetic bio or nuclear weapons or a black swan event that we couldn't predict. If you multiply that by the amount of value in the future, you get some astronomical number, given the astronomical amount of value the future could contain. Does this kind of argument or viewpoint do the work for you to commit you to seeing existential risk as a global priority, or the central global priority?
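[As a rough illustration of the expected value calculation Lucas describes here, using purely hypothetical numbers that neither speaker endorses: if the probability of an existential catastrophe this century were p = 0.01, and the future would otherwise contain V = 10^16 worthwhile lives, then the expected loss from that risk is p × V = 10^14 lives, which dwarfs the roughly 7.8 × 10^9 people alive today. The conclusion is insensitive to the exact figures so long as V is taken to be astronomically large.]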
Sam Harris: Well, it doesn't do the emotional work, largely because we're just bad at thinking about long-term risk. It doesn't even have to be that long-term for our intuitions and concerns to degrade irrationally. We're bad at thinking about the well-being even of our future selves as you get further out in time. The term of jargon is that we "hyperbolically discount" our future well-being. People will smoke cigarettes or make other imprudent decisions in the present. They know they will be the inheritors of these bad decisions, but there's some short-term upside.
The mere pleasure of the next cigarette say, that convinces them that they don't really have to think long and hard about what their future self will wish they had done at this point. Our ability to be motivated by what we think is likely to happen in the future is even worse when we're thinking about our descendants. Right? People we either haven't met yet or may never meet. I have kids, but I don't have grandkids. How much of my bandwidth is taken up thinking about the kinds of lives my grandchildren will have? Really none. It's conserved. It's safeguarded by my concern about my kids, at this point.
But, then there are people who don't have kids and are just thinking about themselves. It's hard to think about the comparatively near future. Even a future that, barring some real mishap, you have every expectation of having to live in yourself. It's just hard to prioritize. When you're talking about the far future, it becomes very, very difficult. You just have to have the science fiction geek gene or something disproportionately active in your brain, to really care about that.
Unless you think you are somehow going to cheat death and get aboard the starship when it's finally built. If you're popping 200 vitamins a day with Ray Kurzweil and you think you might just be in the cohort of people who are going to make it out of here without dying, because we're just on the cusp of engineering death out of the system, then I could see, okay, there's a self-interested view of it. If you're really talking about hypothetical people who you know you will never come in contact with, I think it's hard to be sufficiently motivated, even if you believe the moral algebra here.
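[A note on the term of jargon Sam uses above: in the behavioral economics and psychology literature, hyperbolic discounting is standardly modeled as V(D) = A / (1 + kD), where A is the value of an immediate reward, D is the delay before receiving it, and k is an individually fitted discount rate. Subjective value therefore falls off steeply over short delays and only slowly thereafter, which is why the next cigarette can outweigh one's future health, and why distant descendants barely register at all.]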
It's not clear to me that it need run through. I agree with you that if you do a basic expected value calculation here, and you start talking about trillions of possible lives, their interests must outweigh the interests of the 7.8 billion, or whatever it is, of us currently alive. There are a few asymmetries here, again. The asymmetry between actual and hypothetical lives: there are no identifiable lives who would be deprived of anything if we all just decided to stop having kids. You have to take the point of view of the people alive who make this decision.
If we all just decided, "Listen. These are our lives to live. We can decide how we want to live them. None of us want to have kids anymore." If we all independently made that decision, the consequence on this calculus is that we are the worst people, morally speaking, who have ever lived. That doesn't quite capture the moment, the experience or the intentions. We could do this thing without ever thinking about the implications of existential risk. If we didn't have a phrase for this, and we didn't have people like ourselves talking about this as a problem, people could just be taken in by the overpopulation thesis.
The thesis that that's really the thing destroying the world, and that what we need is some kind of Gaian reset, where the Earth reboots without us: let's just stop having kids and let nature reclaim the edges of the cities. You could see a kind of utopian environmentalism creating some dogma around that, where it was no one's intention ever to commit some kind of horrific crime. Yet, on this existential risk calculus, that's what would have happened. It's hard to think about the morality there, when you talk about people deciding not to have kids and it would be the same catastrophic outcome.
Lucas Perry: That situation to me seems to be like looking over the possible moral landscape and seeing a mountain or not seeing a mountain, but there still being a mountain. Then you can have whatever kinds of intentions that you want, but you're still missing it. From a purely consequentialist framework, I feel not so bad saying that this would probably be one of the worst things that has ever happened.
Sam Harris: The asymmetry here between suffering and happiness still seems psychologically relevant. It's not quite the worst thing that's ever happened, but the best things that might have happened have been canceled. Granted, I think there's a place to stand where you could think that is a horrible outcome, but again, it's not the same thing as creating some hell and populating it.
Lucas Perry: I see what you're saying. I'm not sure that I quite share the intuition about the asymmetry between suffering and well-being. I feel somewhat suspicious of that, but that would be a huge tangent right now, I think. Now, one of the crucial things that you said was that, for those who are not really compelled by the long-term future argument, who don't have the science fiction geek gene and are not compelled by moral philosophy, the essential way to get people to care about global catastrophic and existential risk seems to be to demonstrate that these risks are very likely within this century.
And so their direct descendants, like their children or grandchildren, or even them, may live in a world that is very bad or they may die in some kind of a global catastrophe, which is terrifying. Do you see this as the primary way of leveraging human self-interest and feelings and emotions to make existential and global catastrophic risk salient and pertinent for the masses?
Sam Harris: It's certainly half the story, and it might be the most compelling half. I'm not saying that we should be just worried about the downside, because the upside also is something we should celebrate and aim for. The other side of the story is that we've made incredible progress. If you take someone like Steven Pinker and his big books of what is often perceived as happy talk, he's pointing out all of the progress, morally and technologically and at the level of public health.
It's just been virtually nothing but progress. There's no point in history where you're luckier to live than in the present. That's true. I think that the thing that Steve's story conceals, or at least doesn't spend enough time acknowledging, is that the risk of things going terribly wrong is also increasing. It was also true a hundred years ago that it would have been impossible for one person or a small band of people to ruin life for everyone else.
Now that's actually possible. Just imagine if this current pandemic were an engineered virus, more like a lethal form of measles. It might take five people to create that and release it. Here we would be locked down in a truly terrifying circumstance. The risk is ramped up. I think we just have to talk about both sides of it. There is no limit to how beautiful life could get if we get our act together. Take an argument of the sort that David Deutsch makes about the power of knowledge.
Every problem has a solution born of a sufficient insight into how things work, i.e. knowledge, unless the laws of physics rule it out. If it's compatible with the laws of physics, knowledge can solve the problem. That's virtually a blank check with reality that we could live to cash, if we don't kill ourselves in the process. Again, as the upside becomes more and more obvious, the risk that we're going to do something catastrophically stupid is also increasing. The principles here are the same. The only reason why we're talking about existential risk is because we have made so much progress. Without the progress, there'd be no way to make a sufficiently large mistake. It really is two sides of the coin of increasing knowledge and technical power.
Lucas Perry: One thing that I wanted to throw in here, in terms of the kinetics of longtermism and emotional saliency: it would be stupidly optimistic, I think, to think that everyone could become selfless bodhisattvas. But in terms of your interests, the way in which you promote meditation and mindfulness, and your arguments against the conventional, experiential and conceptual notion of the self, have, for me at least, dissolved many of the barriers which would hold me back from being emotionally motivated by longtermism.
Now, that itself, I think, is another long conversation. When your sense of self is being nudged, disentangled and dissolved in new ways, the idea that it won't be you in the future, or the idea that the beautiful dreams that Dyson spheres will be having in a billion years are not you, begins to relax a bit. That's probably not something that is helpful for most people, but I do think that it's possible for people to adopt, and for meditation, mindfulness and introspection to lead to this weakening of the sense of self, which then also opens one's optimism, and compassion, and mind towards the longtermist view.
Sam Harris: That's something that you get from reading Derek Parfit's work. The paradoxes of identity that he so brilliantly framed and tried to reason through yield something like what you're talking about. It's not so important whether it's you, because this notion of you is in fact, paradoxical to the point of being impossible to pin down. Whether the you that woke up in your bed this morning is the same person who went to sleep in it the night before, that is problematic. Yet there's this fact of some degree of psychological continuity.
The basic fact experientially is just, there is consciousness and its contents. The only place for feelings, and perceptions, and moods, and expectations, and experience to show up is in consciousness, whatever it is and whatever its connection to the physics of things actually turns out to be. There's just consciousness. The question of where it appears is a genuinely interesting one philosophically, and intellectually, and scientifically, and ultimately morally.
Because if we build conscious robots or conscious computers and build them in a way that causes them to suffer, we've just done something terrible. We might do that inadvertently if we don't know how consciousness arises based on information processing, or whether it does. It's all interesting terrain to think about. If the lights are still on a billion years from now, and the view of the universe is unimaginably bright, and interesting and beautiful, and all kinds of creative things are possible by virtue of the kinds of minds involved, that will be much better than any alternative. That's certainly how it seems to me.
Lucas Perry: I agree. Some things here that ring true seem to be, you always talk about how there's only consciousness and its contents. I really like the phrase, "Seeing from nowhere." That usually is quite motivating for me, in terms of the arguments against the conventional conceptual and experiential notions of self. There just seems to be instantiations of consciousness intrinsically free of identity.
Sam Harris: Two things to distinguish here. There's the philosophical, conceptual side of the conversation, which can show you that things like your concept of a self, or certainly your concept of a self that could have free will that, that doesn't make a lot of sense. It doesn't make sense when mapped onto physics. It doesn't make sense when looked for neurologically. Any way you look at it, it begins to fall apart. That's interesting, but again, it doesn't necessarily change anyone's experience.
It's just a riddle that can't be solved. Then there's the experiential side which you encounter more in things like meditation, or psychedelics, or sheer good luck where you can experience consciousness without the sense that there's a subject or a self in the center of it appropriating experiences. Just a continuum of experience that doesn't have structure in the normal way. What's more, that's not a problem. In fact, it's the solution to many problems.
A lot of the discomfort you have felt psychologically goes away when you punch through to a recognition that consciousness is just the space in which thoughts, sensations and emotions continually appear, change and vanish. There's no thinker authoring the thoughts. There's no experiencer in the middle of the experience. It's not to say you don't have a body. Every sign that you have a body is still appearing. There are sensations of tension, warmth, pressure and movement.
There are sights, there are sounds but again, everything is simply an appearance in this condition, which I'm calling consciousness for lack of a better word. There's no subject to whom it all refers. That can be immensely freeing to recognize, and that's a matter of a direct change in one's experience. It's not a matter of banging your head against the riddles of Derek Parfit or any other way of undermining one's belief in personal identity or the reification of a self.
Lucas Perry: A little bit earlier, we talked a little bit about the other side of the existential risk coin. That other side is what we like to call existential hope at the Future of Life Institute. We're not just a doom-and-gloom society. It's also about how the future can be unimaginably good if we can get our act together and manage and steward our technologies with wisdom and benevolence in mind.
Pivoting here, and reflecting a little bit on the implications for global priorities of some of this no-self conversation we've been having, the effective altruism community has narrowed in on three of these global priorities as central issues of consideration: existential risk, global poverty, and animal suffering. We talked a bunch about existential risk already. Global poverty is widespread, while many of us live in quite nice and abundant circumstances.
Then there's animal suffering, which can be thought of in two categories. One is factory-farmed animals, where we have billions upon billions of animals being born into miserable conditions and slaughtered for sustenance. Then we also have wild animal suffering, which is a bit more esoteric and seems harder to get any traction on alleviating. Thinking about these last two points, global poverty and animal suffering, what is your perspective on them?
I find the lack of willingness of people to empathize with and be compassionate towards animal suffering to be quite frustrating, as well as global poverty, of course. And I wonder whether you view the perspective of no-self as potentially being informative or helpful for leveraging human compassion and motivation to help other people and to help animals. One quick argument here that comes from the conventional view of self, so it isn't strictly true or rational, but it is motivating for me, is that I feel like I was just born as me and then I just woke up one day as Lucas.
"I" here referring to this conventional and experientially illusory notion that I have of myself, this convenient fiction that I have. Now, you're going to die, and you could wake up as a factory-farmed animal. Surely there are those billions upon billions of instantiations of consciousness that are just going through misery. If the self is an illusion, then there are selfless chicken and cow experiences of enduring suffering. Any thoughts or reactions you have to global poverty, animal suffering, and what I mentioned here?
Sam Harris: I guess the first thing to observe is that, again, we are badly set up to prioritize what should be prioritized, and to have the emotional response commensurate with what we could rationally understand is so. We have a problem of motivation. We have a problem of making data real. This has been psychologically studied, but it's just manifest in oneself and in the world. We care more about the salient narrative that has a single protagonist than we do about the data on even human suffering.
The classic example here is one little girl falls down a well, and you get wall to wall news coverage. All the while there could be a genocide or a famine killing hundreds of thousands of people, and it doesn't merit more than five minutes. One broadcast. That's clearly a bug, not a feature morally speaking, but it's something we have to figure out how to work with because I don't think it's going away. One of the things that the effective altruism philosophy has done, I think usefully, is that it has separated two projects which up until the emergence of effective altruism, I think were more or less always conflated.
They're both valid projects, but one has much greater moral consequence. The fusion of the two is the concern about giving and how it makes one feel. I want to feel good about being philanthropic. Therefore, I want to give to causes that give me these good feels. In fact, at the end of the day, the feeling I get from giving is what motivates me to give. If I'm giving in a way that doesn't really produce that feeling, well, then I'm going to give less or give less reliably.
Even in a contemplative Buddhist context, there's an explicit fusion of these two things. The reason to be moral and to be generous is not merely, or even principally, the effect on the world. The reason is that it makes you a better person. It gives you a better mind. You feel better in your own skin. It is, in fact, more rewarding than being selfish. I think that's true, but that doesn't really get at the important point here, which is that we're living in a world where the difference between having good and bad luck is so enormous.
The inequalities are so shocking and indefensible. The fact that I was born me, and not born in some hellhole in the middle of a civil war, soon to be orphaned, and impoverished and riddled by disease: I can take no responsibility for the difference in luck there. That difference is the difference that matters more than anything else in my life. What the effective altruist community has prioritized is actually helping the most people, or the most sentient beings.
That is fully divorceable from how something makes you feel. Now, I think it shouldn't be ultimately divorceable. I think we should recalibrate our feelings, or struggle to, so that we do find doing the most good the most rewarding thing in the end, but it's hard to do. My inability to do it personally is something that I have just consciously corrected for. I've talked about this a few times on my podcast. When Will MacAskill came on my podcast and we spoke about these things, I was convinced at the end of the day, "Well, I should take this seriously."
I recognize that fighting malaria by sending bed nets to people in sub-Saharan Africa is not a cause I find particularly sexy. I don't find it that emotionally engaging. I don't find it that rewarding to picture the outcome. Again, compared to other possible ways of intervening in human misery and producing some better outcome, it's not the same thing as rescuing the little girl from the well. Yet, I was convinced that, as Will said on that podcast and as organizations like GiveWell attest, giving money to the Against Malaria Foundation was and remains one of the absolute best uses of every dollar to mitigate unnecessary death and suffering.
I just decided to automate my giving to the Against Malaria Foundation because I knew I couldn't be trusted to wake up every day, or every month or every quarter, whatever it would be, and recommit to that project because some other project would have captured my attention in the meantime. I was either going to give less to it or not give at all, in the end. I'm convinced that we do have to get around ourselves and figure out how to prioritize what a rational analysis says we should prioritize and get the sentimentality out of it, in general.
It's very hard to escape entirely. I think we do need to figure out creative ways to reformat our sense of reward. The reward we find in helping people has to begin to become more closely coupled to what is actually most helpful. Conversely, the disgust or horror we feel over bad outcomes should be more closely coupled to the worst things that happen. As opposed to just the most shocking, but at the end of the day, minor things. We're just much more captivated by a sufficiently ghastly story involving three people than we are by the deaths of literally millions that happen some other way. These are bugs we have to figure out how to correct for.
Lucas Perry: I hear you. The person running into the burning building to save the child is hailed as a hero, but if you are, say, earning to give and write enough checks to save dozens of lives over your lifetime, that might not be recognized or felt in the same way.
Sam Harris: And also, these are different people, too. It's also true to say that someone who is psychologically and interpersonally not that inspiring, and certainly not a saint, might wind up doing more good than any saint ever does or could. I don't happen to know Bill Gates. He could be saint-like. I've literally never met him, but I don't get the sense that he is. I think he's kind of a normal technologist and might be normally egocentric, concerned about his reputation and legacy.
He might be a prickly bastard behind closed doors. I don't know, but he certainly stands a chance of doing more good than any person in human history at this point, just based on the checks he's writing and his intelligent prioritization of his philanthropic efforts. There is an interesting uncoupling here, where you could just imagine someone who might be a total asshole, but actually does more good than any army of saints you could muster. That's interesting. That just proves the point that a concern about real-world outcomes is divorceable from the psychology that we tend to associate with doing good in the world.
On the point of animal suffering, I share your intuitions there, although again, this is a little bit like climate change in that I think that the ultimate fix will be technological. It'll be a matter of people producing the Impossible Burger squared that is just so good that no one's tempted to eat a normal burger anymore, or something like Memphis Meats, which, actually, I invested in.
I have no idea where it's going as a company, but when I had its CEO, Uma Valeti, on my podcast back in the day, I just thought, "This is fantastic: to engineer actual meat without producing any animal suffering. I hope he can bring this to scale." At the time, it was like an $18,000 meatball. I don't know what it is now, but it's that kind of thing that will close the door to the slaughterhouse more than just convincing billions of people about the ethics. It's too difficult, and the truth may not align with exactly what we want.
I'm going to reap the whirlwind of criticism from the vegan mafia here, but it's just not clear to me that it's easy to be a healthy vegan. Forget about yourself as an adult making a choice to be a vegan; raising vegan kids is a medical experiment of a certain sort on your kids, and it's definitely possible to screw it up. There's just no question about it. If you're not going to admit that, you're not a responsible parent.
It is possible, but it is by no means easier to raise healthy vegan kids than it is to raise kids who eat meat sometimes, and that's just a problem, right? Now, that's a problem that has a technical solution, but there's still diversity of opinion about what constitutes a healthy human diet even when all things are on the menu. We're just not there yet. It's unlikely to be just a matter of supplementing B12.
Then the final point you made does get us into a kind of, I would argue, reductio ad absurdum of the whole project ethically, when you're talking about losing sleep over whether to protect the rabbits from the foxes out there in the wild. I will grant you, I wouldn't want to trade places with a rabbit, and there's a lot of suffering out there in the natural world, but if you're going to try to figure out how to minimize the suffering of wild animals in relation to other wild animals, then I think you are a kind of antinatalist with respect to the natural world. I mean, then it would be just better if these animals didn't exist, right? Let's just hit stop on the whole biosphere, if that's the project.
Then there's the argument that there are many more ways to suffer than to be happy as a sentient being. Whatever story you want to tell yourself about the promise of future humanity, it's just so awful to be a rabbit or an insect that if an asteroid hit us and canceled everything, that would be a net positive.
Lucas Perry: Yeah. That's an actual view that I hear around a bunch. I guess my quick response is as we move farther into the future, if we're able to reach an existential situation which is secure and where there is flourishing and we're trying to navigate the moral landscape to new peaks, it seems like we will have to do something about wild animal suffering. With AGI and aligned superintelligence, I'm sure there could be very creative solutions using genetic engineering or something. Our descendants will have to figure that out, whether they are just like, "Are wild spaces really necessary in the future and are wild animals actually necessary, or are we just going to use those resources in space to build more AI that would dream beautiful dreams?"
Sam Harris: I just think it may be, in fact, the case that nature is just a horror show. It is bad almost any place you could be born in the natural world: you're unlucky to be a rabbit and you're unlucky to be a fox. We're lucky to be humans, sort of, and we can dimly imagine how much luckier we might get in the future if we don't screw up.
I find it compelling to imagine that we could create a world where certainly most human lives are well worth living and better than most human lives ever were. Again, I follow Pinker in feeling that we've sort of done that already. It's not to say that there aren't profoundly unlucky people in this world, and it's not to say that things couldn't change in a minute for all of us, but life has gotten better and better for virtually everyone when you compare us to any point in the past.
If we get to the place you're imagining, where we have AGI that we have managed to align with our interests and we're migrating into spaces of experience that change everything, it's quite possible we will look back on the "natural world" and be totally unsentimental about it, which is to say, we could compassionately make the decision to either switch it off or no longer provide for its continuation. It's like that's just a bad software program that evolution designed, and wolves and rabbits and bears and mice, they were all unlucky on some level.
We could be wrong about that, or we might discover something else. We might discover that intelligence is not all it's cracked up to be, that it's just this perturbation on something that's far more rewarding. At the center of the moral landscape, there's a peak higher than any other and it's not one that's elaborated by lots of ideas and lots of creativity and lots of distinctions, it's just this great well of bliss that we actually want to fully merge with. We might find out that the cicadas were already there. I mean, who knows how weird this place is?
Lucas Perry: Yeah, that makes sense. I totally agree with you and I feel this is true. I also feel that there's some price that is paid because there's already some stigma around even thinking this. I think it's a really early idea to have in terms of the history of human civilization, so people's initial reaction is like, "Ah, what? Nature's so beautiful and why would you do that to the animals?" Et cetera. We may come to find out that nature is just very net negative, but I could be wrong and maybe it would be around neutral or better than that, but that would require a more robust and advanced science of consciousness.
Just hitting on this next one fairly quickly, effective altruism is interested in finding new global priorities and causes. They call this "Cause X," something that may be a subset of existential risk or something other than existential risk or global poverty or animal suffering probably still just has to do with the suffering of sentient beings. Do you think that a possible candidate for Cause X would be machine suffering or the suffering of other non-human conscious things that we're completely unaware of?
Sam Harris: Yeah, well, I think it's a totally valid concern. Again, it's one of these concerns that's hard to get your moral intuitions tuned up to respond to. People have a default intuition that a conscious machine is impossible, that substrate independence, on some level, is impossible. They're making an assumption without ever doing it explicitly... In fact, I think most people would explicitly deny thinking this, but it is implicit in what they then go on to think when you pose the question of the possibility of suffering machines and suffering computers.
That just seems like something that never needs to be worried about, and yet the only way to close the door to worrying about it is to assume that consciousness is totally substrate-dependent and that we would never build a machine that could suffer because we're building machines out of some other material. If we built a machine out of biological neurons, well, then we might be up for condemnation morally because we've taken an intolerable risk analogous to creating some human-chimp hybrid or whatever. It's like, obviously, that thing's going to suffer. It's an ape of some sort and now it's in a lab, and what sort of monster would do that, right? We would expect the lights to come on in a system of that sort.
If consciousness is the result of information processing on some level, and again, that's an "if," we're not sure that's the case, and if information processing is truly substrate-independent, and that seems like more than an "if" at this point, we know that's true, then we could inadvertently build conscious machines. And then the question is: What is it like to be those machines and are they suffering? There's no way to prevent that on some level.
Certainly, if there's any relationship between consciousness and intelligence, if building more and more intelligent machines is synonymous with increasing the likelihood that the lights will come on experientially, well, then we're clearly on that path. It's totally worth worrying about, but again, judging from what my own mind is like and what my conversations with other people suggest, it seems very hard for people to care about. That's just another one of these wrinkles.
Lucas Perry: Yeah. I think a good way of framing this is that humanity has a history of committing moral catastrophes because of bad incentives: people don't even realize how bad the thing they're doing is, or they just don't really care, or they rationalize it, as with the subjugation of women and slavery. Standing where we are in human history, we look back at those people and see them as morally abhorrent.
Now, the question is: What is it today that we're doing that's morally abhorrent? Well, I think factory farming is easily one contender and perhaps human selfishness that leads to global poverty and millions of people drowning in shallow ponds is another one that we'll look back on. With just some foresight towards the future, I agree that machine suffering is intuitively and emotionally difficult to empathize with if your sci-fi gene isn't turned on. It could be the next thing.
Sam Harris: Yeah.
Lucas Perry: I'd also like to pivot here into AI alignment and AGI. In terms of existential risk from AGI or transformative AI systems, do you have thoughts on public intellectuals who are skeptical of existential risk from AGI or superintelligence? You had a talk about AI risk and I believe you got some flak from the AI community about that. Elon Musk was just skirmishing with the head of AI at Facebook, I think. What is your perspective about the disagreement and confusion here?
Sam Harris: It comes down to a failure of imagination on the one hand and also just bad argumentation. No sane person who's concerned about this is concerned because they think it's going to happen this year or next year. It's not a bet on how soon this is going to happen. For me, it certainly isn't a bet on how soon it's going to happen. It's just a matter of the implications of continually making progress in building more and more intelligent machines. Any progress, it doesn't have to be Moore's law, it just has to be continued progress, will ultimately deliver us into relationship with something more intelligent than ourselves.
To think that that is farfetched or is not likely to happen or can't happen is to assume some things that we just can't assume. It's to assume that substrate independence is not in the cards for intelligence. Forget about consciousness. I mean, consciousness is orthogonal to this question. I'm not suggesting that AGI need be conscious, it just needs to be more competent than we are. We already know that our phones are more competent as calculators than we are, they're more competent chess players than we are. You just have to keep stacking cognitive-information-processing abilities on that and making progress, however incremental.
I don't see how anyone can be assuming substrate dependence for really any of the features of our mind apart from, perhaps, consciousness. Take the top 200 things we do cognitively, consciousness aside, just as a matter of sheer information processing, behavioral control, and power to make decisions, and start checking them off: those have to be substrate independent. Facial recognition, voice recognition: we can already do those in silico. It's just not something you need meat to do.
We're going to build machines that get better and better at all of these things, and ultimately they will pass the Turing test, and ultimately it will be like chess, or now Go, as far as the eye can see: we will be in relationship to something that is better than we are at everything we have prioritized, every human competence we put enough priority on that we took the time to build it into our machines in the first place: theorem-proving in mathematics, engineering software programs. There is no reason why a computer will not ultimately be the best programmer in the end, again, unless you're assuming that there's something magical about doing this in meat. I don't know anyone who's assuming that.
Arguing about the time horizon is a non sequitur, right? No one is saying that this need happen soon to ultimately be worth thinking about. We know that whatever the time horizon is, it can happen suddenly. We have historically been very bad at predicting when there will be a breakthrough. This is a point that Stuart Russell makes all the time. If you look at what Rutherford said about the nuclear chain reaction being a pipe dream, it wasn't even 24 hours before Leo Szilard committed the chain reaction to paper and had the relevant breakthrough. We know we can make bad estimates about the time horizon, so at some point, we could be ambushed by a real breakthrough, which suddenly delivers exponential growth in intelligence.
Then there's a question of just how quickly that could unfold and whether this is something like an intelligence explosion. That's possible. We can't know for sure, but you need to find some foothold to doubt whether these things are possible, and the footholds that people tend to reach for are either nonexistent or they're non sequiturs.
Again, the time horizon is irrelevant and yet the time horizon is the first thing you hear from people who are skeptics about this: "It's not going to happen for a very long time." Well, I mean, Stuart Russell's point here, which is, again, it's just a reframing, but in the persuasion business, reframing is everything. The people who are consoled by this idea that this is not going to happen for 50 years wouldn't be so consoled if we receive a message from an alien civilization which said, "People of Earth, we will arrive on your humble planet in 50 years. Get ready."
If that happened, we would be prioritizing our response to that moment differently than the people who think it's going to take 50 years for us to build AGI are prioritizing their response to what's coming. We would recognize that a relationship with something more powerful than ourselves is in the offing. It's only reasonable to do that on the assumption that we will continue to make progress.
The point I made in my TED Talk is that the only way to assume we're not going to continue to make progress is to be convinced of a very depressing thesis. The only way we wouldn't continue to make progress is if we open the wrong door of the sort that you and I have been talking about in this conversation, if we invoke some really bad roll of the dice in terms of existential risk or catastrophic civilizational failure, and we just find ourselves unable to build better and better computers. I mean, that's the only thing that would cause us to be unable to do that. Given the power and value of intelligent machines, we will build more and more intelligent machines at almost any cost at this point, so a failure to do it would be a sign that something truly awful has happened.
Lucas Perry: Yeah. From my perspective, the people who are skeptical of substrate independence aren't necessarily AI researchers. They're regular people, laypersons who are not computer scientists. I think that's motivated by mind-body dualism, where one has a conventional and experiential sense of the mind as being non-physical, which may be motivated by popular religious beliefs. But when we get to actual AI researchers, it seems they're either attacking some naive version of the argument or a straw man or something.
Sam Harris: Like robots becoming spontaneously malevolent?
Lucas Perry: Yeah. It's either that, or they think that the alignment problem isn't as hard as it is. They have some intuition, like why the hell would we even release systems that weren't safe? Why would we not make technology that served us or something? To me, it seems that when there are people from like the mainstream machine-learning community attacking AI alignment and existential risk considerations from AI, it seems like they just don't understand how hard the alignment problem is.
Sam Harris: Well, they're not taking seriously the proposition that what we will have built are truly independent minds more powerful than our own. If you actually drill down on what that description means, it doesn't mean something that is perfectly enslaved by us for all time, I mean, because that is by definition something that couldn't be more intelligent across the board than we are.
The analogy I use is imagine if dogs had invented us to protect their interests. Well, so far, it seems to be going really well. We're clearly more intelligent than dogs, they have no idea what we're doing or thinking about or talking about most of the time, and they see us making elaborate sacrifices for their wellbeing, which we do. I mean, the people who own dogs care a lot about them and make, you could argue, irrational sacrifices to make sure they're happy and healthy.
But again, back to the pandemic, if we recognize that we had a pandemic that was going to kill the better part of humanity and it was jumping from dogs to people and the only way to stop this is to kill all the dogs, we would kill all the dogs on a Thursday. There'd be some holdouts, but they would lose. The dog project would be over and the dogs would never understand what happened.
Lucas Perry: But that's because humans aren't perfectly aligned with dog values.
Sam Harris: But that's the thing: Maybe it's a solvable problem, but it's clearly not a trivial problem because what we're imagining are minds that continue to grow in power and grow in ways that by definition we can't anticipate. Dogs can't possibly anticipate where we will go next, what we will become interested in next, what we will discover next, what we'll prioritize next. If you're not imagining minds so vast that we can't capture their contents ourselves, you're not talking about the AGI that the people who are worried about alignment are talking about.
Lucas Perry: Maybe this is a little bit of a nuanced distinction between you and me, but I think the story you're developing there seems to assume that the utility function or the value learning or the objective function of the systems that we're trying to align with human values is dynamic. It may be the case that you can build a really smart alien mind and it might become super-intelligent, but there are arguments that maybe you could make its alignment stable.
Sam Harris: That's the thing we have to hope for, right? I'm not a computer scientist, so as far as the doability of this goes, that's something I don't have good intuitions about. But Stuart Russell's argument is that we would need a system whose ultimate value is to more and more closely approximate our current values, one that, no matter how much its intelligence escapes our own, would continually remain available to the conversation with us where we say, "Oh, no, no. Stop doing that. That's not what we want." That would be the most important message from its point of view, no matter how vast its mind got.
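A toy illustration of the idea Sam attributes to Stuart Russell here, offered as a sketch rather than anything definitive: the code below is not Russell's formalism or any library's API, and every number, function name, and the small "query cost" in it is an assumption made up for the example. It tries to capture one qualitative point: an agent that is genuinely uncertain about what the human values can do better, by its own lights, by staying open to the "stop doing that" conversation than by acting unilaterally.

```python
import random

def expected_value_of_acting(samples):
    # If the agent acts on its own estimate, it gets whatever the action is
    # actually worth to the human, good or bad.
    return sum(samples) / len(samples)

def expected_value_of_deferring(samples, query_cost=0.05):
    # If the agent asks first, an attentive human vetoes actions that would be
    # bad for them, so the downside is bounded at zero; asking itself carries a
    # small cost. Both assumptions are idealizations for this toy model.
    return sum(max(u, 0.0) for u in samples) / len(samples) - query_cost

def choose_policy(belief_mean, belief_spread, n=10_000, seed=0):
    rng = random.Random(seed)
    # The agent's belief about how much the human values the proposed action.
    samples = [rng.gauss(belief_mean, belief_spread) for _ in range(n)]
    act = expected_value_of_acting(samples)
    defer = expected_value_of_deferring(samples)
    return ("defer to the human" if defer > act else "act unilaterally", act, defer)

# A very confident agent sees little point in remaining correctable.
print(choose_policy(belief_mean=1.0, belief_spread=0.1))
# A genuinely uncertain agent does better by keeping the conversation open.
print(choose_policy(belief_mean=1.0, belief_spread=3.0))
```

The specific numbers don't matter; the contrast does: the confident agent acts on its own estimate, while the uncertain one prefers to remain correctable.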
Maybe that's doable, right, but that's the kind of thing that would have to be true for the thing to remain completely aligned to us because the truth is we don't want it aligned to who we used to be and we don't want it aligned to the values of the Taliban. We want to grow in moral wisdom as well and we want to be able to revise our own ethical codes and this thing that's smarter than us presumably could help us do that, provided it doesn't just have its own epiphanies which cancel the value of our own or subvert our own in a way that we didn't foresee.
If it really has our best interest at heart, but our best interests are best conserved by it deciding to pull the plug on everything, well, then we might not see the wisdom of that. I mean, it might even be the right answer. Now, this is assuming it's conscious. We could be building something that is actually morally more important than we are.
Lucas Perry: Yeah, that makes sense. Certainly, eventually, we would want it to be aligned with some form of idealized human values and idealized human meta preferences over how value should change and evolve into the deep future. This is known, I think, as "ambitious value learning" and it is the hardest form of value learning. Maybe we can make something safe without doing this level of ambitious value learning, but something like that may be deeper in the future.
Now, as we've made moral progress throughout history, we've been expanding our moral circle of consideration. In particular, we've been doing this farther into space, deeper into time, across species, and potentially soon, across substrates. What do you see as the central way of continuing to expand our moral circle of consideration and compassion?
Sam Harris: Well, I just think we have to recognize that things like distance in time and space and superficial characteristics, like whether something has a face, much less a face that can make appropriate expressions or a voice that we can relate to, none of these things have moral significance. The fact that another person is far away from you in space right now shouldn't fundamentally affect how much you care whether or not they're being tortured or whether they're starving to death.
Now, it does. We know it does. People are much more concerned about what's happening on their doorstep. But I think proximity, if it has any weight at all, has less and less weight the more our decisions obviously affect people regardless of their separation from us in space. The easier it becomes to help someone on another continent, because you can just push a button in your browser, the more clearly caring less about them is a bug. And so it's just noticing that the things that attenuate our compassion tend to be things that, for evolutionary reasons, we're designed to discount in this way, but at the level of actual moral reasoning about a global civilization that doesn't make any sense, and it prevents us from solving the biggest problems.
Lucas Perry: Pivoting into ethics more now. I'm not sure if this is the formal label that you would use, but your work on the moral landscape seems to land you pretty squarely in the moral realism category.
Sam Harris: Mm-hmm (affirmative).
Lucas Perry: You've said something like, "Put your hand in fire to know what bad is." That seems to point to the self-intimating nature of suffering, to how it's clearly bad: if you don't believe me, go and do the things that cause suffering. Other moral realists I've talked to who argue for moral realism, like Peter Singer, make similar arguments. What view or theory of consciousness are you most partial to, and how does it inform this perspective on the self-intimating nature of suffering as a bad thing?
Sam Harris: Well, I'm a realist with respect to morality and consciousness in the sense that I think it's possible not to know what you're missing. So if you're a realist, the property that makes the most sense to me is that there are facts about the world that are facts whether or not anyone knows them. It is possible for everyone to be wrong about something. We could all agree about X and be wrong. That's the realist position as opposed to pragmatism or some other variant where it's all a language game and the truth value of a statement is just a measure of the work it does in conversation. So with respect to consciousness, I'm a realist in the sense that if a system is conscious, if a cricket is conscious, if a sea cucumber is conscious, they're conscious whether we know it or not. For the purposes of this conversation, let's just decide that they're not conscious, that the lights are not on in those systems.
Well, that's a claim that we could believe, we could all believe it, but we could be wrong about it. And so the facts exceed our experience at any given moment. And so it is with morally salient facts, like the existence of suffering. If a system can be conscious whether I know it or not, a system can be suffering whether I know it or not. And that system could be me in the future or in some counterfactual state. I could think I'm doing the right thing by doing X, but the truth is I would have been much happier had I done Y, and I'll never know that. I was just wrong about the consequences of living in a certain way. That's what realism, on my view, entails. So the way this relates to questions of morality and good and evil and right and wrong, and this is back to my analogy of the moral landscape, is that I think morality really is a navigation problem. There are possibilities of experience in this universe, and we don't even need the concept of morality, we don't need the concept of right and wrong and good and evil, really.
That's shorthand, in my view, for the way we should talk about the burden that's on us in each moment to figure out what we should do next. Where should we point ourselves across this landscape of mind and possible minds, knowing that it's possible to move in the wrong direction? And what does it mean to be moving in the wrong direction? Well, it's moving in a direction where everything is getting worse and worse and everything that was good a moment ago is breaking down to no good end. You could conceive of moving down a slope on the moral landscape only to ascend some higher peak. It's intelligible to me that we might all have to move in a direction that seems to be making things worse, but that it is a sacrifice worth making because it's the only way to get to something more beautiful and more stable.
I'm not saying that's the world we're living in, but it certainly seems like a possible world. But this just doesn't seem open to doubt. There's a range of experience on offer. And, on the one end, it's horrific and painful and all the misery is without any silver lining, right? It's not like we learn a lot from this ordeal. No, it just gets worse and worse and worse and worse and then we die, and I call that the worst possible misery for everyone. Alright so, the worst possible misery for everyone is bad if anything is bad, if the word bad is going to mean anything, it has to apply to the worst possible misery for everyone. But now some people come in and think they're doing philosophy when they say things like, "Well, who's to say the worst possible misery for everyone is bad?" Or, "Should we avoid the worst possible misery for everyone? Can you prove that we should avoid it?" And I actually think those are unintelligible noises that they're making.
You can say those words; I don't think you can actually mean those words. I have no idea what that person actually thinks they're saying. You can play a language game like that, but when you actually look at what the words mean, "the worst possible misery for everyone," to then say, "Well, should we avoid it?": in a world where you should do anything, where the word should makes sense, there's nothing that you should do more than avoid the worst possible misery for everyone. By definition, it's more fundamental than the concept of should. What I would argue is that if you're hung up on the concept of should, and you're taken in by Hume's flippant and ultimately misleading paragraph on how you can't get an ought from an is, then you don't need oughts. There is just this condition of is. There's a range of experience on offer, and at one end it is horrible and at the other end it is unimaginably beautiful.
And we clearly have a preference for one over the other, if we have a preference for anything. There is no preference more fundamental than escaping the worst possible misery for everyone. If you doubt that, you're just not thinking about how bad things can get. It's incredibly frustrating. In this conversation, you're hearing the legacy of the frustration I've felt in talking to otherwise smart and well-educated people who think they're on interesting philosophical ground in doubting whether we should avoid the worst possible misery for everyone, or whether it would be good to avoid it, or whether it's intelligible to have other priorities. And, again, I just think they're not understanding the words "worst possible misery" and "everyone"; they're not letting those words land in their language cortex. If they do, they'll see that there is no other place to stand from which you could have other priorities.
Lucas Perry: Yeah. And my brief reaction to that is, I still honestly feel confused about this. So maybe I'm in the camp of frustrating people. I can imagine other evolutionary timelines where there are minds that just optimize for the worst possible misery for everyone, just because in mind space those minds are physically possible.
Sam Harris: Well, that's possible. We can certainly create a paperclip maximizer that is just essentially designed to make every conscious being suffer as much as it can. And that would be especially easy to do provided that intelligence wasn't conscious. If it's not a matter of its suffering, then yeah, we could use AGI to make things awful for everyone else. You could create a sadistic AGI that wanted everyone else to suffer and it derived immense pleasure from that.
Lucas Perry: Or immense suffering. I don't see anything about suffering that intrinsically motivates a mind to navigate away from it. Computationally, I can imagine a mind that just suffers as much as possible and spreads that suffering as much as possible. And maybe the suffering is bad in some objective sense, given consciousness realism, disclosing the intrinsic valence of consciousness in the universe. But the is-ought distinction there still seems confusing to me. Yes, suffering is bad, and maybe the worst possible misery for everyone is bad, but that's not universally motivating for all possible minds.
Sam Harris: The usual problem here is that it's easy for me to care about my own suffering, but why should I care about the suffering of others? That seems to be the ethical stalemate that people worry about. My response there is that it doesn't matter. You can take the view from above and just say, "The universe would be better if all the sentient beings suffered less and it would be worse if they suffered more." And if you're unconvinced by that, you just have to keep turning the dial to separate those two more and more and more so that you get to the extremes. If any given sentient being can't be moved to care about the experience of others, well, that's one sort of world, and it's not a peak on the moral landscape. That will be a world where beings are more callous than they would otherwise be in some other corner of the universe. They'll bump into each other more, there'll be more conflict, and they'll fail to cooperate in certain ways that would have opened doors to positive experiences that they now can't have.
And you can try to use moralizing language about all of this and say, "Well, you still can't convince me that I should care about people starving to death in Somalia." But the reality is that an inability to care about that has predictable consequences. If enough people can't care about that, then certain things become impossible, and those things, if they were possible, would lead to good outcomes that you would enjoy if you had a different sort of mind. So all of this bites its own tail in an interesting way when you imagine being able to change a person's moral intuitions. And then the question is, well, should you change those intuitions? Would it be good to change your sense of what is good? That question has an answer on the moral landscape. It has an answer when viewed as a navigation problem.
Lucas Perry: Right. But isn't the assumption there that if something leads to a good world, then you should do it?
Sam Harris: Yes. You can even drop your notion of should. There's, I'm sure it's finite, but a functionally infinite number of worlds on offer, and there are ways to navigate into those spaces. And there are ways to fail to navigate into those spaces. There are ways to try and fail, and worse still, there are ways to not know what you're missing, to not even know where you should be pointed on this landscape, which is to say, you need to be a realist here. There are experiences that are better than any experience you are going to have, and you are never going to know about them, possible experiences. And granting that, you don't need a concept of should; should is just shorthand for how we speak with one another and try to admonish one another to be better in the future in order to cooperate better or to realize different outcomes. But it's not a deep principle of reality.
What is a deep principle of reality is consciousness and its possibilities. Consciousness is the one thing that can't be an illusion. Even if we're in a simulation, even if we're brains in vats, even if we're confused about everything, something seems to be happening, and that seeming is the fact of consciousness. And almost as rudimentary as that is the fact that within this space of seemings, again, we don't know what the base layer of reality is, we don't know if our physics is the real physics, we could be confused, this could be a dream, we could be confused about literally everything except that in this space of seemings there appears to be a difference between things getting truly awful to no apparent good end and things getting more and more sublime.
And there's potentially even a place to stand where that difference isn't so captivating anymore. Certainly, there are Buddhists who would tell you that you can step off that wheel of opposites, ultimately. But even if you buy that, it is some version of a peak on my moral landscape. That is a contemplative peak where the difference between agony and ecstasy is no longer distinguishable because what you are then aware of is just that consciousness is intrinsically free of its content, no matter what its possible content could be. If someone can stabilize that intuition, more power to them, but then that's the thing you should do, just to bring it back to the conventional moral framing.
Lucas Perry: Yeah. I agree with you. I'm generally a realist about consciousness and still do feel very confused, not just because of reasons in this conversation, but just generally about how causality fits in there and how it might influence our understanding of the worst possible misery for everyone being a bad thing. I'm also willing to go that far and accept that as objectively a bad thing, if bad means anything. But then I still get really confused about how that necessarily fits in with, say, decision theory, or with "shoulds" in the space of possible minds, and what is compelling to whom and why.
Sam Harris: Perhaps this is just semantic. Imagine all these different minds that have different utility functions. The paperclip maximizer wants nothing more than paperclips, and anything that reduces paperclips is perceived as a source of suffering; it has a disutility. If you have any utility function, you have this liking and not-liking component, provided you're sentient. That's what it is to be motivated consciously. For me, the worst possible misery for everyone is a condition where, whatever the character of your mind, every sentient mind is put in the position of maximal suffering for it. Some minds like paperclips and some minds hate paperclips. If you hate paperclips, we give you a lot of paperclips. If you like paperclips, we take away all your paperclips. If that's your mind, we tune your corner of the universe into that torture chamber. You can be agnostic as to what the actual things are that make something suffer. Suffering is just, by definition, the ultimate frustration of that mind's utility function.
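One hedged way to put the definition Sam just gave into symbols (the notation $U_i$ and $w$ is ours, introduced only for this restatement, not anything Sam uses): write $U_i$ for the utility function of sentient mind $i$ and $w$ for a possible state of the world. Then the worst possible misery for everyone is a state $w^-$ such that

$$U_i(w^-) = \min_{w} U_i(w) \quad \text{for every sentient mind } i,$$

that is, a state in which each mind's own utility function, whatever it happens to value, is maximally frustrated. On this reading, the badness of $w^-$ does not depend on any particular choice of values, which is the point Sam is making.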
Lucas Perry: Okay. I think that's a really important crux and crucial consideration between us, and a general point of confusion here, because that's the definition of what suffering is or what it means, and I suspect those things may be able to come apart. So, for you, maximum disutility and suffering are identical, but I can imagine a utility function being separate from, or inverse to, the hedonics of a mind. Maybe the utility function, which is purely a computational thing, is getting maximally satisfied by maximizing suffering everywhere, while the mind that is implementing that suffering is completely immiserated while doing it. But the utility function, which is distinct from and inverse to the experience of the thing, is getting satiated, and so the machine keeps driving towards maximum-suffering-world.
Sam Harris: Right, but there's either something that it is like to be satiated in that way or there isn't, right? Now, if we're talking about real conscious satiety, we're talking about some higher-order satisfaction or pleasure that is not suffering, by my definition. We have this utility function ourselves. I mean, take somebody who decides to climb to the summit of Mount Everest, where the process almost every moment along the way is synonymous with physical pain and intermittent fear of death, torture by another name. But the whole project is something they're willing to train for, sacrifice for, dream about, and then talk about for the rest of their lives, and at the end of the day it might be, in terms of their conscious sense of what it was like to be them, the best thing they ever did in their lives.
That is the sort of bilayered utility function you're imagining. Whereas if you could just experience-sample what it's like to be in the death zone on Everest, it really sucks, and if it were imposed on you for any other reason, it would be torture. But given the framing, given what this person believes about what they're doing, given the view out their goggles, given their identity as a mountain climber, this is the best thing they've ever done. You're imagining some version of that, but that fits in my view of the moral landscape. That's not the worst possible misery for anyone. There's a source of satisfaction there that is deeper than mere bodily, sensory pleasure in every moment of the day, or at least there seems to be for that person at that point in time. They could be wrong about that. There could be something better. They don't know what they're missing. It's actually much better to not care about mountain climbing.
The truth is, your aunt is a hell of a lot happier than Sir Edmund Hillary was, and Edmund Hillary was never in a position to know it because he was just so into climbing mountains. That's where the realism comes in, in terms of not knowing what you're missing. But as I see it, any ultimate utility function, if it's accompanied by consciousness, can't define itself as the ultimate frustration of its aims if its aims are being satisfied.
Lucas Perry: I see. Yeah. So this just seems to be a really important point about hedonics and computation and utility functions and what drives what. So, wrapping up here, I think I would feel defeated if I let you escape without giving at least a yes-or-no answer to this last question. Do you think that bliss and wellbeing can be mathematically defined?
Sam Harris: That is something I have no intuitions about. I'm not enough of a math head to think in those terms. If we mathematically understood what it meant for us neurophysiologically, in our own substrate, well then, I'm sure we could characterize it for creatures just like us. I think substrate independence makes it something that's hard to functionally understand in new systems, and it will pose problems for knowing what it's like to be something that on the outside seems to be functioning much like we do but is organized in a very different way. But yeah, I don't have any intuitions about that one way or the other.
Lucas Perry: All right. And so pointing towards your social media or the best places to follow you, where should we do that?
Sam Harris: My website is just samharris.org and I'm SamHarrisorg without the dot on Twitter, and you can find anything you want about me on my website, certainly.
Lucas Perry: All right, Sam. Thanks so much for coming on and speaking about this wide range of issues. You've been deeply impactful in my life since I guess about high school. I think you probably partly at least motivated my trip to Nepal, where I overlooked the Pokhara Lake and reflected on your terrifying acid trip there.
Sam Harris: That's hilarious. That's in my book Waking Up, but it's also on my website, and I think I also read it on the Waking Up app, and it's in a podcast. It's also on Tim Ferriss' podcast. But anyway, that acid trip was detailed in a piece called Drugs and the Meaning of Life. That's hilarious. I haven't been back to Pokhara since, so you've seen that lake more recently than I have.
Lucas Perry: So yeah, you've contributed much to my intellectual and ethical development and thinking, and for that, I have tons of gratitude and appreciation. And thank you so much for taking the time to speak with me about these issues today.
Sam Harris: Nice. Well, it's been a pleasure, Lucas. And all I can say is keep going. You're working on very interesting problems and you're very early to the game, so it's great to see you doing it.
Lucas Perry: Thanks so much, Sam.