The Climate Crisis as an Existential Threat with Simon Beard and Haydn Belfield

Published August 1, 2019

Does the climate crisis pose an existential threat? And is that even the best way to formulate the question, or should we be looking at the relationship between the climate crisis and existential threats differently? In this month's FLI podcast, Ariel was joined by Simon Beard and Haydn Belfield of the University of Cambridge's Centre for the Study of Existential Risk (CSER), who explained why, despite the many unknowns, it might indeed make sense to study climate change as an existential threat. Simon and Haydn broke down the different systems underlying human civilization and the ways climate change threatens these systems. They also discussed our species' unique strengths and vulnerabilities — and the ways in which technology has heightened both — with respect to the changing climate.

This month's podcast helps serve as the basis for a new podcast we're launching later this month about the climate crisis. We'll be talking to climate scientists, meteorologists, AI researchers, policy experts, economists, social scientists, journalists, and more to go in depth about a vast array of climate topics. We'll talk about the basic science behind climate change, like greenhouse gases, the carbon cycle, feedback loops, and tipping points. We'll discuss various impacts of greenhouse gases, like increased extreme weather events, loss of biodiversity, ocean acidification, resource conflict, and the possible threat to our own continued existence. We'll talk about the human causes of climate change and the many human solutions that need to be implemented. And so much more! If you don't already subscribe to our podcasts on your preferred podcast platform, please consider doing so now to ensure you'll be notified when the climate series launches.

We’d also like to make sure we’re covering the climate topics that are of most interest to you. If you have a couple minutes, please fill out a short survey at surveymonkey.com/r/climatepodcastsurvey, and let us know what you want to learn more about.

Topics discussed in this episode include:

  • What an existential risk is and how to classify different threats
  • Systems critical to human civilization
  • Destabilizing conditions and the global systems death spiral
  • How we’re vulnerable as a species
  • The “rungless ladder”
  • Why we can’t wait for technology to solve climate change
  • Uncertainty and how to deal with it
  • How to incentivize more creative science
  • What individuals can do

Want to get involved? CSER is hiring! Find a list of openings here.

Transcript

Ariel Conn: Hi everyone and welcome to another episode of the FLI podcast. I’m your host, Ariel Conn, and I am especially excited about this month’s episode. Not only because, as always, we have two amazing guests joining us, but also because this podcast helps lay the groundwork for an upcoming series we’re releasing on climate change.

There's a lot of debate within the existential risk community about whether the climate crisis really does pose an existential threat, or if it will just be really, really bad for humanity. But this debate exists because we don't know enough yet about how bad the climate crisis will get, nor about how humanity will react to these changes. It's very possible that today's predicted scenarios for the future underestimate how bad climate change could be, while also underestimating how badly humanity will respond to these changes. Yet if we can get enough people to take this threat seriously and to take real, meaningful action, then we could prevent the worst of climate change, and maybe even improve some aspects of life.

In late August, we’ll be launching a new podcast series dedicated to climate change. I’ll be talking to climate scientists, meteorologists, AI researchers, policy experts, economists, social scientists, journalists, and more to go in depth about a vast array of climate topics. We’ll talk about the basic science behind climate change, like greenhouse gases, the carbon cycle, feedback loops, and tipping points. We’ll discuss various impacts of greenhouse gases, like increased extreme weather events, loss of biodiversity, ocean acidification, resource conflict, and the possible threat to our own continued existence. We’ll talk about the human causes of climate change and the many human solutions that need to be implemented. And so much more. If you don’t already subscribe to our podcasts on your preferred podcast platform, please consider doing so now to ensure you’ll be notified as soon as the climate series launches.

But first, today, I'm joined by two guests who suggest we should reconsider studying climate change as an existential threat. Dr. Simon Beard and Haydn Belfield are researchers at the University of Cambridge's Centre for the Study of Existential Risk, or CSER. CSER is an interdisciplinary research group dedicated to the study and mitigation of risks that could lead to human extinction or civilizational collapse. They study existential risks, develop collaborative strategies to reduce them, and foster a global community of academics, technologists, and policy makers working to safeguard humanity. Their research focuses on four areas: biological risks, environmental risks, risks from artificial intelligence, and how to manage extreme technological risk in general.

Simon is a senior research associate and academic program manager; he's a moral philosopher by training. Haydn is a research associate and academic project manager, as well as an associate fellow at the Leverhulme Centre for the Future of Intelligence. His background is in politics and policy, including working for the UK Labour Party for several years. Simon and Haydn, thank you so much for joining us today.

Simon Beard: Thank you.

Haydn Belfield: Hello, thank you.

Ariel Conn: So I've brought you both on to talk about some work that you're involved with, looking at studying climate change as an existential risk. But before we really get into that, I want to remind people about some of the terminology. So I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat and if there's any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change.

Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome — which is something unexpectedly, unprecedentedly bad. And, actually, once you've got your head around that, different groups have slightly different understandings of what the differences between these three terms are. 

So, for some groups, it's all about just the scale of badness. So, an extreme risk is one that does a sort of extreme level of harm; a catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: maybe some people survive, but their lives are terrible. Actually, at the Centre for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we have also coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us.

Most of these systems — be they physiological systems, the world's ecological system, or the social, economic, technological, and cultural systems and the institutions we've built on them — have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing, and human survival are built on: that we can get food from the biosphere, that our bodies will continue to operate in a way that's consistent with and supports our health and our continued survival, and that the institutions that we've developed will still work, will still deliver food to our tables, will still suppress interpersonal and international violence, and that, basically, we'll be able to get on with our lives.

If you look at it that way, then an extreme risk, or an extreme threat, is one that pushes at least one of these systems outside of its normal boundaries of operation and creates an abnormal behavior that we then have to work really hard to respond to. A catastrophic risk is one where that happens, but then that also cascades. Particularly in a global catastrophe, you have a whole system that encompasses everyone all around the world, or maybe a set of systems that encompass everyone all around the world, that are all operating in this abnormal state that's really hard for us to respond to.

And then an existential catastrophe is one where the systems have been pushed into such an abnormal state that either you can't get them back or it's going to be really hard. And life as we know it cannot be resumed; we're going to have to live in a very different and very inferior world, at least from our current way of thinking.

Haydn Belfield: I think that sort of captures it really well. One thing that you could kind of visualize might be something like: imagine a really bad pandemic. 100 years ago, we had the Spanish flu pandemic that killed 100 million people — that was really bad. But it could be even worse. So imagine one tomorrow that killed a billion people. That would be one of the worst things that's ever happened to humanity; it would be sort of a global catastrophic risk. But it might not end our story, it might not be the end of our potential. But imagine if it killed everyone, or it killed almost everyone, and it was impossible to recover: that would be an existential risk.

Ariel Conn: So, there’s — at least I've seen some debate about whether we want to consider climate change as falling into either a global catastrophic or existential risk category. And I want to start first with an article that, Simon, you wrote back in 2017, to consider this question. The subheading of your article is a question that I think is actually really important. And it was: how much should we care about something that is probably not going to happen? I want to ask you about that — how much should we care about something that is probably not going to happen?

Simon Beard: I think this is really important when you think about existential risk. People's minds, they want to think about predictions; they want someone who works in existential risk to be a prophet of doom. That is the idea that we have — that you know what the future is going to be like, and it's going to be terrible, and what you're saying is, this is what's going to happen. That's not how people who work in existential risk operate. We are dealing with risks, and risks are about knowing all the possible outcomes, and whether any of those involve this severe long-term threat: an irrecoverable loss to our species.

And it doesn't have to be the case that you think that something is the most likely or the most probable as a potential outcome for you to get really worried about the thing that could bring that about. And even a 1% risk of one of these existential catastrophes is still completely unacceptable because of the scale of the threat, and the harm we're talking about. And because if this happens, there is no going back; it's not something that we can do a safe experiment with.

So when you're dealing with risk, you have to deal with probabilities. You don't have to be convinced that climate change is going to have these effects to really place it on the same level as some of the other existential risks that people talk about — nuclear weapons, and artificial intelligence, and so on — you just need to see that this is possible. We can't exclude it based on the knowledge that we have at the moment, but it seems like a credible threat with a real chance of materializing. And it's something that we can do something about, because ultimately the aim of all existential risk research is safety — trying to make the world a safer place and the future of humanity a more certain thing.
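
To spell out the expected-value logic behind "even a 1% risk is unacceptable" (a schematic sketch; the notation here is ours, not anything from the episode):

$$
\mathbb{E}[\text{harm}] = p \cdot H
$$

Here $p$ is the probability of the catastrophe and $H$ is the harm if it occurs. For an existential catastrophe, $H$ includes the loss of humanity's entire future, so even $p = 0.01$ yields an expected harm that dwarfs more probable but recoverable risks; and because the outcome is irreversible, there is no second trial in which to learn from failure.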

Ariel Conn: Before I get into the work that you're doing now, I want to stick with one more question that I have about this article. I was amused when you sent me the link to it — you sort of prefaced it by saying that you think it’s rather emblematic of some of the problematic ways that we think about climate change, especially as an existential risk, and that your thinking has evolved in the last couple of years since writing this. I was hoping you could just talk a little bit about some of the problems you see with the way we're thinking about climate change as an x-risk.

Simon Beard: I wrote this paper largely out of a realization that people wanted us to talk about climate change in the next century. And we wanted to talk about it. It's always up there on the list of risks and threats that people bring up when you talk about existential risk. And so I thought, well, let's get the ball rolling; let's review what's out there, and the kind of predictions that people who seem to know what they're talking about have made about this — you know, economists, climate scientists, and so on — and make the case that this suggests there is a credible threat, and we need to take this seriously. And that seemed, at the time, like a really good place to start.

But the more I thought about it afterwards, the more flawed I saw the approach as being. And it's hard to regret a paper like that, because I'm still convinced that the risk is very real, and people need to take it seriously. But for instance, one of the things that kept on coming up is that when people make predictions about climate change as an existential risk, they're always very vague. Why is it a risk? What sorts of scenarios do we worry about? Where are the danger levels? And yet they always want to link it to a particular temperature threshold or a particular greenhouse gas trajectory. And that just didn't strike me as credible, that we would cross a particular temperature threshold and then that would be the end of humanity.

Because of course, a huge amount of the risk that we face depends upon how humanity responds to the changing climate, not just upon climate change. I think people have this idea in their mind that it'll get so hot, everyone will fry or everyone will die of heat exhaustion. And that's just not a credible scenario. So there were these really credible scholars, like Marty Weitzman and Ram Ramanathan, who tried to work this out and predict what was going to happen. But they seemed to me to be missing a lot, and to be making very precise claims based on very vague scenarios. So we kind of said at that point, we're going to stop doing this until we have worked out a better way of thinking about climate change as an existential threat. And we've been thinking a lot about this in the intervening 18 months, and that's where the research that you're seeing that we're hoping to publish soon and the desire to do this podcast really come from. So it seems to us that there are kind of three ways that people have gone about thinking about climate change as an existential risk. It's a really hard question. We don't really know what's going to happen. There's a lot of speculation involved in this.

One of the ways that people have gone about trying to respond to this has just been to speculate: to come up with some plausible scenario or pick a temperature number out of the air and say, "Well, that seems about right. If that were to happen, that would lead to human extinction, or at least a major disruption of all of these systems that we rely upon. So what's the risk of that happening? We'll label that as the existential climate threat." As far as we can tell, there isn't the research to back up some of these numbers. Many of them conflict: in Ram Ramanathan's paper he goes for five degrees; in Marty Weitzman's paper he goes for six degrees; there's another paper that was produced by Breakthrough where they go for four degrees. There's kind of quite a lot of disagreement about where the danger levels lie.

And some of it's just really bad. So there's this prominent paper by Jem Bendell — he never got it published, but it's been read like 150,000 times, I think — on adapting to extreme climate change. And he just picks this random scenario where the sea levels rise, a whole bunch of coastal nuclear reactors get inundated with seawater, and they go critical, and this just causes human extinction. That's not credible in many different ways, not least because that just wouldn't cause that much damage. But it just doesn't seem credible that this slow sea level rise would have this disastrous meltdown effect — we could respond to that. What passes for scientific study and speculation here didn't seem good enough to us.

Then there were some papers which just kind of passed the whole thing by — which say, "Well, we can't come up with a plausible scenario or a plausible threat level, but there just seem to be a lot of bad things going on out there." Given that we know that the climate is changing, and that we are responding to this in a variety of ways, probably quite inadequately, that doesn't help us to prioritize efforts, or really understand the level of risk we face, or see when maybe some more extreme measures like geoengineering become more appropriate because of the level of risk that we face.

And then there's a final set of studies — there have been an increasing number of these; one recently came out in Vox, Anders Sandberg has done one, and Toby Ord talks about one — where people say, "Well, let's just go for the things that we know, let's go for the best data and the best studies." And these usually focus on a very limited number of climate effects, the more direct impacts of things like heat exhaustion, perhaps sometimes crop failure — but only really looking at the most direct climate impacts and only where there are existing studies. And then they try to extrapolate from that, sometimes using integrated assessment models, sometimes other kinds of analysis, but usually quite straightforward linear economic or epidemiological analysis.

And that also is useful. I don't want to diss these papers; I think that they provide very useful information for us. But there is no way that that can constitute an adequate risk assessment, given the complexity of the impacts that climate change is having, and the ways in which we're responding to that. And it's very easy for people to read these numbers and these figures and conclude, as I think the Vox article did, that climate change isn't an existential risk, it's just going to kill a lot of people. Well, no, we know it will kill a lot of people, but that doesn't answer the question of whether it is an existential threat. There are a lot of things that you're not considering in this analysis. So given that there wasn't really a good example that we could follow within the literature, we've kind of turned it on its head. And we're now saying, maybe we need to work backwards.

Rather than trying to work forwards from the climate change we're expecting and the effects that we think that is going to have, and then whether these seem to constitute an existential threat, maybe we need to start from the other end and think about what conditions could most plausibly destabilize global civilization and the continued future of our species. And then work back from them to ask, are there plausible climate scenarios that could bring these about? And there's already been some interesting work in this area for natural systems, in this kind of global Earth system thinking and the planetary boundaries framework, but there's been very little work on this done at the social level.

And even less work done when you consider that we rely on both social and natural systems for our survival. So what we really need is some kind of approach that will integrate these two. That's a huge research agenda. So this is how we think we're going to proceed in trying to move beyond the limited research that we've got available. And now we need to go ahead and actually construct these analyses and do a lot more work in this field. And maybe we're going to start to be able to produce a better answer.

Ariel Conn: Can you give some examples of the research that has started with this approach of working backwards?

Simon Beard: So there's been some really interesting research coming out of the Stockholm Resilience Centre dealing with natural Earth systems. So they first produced this paper on planetary boundaries, where they looked at a range of, I think it's nine, systems — the biosphere, biogeochemical systems, the climate system, and so on — and asked: are these systems operating in what we would consider their normal functioning boundaries? That's how they've operated throughout the Holocene, the last several thousand years, during which human civilization has developed. Or do they show signs of transitioning to a new state of abnormal operation? Or are they in a state that's already posing high risk to the future of human civilization, though without really specifying what that risk is.

Then they produced another paper recently on Hothouse Earth, where they started to look for tipping points within the system, points where, in a sense, change becomes self-perpetuating. And rather than just a kind of gradual transition from what we're used to, to maybe an abnormal condition, all of a sudden, a whole bunch of changes start to accelerate. So it becomes much harder to adapt to these. Their analysis is quite limited, but they argue that quite a lot of these tipping points seem to start kicking in at about one and a half to two degrees of warming above pre-industrial levels.

We're getting quite close to that now. But yeah, the real question for us at the Centre for the Study of Existential Risk, looking at humanity, is, what are the effects of this going to be? And also what are the risks that exist within those socio-technological systems, the institutions that we set up, the way that we survive as a civilization, the way we get our food, the way we get our information, and so on, because there are also significant fragilities and potential tipping points there as well.

That's a very new sort of study, I mean, to the point where a lot of people just refer back to this one book written by Jared Diamond in 2005 as if it were the authoritative tome on collapse. And it's a popular book, and he's not an expert in this: he's kind of a very generalist scholar, but he provides a very narrative-based analysis of the collapse of certain historical civilizations and draws out a couple of key lessons from that. But it's all very vague and really written for a general audience. And that still kind of stands out as the weighty tome, the place where you go to get answers to your questions. It's very early days, and we think that there's a lot of room for better analysis of that question. And that's something we're looking at a lot.

Ariel Conn: Can you talk about the difference between treating climate change itself as an existential risk, like saying this is an x-risk, and studying it as if it poses such a threat? If that distinction makes sense?

Simon Beard: Yeah. When you label something as an existential risk, I think that is in many ways a very political move. And I think that that has been the predominant lens through which people have approached this question of how we should talk about climate change. People want to draw attention to it, they realize that there's a lot of bad things that could come from it. And it seems like we could improve the quality of our future lives relatively easily by tackling climate change.

It's not like AI safety, you know, the threats that we face from advanced artificial intelligence, where you really have to have advanced knowledge of machine learning and a lot of skills and do a lot of research to understand what's going on here and what the real threats that we face might be. This is quite clear. So talking about it, labeling it as an existential risk, has predominantly been a political act. But we are an academic institution.

I think when you ask this question about studying it as an existential threat, one of the great challenges we face is that all the things that are perceived as existential threats are interconnected. Human extinction, or the collapse of our civilization, or these outcomes that we worry about: these are scenarios, and they will have complex causes — complex technological causes, complex natural causes. And in a sense, when you ask the question, "Should we study climate change as an existential risk?", what you're really asking is: if we look at everything that flows from climate change, will we learn something about the conditions that could precipitate the end of our civilization?

Now, ultimately, that might come about because of some heat exhaustion or vast crop failure because of the climate change directly. It may come about because, say, climate change triggers a nuclear war. And then there's a question of, was that a climate-based extinction or a nuclear-based extinction? Or it might come about because we develop technologies to counter climate change, and then those technologies prove to be more dangerous than we thought and pose an existential threat. So when we carve this off as an academic question, what we really want to know is, do we understand more about the conditions that would lead to existential risk, and do we understand more about how we can prevent this bad thing from happening, if we look specifically at climate change? It's a slightly different bar. But it's all really just this question of, is talking about climate change, or thinking about climate change, a way to move to a safer world? We think it is, but we think that there's quite a lot of complex, difficult research that is needed to really make that so. And at the moment, what we have is a lot of speculation.

Haydn Belfield: I've got maybe an answer to that as well. Over the last few years, lots and lots of politicians have said climate change is an existential risk, and lots of activists as well. So you get lots and lots of speeches, or rallies, or articles saying this is an existential risk. But at the same time, over the last few years, we've had people who study existential risk for a living saying, "Well, we think it's an existential risk in the same way that nuclear war is an existential risk. But it's not maybe this single event that could kill lots and lots of people, or everyone, in kind of one fell swoop."

So you get people saying, "Well, it's not a direct risk on its own, because you can't really kill absolutely everybody on earth with climate change. Maybe there's bits of the world you can't live in, but people move around. So it's not an existential risk." And I think the problem with both of these ways of viewing it is that word that I've been emphasizing, "an." So I would kind of want to ban the word "an" in "an existential risk," or "a" existential risk, and just say: does it contribute to existential risk in general?

So it's pretty clear that climate change is going to make a bunch of the hazards that we face — like pandemics, or conflict, or one-off environmental disasters — more likely, but it will also make us more vulnerable to a whole range of hazards, and it will also increase the chances of all these types of things happening, and increase our exposure. So, like Simon, I would want to ask, is climate change going to increase the existential risk we face, and not get hung up on this question of is it "an" existential risk?

Simon Beard: The problem is, unfortunately, there is an existing terminology and existing way of talking that to some extent we're bound up with. And this is how the debate is. So we've really struggled with to what extent we should impose the terminology that we most like on the field and on the way that these things are discussed. And we know ultimately existential risk is just one thing; it's a homogeneous lump at the end of human civilization or the human species, and what we're really looking at is the drivers of that, the things that push it up — and we want to push it down. That is not a concept that I think lots of people find easy to engage with. People do like to carve this up into particular hazards and vulnerabilities and so on.

Haydn Belfield: That's how most of risk studies works. Mostly, when you study natural disasters, or accidents in an industrial setting, that's what you're looking at. You're not looking at this risk as completely separate. You're saying, "What hazards are we facing? What are our vulnerabilities? And what is our exposure?" — and kind of combining all of those into some overall assessment of the risk you face. You don't try to silo it up into: this is bio, this is nuclear, this is AI, this is environment.

Ariel Conn: So that connects to a question that I have for you both. And that is what do you see as society's greatest vulnerabilities today?

Haydn Belfield: Do you want to give that a go, Simon?

Simon Beard: Sure. So I really hesitate to answer any question that’s posed quite in that way, just because I don't know what our greatest vulnerability is.

Haydn Belfield: Because you're a very good academic, Simon.

Simon Beard: But we know some of the things that contribute to our vulnerability overall. One that really sticks in my head came out of a study we did looking at what we can learn from previous mass extinction events. And one of the things that people have found looking at the species that tend to die out in mass extinctions, and the species that survive, is this idea that the specialists — the efficient specialists — who've really carved out a strong biological niche for themselves, and are often the ones that are doing very well as a result of that, tend to be the species that die out, and the species that survive are the species that are generalists. But that means that within any given niche or habitat or environment, they're always much more marginal, biologically speaking.

And then you say, "Well, what is humanity? Are we a specialist that's very vulnerable to collapse, or are we a generalist that's very robust and resilient to this kind of collapse, that would fare very well?" And what you have to say is, as a species, when you consider humanity on its own, we seem to be the ultimate generalist, and indeed, we're the only generalist who's really moved beyond marginality. We thrive in every environment, every biome, and we survive in places where almost no other life form would survive. We survived on the surface of the moon — not for very long, but we did; we survived Antarctica, on the pack ice, for long periods of time. And we can survive at the bottom of the Mariana Trench, and in just a ridiculously large range of habitats.

But of course, the way we've achieved that is that every individual is now an incredible specialist. There are very few people in the world who could really support themselves. And you can't just sort of pick it up and go along with it. You know, like this last weekend, I went to an agricultural museum with my kids, and they were showing, you know, how you plow fields and gather crops and look after them. And there are a lot of really important, quite artisanal skills about what you had to do to gather the food and protect it and prepare it and so on. And you can't just pick this up from a book; you really have to spend a long time learning it and getting used to it and getting your body strong enough to do these things.

And so every one of us as an individual, I think, is very vulnerable, and relies upon these massive global systems that we've set up, these massive global institutions, to provide this support and to make us this wonderfully adaptable generalist species. So, so long as institutions and the technologies that they've created and the broad socio-technological systems that we've created — so long as they carry on thriving and operating as we want them to, then we are very, very generalist, very adaptable, very likely to make it through any kind of trouble that we might face in the next couple of centuries — with a few exceptions, a few really extreme events. 

But the flip side of that is anything that threatens those global socio-technological institutions also threatens to move us from this very resilient global population we have at the moment to an incredibly fragile one. If we fall back on individuals and our communities, all of a sudden, we are going to become the vulnerable specialist that each of us individually is. That is a potentially catastrophic outcome that people don't think about enough.

Haydn Belfield: One of my colleagues, Luke Kemp, likes to describe this as a rungless ladder. So the idea is that there's been lots and lots of collapses before in human history. But what normally happens is elites at the top of the society collapse, and it's bad for them. But for everyone else, you kind of drop one rung down on the ladder, but it's okay, you just go back to the farm, and you still know how to farm, your family's still farming — things get a little worse, maybe, but it's not really that bad. And you get people leaving the cities, things like that; But you only drop one rung down the ladder, you don't fall off it. But as we've gone many, many more rungs up the ladder, we've knocked out every rung below us. And now we're really high up the ladder. Very few of us know how to farm, how to hunt or gather, how to survive, and so on. So were we to fall off that rungless ladder, then we might come crashing down with a wallop.

Ariel Conn: I'm sort of curious. We're talking about how humanity is generalist, but we're looking within the boundaries of the types of places we can live. And yet we're all very specifically, as you described, reliant on technology in order to live in these very different, diverse environments. And so I wonder: are we actually generalists? Or are we still specialists at a societal level, because of technology, if that makes sense?

Simon Beard: Absolutely. I mean, the point of this was, we kind of wanted to work out where we fell on the spectrum. And basically, it's a spectrum that you can't apply to humanity: we appear to fall as the most extreme species at both ends. And I think one of the reasons for that is that the scale, as it would be applied to most species, really only looks at the physical characteristics of the species, and how they interact directly with their environment — whereas we've developed all these highly emergent systems that go way beyond how we interact with the environment, that determine how we interact with one another, and how we interact with the technologies that we've created.

And those basically allow us to interact with the world around us in the same ways that both generalists and specialists would. That's great in many ways: it's really served us well as a species, it's been part of the hallmark of our success and our ability to get this far. But it is a real threat, because it adds a whole bunch of systems that have to be operating as we expect them to in order for us to continue. Maybe so long as these systems function it makes us more resilient to normal environmental shocks. But it makes us vulnerable to a whole bunch of other shocks.

And then you look at the way that we actually treat these emergent socio-technological systems. And we're constantly driving for efficiency; we're constantly driving for growth, as quick and easy growth as we can get. And the ways that you do that are often by making the systems themselves much less resilient. Resilience requires redundancy, requires diversity, requires flexibility, requires all of the things that either an economic planner or a market functioning on short-term economic return really hates, because they get in the way of productivity.

Haydn Belfield: Do you want to explain what resilience is?

Simon Beard: No.

Ariel Conn: Haydn, do you want to explain it?

Haydn Belfield: I'll give it a shot, yeah. So, just since people might not be familiar with it — what I normally think of is someone balancing. How robust they are is how much you can push that person before they fall over; and then resilience is how quickly they can get up and balance again, and maybe the next time they balance, they're even stronger than before. So that's what we're talking about when we're talking about resilience: how quickly and how well you're able to respond to those kinds of external shocks.

Ariel Conn: I want to stick with this topic of the impact of technology, because one of the arguments that I often hear about why climate change isn't as big an existential threat, or a contributor to existential risk, as some people worry is that at some point in the near future, we will develop technologies that will help us address climate change, and so we don't need to worry about it. You guys bring this up in the paper that you're working on as a potentially dangerous approach; I was hoping you could talk about that.

Simon Beard: I think there are various problems with looking for technological solutions. One of them is that technologies tend to be developed for quite specific purposes. But some of the conditions that we are examining as potential climate-driven civilizational collapse scenarios involve quite widespread and wide-scale systemic change to society and to the environment around us. And engineers have a great challenge even capturing and responding to one kind of change. Engineering is an art of the small; it's a reductionist art: you break things down, and you look at the components, and you solve each of the challenges one by one.

And there are definitely visionary engineers who look at systems and look at how the parts all fit together. But even there, you have to have a model, you have to have a basic set of assumptions about how all these parts fit together and how they're going to interact. And this is why you get things like Murphy's Law — you know, if it can go wrong, it will go wrong — because that's not how the real world works. The real world is constantly throwing different challenges at you, problems that you didn't foresee, or couldn't have foreseen because they are inconsistent with the assumptions you made, all of these things.

So it is quite a stretch to put your faith in technology being able to solve this problem when you don't understand exactly what the problem you're facing is. And you don't necessarily, at this point, understand where we may cross the tipping point, the point of no return, by which you really have to have stepped up this R&D funding. And by the time you do know the problem that the engineers have to solve, because it's staring you in the face, it may be too late. If you get positive feedback loops — you know, reinforcement, where one bad thing leads to another bad thing, leads to another bad thing, which then contributes to the original bad thing — you need so much more energy to push the system back into a state of normality than this cycle needs to just keep on pushing it further and further away from where you previously were.

So that throws up significant barriers to a technological fix. The other issue, just going back to what we were saying earlier, is that technology also breeds fragility. We have a set of paradigms about how technologies are developed and how they interface with the economy, which is always pushing for more growth and more efficiency. It has not got a very good track record of investing in resilience, investing in redundancy, investing in fail-safes, and so on. You typically need to have strong, externally enforced incentives for that to happen.

And if you're busy saying this isn't really a threat, this isn't something we need to worry about, there's a real risk that you're not going to achieve that. And yes, you may be able to develop new technologies that start to work. But are they actually just storing up more problems for the future? We can't wait until the story's ended to find out whether these technologies really did make us safer in the end or more vulnerable.
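
Simon's feedback-loop point can be illustrated with a toy simulation (a minimal sketch with made-up dynamics and parameters, not a climate model): below a tipping point, perturbations decay back to the familiar state on their own; past it, change becomes self-perpetuating.

```python
# Toy tipping-point dynamics (hypothetical, for illustration only):
# two stable states, "normal" at 0 and a "new regime" at 2, separated
# by an unstable threshold at 1.

def drift(x: float) -> float:
    """Rate of change: stable equilibria at 0 and 2, tipping point at 1."""
    return x * (1.0 - x) * (x - 2.0)

def settle(x0: float, dt: float = 0.01, steps: int = 5000) -> float:
    """Crude Euler integration until the system settles."""
    x = x0
    for _ in range(steps):
        x += drift(x) * dt
    return x

print(round(settle(0.9), 3))  # below the threshold: relaxes back toward 0
print(round(settle(1.1), 3))  # past it: runs away to the new state near 2
```

The asymmetry Simon describes shows up directly: near the normal state, small nudges die out by themselves, but once past the threshold the same dynamics amplify them, and reversing course means overpowering the feedback rather than just removing the original push.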

Haydn Belfield: So I think that's an overall skepticism about technology as the thing that's going to increase our resilience. My skepticism in this case is just more practical. So it could very well be that we do develop — so there are these things called negative emissions technologies, which suck CO2 out of the air — we could maybe develop those. Or things that could lower the temperature of the earth: maybe we can find a way to do that without throwing the whole climate and weather into a chaotic state. Maybe tomorrow's the day that we get the breakthrough with nuclear fusion. I mean, it could be that all of these things happen — it'd be great if they could. But I just wouldn't put all my bets on it. The idea that we don't need to prioritize climate change above all else, and make it a real central effort for societies, for companies, for governments, because we can just hope for some techno-fix to come along and save us — I just think it's too risky, and it's unwise. Especially because, if we're listening to the scientists, we don't have that much longer. We've only got a few decades left, maybe even one decade, to really make dramatic changes. And we just won't have invented some silver bullet within a decade's time. Maybe technology could save us from climate change; I'd love it if it could. But we just can't be sure about that, so we need to make other changes.

Simon Beard: That's really interesting, Haydn, because when you list negative emissions technologies, or nuclear fusion, that's not the sort of technology I'm talking about. I was thinking about technology as something that would basically just be used to make us more robust. Obviously, one of the things that you do if you think that climate change is an existential threat is to say, "Well, we really need to prioritize more investment into these potential technology solutions." Believing that climate change is an existential threat doesn't commit you to trying to make climate change worse, or anything like that.

You want to make it as small as possible; you want to reduce its impact as much as possible. That's how you respond to climate change as an existential threat. If you don't believe climate change is an existential threat, you would invest less in those technologies. Also, I do want to say — and I mean, I think there's some legitimate debate about this, but I don't like the 12 years terminology; I don't think we know nearly enough to support those kinds of claims. The IPCC came up with this 12 years, but it's not really clear what they meant by it. And it's certainly not clear where they got it from. People have been saying, "Oh, we've got a year to fix the climate," or something like it, for as long as I can remember discussions going on about climate change.

It's one of those things that makes a lot of sense politically, but those claims aren't scientifically based. We don't know. We need to make sure that that's not true; we need to falsify these claims, either by really looking at it and finding out that it genuinely is safer than we thought it was, or by doing the technological development and greenhouse gas reduction efforts and other climate mitigation methods to make it safe. That's just how it works.

Ariel Conn: Do you think that we're seeing the kind of investment in technology, you know, trying to develop any of these solutions, that we would be seeing if people were sufficiently concerned about climate change as an existential threat?

Simon Beard: So one of the things that worries me is people always judge this by looking at one thing and saying, "Are we doing enough of that thing? Are we reducing our carbon dioxide emissions fast enough? Are people changing their behaviors fast enough? Are we developing technologies fast enough? Are we ready?" Because we know so little about the nature of the risk, we have to respond to this in a portfolio manner; we have to say, "What are all the different actions and the different things that we can take that will make us safer?" And we need to do all of those. And we need to do as much as we can of all of these.

And I think there is a definite negative answer to your question when you look at it like that, because people aren't doing enough thinking and aren't doing enough work on how we do all the things we need to do to make us safe from climate change. People tend to get an idea of what they think a safer world would look like, and then complain that we're not doing enough of that thing, which is very legitimate; we should be doing more of all of these things. But if you look at it as an existential risk, and you look at it from an existential safety angle, there are just so few people who are saying, "Let's do everything we can to protect ourselves from this risk."

Way too many people are saying, "I've had a great idea, let's do this." That doesn't seem to me like safety-based thinking; that seems to me like putting all your eggs in one basket and basically generating the solution to climate change that's most likely to be fragile, that's most likely to miss something important and not solve the real problem and store up trouble for a future date and so on. We need to do more — but that's not just more quantitatively, it's also more qualitatively.

Haydn Belfield: I think just clearly we're not doing enough. We're not cutting emissions enough, we're not moving to renewables fast enough, we're not even beginning to explore possible solar geoengineering responses, we don't have anything that really works to suck carbon dioxide or other greenhouse gases out of the air. Definitely, we're not yet taking it seriously enough as something that could be a major contributor to the end of our civilization or the end of our entire species.

Ariel Conn: I think this connects nicely to another section of some of the work you've been doing. And that is looking at the — I think it was seven — critical systems that you list as sort of necessary for humanity and civilization.

Simon Beard: Seven levels of critical systems.

Ariel Conn: Okay.

Simon Beard: We rely on all sorts of systems for our continued functioning and survival. And a sufficiently significant failure in any of these systems could be fatal to our whole species. We can kind of classify these systems at various levels. So at the bottom, there are the physical systems — that's basically the laws of physics: how atoms operate, how subatomic particles operate, how they interact with each other. Those are pretty safe. There are some advanced physics experiments that some people have postulated may be a threat to those systems. But they all seem pretty safe.

We then kind of move up: we've got basic chemical systems and biochemical systems, how we generate enzymes and all the molecules that we use — proteins, lipids, and so on. Then we move up to the level of the cell; then we move up to the level of the anatomical systems — the digestive system, the respiratory system — we need all these things. Then you look at the organism as a whole and how it operates. Then you look at how organisms interact with each other: the biosphere, the biological and ecological systems.

And then as human beings, we've added this kind of seventh, even more emergent, system, which is not just how humans interact with each other, but the kind of systems that we have made to govern our interaction, and to determine how we work together with each other: political institutions, technology, the way we distribute resources around the planet, and so on. So there are a really quite amazing number of potential vulnerabilities that our species has. 

It's many more than seven vulnerabilities, but categorizing them at these seven levels is helpful for not missing anything, because I think most people's idea of an existential threat is something like a really big gun. Guns, we understand how they kill people: it's as if you just had a really huge gun and blew a hole in everyone's head. But that misses things that are both a lot more basic than the ways that people normally die, and a lot more sophisticated and emergent. All of these are potentially quite threatening.

Ariel Conn: So can you explain in a little more detail how climate change affects these different levels?

Haydn Belfield: So I guess the way I'll do it is I'll first talk a bit about the natural feedback loops, and then talk about the social feedback loops. Everyone listening to this will be familiar with feedback loops like methane getting released from permafrost in the Arctic, or methane coming out of clathrates in the ocean; and there are other kinds of feedback loops. So there's one that was discovered only recently, in a very recent paper about cloud formation. So if it gets to four degrees, these models show that it becomes much harder for clouds to form. And so you don't get as much radiation bouncing off those clouds, and you get very rapid additional heating, up to 12 degrees, is what it said.

So the first way that climate change could affect these kinds of systems that we're talking about is that it just makes it, anatomically, way too hot: you get all these feedbacks, and it just becomes far too hot for anyone to survive sort of anywhere on the surface. Or it might get much too hot in certain areas of the globe for civilization really to be able to continue there, much as it's very hard in the center of the Sahara to have large cities or anything like that. But it seems quite unlikely that climate change would ever get that bad. The kind of thing that we're much more concerned about is the more general effects that climate change, climate chaos, climate breakdown might have on a bunch of other systems.

So in this paper, we've broken it down into three. We've looked at the effects of climate change on the food/water/energy system, on the ecological system, and on our political system and conflict. And climate change is likely to have very negative effects on all three of those systems. It's likely to negatively affect crop yields; it's likely to increase freak weather events; and there's some possibility that you might have these very freak weather events — droughts, or hurricanes — in areas where we produce lots of our calories, the bread baskets around the world. So climate change is going to have very negative effects, most likely, on our food and energy and water systems.

Then, separately, there are ecological systems. People will be very familiar with climate change driving lots of habitat loss, and therefore the loss of species; people will be very familiar with coral reefs dying and bleaching and going away. This could also have very negative effects on us, because we rely on these ecological systems to provide what we call ecological services. Ecological services are things like pollination: if all the bees died, what would we do? They also include the fish that we catch and eat, and fresh, clean drinking water. So climate change is likely to have very negative effects on that whole set of systems. And then it's likely to have negative effects on our political system.

If there are large areas of the world that are nigh on uninhabitable, because you can't grow food or you can't go out at midday or there's no clean water available, then you're likely to see maybe state breakdown, maybe huge numbers of people leaving: much more than we've ever encountered before, tens or hundreds of millions of people dislocated and moving around the world. That's likely to lead to conflict and war. So those are some ways in which climate change could have negative effects on three sets of systems that we crucially rely on as a civilization.

Ariel Conn: So in your work, you also talk about the global systems death spiral. Was that part of this?

Haydn Belfield: Yeah, that's right. The global systems death spiral is a catchy term to describe the interaction between all these different systems. So not only would climate change have negative effects on our ecosystems, on our food and water and energy systems, and on our political system and conflict, but these different effects are likely to interact and make each other worse. So imagine our ecosystems are harmed by climate change: well, that probably has an effect on our food and water systems, because we rely on our ecosystems for these ecosystem services.

So then the bad effects on our food and water systems probably lead to conflict. Some colleagues of ours at Anglia Ruskin University have something called a global chaos map, which is a great name for a research project, where they try to link incidences of shocks to the food system with conflict: riots or civil wars. And they've identified lots and lots of examples of this. Most famously, the Arab Spring, which has since become lots of conflicts, has been linked to a big spike in food prices several years ago. So there's that link there between food and water insecurity and conflict.

And then conflict leads back into ecosystem damage. Because if you have conflict, you've got weak governance: you've got weak governments trying to protect their ecosystems, and weak governance has been identified as the strongest single predictor of ecosystem loss, biodiversity loss. So they all interact with one another, and make one another worse. And you could also think about things going back the other way. For example, if you're in a war zone, if you've got conflict, if you've got failing states, that has knock-on effects on the food systems and the water systems that we rely on: we often get famines during wartime.

And then if people don't have enough food to eat, or water to drink, maybe that has negative effects on our ecosystems too, because they're desperate enough to eat anything. So what we're trying to point out here is that the systems aren't independent of one another — they're not like three different knobs that are all getting turned up independently by climate change — but that they interact with one another in a way that could cause lots of chaos and lots of negative outcomes for world society.

Simon Beard: We did this kind of pilot study looking at the ecological system and the food system and the global political system, and at the connections between those three, really just in one direction: looking at the impact of food insecurity on conflict, of conflict and political instability on the biosphere, and of loss of biosphere integrity on the food system. But that was largely determined by the fact that these were three connections that we either had looked at directly or had close colleagues who had, so we had quite good access to the resources.

As Haydn said, everything most likely also works in the other direction. And there are many, many more global systems that interact in different ways. Another trio that we're very interested in looking at in the future is the connection between the biosphere and the political system, but this time also with some of the health systems: the emergence of new diseases, the ability to respond to public health emergencies, and especially when these things are looked at from a kind of One Health perspective, where plant health and animal health and human health are all seen as very closely interacting with one another.

And then you kind of see this pattern where, yes, we could survive six degrees plus, and we could survive famine, and we could survive x, y, and z. But once these things start interacting, it just drives you to a situation where really everything that we take for granted at the moment, up to and including the survival of the species, is on the table, up for grabs, once you start to get this destructive cycle between changes in the environment and changes in how human society interacts with the environment. It's a very dangerous, potentially very self-perpetuating feedback loop, and that's why we refer to it as a global systems death spiral: because we really can't predict at this point in time where it will end. But it looks very, very bleak, and it's very, very hard to see how, once you enter into this situation, you could then kind of dial it back and return to a safe operating environment for humanity and the systems that we rely on.

There's definitely a new stable state at the end of this spiral. When you get feedback loops between systems, it's not that they will just carry on amplifying change forever; they're moving towards another kind of stable state, but you don't know how long it's going to take to get there, and you don't know what that stable state will be. So take the simulation of the death of clouds: the idea that a purely physical feedback between rising global temperatures, changes in the water cycle, and cloud cover leaves you with a world that's much, much hotter and much more arid than the one we have at the moment, which could be a very dangerous state. For perpetual human survival, we would need a completely different way of feeding ourselves and of interacting with the environment.
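
As a minimal sketch of that "new stable state" idea, here is a classic bistable toy system with invented parameters and no claim to physical realism: below a forcing threshold the system stays near its original equilibrium, and past the threshold it can only settle into the other, "hotter" one.

```python
# Bistable toy system: dx/dt = x - x**3 + f has two stable states for
# small forcing f; past f ~ 0.38 the "cool" state vanishes entirely.
# Simple Euler integration; all parameters are invented.

def settle(x0, forcing, dt=0.01, steps=20000):
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + forcing)
    return x

print(settle(x0=-1.0, forcing=0.1))  # stays near the "cool" state, x ~ -0.95
print(settle(x0=-1.0, forcing=0.5))  # tips over to the "hot" state, x ~ +1.2
```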

You don't know what sort of death traps or kill mechanisms lie along that path of change. You don't know if, somewhere along it, it's going to trigger a nuclear war, or trigger attempts to geoengineer the climate in a bid to regain safety that actually turn out to have catastrophic consequences, or any of the other unknown unknowns that we want to turn into known unknowns, and then into things that we can actually begin to understand and study. So in terms of not knowing where the bottom is: that's potentially limitless as far as humanity is concerned. We know that it will have an end. Worst case scenario, that end is a very arid climate with a much less complex, much simpler atmosphere, which would basically need to be terraformed back into a livable environment, in the way that we're currently thinking maybe we could do for Mars. But mounting a global effort to do that on an already disintegrating Earth would, I think, be an extremely tall order. There's a huge range of different threats and different potential opportunities for an existential catastrophe to unfold within this kind of death spiral. And we think this really is a very credible threat.

Ariel Conn: How do we deal with all this uncertainty?

Haydn Belfield: "More research needed" is the classic academic response any time you ask that question. More research.

Simon Beard: That's definitely the case, but there are also big questions about the kind of research. Mostly, scientists want to study things that they already kind of understand: where you already have well-established techniques, you have journals that people can publish their research in, you have an extensive peer review community, and you can say, yes, you have done this study by the book, you get to publish it. That's what all the incentives are aligned towards.

And that sort of research is very important and very valuable, and I don't want to say that we need less of it. But that kind of research is not going to deal with the sort of radical uncertainty that we're talking about here. So we do need more creative science; we need science that is willing to engage in speculation, but to do so in an open and rigorous way. One of the things you need is scientists who are willing to stand up and say, "Look, here's a hypothesis. I think it's probably wrong, and I don't yet know how to test it. But I want people to come and help me find a way to test this hypothesis and falsify it."

There aren't any scientific incentive structures at the moment that encourage that. That is not a way to get tenure, it's not a way to get a professorship or a chair, and it's not a way to get your paper published. That is a really stupid strategy to take if you want to be a successful scientist. So what we need to do is create a safe sandbox for people who are concerned about this — and we know from our engagement that there are a lot of people who would really like to study this and really like to understand it better — so that they can do that. So one of the big things that we're really looking at here at CSER is how we make the tools to make the tools that will then allow us to study this: how do we provide the methodological insights, or the new perspectives, that are needed to move towards establishing a science of social collapse or environmental collapse that we can actually use to answer some of these questions?

So there are several things that we're working on at the moment. One important thing, which I think is a very crucial step for dealing with the sort of radical uncertainty we face, is classification. We've already talked about classifying different levels of critical system. That's one part of a larger classification scheme that CSER has been developing to look at all the different components of risk and say, "Well, there's this and this and this." Once you start to engage in that exercise, you ask: what are all the systems that might be vulnerable? What are all the possible vulnerabilities that exist within those systems? What are all the ways in which humanity is exposed to those vulnerabilities if things go wrong? And you map that out. You haven't got to the truth, but you've moved a lot of things from the "unknown" category into the "Okay, I now know all the ways that things could go wrong, and I know that I haven't a clue how any of these things could happen" category. Then you need to say, "Well, what are the techniques that seem appropriate?"
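
As a purely hypothetical illustration of that mapping exercise (the systems, vulnerabilities, and exposures below are invented examples, not CSER's actual scheme), the classification can be thought of as a catalogue that generates an explicit list of questions to study.

```python
# Hypothetical sketch of a risk classification catalogue (illustrative only).
from dataclasses import dataclass, field

@dataclass
class CriticalSystem:
    name: str
    vulnerabilities: list[str] = field(default_factory=list)  # ways it can fail
    exposures: list[str] = field(default_factory=list)        # how climate reaches it

catalogue = [
    CriticalSystem("food system",
                   vulnerabilities=["concentrated breadbasket regions", "thin reserves"],
                   exposures=["drought", "heat stress on staple crops"]),
    CriticalSystem("political system",
                   vulnerabilities=["weak governance", "resource competition"],
                   exposures=["food price spikes", "mass displacement"]),
]

# Mapping the space doesn't tell you what will happen, but it turns
# "we have no idea" into a finite list of questions to study.
for system in catalogue:
    for v in system.vulnerabilities:
        for e in system.exposures:
            print(f"Study: could {e} exploit {v} in the {system.name}?")
```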

So we think the planetary boundaries framework, although it doesn't answer the question that we're interested in, offers a really nice approach to looking at where tipping points arise, where systems move out of their ordinary operation. We want to apply that in new environments, and we want to find new ways of using it. And there are other tools as well that we can take, for instance, from disaster studies and risk management studies, such as fault tree analysis, where you ask: what are all the things that might go wrong with this, and what are the levers that we currently have, or the interventions that we could make, to stop this from happening?
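
The mechanics of fault tree analysis are easy to demonstrate. Here is a minimal sketch with an invented tree and invented probabilities, just to show how AND/OR gates combine component failure probabilities into a top-level event probability.

```python
# Minimal fault-tree sketch (toy numbers, independence assumed throughout).

def gate_or(*ps):   # event occurs if any input occurs
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def gate_and(*ps):  # event occurs only if all inputs occur
    q = 1.0
    for p in ps:
        q *= p
    return q

crop_failure  = gate_or(0.10, 0.05)   # drought OR crop disease
reserves_fail = 0.20                  # stocks too thin to buffer a shock
famine        = gate_and(crop_failure, reserves_fail)
print(f"P(famine) ~ {famine:.3f}")    # ~0.029 with these toy numbers
```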

We also think that there's a lot more room for people to share their knowledge and their thoughts and their fears and expectations through what are called structured expert elicitations, where you get people who have very different knowledge together, and you find a way that they can all talk to each other and learn from each other. And often you get answers out of these sorts of exercises that are very different from what any individual might have put in at the beginning, but they represent a much more complete, much more creative picture. And you can get those published, because it's a recognized scientific method; a structured expert elicitation on climate change was published in Nature just last month. Which is great, because it's a really under-researched topic. But I think one of the things that really helped there was that they were using an established method.
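
As a simplified sketch of how such an elicitation's outputs might be combined (real protocols, such as Cooke's classical model, are far more careful; the weights and estimates here are invented), a linear opinion pool weights each expert's probability estimate by their performance on calibration questions.

```python
# Linear opinion pool over invented expert judgments (illustrative only).
experts = [
    # (weight from calibration performance, P(event) estimate)
    (0.5, 0.10),
    (0.3, 0.30),
    (0.2, 0.02),
]

pooled = sum(w * p for w, p in experts) / sum(w for w, _ in experts)
print(f"Pooled estimate: {pooled:.3f}")  # 0.144, different from any single expert
```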

What I really hope CSER's work going forward will achieve is to make this space: to let us actually work with many more of the people we need to work with to answer these questions and understand the nature of this risk, to pull them all together, and to build the social structures from which the kind of research that we really badly need at this point can actually start to emerge.

Ariel Conn: A lot of what you're talking about doesn't sound like something that we can do in the short term; it will take at least a decade, if not more, to get some of this research accomplished. So in the interest of speed — timing is one of the uncertainties we have; we don't seem to have a good grasp of how much time we have before the climate could get really bad — what do we do in the short term? What do we do for the next decade? What do non-academics do?

Haydn Belfield: The thing is, these are kind of two separate questions, right? We certainly know all we need to know to take really drastic, serious action on climate change. What we're asking is a slightly more specific question, which is how climate change, climate breakdown, climate chaos can contribute to existential risk. We already know with very high certainty that climate change is going to be terrible for billions of people in the world, that it's going to make people's lives harder, and that it's going to make getting out of extreme poverty much harder.

And we also know that the people who have contributed the least to the problem are going to be the ones that are screwed the worst by climate change. And it's just so unfair, and so wrong, that I think we know enough now to take serious action on climate change. And not only is it wrong, it's not in the interest of rich countries to live in this world of chaos, of worse weather events, and so on. So I think we already know enough, we have enough certainty on those questions, to act very seriously: to reduce our emissions very quickly, to invest in as much clean technology as we can, and to collaborate collectively around the world to make those changes. What we're talking about, though, is the different, more unusual question of how climate change contributes to existential risk specifically. So I would just make that distinction pretty clear.

Simon Beard: So there's a direct answer to your question and an indirect answer. The direct answer is all the things you know you should be doing. Fly less, preferably not at all; eat less meat, preferably none at all, and preferably no dairy either. Every time there's an election, vote, but also ask all the candidates — all the candidates, don't just go for the ones who you think will give you the answer you like — "I'm thinking of voting for you. What are you going to do about climate change?"

There are a lot of people all over the political spectrum who care about climate change. Yes, there are political differences in who cares more, and so on. But every political candidate has votes that they could pick up if they did more on climate change, irrespective of their political persuasion. And even if you have a political conviction, so that you're always going to vote the same way, you can still nudge candidates to go after those votes and do more on climate change by just asking that simple question: "I'm thinking of voting for you. What are you going to do about climate change?" That's a really low-cost ask, and it works: if candidates get 100 letters all saying that, and they're personal letters, not just some mass campaign, it really does change the way they think about the problems they face. But I also want to challenge you a bit on this "This is going to take decades," because it depends. It depends how we approach it.

Ariel Conn: So one example of research that can happen quickly, and action that can occur quickly, is the comparison you draw early on in your work: comparing the need to study climate change as a contributor to existential risk with the work that was done in the '80s looking at how nuclear weapons can create a nuclear winter, and how that connects to existential risk. So I was hoping you could also talk a little bit about that comparison.

Simon Beard: Yeah, so I think this is really important, and I know a lot of the things that we're talking about here, about critical global systems and how they interact with each other and so on — it's long-winded, and it's technical, and it can sound a bit boring. But this was, for me, a really big inspiration for why we're trying to look at it in this way. So when people started to explode nuclear weapons in the Manhattan Project in the early 1940s, right from the beginning they were concerned about the kinds of threats, the kinds of risks, that these posed. At first they thought, well, maybe it would set light to the upper atmosphere. And there were big worries about the radiation. And then, for a time, there were worries just about the explosive capacity.

This was enough to raise a kind of general sense of alarm and threat. But none of these were really credible; they didn't last, they didn't withstand scientific scrutiny for very long. And then Carl Sagan and some colleagues did this research in the early 1980s on modeling the climate impacts of nuclear weapons, which is not a really intuitive thing to do, right? When you've got the most explosive weapon ever envisaged, with all this nuclear fallout and so on, the global climate doesn't seem like it's going to be where the problems lie.

But they discover, when they look at that, that no, it's a big thing. If you have nuclear strikes on cities, it sends a lot of ash into the upper atmosphere. And it's very similar to what happens if you have a very large asteroid strike, or a very large set of volcanoes going off; the kinds of changes that you see in the upper atmosphere are very similar, and you get this dramatic global cooling. And this then threatens — as a lot of mass extinctions have — the underlying food source. And that's how humans starve. And this comes out in 1983, roughly 40 years after people started talking about nuclear risk. And it changes the game, because all of a sudden, in looking at this rather unusual topic, they've found a really credible way in which nuclear winter leads to everyone dying.

The research is still much discussed: what kind of nuclear warheads, what kind of nuclear explosions, how many, whether they would need to hit cities or areas with particularly large sulphur deposits; all of these things are still being discussed. But all of a sudden, the top geopolitical leaders started to take this threat seriously. We know Reagan was very interested and explored this a lot, and the Russians even more so. And it really does seem to have kick-started a lot of nuclear disarmament debate and discussion and real action.

And what we're trying to do in reframing the way that people research climate change as an existential threat is to look for something like that: what's a credible way in which this really does lead to an existential catastrophe for humanity? Because that hasn't been done yet. We don't have that. We feel like we have it, because everyone knows the threat and the risk, but really we're just at the stage of vague speculation. There's a lot of room for people to step up with this kind of research. And the historical evidence suggests that it can make a real difference.

Haydn Belfield: We tend to think of existential risks as one-off threats — some big explosion, or some big thing, like the asteroid that hit the dinosaurs and wiped them out — we tend to think of existential risks as one singular event. But really, that's not how most mass extinctions happen, and it's not how civilizational collapses have tended to happen through history. The way that all of these things have actually happened, when you go back to look at the archeological evidence or the fossil evidence, is that there's a whole range of different things — different hazards and different internal capabilities of these systems, whether they're species or societies — and they get overcome by a range of different things.

So, often in archeological history — in the Pueblo Southwest, for example — there'll be one set of climatic conditions, and one external shock that faces the community, and they react fine to it. But then, a few years later, the same community is faced with similar threats, reacts completely differently, and collapses completely. It's not that there are these singular, overwhelming events from outside; it's that you have to look at all the different systems that a particular society relies on, and at when all of those things together overcome the overall resilience of the system.
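
A toy model of that pattern, with every number invented, might look like this: a community absorbs isolated shocks because its resilience recovers between them, but collapses when the same shocks arrive back to back.

```python
# Toy resilience model: shocks deplete resilience, which then recovers.
def run(shock_years, capacity=1.0, shock_size=0.6, recovery=0.2, years=30):
    resilience = capacity
    for year in range(years):
        if year in shock_years:
            resilience -= shock_size
            if resilience <= 0:
                return f"collapse in year {year}"
        resilience = min(capacity, resilience + recovery)
    return "survives"

print(run(shock_years={5, 20}))  # spaced shocks: survives
print(run(shock_years={5, 6}))   # back-to-back shocks: collapse in year 6
```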

Or looking at species: sometimes a species can recover from an external shock, and sometimes there are just too many things at once, the conditions aren't right, and it gets overcome and goes extinct. That's where the study of existential risk, the study of how we might collapse or how we might go extinct, needs to go. It needs to look at all the different hazards we face, how they interact with the vulnerabilities that we have and the internal dynamics of the systems we rely on, the differing resilience of those systems, and the different ways we're exposed to those hazards, and to take a much more sophisticated, complicated, messy look at how they all interact. I think that's the way that existential risk research needs to go.

Simon Beard: I agree. I think that fits in with various things we said earlier.

Ariel Conn: So then my final question for both of you is — I mean, you're not even just looking at climate change as an existential threat; I know you look at lots of things and how they contribute to existential threats — but looking at climate change, what gives you hope?

Simon Beard: At a psychological level, hope and fear aren't actually big day-to-day parts of my life. Because working in existential risk, you have this amazing privilege: you're doing something, you're working to make the difference between human extinction and civilizational collapse on the one hand, and human survival and flourishing on the other. It would be a waste to have that opportunity and get too emotional about it. It's a waste, firstly, because it is the most fascinating problem. It is intellectually stimulating; it is diverse; it allows you to engage with and talk to the best people, both in terms of intelligence and creativity, and in terms of drive and passion, and activism and ability to get things done.

But also because it's a necessary task: we have to get on with it, we have to do this. So I don't know if I have hope. But that doesn't mean that I'm scared or anxious; I just have a strong sense of what I have to do. I have to do what I can to contribute, to make a difference, to maximize my impact. That's a series of problems, and we have to solve those problems. If there's one overriding emotion that I have in relation to my work, and what I do, and what gets me out of bed, it's curiosity — which is, I think, at the end of the day, one of the most motivating emotions there is. People often say to me, "What's the thing I should be most worried about: nuclear war, or artificial intelligence, or climate change? Tell me, what should I be most worried about?" And you shouldn't worry about any of those things, because worry is a very disabling emotion.

People who worry stay in bed. I haven't got time for that. I had heart surgery about 18 months ago, a big heart bypass operation. And they warned me beforehand that after this surgery you're going to feel emotional; it happens to everyone. It's basically a near-death experience: you have to be cooled down to a state from which you can't recover on your own; they have to warm you back up. Your body kind of remembers these things. And I do remember, a couple of nights after getting home from that, I just burst into floods of tears thinking about this kind of existential collapse, and, you know, what it would mean for my kids and how we'd survive it, and it was completely overwhelming. As overwhelming as you'd expect it to be for someone who has to think about that.

But this isn't how we engage with it. These aren't science fiction stories that we're telling ourselves to feel scared or feel a rush. This is a real problem, and we're here to solve that problem. I've been very moved over the last month or so by all the coverage of the Apollo landing missions. And it's reminded me that a big inspiration of my life, one of these bizarre inspirations of my life, was getting Microsoft Encarta 95, which was kind of my first all-purpose knowledge source. And when you loaded it up — because it was the first one on CD-ROM — they had these sound clips, and they included that bit of JFK's speech about choosing to go to the moon, not because it's easy, but because it's hard. And that has been a really inspiring quote for me. And I think I've often chosen to do things because they're hard.

And it's been kind of upsetting — this is the first time this kind of moon landing anniversary has come up for me — because I realized, no, he was being completely literal. The reason they chose to go to the moon was that it was so hard the Russians couldn't do it, so they were confident that they were going to win the race. And that was all that mattered. But for me, I think in this case, we're choosing to do this research and to do this work not because it's hard, but because it's easy. Because understanding climate change, being curious about it, working out new ways to adapt and to mitigate and to manage the risk, is so much easier than living with the negative consequences of it. This is the best deal on the table at the moment. This is the way that we maximize the benefit while minimizing the cost.

This is not the great big structural change that completely messes up our entire society and reduces us to some kind of primitivism. That's what happens if climate change kicks in; that's when we start to see people reduced to subsistence-level agriculture, or whatever it is. Understanding the risk and responding to it: this is the way that we keep all the good things that our civilization has given us. This is the way that we keep international travel, that we keep our technology, that we keep our food and keep getting nice things from all around the world.

And yes, it does require some sacrifices. But these are really small change in the grand scheme of things, and once we start to make them, we will find ways of working around them. We are very creative, we are very adaptable; we can adapt to the changes that we need to make to mitigate climate change, and we'll be good at that. And I just wish that anyone listening to this podcast had that mindset — didn't think about fear or about blame, or shame or anger, but thought about curiosity, and about what they can do, and how good this is going to be: how bright and open our future is, and how much we can achieve as a species.

If we can just get over these hurdles, these mistakes that we made years ago for various reasons — often it was a small number of people, in the end, that determined that we have petrol cars rather than battery cars — we can undo them. It's in our power, it's in our gift. We are the species that can determine our own fate; we get to choose. And that's why we're doing this research. And I think if lots of people — especially people who are well educated, maybe scientists, maybe people who are thinking about a career in science — view this problem in that light, asking "What can I do? What's the difference I can make?", then we're powerful. It's a much less difficult problem to solve, and a much better ultimate payoff than we'll get if we try to solve this any other way, especially if we don't do anything.

Ariel Conn: That was wonderful.

Simon Beard: Yeah, I'm ready to storm the barricade.

Ariel Conn: All right, Haydn, try to top that.

Haydn Belfield: No way. That's great. I think Simon said all that needs to be said on that.

Ariel Conn: All right. Well, thank you both for joining us today.

Simon Beard: Thank you. It's been a pleasure.

Haydn Belfield: Yeah, absolute pleasure.
