The Climate Crisis as an Existential Threat with Simon Beard and Haydn Belfield

Does the climate crisis pose an existential threat? And is that even the best way to formulate the question, or should we be looking at the relationship between the climate crisis and existential threats differently? In this month’s FLI podcast, Ariel was joined by Simon Beard and Haydn Belfield of the University of Cambridge’s Centre for the Study of Existential Risk (CSER), who explained why, despite the many unknowns, it might indeed make sense to study climate change as an existential threat. Simon and Haydn broke down the different systems underlying human civilization and the ways climate change threatens these systems. They also discussed our species’ unique strengths and vulnerabilities — and the ways in which technology has heightened both — with respect to the changing climate.

This month’s podcast helps serve as the basis for a new podcast we’re launching later this month about the climate crisis. We’ll be talking to climate scientists, meteorologists, AI researchers, policy experts, economists, social scientists, journalists, and more to go in depth about a vast array of climate topics. We’ll talk about the basic science behind climate change, like greenhouse gases, the carbon cycle, feedback loops, and tipping points. We’ll discuss various impacts of greenhouse gases, like increased extreme weather events, loss of biodiversity, ocean acidification, resource conflict, and the possible threat to our own continued existence. We’ll talk about the human causes of climate change and the many human solutions that need to be implemented. And so much more! If you don’t already subscribe to our podcasts on your preferred podcast platform, please consider doing so now to ensure you’ll be notified when the climate series launches.

We’d also like to make sure we’re covering the climate topics that are of most interest to you. If you have a couple minutes, please fill out a short survey at surveymonkey.com/r/climatepodcastsurvey, and let us know what you want to learn more about.

Topics discussed in this episode include:

  • What an existential risk is and how to classify different threats
  • Systems critical to human civilization
  • Destabilizing conditions and the global systems death spiral
  • How we’re vulnerable as a species
  • The “rungless ladder”
  • Why we can’t wait for technology to solve climate change
  • Uncertainty and how to deal with it
  • How to incentivize more creative science
  • What individuals can do

Ariel Conn: Hi everyone and welcome to another episode of the FLI podcast. I’m your host, Ariel Conn, and I am especially excited about this month’s episode. Not only because, as always, we have two amazing guests joining us, but also because this podcast helps lay the groundwork for an upcoming series we’re releasing on climate change.

There’s a lot of debate within the existential risk community about whether the climate crisis really does pose an existential threat, or if it will just be really, really bad for humanity. But this debate exists because we don’t know enough yet about how bad the climate crisis will get nor about how humanity will react to these changes. It’s very possible that today’s predicted scenarios for the future underestimate how bad climate change could be, while also underestimating how badly humanity will respond to these changes. Yet if we can get enough people to take this threat seriously and to take real, meaningful action, then we could prevent the worst of climate change, and maybe even improve some aspects of life. 

In late August, we’ll be launching a new podcast series dedicated to climate change. I’ll be talking to climate scientists, meteorologists, AI researchers, policy experts, economists, social scientists, journalists, and more to go in depth about a vast array of climate topics. We’ll talk about the basic science behind climate change, like greenhouse gases, the carbon cycle, feedback loops, and tipping points. We’ll discuss various impacts of greenhouse gases, like increased extreme weather events, loss of biodiversity, ocean acidification, resource conflict, and the possible threat to our own continued existence. We’ll talk about the human causes of climate change and the many human solutions that need to be implemented. And so much more. If you don’t already subscribe to our podcasts on your preferred podcast platform, please consider doing so now to ensure you’ll be notified as soon as the climate series launches.

But first, today, I’m joined by two guests who suggest we should reconsider studying climate change as an existential threat. Dr. Simon Beard and Haydn Belfield are researchers at the University of Cambridge’s Centre for the Study of Existential Risk, or CSER. CSER is an interdisciplinary research group dedicated to the study and mitigation of risks that could lead to human extinction or a civilizational collapse. They study existential risks, develop collaborative strategies to reduce them, and foster a global community of academics, technologists, and policy makers working to safeguard humanity. Their research focuses on four areas: biological risks, environmental risks, risks from artificial intelligence, and how to manage extreme technological risk in general.

Simon is a senior research associate and academic program manager; He’s a moral philosopher by training. Haydn is a research associate and academic project manager, as well as an associate fellow at the Leverhulme Centre for the Future of Intelligence. His background is in politics and policy, including working for the UK Labour Party for several years. Simon and Haydn, thank you so much for joining us today.

Simon Beard: Thank you.

Haydn Belfield: Hello, thank you.

Ariel Conn: So I’ve brought you both on to talk about some work that you’re involved with, looking at studying climate change as an existential risk. But before we really get into that, I want to remind people about some of the terminology. So I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat and if there’s any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change.

Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome — which is something unexpectedly, unprecedentedly bad. And, actually, once you’ve got your head around that, different groups have slightly different understandings of what the differences between these three terms are. 

So, for some groups, it’s all about just the scale of badness. So, an extreme risk is one that does a sort of an extreme level of harm; A catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: Maybe some people survive, but their lives are terrible. Actually, at the Center for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we also have coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us.

Most of the systems — be they physiological systems, the world’s ecological system, the social, economic, technological, cultural systems that surround those institutions that we build on — they have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing, and human survival are built on: that we can get food from the biosphere, that our bodies will continue to operate in a way that’s consistent with and supporting our health and our continued survival, and that the institutions that we’ve developed will still work, will still deliver food to our tables, will still suppress interpersonal and international violence, and that, basically, we’ll be able to get on with our lives.

If you look at it that way, then an extreme risk, or an extreme threat, is one that pushes at least one of these systems outside of its normal boundaries of operation and creates an abnormal behavior that we then have to work really hard to respond to. A catastrophic risk is one where that happens, but then that also cascades. Particularly in a global catastrophe, you have a whole system that encompasses everyone all around the world, or maybe a set of systems that encompass everyone all around the world, that are all operating in this abnormal state that’s really hard for us to respond to.

And then an existential catastrophe is one where the systems have been pushed into such an abnormal state that either you can’t get them back or it’s going to be really hard. And life as we know it cannot be resumed; We’re going to have to live in a very different and very inferior world, at least from our current way of thinking.

Haydn Belfield: I think that sort of captures it really well. One thing that you could kind of visualize, it might be something like, imagine a really bad pandemic. 100 years ago, we had the Spanish flu pandemic that killed 100 million people — that was really bad. But it could be even worse. So imagine one tomorrow that killed a billion people. That would be one of the worst things that’s ever happened to humanity; It would be sort of a global catastrophic risk. But it might not end our story, it might not be the end of our potential. But imagine if it killed everyone, or it killed almost everyone, and it was impossible to recover: That would be an existential risk.

Ariel Conn: So, there’s — at least I’ve seen some debate about whether we want to consider climate change as falling into either a global catastrophic or existential risk category. And I want to start first with an article that, Simon, you wrote back in 2017, to consider this question. The subheading of your article is a question that I think is actually really important. And it was: how much should we care about something that is probably not going to happen? I want to ask you about that — how much should we care about something that is probably not going to happen?

Simon Beard: I think this is really important when you think about existential risk. People’s minds, they want to think about predictions, they want someone who works in existential risk to be a prophet of doom. That is the idea that we have — that you know what the future is going to be like, and it’s going to be terrible, and what you’re saying is, this is what’s going to happen. That’s not how people who work in existential risk operate. We are dealing with risks, and risks are about knowing all the possible outcomes: whether any of those involve this severe, long-term threat, an irrecoverable loss to our species.

And it doesn’t have to be the case that you think that something is the most likely or the most probable as a potential outcome for you to get really worried about the thing that could bring that about. And even a 1% risk of one of these existential catastrophes is still completely unacceptable because of the scale of the threat, and the harm we’re talking about. And because if this happens, there is no going back; It’s not something that we can do a safe experiment with.

So when you’re dealing with risk, you have to deal with probabilities. You don’t have to be convinced that climate change is going to have these effects to really place it on the same level as some of the other existential risks that people talk about — nuclear weapons, and artificial intelligence, and so on — you just need to see that this is possible. We can’t exclude it based on the knowledge that we have at the moment, but it seems like a credible threat with a real chance of materializing. And something that we can do about it, because ultimately the aim of all existential risk research is safety — trying to make the world a safer place and the future of humanity a more certain thing.

Ariel Conn: Before I get into the work that you’re doing now, I want to stick with one more question that I have about this article. I was amused when you sent me the link to it — you sort of prefaced it by saying that you think it’s rather emblematic of some of the problematic ways that we think about climate change, especially as an existential risk, and that your thinking has evolved in the last couple of years since writing this. I was hoping you could just talk a little bit about some of the problems you see with the way we’re thinking about climate change as an x-risk.

Simon Beard: I wrote this paper largely out of a realization that people wanted us to talk about climate change in the next century. And we wanted to talk about it. It’s always up there on the list of risks and threats that people bring up when you talk about existential risk. And so I thought, well, let’s get the ball rolling; Let’s review what’s out there, and the kind of predictions that people who seem to know what they’re talking about have made about this — you know, economists, climate scientists, and so on — and make this case that this suggests there is a credible threat, and we need to take this seriously. And that seemed, at the time, like a really good place to start.

But the more I thought about it afterwards, the more flawed I saw the approach as being. And it’s hard to regret a paper like that, because I’m still convinced that the risk is very real, and people need to take it seriously. But for instance, one of the things that kept on coming up is that when people make predictions about climate change as an existential risk, they’re always very vague. Why is it a risk? What are the sorts of scenarios that we worry about? Where are the danger levels? And they always want to link it to a particular temperature threshold or a particular greenhouse gas trajectory. And that just didn’t strike me as credible, that we would cross a particular temperature threshold and then that would be the end of humanity.

Because of course, a huge amount of the risk that we face depends upon how humanity responds to the changing climate, not just upon climate change. I think people have this idea in their mind that it’ll get so hot, everyone will fry or everyone will die of heat exhaustion. And that’s just not a credible scenario. So there were these really credible scholars, like Marty Weitzman and Ram Ramanathan, who tried to work this out, and have tried to predict what was going to happen. But they seemed to me to be missing a lot, and to make very precise claims based on very vague scenarios. So we kind of said at that point, we’re going to stop doing this until we have worked out a better way of thinking about climate change as an existential threat. And we’ve been thinking a lot about this in the intervening 18 months, and that’s where the research that you’re seeing that we’re hoping to publish soon and the desire to do this podcast really come from. So it seems to us that there are kind of three ways that people have gone about thinking about climate change as an existential risk. It’s a really hard question. We don’t really know what’s going to happen. There’s a lot of speculation involved in this.

One of the ways that people have gone about trying to respond to this has just been to speculate, just been to come up with some plausible scenario or pick a temperature number out of the air and say, “Well, that seems about right, if that were to happen that would lead to human extinction, or at least a major disruption of all of these systems that we rely upon. So what’s the risk of that happening, and then we’ll label that as the existential climate threat.” As far as we can tell, there isn’t the research to back up some of these numbers. Many of them conflict: In Ram Ramanathan’s paper he goes for five degrees; In Marty Weitzman’s paper he goes to six degrees; There’s another paper that was produced by Breakthrough where they go for four degrees. There’s kind of quite a lot of disagreement about where the danger levels lie.

And some of it’s just really bad. So there’s this prominent paper by Jem Bendell — he never got it published, but it’s been read like 150,000 times, I think — on adapting to extreme climate change. And he just picks this random scenario where the sea levels rise, a whole bunch of coastal nuclear reactors get inundated with seawater, and they go critical, and this just causes human extinction. That’s not credible in many different ways, not least that it just wouldn’t cause that much damage. But it just doesn’t seem credible that this slow sea level rise would have this disastrous meltdown effect — we could respond to that. What passes for scientific study and speculation didn’t seem good enough to us.

Then there were some papers which just kind of passed the whole thing by — saying, “Well, we can’t come up with a plausible scenario or a plausible threat level, but there just seem to be a lot of bad things going on around there.” Given that we know that the climate is changing, and that we are responding to this in a variety of ways, probably quite inadequately, that approach doesn’t help us to prioritize efforts or really understand the level of risk we face, or when maybe some more extreme measures like geoengineering become more appropriate because of the level of risk that we face.

And then there’s a final set of studies — there have been an increasing number of these; one recently came out in Vox, Anders Sandberg has done one, and Toby Ord talks about one — where people say, “Well, let’s just go for the things that we know, let’s go for the best data and the best studies.” And these usually focus on a very limited number of climate effects, the more direct impacts of things like heat exhaustion, perhaps sometimes crop failure — but only really looking at the most direct climate impacts and only where there are existing studies. And then they try and extrapolate from that, sometimes using integrated assessment models, sometimes other kinds of analysis, but usually in quite a straightforward linear economic analysis or epidemiological analysis.

And that also is useful. I don’t want to dis these papers; I think that they provide very useful information for us. But there is no way that that can constitute an adequate risk assessment, given the complexity of the impacts that climate change is having, and the ways in which we’re responding to that. And it’s very easy for people to read these numbers and these figures and conclude, as I think the Vox article did, climate change isn’t an existential risk, it’s just going to kill a lot of people. Well, no, we know it will kill a lot of people, but that doesn’t answer the question about whether it is an existential threat. There are a lot of things that you’re not considering in this analysis. So given that there wasn’t really a good example that we could follow within the literature, we’ve kind of turned it on its head. And we’re now saying, maybe we need to work backwards.

Rather than trying to work forwards from the climate change we’re expecting and the effects that we think that is going to have and then whether these seem to constitute an existential threat, maybe we need to start from the other end and think about what are the conditions that could most plausibly destabilize the global civilization and the continued future of our species? And then work back from them to ask, are there plausible climate scenarios that could bring these about? And there’s already been some interesting work in this area for natural systems, and this kind of global Earth system thinking and the planetary boundaries framework, but there’s been very little work on this done at the social level.

And even less work done when you consider that we rely on both social and natural systems for our survival. So what we really need is some kind of approach that will integrate these two. That’s a huge research agenda. So this is how we think we’re going to proceed in trying to move beyond the limited research that we’ve got available. And now we need to go ahead and actually construct these analyses and do a lot more work in this field. And maybe we’re going to start to be able to produce a better answer.

Ariel Conn: Can you give some examples of the research that has started with this approach of working backwards?

Simon Beard: So there’s been some really interesting research coming out of the Stockholm Resilience Centre dealing with natural Earth systems. So they first produced this paper on planetary boundaries, where they looked at a range of, I think it’s nine systems — the biosphere, biogeochemical systems, yes, climate system and so on — and said, are these systems operating in what we would consider their normal functioning boundaries? That’s how they’ve operated throughout the Holocene, throughout the last several thousand years, during which human civilization has developed. Or do they show signs of transitioning to a new state of abnormal operation? Or are they in a state that’s already posing a high risk to the future of human civilization, though without really specifying what that risk is.

Then they produced another paper recently on Hothouse Earth, where they started to look for tipping points within the system, points where, in a sense, change becomes self-perpetuating. And rather than just a kind of gradual transition from what we’re used to, to maybe an abnormal condition, all of a sudden, a whole bunch of changes start to accelerate. So it becomes much harder to adapt to these. Their analysis is quite limited, but they argue that quite a lot of these tipping points seem to start kicking in at about one and a half to two degrees warming above pre-industrial levels.

We’re getting quite close to that now. But yeah, the real question for us at the Centre for the Study of Existential Risk looking at humanity is, what are the effects of this going to be? And also what are the risks that exist within those socio-technological systems, the institutions that we set up, the way that we survive as a civilization, the way we get our food, the way we get our information, and so on, because there are also significant fragilities and potential tipping points there as well.

That’s a very new sort of study, I mean, to the point where a lot of people just refer back to this one book written by Jared Diamond in 2005 as if it was the authoritative tome on collapse. And it’s a popular book, and he’s not an expert in this: He’s kind of a very generalist scholar, but he provides a very narrative-based analysis of the collapse of certain historical civilizations and draws out a couple of key lessons from that. But it’s all very vague and really written for a general audience. And that still kind of stands out as this is the weighty tome, this is where you go to get answers to your questions. It’s very early and we think that there’s a lot of room for better analysis of that question. And that’s something we’re looking at a lot.

Ariel Conn: Can you talk about the difference between treating climate change itself as an existential risk, like saying this is an x-risk, and studying it as if it poses such a threat? If that distinction makes sense?

Simon Beard: Yeah. When you label something as an existential risk, I think that is in many ways a very political move. And I think that that has been the predominant lens through which people have approached this question of how we should talk about climate change. People want to draw attention to it, they realize that there’s a lot of bad things that could come from it. And it seems like we could improve the quality of our future lives relatively easily by tackling climate change.

It’s not like AI safety, you know, the threats that we face from advanced artificial intelligence, where you really have to have advanced knowledge of machine learning and a lot of skills and do a lot of research to understand what’s going on here and what the real threats that we face might be. This is quite clear. So talking about it, labeling it as an existential risk has predominantly been a political act. But we are an academic institution.

I think when you ask this question about studying it as an existential threat, one of the great challenges we face is all things that are perceived as existential threats, they’re all interconnected. Human extinction, or the collapse of our civilization, or these outcomes that we worry about: these are scenarios and they will have complex causes — complex technological causes, complex natural causes. And in a sense, when you want to ask the question, should we study climate change as an existential risk? What you’re really asking is, if we look at everything that flows from climate change, will we learn something about the conditions that could precipitate the end of our civilization? 

Now, ultimately, that might come about because of some heat exhaustion or vast crop failure because of the climate change directly. It may come about because, say, climate change triggers a nuclear war. And then there’s a question of, was that a climate-based extinction or a nuclear-based extinction? Or it might come about because we develop technologies to counter climate change, and then those technologies prove to be more dangerous than we thought and pose an existential threat. So when we carve this off as an academic question, what we really want to know is, do we understand more about the conditions that would lead to existential risk, and do we understand more about how we can prevent this bad thing from happening, if we look specifically at climate change? It’s a slightly different bar. But it’s all really just this question of, is talking about climate change, or thinking about climate change, a way to move to a safer world? We think it is but we think that there’s quite a lot of complex, difficult research that is needed to really make that so. And at the moment, what we have is a lot of speculation.

Haydn Belfield: I’ve got maybe an answer to that as well. Over the last few years, lots and lots of politicians have said climate change is an existential risk, and lots of activists as well. So you get lots and lots of speeches, or rallies, or articles saying this is an existential risk. But at the same time, over the last few years, we’ve had people who study existential risk for a living, saying, “Well, we think it’s an existential risk in the same way that nuclear war is an existential risk. But it’s not maybe this single event that could kill lots and lots of people, or everyone, in kind of one fell swoop.”

So you get people saying, “Well, it’s not a direct risk on its own, because you can’t really kill absolutely everybody on earth with climate change. Maybe there’s bits of the world you can’t live in, but people move around. So it’s not an existential risk.” And I think the problem with both of these ways of viewing it is that word that I’ve been emphasizing, “an.” So I would kind of want to ban the word “an” existential risk, or “a” existential risk, and just say, does it contribute to existential risk in general?

So it’s pretty clear that climate change is going to make a bunch of the hazards that we face — like pandemics, or conflict, or environmental one-off disasters — more likely, but it will also make us more vulnerable to a whole range of hazards, and it will also increase the chances of all these types of things happening, and increase our exposure. So like with Simon, I would want to ask, is climate change going to increase the existential risk we face, and not get hung up on this question of is it “an” existential risk?

Simon Beard: The problem is, unfortunately, there is an existing terminology and existing way of talking that to some extent we’re bound up with. And this is how the debate is. So we’ve really struggled with to what extent we should impose the terminology that we most like on the field and the way that these things are discussed. And we know ultimately existential risk is just a thing; It’s a homogenous lump at the end of human civilization or the human species, and what we’re really looking at is the drivers of that and the things that push that up, and we want to push it down. That is not a concept that I think lots of people find easy to engage with. People do like to carve this up into particular hazards and vulnerabilities and so on.

Haydn Belfield: That’s how most of risk studies works. When you study natural disasters, or you study accidents in an industrial setting, that’s what you’re looking at. You’re not looking at this risk as completely separate. You’re saying, “What hazards are we facing? What are our vulnerabilities? And what is our exposure?” and kind of combining all of those into having some overall assessment of the risk you face. You don’t try and silo it up into, this is bio, this is nuclear, this is AI, this is environment.

Ariel Conn: So that connects to a question that I have for you both. And that is what do you see as society’s greatest vulnerabilities today?

Haydn Belfield: Do you want to give that a go, Simon?

Simon Beard: Sure. So I really hesitate to answer any question that’s posed quite in that way, just because I don’t know what our greatest vulnerability is.

Haydn Belfield: Because you’re a very good academic, Simon.

Simon Beard: But we know some of the things that contribute to our vulnerability overall. One that really sticks in my head came out of a study we did looking at what we can learn from previous mass extinction events. And one of the things that people have found looking at the species that tend to die out in mass extinctions, and the species that survive, is this idea that the specialists — the efficient specialists — who’ve really carved out a strong biological niche for themselves, and are often the ones that are doing very well as a result of that, tend to be the species that die out, and the species that survive are the species that are generalists. But that means that within any given niche or habitat or environment, they’re always much more marginal, biologically speaking.

And then you say, “Well, what is humanity? Are we a specialist that’s very vulnerable to collapse, or are we a generalist that’s very robust and resilient to this kind of collapse that would fare very well?” And what you have to say is, as a species, when you consider humanity on its own, we seem to be the ultimate generalist, and indeed, we’re the only generalist who’s really moved beyond marginality. We thrive in every environment, every biome, and we survive in places where almost no other life form would survive. We survived on the surface of the moon — not for very long, but we did; We survived Antarctica, on the pack ice, for long periods of time. And we can survive at the bottom of the Mariana Trench, and just a ridiculously large range of habitats.

But of course, the way we’ve achieved that is that every individual is now an incredible specialist. There are very few people in the world who could really support themselves. And you can’t just sort of pick it up and go along with it. You know like this last weekend, I went to an agricultural museum with my kids, and they were showing, you know, how you plow fields and how you gather crops and look after them. And there are a lot of really important, quite artisanal skills about what you had to do to gather the food and protect it and prepare it and so on. And you can’t just pick this up with a book; you really have to spend a long time learning it and getting used to it and getting your body strong enough to do these things.

And so every one of us as an individual, I think, is very vulnerable, and relies upon these massive global systems that we’ve set up, these massive global institutions, to provide this support and to make us this wonderfully adaptable generalist species. So, so long as institutions and the technologies that they’ve created and the broad socio-technological systems that we’ve created — so long as they carry on thriving and operating as we want them to, then we are very, very generalist, very adaptable, very likely to make it through any kind of trouble that we might face in the next couple of centuries — with a few exceptions, a few really extreme events. 

But the flip side of that is anything that threatens those global socio-technological institutions also threatens to move us from this very resilient global population we have at the moment to an incredibly fragile one. If we fall back on individuals and our communities, all of a sudden, we are going to become the vulnerable specialist that each of us individually is. That is a potentially catastrophic outcome that people don’t think about enough.

Haydn Belfield: One of my colleagues, Luke Kemp, likes to describe this as a rungless ladder. So the idea is that there’s been lots and lots of collapses before in human history. But what normally happens is elites at the top of the society collapse, and it’s bad for them. But for everyone else, you kind of drop one rung down on the ladder, but it’s okay, you just go back to the farm, and you still know how to farm, your family’s still farming — things get a little worse, maybe, but it’s not really that bad. And you get people leaving the cities, things like that; But you only drop one rung down the ladder, you don’t fall off it. But as we’ve gone many, many more rungs up the ladder, we’ve knocked out every rung below us. And now we’re really high up the ladder. Very few of us know how to farm, how to hunt or gather, how to survive, and so on. So were we to fall off that rungless ladder, then we might come crashing down with a wallop.

Ariel Conn: I’m sort of curious. We’re talking about how humanity is generalist but we’re looking within the boundaries of the types of places we can live. And yet, we’re all very specifically, as you described, reliant on technology in order to live in these very different, diverse environments. And so I wonder if we actually are generalists? Or if we are still specialists at a societal level because of technology, if that makes sense?

Simon Beard: Absolutely. I mean, the point of this was, we kind of wanted to work out where we fell on the spectrum. And basically, it’s a spectrum that you can’t apply to humanity: We appear to fall as the most extreme species at both ends. And I think one of the reasons for that is that the scale as it would be applied to most species really only looks at the physical characteristics of the species, and how they interact directly with their environment — whereas we’ve developed all these highly emergent systems that go way beyond how we interact with the environment, that determine how we interact with one another, and how we interact with the technologies that we’ve created.

And those basically allow us to interact with the world around us in the same ways that both generalists and specialists would. That’s great in many ways: It’s really served us well as a species, it’s been part of the hallmark of our success and our ability to get this far. But it is a real threat, because it adds a whole bunch of systems that have to be operating in the way we expect them to in order for us to continue. Maybe so long as these systems function it makes us more resilient to normal environmental shocks. But it makes us vulnerable to a whole bunch of other shocks.

And then you look at the way that we actually treat these emergent socio-technological systems. And we’re constantly driving for efficiency; We’re constantly driving for growth, as quick and easy growth as we can get. And the ways that you do that are often by making the systems themselves much less resilient. Resiliency requires redundancy, requires diversity, requires flexibility, requires all of the things that either an economic planner or a market functioning on short-term economic return really hate, because they get in the way of productivity.

Haydn Belfield: Do you want to explain what resilience is?

Simon Beard: No.

Ariel Conn: Haydn, do you want to explain it?

Haydn Belfield: I’ll give it a shot, yeah. So, just since people might not be familiar with it — so what I normally think of is someone balancing. How robust they are is how much you can push that person balancing before they fall over, and then resilience is how quickly they get up and can balance again. The next time they balance, they’re even stronger than before. So that’s what we’re talking about when we’re talking about resilience, how quickly and how well you’re able to respond to those kinds of external shocks.

Ariel Conn: I want to stick with this topic of the impact of technology, because one of the arguments that I often hear about why climate change isn’t as big of an existential threat or a contributor to existential risk as some people worry is because at some point in the near future, we will develop technologies that will help us address climate change, and so we don’t need to worry about it. You guys bring this up in the paper that you’re working on as potentially a dangerous approach; I was hoping you could talk about that.

Simon Beard: I think there are various problems with looking for the technological solutions. One of them is technologies tend to be developed for quite specific purposes. But some of the conditions that we are examining as potential scenarios for civilizational collapse due to climate change involve quite widespread and wide-scale systemic change to society and to the environment around us. And engineers have a great challenge even capturing and responding to one kind of change. Engineering is an art of the small; It’s a reductionist art; You break things down, and you look at the components, and you solve each of the challenges one by one.

And there are definitely visionary engineers who look at systems and look at how the parts all fit together. But even there, you have to have a model, you have to have a basic set of assumptions of how all these parts fit together and how they’re going to interact. And this is why you get things like Murphy’s Law — you know, if it can go wrong, it will go wrong — because that’s not how the real world works. The real world is constantly throwing different challenges at you, problems that you didn’t foresee, or couldn’t have foreseen because they are inconsistent with the assumptions you made, all of these things.

So it is quite a stretch to put your faith in technology being able to solve this problem, when you don’t understand exactly what the problem that you’re facing is. And you don’t necessarily at this point understand where we may cross the tipping point, the point of no return, when you really have to step up this R & D funding. Or now you know the problem that the engineers have to solve, because it’s staring you in the face: By the time that that happens, it may be too late. If you get positive feedback loops — you know, reinforcement where one bad thing leads to another bad thing, leads to another bad thing, which then contributes to the original bad thing — you need so much more energy to push the system back into a state of normality than for this cycle to just keep on pushing it further and further away from where you previously were.

So that throws up significant barriers to a technological fix. The other issue, just going back to what we were saying earlier, is technology does also breed fragility. We have a set of paradigms about how technologies are developed, how they interface with the economy that we face, which is always pushing for more growth and more efficiency. It has not got a very good track record of investing in resilience, investing in redundancy, investing in fail-safes, and so on. You typically need to have strong, externally enforced incentives for that to happen.

And if you’re busy saying this isn’t really a threat, this isn’t something we need to worry about, there’s a real risk that you’re not going to achieve that. And yes, you may be able to develop new technologies that start to work. But are they actually just storing up more problems for the future? We can’t wait until the story’s ended and then know whether these technologies really did make us safer in the end or more vulnerable.

Haydn Belfield: So I think I would have an overall skepticism about technology from a kind of, “Oh, it’s going to increase our resilience.” My skepticism in this case is just more practical. So it could very well be that we do develop — so there’s these things called negative emissions technologies, which suck CO2 out of the air — we could maybe develop that. Or things that could lower the temperature of the earth: maybe we can find a way to do that without throwing the whole climate and weather system into chaos. Maybe tomorrow’s the day that we get the breakthrough with nuclear fusion. I mean, it could be that all of these things happen — it’d be great if they could. But I just wouldn’t put all my bets on it. The idea that we don’t need to prioritize climate change above all else, and make it a real central effort for societies, for companies, for governments, because we can just hope for some techno-fix to come along and save us — I just think it’s too risky, and it’s unwise. Especially because if we’re listening to the scientists, we don’t have that much longer. We’ve only got a few decades left, maybe even one decade, to really make dramatic changes. And we just won’t have invented some silver bullet within a decade’s time. Maybe technology could save us from climate change; I’d love it if it could. But we just can’t be sure about that, so we need to make other changes.

Simon Beard: That’s really interesting, Haydn, because when you list negative emissions technologies, or nuclear fusion, that’s not the sort of technology I’m talking about. I was thinking about technology as something that would basically just be used to make us more robust. Obviously, one of the things that you do if you think that climate change is an existential threat is you say, “Well, we really need to prioritize more investment into these potential technology solutions.” The belief that climate change is an existential threat is not committing you to trying to make climate change worse, or something like that.

You want to make it as small as possible, you want to reduce this impact as much as possible. That’s how you respond to climate change as an existential threat. If you don’t believe climate change is an existential threat, you would invest less in those technologies. Also, I do wanna say — and I mean, I think there’s some legitimate debate about this, but I don’t like the 12 years terminology, I don’t think we know nearly enough to support those kinds of claims. The IPCC came up with this 12 years, but it’s not really clear what they meant by it. And it’s certainly not clear where they got it from. People have been saying, “Oh, we’ve got a year to fix the climate,” or something, for as long as I can remember discussions going on about climate change.

It’s one of those things where that makes a lot of sense politically, but those claims aren’t scientifically based. We don’t know. We need to make sure that that’s not true; We need to falsify these claims, either by really looking at it, and finding out that it genuinely is safer than we thought it was or by doing the technological development and greenhouse gas reduction efforts and other climate mitigation methods to make it safe. That’s just how it works.

Ariel Conn: Do you think that we’re seeing the kind of investment in technology, you know, trying to develop any of these solutions, that we would be seeing if people were sufficiently concerned about climate change as an existential threat?

Simon Beard: So one of the things that worries me is people always judge this by looking at one thing and saying, “Are we doing enough of that thing? Are we reducing our carbon dioxide emissions fast enough? Are people changing their behaviors fast enough? Are we developing technologies fast enough? Are we ready?” Because we know so little about the nature of the risk, we have to respond to this in a portfolio manner; We have to say, “What are all the different actions and the different things that we can take that will make us safer?” And we need to do all of those. And we need to do as much as we can of all of these.

And I think there is a definite negative answer to your question when you look at it like that, because people aren’t doing enough thinking and aren’t doing enough work about how we do all the things we need to do to make us safe from climate change. People tend to get an idea of what they think a safer world would look like, and then complain that we’re not doing enough of that thing, which is very legitimate and we should be doing more of all of these things. But if you look at it as an existential risk, and you look at it from an existential safety angle, there’s just so few people who are saying, “Let’s do everything we can to protect ourselves from this risk.”

Way too many people are saying, “I’ve had a great idea, let’s do this.” That doesn’t seem to me like safety-based thinking; That seems to me like putting all your eggs in one basket and basically generating the solution to climate change that’s most likely to be fragile, that’s most likely to miss something important and not solve the real problem and store up trouble for a future date and so on. We need to do more — but that’s not just more quantitatively, it’s also more qualitatively.

Haydn Belfield: I think just clearly we’re not doing enough. We’re not cutting emissions enough, we’re not moving to renewables fast enough, we’re not even beginning to explore possible solar geoengineering responses, we don’t have anything that really works to suck carbon dioxide or other greenhouse gases out of the air. Definitely, we’re not yet taking it seriously enough as something that could be a major contributor to the end of our civilization or the end of our entire species.

Ariel Conn: I think this connects nicely to another section of some of the work you’ve been doing. And that is looking at — I think there were seven critical systems that are listed as sort of necessary for humanity and civilization.

Simon Beard: Seven levels of critical systems.

Ariel Conn: Okay.

Simon Beard: We rely on all sorts of systems for our continued functioning and survival. And a sufficiently significant failure in any of these systems could be fatal to all of our species. We can kind of classify these systems at various levels. So at the bottom, there are the physical systems — that’s basically the laws of physics: how atoms operate, how subatomic particles operate, how they interact with each other. Those are pretty safe. There are some advanced physics experiments that some people have postulated may be a threat to those systems. But they all seem pretty safe.

We then kind of move up: We’ve got basic chemical systems and biochemical systems, how we generate enzymes and all the molecules that we use — proteins, lipids, and so on. Then we move up to the level of the cell; Then we move up to the level of the anatomical systems — the digestive system, the respiratory system — we need all these things. Then you look at the organism as a whole and how it operates. Then you look at how organisms interact with each other: the biosphere system, the biological system, ecological system.

And then as human beings, we’ve added this kind of seventh, even more emergent, system, which is not just how humans interact with each other, but the kind of systems that we have made to govern our interaction, and to determine how we work together with each other: political institutions, technology, the way we distribute resources around the planet, and so on. So there are a really quite amazing number of potential vulnerabilities that our species has. 

It’s many more than seven, but categorizing them on these seven levels is helpful so as not to miss anything, because I think most people’s idea of an existential threat is something like a really big gun. Guns, we understand how they kill people: if you just had a really huge gun, and just blew a hole in everyone’s head. But that’s missing things that are both a lot more basic than the way that people normally die, and also a lot more sophisticated and emergent. All of these are potentially quite threatening.

Ariel Conn: So can you explain a little bit more detail how climate change affects these different levels?

Haydn Belfield: So I guess the way I’ll do it is I’ll first talk a bit about the natural feedback stuff, and then talk about the social feedback loops. Everyone listening to this will be familiar with feedback loops, like methane getting released from permafrost in the Arctic, or methane coming out of clathrates in the ocean, or there are other kinds of feedback loops. So there’s one that was discovered only recently, in a very recent paper about cloud formation. So if it gets to four degrees, these models show that it becomes much harder for clouds to form. And so you don’t get as much radiation bouncing off those clouds and you get very rapid additional heating up to 12 degrees, is what it said.

So the first way that climate change could affect these kinds of systems that we’re talking about is it just makes it anatomically way too hot: You get all these feedbacks, and it just becomes far too hot for anyone to survive sort of anywhere on the surface. It might get much too hot in certain areas of the globe for really civilization to be able to continue there, much like it’s very hard in the center of the Sahara to have large cities or anything like that. But it seems quite unlikely that climate change would ever get that bad. The kind of stuff that we’re much more concerned about is the more general effects that climate change, climate chaos, climate breakdown might have on a bunch of other systems.

So in this paper, we’ve broken it down into three. We’ve looked at the effects of climate change on the food/water/energy system, the ecological system, and on our political system and conflict. And climate change is likely to have very negative effects on all three of those systems. It’s likely to negatively affect crop yields; It’s likely to increase freak weather events, and there’s some possibility that you might have these sorts of very freak weather events — droughts, or hurricanes — in areas where we produce lots of our calories, the bread baskets around the world. So climate change is going to have very negative effects, most likely, on our food and energy and water systems.

Then separately, there’s ecological systems. People will be very familiar with climate change driving lots of habitat loss, and therefore the loss of species; People will be very familiar with coral reefs dying and bleaching and going away. This could also have very negative effects on us, because we rely on these ecological systems to provide what we call ecological services. Ecological services are things like pollination, so if all the bees died what would we do? Ecological services also include the fish that we catch and eat, or fresh, clean drinking water. So climate change is likely to have very negative effects on that whole set of systems. And then it’s likely to have negative effects on our political system.

If there are large areas of the world that are nigh on uninhabitable, because you can’t grow food or you can’t go out at midday, or there’s no clean water available, then you’re likely to see maybe state breakdown, maybe huge numbers of people leaving — much more than we’ve ever encountered before, sort of tens or hundreds of millions of people dislocated and moving around the world. That’s likely to lead to conflict and war. So those are some ways in which climate change could have negative effects on three sets of systems that we crucially rely on as a civilization.

Ariel Conn: So in your work, you also talk about the global systems death spiral. Was that part of this?

Haydn Belfield: Yeah, that’s right. The global systems death spiral is a catchy term to describe the interaction between all these different systems. So not only would climate change have negative effects on our ecosystems, on our food and water and energy systems, the political system and conflict, but these different effects are likely to interact and make each other worse. So imagine our ecosystems are harmed by climate change: Well, that probably has an effect on food/water systems, because we rely on our ecosystems for these ecosystem services. 

So then, the bad effects on our food and water systems: Well, that probably leads to conflict. So some colleagues of ours at the Anglia Ruskin University have something called a global chaos map, which is a great name for a research project, where they try and link incidences of shocks to the food system and conflict — riots or civil wars. And they’ve identified lots and lots of examples of this. Most famously, the Arab Spring, which has now become lots of conflicts, has been linked to a big spike in food prices several years ago. So there’s that link there between food and water, insecurity and conflict. 

And then conflict leads back into ecosystem damage. Because if you have conflict, you’ve got weak governance, you’ve got weak governments trying to protect their ecosystems, and weak government has been identified as the strongest single predictor of ecosystem loss, biodiversity loss. They all interact with one another, and make one another worse. And you could also think about things going back the other way. So for example, if you’re in a war zone, if you’ve got conflict, you’ve got failing states — that has knock-on effects on the food systems, and the water systems that we rely on: We often get famines during wartime.

And then if they don’t have enough food to eat, they don’t have water to drink, maybe that has negative effects on our ecosystems, too, because people are desperate to eat anything. So what we’re trying to point out here is that the systems aren’t independent from one another — they’re not like three different knobs that are all getting turned up independently by climate change — but that they interact with one another in a way that could cause lots of chaos and lots of negative outcomes for world society.

Simon Beard: We did this kind of pilot study looking at the ecological system and the food system and the global political system and looking at the connections of those three, really just in one direction: looking at the impact of food insecurity on conflict, and conflict and political instability on the biosphere, and loss of biosphere on integrity of the food system. But that was largely determined by the fact that these were three connections that we either had looked at directly, or had close colleagues who had looked at, so we had quite good access to the resources.

As Haydn said, everything kind of also works in the other direction, most likely. And also, there are many, many more global systems that interact in different ways. Another trio that we’re very interested in looking at in the future is the connection between the biosphere and the political system, but this time, also, with some of the health systems, the emergence of new diseases, the ability to respond to public health emergencies, and especially when these things are looked at in a kind of One Health perspective, where plant health and animal health and human health are all actually very closely interacting with one another.

And then you kind of see this pattern where, yes, we could survive six degrees plus, and we could survive famine, and we could survive x, y, and z. But once these things start interacting, it just drives you to a situation where really everything that we take for granted at the moment, up to and including the survival of the species — they’re all on the table, they’re all up for grabs once you start to get this destructive cycle between changes in the environment and changes in how human society interacts with the environment. It’s a very dangerous, potentially very self-perpetuating feedback loop, and that’s why we refer to it as a global systems death spiral: because we really can’t predict at this point in time where it will end. But it looks very, very bleak, and very, very hard to see how, once you enter into this situation, you could then kind of dial it back and return to a safe operating environment for humanity and the systems that we rely on.

There’s definitely a new stable state at the end of this spiral. So when you get feedback loops between systems, it’s not that they will just carry on amplifying change forever; They’re moving towards another kind of stable state, but you don’t know how long it’s going to take to get there, and you don’t know what that steady state will be. So take the simulation with the death of clouds, this idea that a purely physical feedback between rising global temperatures, changes in the water cycle, and cloud cover means you end up with a world that’s much, much hotter and much more arid than the one we have at the moment, which could be a very dangerous state. For perpetual human survival, we would need a completely different way of feeding ourselves and of interacting with the environment.

You don’t know what sort of death traps or kill mechanisms lie along that path of change; You don’t know whether, for instance, somewhere along it, it’s going to trigger a nuclear war, or attempts to geoengineer the climate in a bid to regain safety that actually turn out to have catastrophic consequences, or any of the other unknown unknowns that we want to turn into known unknowns, and then into things that we can actually begin to understand and study. So in terms of not knowing where the bottom is, that’s potentially limitless as far as humanity is concerned. We know that it will have an end. Worst case scenario, that end is a very arid climate with a much less complex, much simpler atmosphere, which would basically need to be terraformed back into a livable environment in the way that we’re currently thinking maybe we could do for Mars. But to get a global effort to do that, on an already disintegrating Earth, I think would be an extremely tall order. There’s a huge range of different threats and different potential pathways for an existential catastrophe to unfold within this kind of death spiral. And we think this really is a very credible threat.

Ariel Conn: How do we deal with all this uncertainty?

Haydn Belfield: More research needed, is the classic academic response to any time you ask that question. More research.

Simon Beard: That’s definitely the case, but there are also big questions about the kind of research. So mostly scientists want to study things that they already kind of understand: where you already have well established techniques, you have journals that people can publish their research in, you have an extensive peer review community, you can say, yes, you have done this study by the book, you get to publish it. That’s what all the incentives are aligned towards. 

And that sort of research is very important and very valuable, and I don’t want to say that we need less of that kind of research. But that kind of research is not going to deal with the sort of radical uncertainty that we’re talking about here. So we do need more creative science, we need science that is willing to engage in speculation, but to do so in an open and rigorous way. One of the things you need is scientists who are willing to stand up and say, “Look, here’s a hypothesis. I think it’s probably wrong, and I don’t yet know how to test it. But I want people to come out and help me find a way to test this hypothesis and falsify it.”

There aren’t any scientific incentive structures at the moment that encourage that. That is not a way to get tenure, it’s not a way to get a professorship or a chair, and it’s not a way to get your paper published. That is a really stupid strategy to take if you want to be a successful scientist. So what we need to do is create a safe sandbox for people who are concerned about this — and we know from our engagement that there are a lot of people who would really like to study this and really like to understand it better — for them to do that. So one of the big things that we’re really looking at here at CSER is how do we make the tools to make the tools that will then allow us to study this? How do we provide the methodological insights or the new perspectives that are needed to move towards establishing a science of social collapse or environmental collapse that we can actually use to answer some of these questions?

So there are several things that we’re working on at the moment. One important thing, which I think is a very crucial step for dealing with the sort of radical uncertainty we face, is this classification. We’ve already talked about classifying different levels of critical system. That’s one part of a larger classification scheme that CSER has been developing to look at all the different components of risk and say, “Well, there’s this and this and this.” Once you start to engage in that exercise, you look at: What are all the systems that might be vulnerable? What are all the possible vulnerabilities that exist within those systems? What are all the ways in which humanity is exposed to these vulnerabilities if things go wrong? And you map that out; You haven’t got to the truth, but you’ve moved a lot of things from the unknown category into the, “Okay, I now know all the ways that things could go wrong, and I know that I haven’t a clue how any of these things could happen.” Then you need to say, “Well, what are the techniques that seem appropriate?”

So we think the planetary boundaries framework, although it doesn’t answer the question that we’re interested in, offers a really nice approach to looking at this question of where tipping points arise, where systems move out of their ordinary operation. We want to apply that in new environments, and we want to find new ways of using it. And there are other tools as well that we can take, for instance, from disaster studies and risk management studies, looking at things like fault tree analysis, where you say, “What are all the things that might go wrong with this? And what are the levers that we currently have, or the interventions that we could make, to stop this from happening?”

We also think that there’s a lot more room for people to share their knowledge and their thoughts and their fears and expectations through what we call structured expert elicitations, where you get people who have very different knowledge together, and you find a way that they can all talk to each other and all learn from each other. And often you get answers out of these sorts of exercises that are very different from what any individual might have put in at the beginning, but they represent a much more complete, much more creative picture. And you can get those published because it’s a recognized scientific method, so a structured expert elicitation on climate change got published in Nature last month. Which is great, because it’s a really under-researched topic. But I think one of the things that really helped there was that they were using an established method.

What I really hope CSER’s work is going to achieve going forward is to make this a space where we can actually work with many more of the people we need to work with to answer these questions and understand the nature of this risk, to pull them all together, and to build the social structures so that the kind of research that we so badly need at this point can actually start to emerge.

Ariel Conn: A lot of what you’re talking about doesn’t sound like something that we can do in the short term; It sounds like it will take at least a decade, if not more, to get some of this research accomplished. So in the interest of speed — which is one of the uncertainties we have: we don’t seem to have a good grasp of how much time we have before the climate could get really bad — what do we do in the short term? What do we do for the next decade? What do non-academics do?

 

Haydn Belfield: The thing is, it’s kind of two separate questions, right? We certainly know all we need to know to take really drastic, serious action on climate change. What we’re asking is a slightly more specific question, which is how climate change, climate breakdown, climate chaos can contribute to existential risk. So we already know with very high certainty that climate change is going to be terrible for billions of people in the world, that it’s going to make people’s lives harder, and that it’s going to make getting out of extreme poverty much harder.

 

And we also know that the people who have contributed the least to the problem are going to be the ones that are screwed the worst by climate change. And it’s just so unfair, and so wrong, that I think we know enough now to take serious action on climate change. And not only is it wrong, it’s not in the interest of rich countries to live in this world of chaos, of worse weather events, and so on. So I think we already know enough, we have enough certainty on those questions to act very seriously, to reduce our emissions very quickly, to invest in as much clean technology as we can, and to collaborate collectively around the world to make those changes. And what we’re saying though, is about the different, more unusual question of how it contributes to existential risk more specifically. So I think I would just make that distinction pretty clear. 

 

Simon Beard: So there’s a direct answer to your question and an indirect answer to your question. The direct answer to your question is all the things you know you should be doing. Fly less, preferably not at all; eat less meat, preferably not at all, and preferably no dairy, either. Every time there’s an election, vote, but also ask all the candidates — all the candidates, don’t just go for the ones who you think will give you the answer you like — “I’m thinking of voting for you. What are you going to do about climate change?”

 

There are a lot of people all over the political spectrum who care about climate change. Yeah, there are political splits in who cares more, and so on. But every political candidate has votes that they could pick up if they did more on climate change, irrespective of their political persuasion. And even if you have a political conviction, so that you’re always going to vote the same way, you can nudge candidates to get those votes and to do more on climate change by just asking that simple question: “I’m thinking of voting for you. What are you going to do about climate change?” That’s a really low-cost ask, and it’s effective around elections: If they get 100 letters all saying that, and they’re all personal letters, and not just some mass campaign, it really does change the way that people think about the problems that they face. But I also want to challenge you a bit on this, “This is going to take decades,” because it depends — it depends how we approach it.

 

Ariel Conn: So one example of research that can happen quickly, and action that can occur quickly, is the comparison you draw early on in the work that you’re doing: comparing the need to study climate change as a contributor to existential risk with the work that was done in the ’80s looking at how nuclear weapons can create a nuclear winter, and how that connects to existential risk. And so I was hoping you could also talk a little bit about that comparison.

 

Simon Beard: Yeah, so I think this is really important, and I know a lot of the things that we’re talking about here, about critical global systems and how they interact with each other and so on — it’s long-winded, and it’s technical, and it can sound a bit boring. But this was, for me, a really big inspiration for why we’re trying to look at it in this way. So when people started to explode nuclear weapons in the Manhattan Project in the 1940s, right from the beginning, they were concerned about the kinds of threats, or the kinds of risks, that these posed, and at first thought, well, maybe it would set light to the upper atmosphere. And there were big worries about the radiation. And then, for a time, there were worries just about the explosive capacity.

 

This was enough to raise a kind of general sense of alarm and threat. But none of these were really credible; They didn’t last; They didn’t withstand scientific scrutiny for very long. And then Carl Sagan and some colleagues did this research in the early 1980s on modeling the climate impacts of nuclear weapons, which is not a really intuitive thing to do, right? When you’ve got the most explosive weapon ever envisaged, with all this nuclear fallout and so on, the global climate doesn’t seem like it’s going to be where the problems lie.

 

But they discover when they look at that, that no, it’s a big thing. If you have nuclear strikes on cities, it sends a lot of ash into the upper atmosphere. And it’s very similar to what happens if you have a very large asteroid, or a very large set of volcanoes going off; The kind of changes that you see in the upper atmosphere are very similar, and you get this dramatic global cooling. And this then threatens — as a lot of mass extinctions have — threatens the underlying food source. And that’s how humans starve. And this comes out in 1983, this is kind of 40 years after people started talking about nuclear risk. And it changes the game, because all of a sudden, in looking at this rather unusual topic, they find a really credible way in which nuclear winter leads to everyone dying.

 

The research is still much discussed: what kind of nuclear warheads, what kind of nuclear explosions, how many, whether they would need to hit cities or areas with particularly large sulphur deposits — all of these things are still being discussed. But all of a sudden, the top leaders, the geopolitical leaders, start to take this threat seriously. And we know Reagan was very interested and explored this a lot, the Russians even more so. And it really does seem to have kick-started a lot of nuclear disarmament debate and discussion and real action.

 

And what we’re trying to do in reframing the way that people research climate change as an existential threat is to look for something like that: What’s a credible way in which this really does lead to an existential catastrophe for humanity? Because that hasn’t been done yet. We don’t have that. We feel like we have it because everyone knows the threat and the risk. But really, we’re just at this area of kind of vague speculation. There’s a lot of room for people to step up with this kind of research. And the historical evidence suggests that this can make a real difference.

 

Haydn Belfield: We tend to think of existential risks as one-off threats — some big explosion, or some big thing, like an individual asteroid that hits the dinosaurs and kills them off — as one singular event. But really, that’s not how most mass extinctions happen. That’s not how civilizational collapses have tended to happen over history. The way that all of these things have actually happened, when you go back to look at the archeological evidence or the fossil evidence, is that there’s a whole range of different things — different hazards and different internal capabilities of these systems, whether they’re species or societies — and they get overcome by a range of different things.

 

So, often in archeological history — in the Pueblo Southwest, for example — there’ll be one set of climatic conditions, and one external shock that faces the community, and they react fine to it. But then, in different years, the same community is faced with similar threats, reacts completely differently, and collapses completely. It’s not that there are these singular, overwhelming events from outside; It’s that you have to look at all the different systems that this one particular society relies on. And you have to look at when all of those things overcome the overall resilience of the system.

 

Or looking at species: sometimes a species can recover from an external shock, and sometimes there are just too many things, and the conditions aren’t right, and they get overcome and go extinct. That’s where looking at existential risk, and the study of how we might collapse or how we might go extinct — that’s where the field needs to go: It needs to look at what all the different hazards we face are, how they interact with the vulnerabilities that we have and the internal dynamics of the systems that we rely on, what the resilience of those systems is, and how we are exposed to those hazards in different ways, and it needs a much more sophisticated, complicated, messy look at how they all interact. I think that’s the way that existential risk research needs to go.

 

Simon Beard: I agree. I think that fits in with various things we said earlier.

 

Ariel Conn: So then my final question for both of you is — I mean, you’re not even just looking at climate change as an existential threat; I know you look at lots of things and how they contribute to existential threats — but looking at climate change, what gives you hope?

 

Simon Beard: At a psychological level, hope and fear aren’t actually big day-to-day parts of my life. Because working in existential risk, you have this amazing privilege that you’re doing something, you’re working to make that difference between human extinction and civilization collapse and human survival and flourishing. It’s a waste to have that opportunity and to get too emotional about it. It’s a waste firstly because it is the most fascinating problem. It is intellectually stimulating; It is diverse; It allows you to engage with and talk to the best people, both in terms of intelligence and creativity, but also in terms of drive and passion, and activism and ability to get things done.

 

But also because it’s a necessary task: We have to get on with it, we have to do this. So I don’t know if I have hope. But that doesn’t mean that I’m scared or anxious, I just have a strong sense of what I have to do. I have to do what I can to contribute, to make a difference, to maximize my impact. That’s a series of problems and we have to solve those problems. If there’s one overriding emotion that I have in relation to my work, and what I do, and what gets me out of bed, it’s curiosity — which is, I think, at the end of the day, one of the most motivating emotions that exists. People often say to me, “What’s the thing I should be most worried about: nuclear war, or artificial intelligence or climate change? Like, tell me, what should I be most worried about?” You shouldn’t worry about any of those things. Because worry is a very disabling emotion.

 

People who worry stay in bed. I haven’t got time to do that. I had heart surgery about 18 months ago, a big heart bypass operation. And they warned me before that, after this surgery, you’re going to feel emotional; It happens to everyone. It’s basically a near-death experience: You have to be cooled down to a state from which you can’t recover on your own; They have to warm you back up. Your body kind of remembers these things. And I do remember, a couple of nights after getting home from that, I just burst into floods of tears thinking about this kind of existential collapse, and, you know, what it would mean for my kids and how we’d survive it, and it was completely overwhelming. As overwhelming as you’d expect it to be for someone who has to think about that.

 

But this isn’t how we engage with it. This isn’t science fiction stories that we’re telling ourselves to feel scared or feel a rush. This is a real problem. And we’re here to solve that problem. I’ve been very moved the last month or so by all the stuff about the Apollo landing missions. And it’s reminded me, sort of a big inspiration of my life, one of these bizarre inspirations of my life, was getting Microsoft Encarta 95, which was kind of my first all-purpose knowledge source. And when you loaded it up — because it was the first one on CD ROM — they had these sound clips and they included that bit of JFK’s speech about we choose to go to the moon, not because it’s easy, but because it’s hard. And that has been a really inspiring quote for me. And I think I’ve often chosen to do things because they’re hard. 

 

And it’s been kind of upsetting — this is the first time this kind of moon landing anniversary has come up — and I realized, no, he was being completely literal. The reason he chose to go to the moon was that it was so hard the Russians couldn’t do it, so they were confident that they were going to win the race. And that was all that mattered. But for me, I think in this case, we’re choosing to do this research and to do this work, not because it’s hard, but because it’s easy. Because understanding climate change, being curious about it, working out new ways to adapt, and to mitigate, and to manage the risk, is so much easier than living with the negative consequences of it. This is the best deal on the table at the moment. This is the way that we maximize the benefit while minimizing the cost.

 

This is not the great big structural change that completely messes up our entire society and reduces us to some kind of green primitivism. That’s what happens if climate change kicks in. That’s when we start to see people reduced to subsistence-level agriculture, or whatever it is. Understanding the risk and responding to it: this is the way that we keep all the good things that our civilization has given us. This is the way that we keep international travel, that we keep our technology, that we keep our food, and that we keep getting nice things from all around the world.

 

And yes, it does require some sacrifices. But these are really small changes in the scale of things. And once we start to make them, we will find ways of working around them. We are very creative, we are very adaptable, and we can adapt to the changes that we need to make to mitigate climate change. And we’ll be good at that. And I just wish that anyone listening to this podcast had that mindset: didn’t think about fear or about blame, or shame or anger — that they thought about curiosity, and they thought about what can I do, and how good this is going to be, how bright and open our future is, and how much we can achieve as a species.

 

If we can just get over these hurdles, these mistakes that we made years ago for various reasons — often a small number of people, in the end, is what determined that we have petrol cars rather than battery cars — we can undo them; It’s in our power, it’s in our gift. We are the species that can determine our own fate; We get to choose. And that’s why we’re doing this research. And I think if lots of people — especially lots of people who are well educated, maybe scientists, maybe people who are thinking about a career in science — view this problem in that light, as what can I do? What’s the difference I can make? We’re powerful. It’s a much less difficult problem to solve, and a much better ultimate payoff that we’ll get, than if we try to solve this any other way, especially if we don’t do anything.

 

Ariel Conn: That was wonderful.

 

Simon Beard: Yeah, I’m ready to storm the barricade.

 

Ariel Conn: All right, Haydn, try to top that.

 

Haydn Belfield: No way. That’s great. I think Simon said all that needs to be said on that.

 

Ariel Conn: All right. Well, thank you both for joining us today.

 

Simon Beard: Thank you. It’s been a pleasure.

 

Haydn Belfield: Yeah, absolute pleasure.

FLI Podcast: Is Nuclear Weapons Testing Back on the Horizon? With Jeffrey Lewis and Alex Bell

Nuclear weapons testing is mostly a thing of the past: The last nuclear weapon test explosion on US soil was conducted over 25 years ago. But how much longer can nuclear weapons testing remain a taboo that almost no country will violate? 

In an official statement from the end of May, the Director of the U.S. Defense Intelligence Agency (DIA) expressed the belief that both Russia and China were preparing for explosive tests of low-yield nuclear weapons, if not already testing. Such accusations could potentially be used by the U.S. to justify a breach of the Comprehensive Nuclear-Test-Ban Treaty (CTBT).

The CTBT prohibits all signatories from testing nuclear weapons of any size (North Korea, India, and Pakistan are not signatories). But the CTBT never actually entered into force, in large part because the U.S. has still not ratified it, though Russia did.

The existence of the treaty, even without ratification, has been sufficient to establish the norms and taboos necessary to ensure an international moratorium on nuclear weapons tests for a couple decades. But will that last? Or will the U.S., Russia, or China start testing nuclear weapons again? 

This month, Ariel was joined by Jeffrey Lewis, Director of the East Asia Nonproliferation Program at the Center for Nonproliferation Studies and founder of armscontrolwonk.com, and Alex Bell, Senior Policy Director at the Center for Arms Control and Non-Proliferation. Lewis and Bell discuss the DIA’s allegations, the history of the CTBT, why it’s in the U.S. interest to ratify the treaty, and more.

Topics discussed in this episode: 

  • The validity of the U.S. allegations – Is Russia really testing weapons?
  • The International Monitoring System — How effective is it if the treaty isn’t in effect?
  • The modernization of U.S/Russian/Chinese nuclear arsenals and what that means
  • Why there’s a push for nuclear testing
  • Why opposing nuclear testing can help ensure the US maintains nuclear superiority 

References discussed in this episode: 

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel Conn: Welcome to another episode of the FLI Podcast. I’m your host Ariel Conn, and the big question I want to delve into this month is: will the U.S. or Russia or China start testing nuclear weapons again? Now, at the end of May, the Director of the U.S. Defense Intelligence Agency, the DIA, gave a statement about Russian and Chinese nuclear modernization trends. I want to start by reading a couple short sections of his speech.

About Russia, he said, “The United States believes that Russia probably is not adhering to its nuclear testing moratorium in a manner consistent with the zero-yield standard. Our understanding of nuclear weapon development leads us to believe Russia’s testing activities would help it to improve its nuclear weapons capabilities.”

And then later in the statement that he gave, he said, “U.S. government information indicates that China is possibly preparing to operate its test site year-round, a development that speaks directly to China’s growing goals for its nuclear forces. Further, China continues to use explosive containment chambers at its nuclear test site and Chinese leaders previously joined Russia in watering down language in a P5 statement that would have affirmed a uniform understanding of zero-yield testing. The combination of these facts and China’s lack of transparency on their nuclear testing activities raises questions as to whether China could achieve such progress without activities inconsistent with the Comprehensive Nuclear-Test-Ban Treaty.”

Now, we’ve already seen this year that the Intermediate-Range Nuclear Forces Treaty, the INF, has started to falter. The U.S. seems to be trying to pull itself out of the treaty and now we have reason possibly to be a little worried about the Comprehensive Test-Ban Treaty. So to discuss what the future may hold for this test ban treaty, I am delighted to be joined today by Jeffrey Lewis and Alex Bell.

Jeffrey is the Director of the East Asia Nonproliferation Program at the Center for Nonproliferation Studies at the Middlebury Institute. Before coming to CNS, he was the Director of the Nuclear Strategy and Nonproliferation Initiative at the New America Foundation and prior to that, he worked with the ADAM Project at the Belfer Center for Science and International Affairs, the Association of Professional Schools of International Affairs, the Center for Strategic and International Studies, and he was once a Desk Officer in the Office of the Under Secretary of Defense for Policy. But he’s probably a little bit more famous as being the founder of armscontrolwonk.com, which is the leading blog and podcast on disarmament, arms control, and nonproliferation.

Alex Bell is the Senior Policy Director at the Center for Arms Control and Non-Proliferation. Previously, she served as a Senior Advisor in the Office of the Under Secretary of State for Arms Control and International Security. Before joining the Department of State in 2010, she worked on nuclear policy issues at the Ploughshares Fund and the Center for American Progress. Alex is on the board of the British American Security Information Council and she was also a Peace Corps volunteer. And she is fairly certain that she is Tuxedo, North Carolina’s only nuclear policy expert.

So, Alex and Jeffrey, thank you so much for joining me today.

Jeffrey Lewis: It’s great to be here.

Ariel Conn: Let’s dive right into questions. I was hoping one of you or maybe both of you could just sort of give a really quick overview or a super brief history of the Comprehensive Nuclear-Test-Ban Treaty –– especially who has signed and ratified, and who hasn’t signed and/or ratified with regard to the U.S., Russia, and China.

Jeffrey Lewis: So, there were a number of treaties during the Cold War that restricted nuclear explosions, so you had to do them underground. But in the 1990s, the Clinton administration helped negotiate a global ban on all nuclear explosions. So that’s what the Comprehensive Nuclear-Test-Ban Treaty is. The comprehensive part is, you can’t do any explosions of any yield.

And a curious feature of this agreement is that for the treaty to come into force, certain countries must sign and ratify the treaty. One of those countries was Russia, which has both signed and ratified it. Another country was the United States. We have signed it, but the Senate did not ratify it in 1999, and I think we’re still waiting. China has signed it and basically indicated that they’ll ratify it only when the United States does. India has not signed and not ratified, and North Korea and Iran –– not signed and not ratified.

So it’s been 23 years. There’s a Comprehensive Test-Ban Treaty Organization, which is responsible for getting things ready to go when the treaty is ready; I’m actually here in Vienna at a conference that they’re putting on. But 23 years later, the treaty is still not in force even though we haven’t had any nuclear explosions in the United States or Russia since the end of the Cold War.

Ariel Conn: Yeah. So my understanding is that even though we haven’t actually ratified this and it’s not in force, most countries, with maybe one or two exceptions, do actually abide by it. Is that true?

Alex Bell: Absolutely. There are 184 member states to the treaty, 168 total ratifications, and the only country to conduct explosive tests in the 21st century is North Korea. So while it is not yet in force, the moratorium against explosive testing is incredibly strong.

Ariel Conn: And do you remain hopeful that that’s going to stay the case, or do comments from people like Lieutenant General Ashley have you concerned?

Alex Bell: It’s a little concerning that the nature of these accusations that came from Lieutenant General Ashley didn’t seem to follow the pattern of how the U.S. government has historically talked about compliance issues that it has seen with various treaties and obligations. We have yet to hear a formal statement from the Department of State, which actually has the responsibility to manage compliance issues, nor have we heard from the main part of the Intelligence Community, the Office of the Director of National Intelligence. It’s a bit strange, and it has had people thinking: what was the purpose of this accusation, if not to sort of move us away from the test ban?

Jeffrey Lewis: I would add that during the debate inside the Trump administration, when they were writing what was called the Nuclear Posture Review, there was a push by some people for the United States to start conducting nuclear explosions again, something that it had not done since the early 1990s. So on the one hand, it’s easy to see this as a kind of straightforward intelligence matter: Are the Russians doing it or are they not?

But on the other hand, there has always been a group of people in the United States who are upset about the test moratorium, and don’t want to see the test ban ratified, and would like the United States to resume nuclear testing. And those people have, since the 1990s, always pointed at the Russians, claiming that they must be doing secret tests and so we should start our own.

And the kind of beautiful irony of this is that when you read articles from Russians who want to start testing –– because, you know, their labs are like ours, they want to do nuclear explosions –– they say, “The Americans are surely getting ready to cheat. So we should go ahead and get ready to go.” So you have these people pointing fingers at one another, but I think the reality is that there are too many people in the United States and Russia who’d be happy to go back to a world in which there was a lot of nuclear testing.

Ariel Conn: And so do we have reason to believe that the Russians might be testing low-yield nuclear weapons or does that still seem to be entirely speculative?

Alex Bell: I’ll let Jeffrey go into some of the historical concerns people have had about the Russian program, but I think it’s important to note that the Russians immediately denied these accusations, with the Foreign Minister, Lavrov, actually describing them as delusional and the Deputy Foreign Minister, Sergei Ryabkov, affirming that they’re in full and absolute compliance with the treaty and the unilateral moratorium on nuclear testing that is also in place until the treaty enters into force. He also penned an op-ed a number of years ago affirming that the Russians believed that any yield on any test would violate the agreement.

Jeffrey Lewis: Yeah, you know, really from the day the test ban was signed, there have been a group of people in the United States who have argued that the U.S. and Russia have different definitions of zero –– which I don’t find very credible, but it’s a thing people say –– and that the Russians are using this to conduct very small nuclear explosions. This literally was a debate that tore the U.S. Intelligence Community apart during the Clinton administration and these fears led to a really embarrassing moment.

There was a seismic event, some ground motion, some shaking near the Russian nuclear test site in 1997, and the Intelligence Community decided, “Aha, this is it. This is a nuclear test. We’ve caught the Russians,” and Madeleine Albright démarched Moscow for conducting a clandestine nuclear test in violation of the CTBT, which it had just signed, and it turned out it was an earthquake out in the ocean.

So there has been a group of people who have been making this claim for more than 20 years. I have never seen any evidence that would persuade me that this is anything other than something they say because they just don’t trust the Russians. I suppose it is possible –– even a stopped watch is right twice a day. But I think before we take any actions, it would behoove us to figure out if there are any facts behind this. Because when you’ve heard the same story for 20 years with no evidence, it’s like the boy who cried wolf. It’s kind of hard to believe.

Alex Bell: And that gets back to the sort of strange way that this accusation was framed: not by the Department of State; It’s not clear that Congress has been briefed about it; It’s not clear our allies were briefed about it before Lieutenant General Ashley made these comments. Everything’s been done in a rather unorthodox way and for something as serious as a potential low-yield nuclear test, this really needs to be done according to form.

Jeffrey Lewis: It’s not typical if you’re going to make an accusation that the country is cheating on an arms control treaty to drive a clown car up and then have 15 clowns come out and honk some horns. It makes it harder to accept whatever underlying evidence there may be if you choose to do it in this kind of ridiculous fashion.

Alex Bell: And that would be true for any administration, but particularly an administration that has now made a habit of getting out of agreements.

Jeffrey Lewis: What I loved about the statement that the Defense Intelligence Agency released –– so after the DIA director made this statement, and it’s really worth watching because he reads the statement, which is super inflammatory and there was a reporter in the audience who had been given his remarks in advance. So someone clearly leaked the testimony to make sure there was a reporter there and the reporter asks a question, and then Ashley kind of freaks out and walks back what he said.

So DIA then releases a statement where they double down and say, “No, no, no, he really meant it,” but it starts with the craziest sentence I’ve ever seen, which is “The United States government, including the Intelligence Community, assesses,” which, if you know anything about the way the U.S. government works, is insane, because only the Intelligence Community is supposed to assess. This implies that John Bolton had an assessment, and Mike Pompeo had an assessment, and just the comical manner in which it was handled makes it very hard to take seriously or to see it as anything other than a nakedly partisan assault on the test moratorium and the test ban.

Ariel Conn: So I want to follow up about what the implications are for the test ban, but I want to go back real quick just to some of the technical side of identifying a low-yield explosion. I actually have a background in seismology, so I know that it’s not that big of a challenge for people who study seismic waves to recognize the difference between an earthquake and a blast. And so I’m wondering how small a low yield test actually is. Is it harder to identify, or are there just not seismic stations that the U.S. has access to, or is there something else involved?

Jeffrey Lewis: Well, so these are called hydronuclear experiments. They are so incredibly small: in the U.S., they are on the order of something like four pounds of explosive, so basically less explosive power than the conventional explosives that are used to detonate the nuclear weapon itself. Some people think the Russians have a slightly bigger definition that might go up to 100 kilograms, but these are mouse farts. They are so small that unless you have a seismic station sitting right next to it, you would never know.

In a way, I think that’s a perfect example of why we’re so skeptical because when the test ban was negotiated, there was this giant international monitoring system put into place. It is not just seismic stations, but it is hydroacoustic stations to listen underwater, infrasound stations to listen for explosions in the air, radionuclide stations to detect any radioactive particles that happen to escape in the event of a test. It’s all of this stuff and it is incredibly sensitive and can detect incredibly small explosions down to about 1,000 tons of explosive and in many cases even less.

And so what’s happened with the allegations against the Russians is that every time we have better monitoring, and it’s clear that they’re not doing the bigger things, the allegations become that they’re doing ever smaller things. So, again, the way in which it was rolled out was kind of comical and caused us, at least me, to have some doubts about it. It is also the case that the nature of the allegation –– that it’s these tiny, tiny, tiny, tiny experiments, which U.S. scientists, by the way, have said they don’t have any interest in doing because they don’t think they are useful –– makes it almost the perfect accusation, and so that also, to me, is a little bit suspicious in terms of the motives of the people claiming this is happening.

Alex Bell: I think it’s also important to remember when dealing with verification of treaties, we’re looking for things that would be militarily significant. That’s how we try to build the verification system: that if anybody tried to do anything militarily significant, we’d be able to detect that in enough time to respond effectively and make sure the other side doesn’t gain anything from the violation.

So you could say that experiments like this that our own scientists don’t think are useful are not actually militarily significant, so why are we bringing it up? Do we think that this is a challenge to the treaty overall or do we not like the nature of Russia’s violations? And further, if we’re concerned about it, we should be talking to the Russians instead of about them.

Jeffrey Lewis: I think that is actually the most important point that Alex just made. If you actually think that the Russians have a different definition of zero, then go talk to them and get the same definition. If you think that the Russians are conducting these tests, then talk to the Russians and see if you can get access. If the United States were to ratify the test ban and the treaty were to come into force, there is a provision for the U.S. to ask for an inspection. It’s just a little bit rich to me that the people making this allegation are also the people who refuse to do anything about it diplomatically. If they were truly worried, they’d try to fix the problem.

Ariel Conn: Regarding the fact that the Test-Ban Treaty isn’t technically in force, are a lot of the verification processes still essentially in force anyway?

Alex Bell: The International Monitoring System, as Jeff pointed out, was just sort of in its infancy when the treaty was negotiated, and now it’s become this marvel of modern technology capable of detecting tests at even very low yields. And so it is up and running and functioning. It was monitoring the various North Korean nuclear tests that have taken place in this century. It also was doing a lot of additional science, like tracking radioactive particulates that came from the Fukushima disaster back in 2011.

So it is functioning. It is giving readings to any party to the treaty, and it is particularly useful right now to have an independent international source of information of this kind. They specifically did put out a very brief statement following this accusation from the Defense Intelligence Agency saying that they had detected nothing that would indicate a test. So that’s about as far as I think they could go: the diplomatic equivalent of, “What are you talking about?”

Jeffrey Lewis: I Googled it because I don’t remember it off the top of my head, but it’s 321 monitoring stations and 16 laboratories. So the entire monitoring system has been built out and it works far better than anybody thought it would. It’s just that once the treaty comes into force, there will be an additional provision, which is: in the event that the International Monitoring System, or a state party, has any reason to think that there is a violation, that country can request an inspection. And the CTBTO trains to send people to do onsite inspections in the event of something like this. So there is a mechanism to deal with this problem. It’s just that you have to ratify the treaty.

Ariel Conn: So what are the political implications, I guess, of the fact that the U.S. has not ratified this, but Russia has –– and that it’s been, I think you said 23 years? It sounds like the U.S. is frustrated with Russia, but is there a point at which Russia gets frustrated with the U.S.?

Jeffrey Lewis: I’m a little worried about that, yeah. The reality of the situation is I’m not sure that the United States can continue to reap the benefits of this monitoring system and the benefits of what I think Alex rightly described as a global norm against nuclear testing and sort of expect everybody else to restrain themselves while in the United States we refuse to ratify the treaty and talk about resuming nuclear testing.

And so I don’t think it’s a near term risk that the Russians are going to resume testing, but we have seen… We do a lot of work with satellite images at the Middlebury Institute and the U.S. has undertaken a pretty big campaign to keep its nuclear test site modern and ready to conduct a nuclear test on as little as six months’ notice. In the past few years, we’ve seen the Russians do the same thing.

For many years, they neglected their test site. It was in really poor shape and starting in about 2015, they started putting money into it in order to improve its readiness. So it’s very hard for us to say, “Do as we say, not as we do.”

Alex Bell: Yeah, I think it’s also important to realize that if the United States resumes testing, everyone will resume testing. The guardrails will be completely off, and that doesn’t make any sense, because with the most technologically advanced and capable nuclear weapons infrastructure, like we have, we benefit from a global ban on explosive testing. It means we’re sort of locking in our own superiority.

Ariel Conn: So we’re putting that at risk. So I want to expand the conversation from just Russia and the U.S. to pull China in as well because the talk that Ashley gave was also about China’s modernization efforts. And he made some comments that sounded almost like maybe China is considering testing as well. I was sort of curious what your take on his China comments are.

Jeffrey Lewis: I’m going to jump in and be aggressive on this one because my doctoral dissertation was on the history of China’s nuclear weapons program. The class I teach at the Middlebury Institute is one in which we look at declassified U.S. intelligence assessments and then we look at Chinese historical materials in order to see how wrong the intelligence assessments were. This specifically covers U.S. assessments of China’s nuclear testing, and the U.S. just has an awful track record on this topic.

I actually interviewed the former head of China’s nuclear weapons program once, and I was talking to him about this because I was showing him some declassified assessments and I was sort of asking him about, you know, “Had you done this or had you done that?” He sort of kind of took it all in and he just kind of laughed, and he said, “I think many of your assessments were not very accurate.” There was sort of a twinkle in his eye as he said it because I think he was just sort of like, “We wrote a book about it, we told you what we did.”

Anything is possible, and the point of these allegations is that the events are so small that they are impossible to disprove, but to me, that’s looking at it backwards. If you’re going to cause a major international crisis, you need to come to the table with some evidence, and I just don’t see it.

Alex Bell: The GEM, the Group of Eminent Members, which is an advisory group to the CTBTO, put it best when they said the most effective way to sort of deal with this problem is to get the treaty into force. So we could have intrusive short notice onsite inspections to detect and deter any possible violations.

Jeffrey Lewis: I actually got in trouble, I got hushed, because I was talking to a member while they were trying to work on this statement and they needed the member to come back in.

Ariel Conn: So I guess when you look at stuff like this –– so, basically, all three countries are currently modernizing their nuclear arsenals. Maybe we should just spend a couple minutes talking about that too. What does it mean for each country to be modernizing their arsenal? What does that sort of very briefly look like?

Alex Bell: Nuclear weapons delivery systems and nuclear weapons themselves do age. You do have to maintain them, like you would any weapon system, but fortunately, from the U.S. perspective, we have exceedingly capable scientists who are able to extend the life of these systems without testing. Jeffrey, if you want to go into what other countries are doing.

Jeffrey Lewis: Yeah. I think the simplest thing to do is to talk about the nuclear warheads part. As Alex mentioned, all of the countries are building new submarines, and missiles, and bombers that can deliver these nuclear weapons. And that’s a giant enterprise; It costs many billions of dollars every year. But when you actually look at the warheads themselves, I can tell you what we do in the United States: In some cases, we build new versions of existing designs. In almost all cases, we replace components as they age.

So the warhead design might stay the same, but piece by piece things get replaced. And because we’ve been replacing those pieces over time, if they have to put a new fuse in a nuclear warhead, they don’t go back and build the ’70s-era fuse; They build a new fuse. So even though we say that we’re only replacing the existing components and we don’t try to add new capabilities, in fact, we add new capabilities all the time, because as all of these components get better, the weapons themselves get better, and we’re altering the characteristics of the warheads.

So the United States has a warhead on its submarine-launched ballistic missiles, and the Trump administration just undertook a program to give it a capability so that we can turn down the yield. So if we want to make it go off with a very small explosion, they can do that. That gives you a flavor of the kinds of changes that are being made, and I think we’re seeing that in Russia and China too.

They are doing all of the same things to preserve the existing weapons they have. They rebuild designs that they have, and I think that they tinker with those designs. And that is constrained somewhat by the fact that there is no explosive testing –– that makes it harder to do those things, which is precisely why we wanted this ban in the first place –– but everybody is playing with their nuclear weapons.

And even though there’s a testing moratorium, some of the scientists who do this, because they want to go back to nuclear testing and nuclear explosions, say, “If we could only test with explosions, that would be better.” So there’s even more they want to do, but let’s not act like they don’t get to touch the bombs, because they play with them all the time.

Alex Bell: Yeah. It’s interesting you brought up the low-yield option for our submarine-launched ballistic missiles, because the House of Representatives, in the defense appropriations and authorization process that it’s going through right now, actually blocked further funding and the deployment of this particular type of warhead because, in their opinion, the President already has plenty of low-yield nuclear options, thank you very much. He doesn’t need any more.

Jeffrey Lewis: Of course, I don’t think this president needs any nuclear options, but-

Alex Bell: But it just shows there’s definitely a political and oversight feature that comes into this modernization debate. Even if the forces that Jeffrey talked about, who’ve always wanted to return to testing, could prevail upon a particular administration to go in that direction, it’s unlikely Congress would be as sanguine about it.

Nevada, where our former nuclear testing site is, now the Nevada National Security Site –– it’s not clear that Nevadans are going to be okay with a return to explosive nuclear testing, nor the people of Utah, who sit downwind from that particular site. So there’s actually a “not in my backyard” kind of feature to the debate about further testing.

Jeffrey Lewis: Yeah. The Department of Energy has actually taken… Anytime they do a conventional explosion at the Nevada site, they keep it a secret, because they were going to do a conventional explosion 10 or 15 years ago, and people got wind of it and were outraged: They were terrified the conventional explosion would kick up a bunch of dust and that there might still be radioactive particulates.

I’m not sure that that was an accurate worry, but I think it speaks to the lack of trust that people around the test site have, given some of the irresponsible things that the U.S. nuclear weapons complex has done over the years. That’s a whole other podcast, but you don’t want to live next to anything that NNSA oversees.

Alex Bell: There’s also a proximity issue. Las Vegas is incredibly close to that facility. Back in the day when they did underground testing there, it used to shake the buildings on the Strip. And Las Vegas has only expanded over the past 20, 30 years, so you’re going to have a lot of people that would be very worried.

Ariel Conn: Yeah. So that’s actually a question that I had. I mean, we have a better idea today of what the impacts of nuclear testing are. Would Americans approve of nuclear weapons being tested on our ground?

Jeffrey Lewis: Probably if they didn’t have to live next to them.

Alex Bell: Yeah. I’ve been to some of the states other than Nevada where we conducted tests. So Colorado, where we tried out this brilliant idea of fracking via nuclear explosion. You can see the problems inherent in that idea. Alaska. New Mexico, obviously, where the first nuclear test happened. We also tested weapons in Mississippi. So all of these states have been affected in various ways, and radioactive particulates from the sites in Nevada have drifted as far away as Maine, and scientists have been able to trace cancer clusters half a continent away.

Jeffrey Lewis: Yeah, I would add that –– Alex mentioned testing in Alaska –– so there was a giant test in 1971 in Alaska called Cannikin. It was five megatons; A megaton is 1,000 kilotons, and Hiroshima was 20 kilotons. It really made some Canadians angry, and the consequence of the angry Canadians was that they founded Greenpeace. So the whole iconic image of Greenpeace on a boat was originally driven by a desire to stop U.S. nuclear testing in Alaska. So, you know, people get worked up.

Ariel Conn: Do you think someone in the U.S. is actively trying to bring testing back? Do you think that we’re going to see more of this or do you think this might just go away?

Jeffrey Lewis: Oh yeah. There was a huge debate at the beginning of the Trump administration. I actually wrote this article making fun of Rick Perry, the Secretary of Energy, who I have to admit has turned out to be a perfectly normal cabinet secretary in an administration that looks like the Star Wars Cantina.

Alex Bell: It’s a low bar.

Jeffrey Lewis: It’s a low bar, and maybe just barely, but Rick got over it. But I was sort of mocking him, and the article was headlined, “Even Rick Perry isn’t dumb enough to resume nuclear testing,” and I got notes, people saying, “This is not funny. This is a serious possibility.” So, yeah, I think there has long been a group of people who did not want to end testing. The U.S. labs refused to prepare for the end of testing; So when the U.S. stopped, it was Congress just telling them to stop. They have always wanted to go back to testing, and these are the same people who are accusing the Russians of doing things, I think as much so that they can get out of the test ban as anything else.

Alex Bell: Yeah, I would agree with that assessment. Those people have always been here. It’s strange to me, because most scientists have affirmed that we know more about our nuclear weapons now, without blowing them up, than we did before, because of the advanced computer modeling and technological advances of the Stockpile Stewardship Program, which is the program that extends the life of these warheads. They get to do a lot of great science, and they’ve learned a lot of things about our nuclear forces that we didn’t know before.

So it’s hard to make a case that it is absolutely necessary, or would ever be absolutely necessary, to return to testing. You would have to totally throw out the obligations that we have under things like the Nuclear Non-Proliferation Treaty, which include pursuing the cessation of the arms race in good faith, and a return to testing, I think, would not be very good faith.

Ariel Conn: Maybe we’ve sort of touched on this, but I guess it’s still not clear to me. Why would we want to return to testing? Especially if, like you said, the models are so good?

Jeffrey Lewis: I think you have to approach that question like an anthropologist. Because some countries are quite happy living under a test ban, for exactly the reason that you pointed out: they are getting all kinds of money to do all kinds of interesting science. And so the Chinese seem pretty happy about it; The UK, actually –– I’ve met some UK scientists who are totally satisfied with it.

But I think the culture in the U.S. laboratories –– which had really nothing to do with the reliability of the weapons and everything to do with the culture of the lab –– was that the day a young designer became a man or a woman was the day that person’s design went out into the desert and they had to stand there, terrified it wasn’t going to work, and then feel the big rumble. So I think there are different ways of doing science. I think the labs in the United States were and are sentimentally attached to solving these problems with explosions.

Alex Bell: There’s also sort of a strange desire to see them. My first trip out to the test site, I was the only woman on the trip, and we were looking at the Sedan Crater, which is just this enormous crater from an explosion underground that was much bigger than we thought it was going to be. It’s, I think, seven football fields across, and to me it was just sort of horrifying, and I looked at it with dread. And a lot of the people who were on the trip reacted entirely differently, with, “I thought it would be bigger,” and, “Wouldn’t it be awesome to see one of these go off, just once?” and had a much different take on what these tests were for and what they sort of indicated.

Ariel Conn: So we can actually test nuclear weapons without exploding them. Can you talk about what the difference is between testing and explosions, and what that means?

Jeffrey Lewis: The way a nuclear weapon works is you have a sphere of fissile material –– so that’s plutonium or highly enriched uranium –– and that’s surrounded by conventional explosives. And around that, there are detonators and electronics to make sure that the explosives all detonate at the exact same moment so that they spherically compress or implode the plutonium or highly enriched uranium. So when it gets squeezed down, it makes a big bang, and then if it’s a thermonuclear weapon, then there’s something called a secondary, which complicates it.

But you can do that –– you can test all of those components, just as long as you don’t have enough plutonium or highly enriched uranium in the middle to cause a nuclear explosion. So you can fill it with just regular uranium, which won’t go critical, and test the whole setup that way. For all of the things in a nuclear weapon that would make it a thermonuclear weapon, there’s a variety of different fusion research techniques you can use to test those kinds of reactions.

So you can really simulate everything, and you can do as many computer simulations as you want, it’s just that you can’t put it all together and get the big bang. And so the U.S. has built this giant facility at Livermore called NIF, the National Ignition Facility, which is a many billion-dollar piece of equipment, in order to sort of simulate some of the fusion aspects of a nuclear weapon. It’s an incredible piece of equipment that has taught U.S. scientists far more than they ever knew about these processes when they were actually exploding things. It’s far better for them, and they can do that. It’s completely legal.

Alex Bell: Yeah, the most powerful computer in the world belongs to Los Alamos. Its job is to help simulate these nuclear explosions and process data related to the nuclear stockpile.

Jeffrey Lewis: Yeah, I got a kick –– I always check in on that list, and it’s almost invariably one of the U.S. nuclear laboratories that has the top computer. And then one time I noticed that the Chinese had jumped up there for a minute and it was their laboratory.

Alex Bell: Yup, it trades back and forth.

Jeffrey Lewis: Good times.

Alex Bell: A lot of the data that goes into this is observational information and technical readings that we got from when we did explosive testing. And our testing record is far more extensive than any other country, which is one of the reasons why we have sort of this advantage that would be locked in, in the event of a CTBT entering into force.

Ariel Conn: Yeah, I thought that was actually a really interesting point. I don’t know if there’s more to elaborate on it, but the idea that the U.S. could actually sacrifice some of its nuclear superiority by ––

Alex Bell: Returning to testing?

Ariel Conn: Yeah.

Alex Bell: Yeah, because if we go, everyone goes.

Ariel Conn: There were countries that still weren’t thrilled even with the testing that is allowed. Can you elaborate on that a little bit?

Alex Bell: Yes. A lot of countries –– particularly the countries that back the Treaty on the Prohibition of Nuclear Weapons, which is a new treaty that does not have any nuclear weapon states as a part of it but is a total ban on the possession and use of nuclear weapons –– are particularly frustrated with what they see as the slow pace of disarmament by the nuclear weapon states.

The Nonproliferation Treaty, which is sort of the glue that holds all this together, was indefinitely extended back in 1995. The price for that from the non-nuclear weapon states was the commitment of the nuclear weapon states to sign and ratify a comprehensive test ban. So, almost 25 years later, they’re still waiting.

Ariel Conn: I will add that, as of this week I believe, three U.S. states –– California, New Jersey and Oregon –– have passed resolutions supporting the U.S. joining the treaty that actually bans nuclear weapons, that recent one.

Alex Bell: Yeah. It’s been interesting. Jeffrey might have some thoughts on this too, but to me, principles aside, the verification measures in the Treaty on the Prohibition of Nuclear Weapons make it sort of an unviable treaty. But from a messaging perspective, you’re seeing, for the first time since the Cold War, citizenry around the world saying, “You have to get rid of these weapons. They’re no longer acceptable. They’ve become liabilities, not assets.”

So while I don’t think the treaty itself is a workable treaty for the United States, I think that the sentiment behind it is useful in persuading leaders that we do need to do more on disarmament.

Jeffrey Lewis: I would just say that I think, just like we saw earlier, there’s a lot of the U.S. wanting to have its cake and eat it too. The Nonproliferation Treaty, which is the big treaty that says, “Countries should not be able to acquire nuclear weapons,” also commits the United States and the other nuclear powers to work toward disarmament. That’s not something they take seriously.

Just like with nuclear testing where you see this, “Oh, well, maybe we could edge back and do it,” you see the same thing just on disarmament issues generally. So having people out there who are insisting on holding the most powerful countries to account to make sure that they do their share, I also think is really important.

Ariel Conn: All right. So I actually think that’s sort of a nice note to end on. Is there anything else that you think is important that we didn’t get into or that just generally is important for people to know?

Alex Bell: I would just reiterate the point that if the U.S. government is truly concerned that Russia is conducting tests at even very low yields, then we need to be engaged in a conversation with them. A global ban on nuclear explosive testing is good for every country in this world, and we shouldn’t be doing things to derail the pursuit of such a treaty.

Ariel Conn: Agreed. All right, well, thank you both so much for joining today.

As always, if you’ve been enjoying the podcast, please take a moment to like it, share it, and maybe even leave a good review and I will be back again next month with another episode of the FLI Podcast.

FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi

As we grapple with questions about AI safety and ethics, we’re implicitly asking something else: what type of a future do we want, and how can AI help us get there?

In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let’s build our technology around our visions for the future.

Topics discussed in this episode include:

  • Hopes for the future of AI
  • AI-human collaboration
  • AI’s influence on art and creativity
  • The UN AI for Good Summit
  • Gaps in AI safety
  • Preparing AI for uncertainty
  • Holding AI accountable

Publications and resources discussed in this episode include:

Ariel: Hello and welcome to another episode of the FLI podcast. I’m your host Ariel Conn, and today we’ll be looking at how to address safety and ethical issues surrounding artificial intelligence, and how we can implement safe and ethical AIs both now and into the future. Joining us this month are Ashley Llorens and Francesca Rossi who will talk about what they’re seeing in academia, industry, and the military in terms of how AI safety is already being applied and where the gaps are that still need to be addressed.

Ashley is the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, where he directs research and development in machine learning, robotics, autonomous systems, and neuroscience, all towards addressing national and global challenges. He has served on the Defense Science Board, the Naval Studies Board of the National Academy of Sciences, and the Center for a New American Security’s AI task force. He is also a voting member of the Recording Academy, which is the organization that hosts the Grammy Awards, and I will definitely be asking him about that later in the show.

Francesca is the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab. She is an advisory board member for FLI, a founding board member for the Partnership on AI, a deputy academic director of the Leverhulme Centre for the Future of Intelligence, a fellow with AAAI and EurAI (that’s e-u-r-a-i), and she will be the general chair of AAAI in 2020. She was previously Professor of Computer Science at the University of Padova in Italy, and she’s been president of IJCAI and the editor-in-chief of the Journal of AI Research. She is currently joining us from the United Nations AI For Good Summit, which I will also ask about later in the show.

So Ashley and Francesca, thank you so much for joining us today.

Francesca: Thank you.

Ashley: Glad to be here.

Ariel: Alright. The first question that I have for both of you, and Ashley, maybe I’ll direct this towards you first: basically, as you look into the future and you see artificial intelligence playing more of a role in our everyday lives — before we look at how everything could go wrong, what are we striving for? What do you hope will happen with artificial intelligence and humanity?

Ashley: My perspective on AI is informed a lot by my research and experiences at the Johns Hopkins Applied Physics Lab, which I’ve been at for a number of years. My earliest explorations had to do with applications of artificial intelligence to robotics systems, in particular underwater robotics systems, systems where signal processing and machine learning are needed to give the system situational awareness. And of course, light doesn’t travel very well underwater, so it’s an interesting task to make a machine see with sound for all of its awareness and all of its perception.

And in that journey, I realized how hard it is to have AI-enabled systems capable of functioning in the real world. That’s really been a personal research journey that’s turned into an institution-wide research journey for Johns Hopkins APL writ large. And we’re a large not-for-profit R & D organization that does national security, space exploration, and health. We’re about 7,000 folks or so across many different disciplines, but many scientists and engineers working on those kinds of problems — we say critical contributions to critical challenges.

So as I look forward, I’m really looking at AI-enabled systems, whether they’re algorithmic in cyberspace or they’re real-world systems that are really able to act with greater autonomy in the context of these important national and global challenges. So for national security: to have robotic systems that can be where people don’t want to be, in terms of being under the sea or even having a robot go into a situation that could be dangerous so a person doesn’t have to. And to have that system be able to deal with all the uncertainty associated with that.

You look at future space exploration missions where — in terms of AI for scientific discovery, we talk a lot about that — imagine a system that can perform science with greater degrees of autonomy and figure out novel ways of using its instruments to form and interrogate hypotheses when billions of miles away. Or in health applications where we can have systems more ubiquitously interpreting data and helping us to make decisions about our health to increase our lifespan, or health span as they say.

I’ve been accused of being a techno-optimist, I guess. I don’t think technology is the solution to everything, but it is my personal fascination. And in general, just having this AI capable of adding value for humanity in a real world that’s messy and sloppy and uncertain.

Ariel: Alright. Francesca, you and I have talked a bit in the past, and so I know you do a lot of work with AI safety and ethics. But I know you’re also incredibly hopeful about where we can go with AI. So if you could start by talking about some of the things that you’re most looking forward to.

Francesca: Sure. Ashley partially focused on the need to develop autonomous AI systems that can act where humans cannot go, for example, and that’s definitely very, very important. I would like to focus more on the need for AI systems that can actually work together with humans, augmenting our own capabilities to make decisions or to function in our work environment or in our private environment. That’s the focus and the purpose of AI that I see and that I work on, and I focus on the challenges in making these systems really work well with humans.

This means, of course, that it may seem in some sense easier to develop an AI system that works together with humans, because there is complementarity — some things are done by the human, some things are done by the machine. But actually, there are several additional challenges, because you want these two entities, the human and the machine, to become a real team and work together and collaborate to achieve a certain goal. You want these machines to be able to communicate and interact in a very natural way with human beings, and you want these machines to be not just reactive to commands, but also proactive in trying to understand what the human being needs in that moment and in that context, in order to provide all the information and knowledge needed from the data that surrounds whatever task is being addressed.

That’s also the focus of IBM’s business model, because of course IBM releases AI to be used by other companies, so that their professionals can use it to do their jobs better. And it has many, many different interesting research directions. The one that I’m mostly focused on is around value alignment. How do you make sure that these systems know and are aware of the values that they should follow, and of the ethical principles that they should follow, while trying to help human beings do whatever they need to do? And there are many ways that you can do that, and many ways to model and reason with these ethical principles and so on.

Being here in Geneva at AI For Good, I think that the emphasis here — and rightly so — is on the sustainable development goals of the UN: these 17 goals that define a vision of the future, the future that we want. And we’re trying to understand how we can leverage technologies such as AI to achieve that vision. The vision can be slightly nuanced and different, but to me, the development of advanced AI is not the end goal; it is only a way to get to the vision of the future that I have. And so, to me, this AI For Good Summit and the 17 sustainable development goals define a vision of the future that is important to have when one has in mind how to improve technology.

Ariel: For listeners who aren’t as familiar with the sustainable development goals, we can include links to what all of those are in the podcast description.

Francesca: I was impressed at this AI For Good Summit. This Summit started three years ago with around 400 people. Then last year it was around 500 people, and this year there are 3,200 registered participants. That really gives you an idea of how more and more people are becoming interested in these subjects.

Ariel: Have you also been equally impressed by the topics that are covered?

Francesca: Well, I mean, it started today. So I just saw in the morning that there are five different parallel sessions that will go throughout the following two days. One is AI education and learning. One is health and wellbeing. One is AI, human dignity, and inclusive society. One is scaling AI for good. And one is AI for space. These five themes will run throughout the two days, together with many other smaller ones. But from what I’ve seen this morning, the level of the discussion is really very high. It’s going to be very impactful. Each event is unique and has its own specificity, but this event is unique because it’s focused on a vision of the future, which in this case are the sustainable development goals.

Ariel: Well, I’m really glad that you’re there. We’re excited to have you there. And so, you’re talking about moving towards futures where we have AIs that can do things that humans either can’t do, don’t want to do, or can’t do safely; visions where we can achieve more because we’re working with AI systems as opposed to just humans trying to do things alone. But we still have to get to those points where this is being implemented safely and ethically.

I’ll come back to the question of what we’re doing right so far, but first, what do you see as the biggest gaps in AI safety and ethics? And this is a super broad question, but looking at it with respect to, say, the military or industry or academia. What are some of the biggest problems you see in terms of us safely applying AI to solve problems?

Ashley: It’s a really important question. My answer is going to center around uncertainty and dealing with that in the context of the operation of the system, and let’s say the implementation or the execution of the ethics of the system as well. But first, backing up to Francesca’s comment, I just want to emphasize this notion of teaming and really embrace this narrative in my remarks here.

I’ve heard it said before that every machine is part of some human workflow. I think a colleague, Matt Johnson at the Florida Institute for Human and Machine Cognition, says that, which I really like. And so, just to make clear, whether we’re talking about cognitive enhancement, an application of AI where maybe you’re doing information retrieval, or even a space exploration example, it’s always part of a human-machine team. In the space exploration example, the scientists and the engineers are on the earth, maybe many light hours away, but the machines are helping them do science. But at the end of the day, the scientific discovery is really happening on earth with the scientists. And so, whether it’s a machine operating remotely or providing cognitive assistance, it’s always part of a human-machine team. That’s just something I wanted to amplify that Francesca said.

But coming back to the gaps, a lot of times I think what we’re missing in our conversations is getting some structure around the role of uncertainty in these agents that we’re trying to create that are going to help achieve that bright future that Francesca was referring to. To help us think about this at APL, we think about agents as needing to perceive, decide, and act in teams. This is a framework that just helps us understand the general capabilities that we’ll need and to start thinking about the role of uncertainty, and then the combinations of learning and reasoning that would help agents to deal with that. And so, if you think about an agent pursuing goals, the first thing it has to do is get an understanding of the world state. This is the task of perception.

We often talk about, well, if an agent sees this or that, or if an agent finds itself in this situation, we want it to behave this way. Obviously, the trolley problem is an example we revisit often. I won’t go into the details there, but the question is, I think, given some imperfect observation of the world, how does the structure of that uncertainty factor into the correct functioning of the agent in that situation? And then, how does that factor into the ethical, I’ll say, choices or data-driven responses that an agent might have to that situation?

Then we talk about decision making. An agent has goals. In order to act on its goals, it has to decide about how certain sequences of actions would affect future states of the world. And then again how, in the context of an uncertain world, is the agent going to go about accurately evaluating possible future actions when it’s outside of a gaming environment, for example. How does uncertainty play into that and its evaluation of possible actions? And then in the carrying out of those actions, there may be physical reasoning, geometric reasoning that has to happen. For example, if an agent is going to act in a physical space, or reasoning about a cyber-physical environment where there’s critical infrastructure that needs to be protected or something like that.

And then finally, to Francesca’s point, there are the interactions, or the teaming, with other agents that may be teammates or may actually be adversarial. And so, how do we reason about what my teammates might be intending to do, what state my teammates might be in, in terms of cognitive load if it’s a human teammate, and what the intent of adversarial agents might be in confounding or interfering with the goals of the human-machine team?

And so, to recap a little bit, I think this notion of machines dealing with uncertainty in real-world situations is one of the key challenges that we need to deal with over the coming decades. And so, I think we need more explicit conversations about how uncertainty manifests in these situations, how you deal with it in the context of the real-world operation of an AI-enabled system, and then how we give structure to that uncertainty in a way that informs our ethical reasoning about the operation of these systems. I think that’s a very worthy area of focus for us over the coming decades.
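To make this perceive-decide-act framing slightly more concrete, here is a minimal sketch in Python of one common way an agent can represent uncertainty about the world state: a discrete Bayes filter that maintains a belief over states rather than committing to a single guess. It is purely illustrative, not APL’s actual approach; the states, sensor model, and observations are all invented.

```python
# A minimal, illustrative belief update under uncertainty (hypothetical
# states and sensor model; not any particular lab's method).

import numpy as np

# Hypothetical world states and a noisy sensor model P(observation | state).
STATES = ["clear", "obstacle"]
SENSOR_MODEL = {
    "clear":    {"blip": 0.2, "quiet": 0.8},
    "obstacle": {"blip": 0.9, "quiet": 0.1},
}

def update_belief(belief, observation):
    """Bayes update: posterior(s) is proportional to P(obs | s) * prior(s)."""
    posterior = np.array([SENSOR_MODEL[s][observation] * belief[i]
                          for i, s in enumerate(STATES)])
    return posterior / posterior.sum()

# Start maximally uncertain, then fold in a sequence of noisy observations.
belief = np.array([0.5, 0.5])
for obs in ["blip", "blip", "quiet"]:
    belief = update_belief(belief, obs)
    print(dict(zip(STATES, belief.round(3))))
```

The point of the sketch is only that the agent’s downstream decisions can be conditioned on a whole distribution over states, which is one way of giving structure to the uncertainty Ashley describes.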

Ariel: Could you walk us through a specific example of how an AI system might be applied and what sort of uncertainties it might come across?

Ashley: Yeah, sure. So think about the situation where there’s a dangerous environment, let’s say, in a policing action or in a terrorist situation. Hey, there might be hostiles in this building, and right now a human being might have to go into that building to investigate it. We’ll send a team of robots in there to do the investigation of the building to see if it’s safe, and you can think about that situation as analogous to a number of different possible situations.

And now, let’s think about the state of computer vision technology, where straight pattern recognition is hopefully a fair characterization of the state of the art, where we know we can very accurately recognize objects from a given universe of objects in a computer vision feed, for example. Well, now what happens if these agents encounter objects from outside of that universe of training classes? How can we start to bound the performance of the computer vision algorithm with respect to objects from unknown classes? You can start to get a sense of that progression, just from the perception part of the problem: from “of these 200 possible objects, tell me which class this comes from,” to having to do vision-type tasks in environments that present many new and novel objects that the system may have to perceive and reason about.

You can think about that perception task as extending to other agents that might be in that environment: trying to ascertain, from partial observations of what those agents look like and of the things they might be doing, some assessment of whether this is a friendly agent or an unfriendly agent, and reasoning about affordances of objects in the environment that might present our systems with ways of dealing with those agents that conform to ethical principles.

That was not a very concrete example, but it hopefully starts to get one level deeper into the kinds of situations we want to put systems into and the kinds of uncertainty that might arise.
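One concrete version of the unknown-object problem Ashley describes is sometimes called open-set recognition. Here is a minimal sketch, assuming a hypothetical classifier that knows only a few classes and simply abstains when its softmax confidence is low; real systems use more sophisticated out-of-distribution detection, and the classes, threshold, and logits below are invented.

```python
# A minimal sketch of abstaining on likely-unknown objects via a
# confidence threshold (hypothetical classes and numbers).

import numpy as np

KNOWN_CLASSES = ["person", "vehicle", "backpack"]
CONFIDENCE_THRESHOLD = 0.7  # in practice, tuned on held-out data

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def classify_or_abstain(logits):
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] < CONFIDENCE_THRESHOLD:
        return "unknown object: escalate to a human teammate", probs
    return KNOWN_CLASSES[best], probs

print(classify_or_abstain([4.0, 0.5, 0.2]))   # confidently a known class
print(classify_or_abstain([1.1, 1.0, 0.9]))   # low confidence, so abstain
```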

Francesca: To tie into what Ashley just said, we definitely need a lot more ways to have realistic simulations of what can happen in real life. So testbeds and sandboxes are definitely needed. But related to that, there is also this ongoing effort — which has already resulted in tools and mechanisms, but many people are still working on it — to better understand the error landscape that a machine learning approach may have. We know machine learning always has a small percentage of error in any given situation, and that’s okay, but we need to understand the robustness of the system in terms of that error, and we also need to understand the structure of that error space, because this information can tell us which use cases are more or less appropriate for the system.

Of course, going from there, this understanding of the error landscape is just one aspect of the need for transparency on the capabilities and limitations of AI systems when they are deployed. It’s a challenge that spans from academia and research centers to, of course, the business units and the companies developing and delivering AI systems. So that’s why at IBM we are working a lot on this issue of collecting information during the development and design phase around the properties of the systems, because we think that understanding these properties is very important to really understand what should or should not be done with the system.

And then, of course, there is, as you know, a lot of work around understanding other properties of the system. Like, fairness is one of the values that we may want to inject, but of course it’s not as simple as it looks because there are many, many definitions of fairness and each one is more appropriate or less appropriate in certain scenarios and certain tasks. It is important to identify the right one at the beginning of the design and the development process, and then to inject mechanisms to detect and mitigate bias according to that notion of fairness that we have decided is the correct one for that product.
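As an illustration of just one of the many fairness definitions Francesca mentions, here is a minimal sketch that measures demographic parity, the gap in positive-prediction rates between two groups. The predictions and group labels are invented, and this is not IBM’s tooling; it only shows why the chosen definition has to be checked against the model’s outputs.

```python
# A minimal sketch of one fairness notion (demographic parity) on made-up data.

import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions
group       = np.array(["A", "A", "A", "A", "A",
                        "B", "B", "B", "B", "B"])          # protected attribute

def demographic_parity_difference(y_pred, groups):
    # Positive-prediction rate per group, and the largest gap between groups.
    rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values())

rates, gap = demographic_parity_difference(predictions, group)
print(rates)                                  # here: {'A': 0.6, 'B': 0.4}
print(f"demographic parity gap: {gap:.2f}")   # flag if above a chosen tolerance
```

Other definitions (equalized odds, calibration within groups, and so on) would require different checks, which is exactly why the choice has to be made early and with the affected communities.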

And so, this also brings us to the other big challenge, which is to help developers understand how to define these notions, these values like fairness, that they need to use in developing the system — how to define them not just by themselves within the tech company, but also by communicating with the communities that are going to be impacted by these AI products, and that may have something to say about the right definition of fairness that they care about. That’s why, for example, another thing that we did, besides developing research and products, is to invest a lot in educating developers, trying to help them understand, in their everyday jobs, how to think about these issues, whether it’s fairness, robustness, transparency, and so on.

And so, we built this very small booklet — we call it the Everyday AI Ethics Guide for Designers and Developers — that raises a lot of questions that should be in their minds in their everyday jobs. Because, as you know, if you don’t think about bias or fairness during the development phases and you only check whether your product is fair when it’s ready to be deployed, then you may discover that you actually need to start from scratch again, because it doesn’t embed the right notion of fairness.

Another issue that we care a lot about in this effort to build teams of humans and machines is explainability: making sure that it is possible to understand why these systems are recommending certain decisions. Explainability is especially important in this environment of human-AI teaming, because without the capability of AI systems to explain why they are recommending a certain decision, the human part of the team will not, in the long run, trust the AI system, and so may not adopt it. And then we would also lose the positive and beneficial effects of the AI system.
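For readers who want a concrete example of one generic explainability technique, here is a minimal sketch of permutation importance using scikit-learn: shuffle one input feature at a time and see how much the model’s performance drops. The model and data are synthetic stand-ins, not the systems or tools discussed above, and permutation importance is only one of many explanation methods.

```python
# A minimal sketch of permutation importance on a synthetic model and dataset.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three made-up input features
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # feature 0 matters most by design

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: importance ~ {score:.3f}")
```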

The last thing that I want to say is that this education extends much beyond developers to policy makers as well. That’s why it’s important to have a lot of interaction with policy makers, who really need to be educated about the state of the art, about the challenges, and about the limits of current AI, in order to understand how best to drive the technology to become more and more advanced, but also beneficial and directed towards beneficial uses. And what are the right mechanisms to drive the technology in the direction that we want? That still needs a lot more multi-stakeholder discussion to really achieve the best results, I think.

Ashley: Just picking up on a couple of those themes that Francesca raised: first, I just want to touch on simulations. At the Applied Physics Laboratory, one of the core things we do is develop systems for the real world. And so, as the tools of artificial intelligence are evolving, the art and the science of systems engineering is starting to morph into this AI systems engineering regime. And we see simulation as key, more key than it’s ever been, to developing real-world systems that are enabled by AI.

One of the things we’re really looking into now is what we call live virtual constructive simulations. These are simulations in which you can do distributed learning for agents in a constructive mode, where you have highly parallelized learning, but where you actually have links and hooks for live interactions with humans to get the human-machine teaming. And then finally, they bridge the gap between simulation and the real world, where some of the agents represented in the context of the human-machine teaming functionality can be virtual and some can actually be represented by real systems in the real world. And so, we think that these kinds of environments, these live virtual constructive environments, will be important for bridging the gap from simulation to real.

Now, in the context of that is this notion of sharing information. If you think about the complexity of the systems that we’re building, and the complexity and the uncertainty of the real world conditions — whether that’s physical or cyber or what have you — it’s going to be more and more challenging for a single development team to analytically characterize the performance of the system in the context of real-world environments. And so, I think as a community we’re really doing science; We’re performing science, fielding these complex systems in these real-world environments. And so, the more we can make that a collective scientific exploration where we’re setting hypotheses, performing these experiments — these experiments of deploying AI in real world situations — the more quickly we’ll make progress.

And then, finally, I just wanted to talk about accountability, which I think builds on this notion of transparency and explainability. From what I can see — and this is something we don’t talk about enough, I think — we need to change our notion of accountability when it comes to AI-enabled systems. It’s human nature to want individual accountability for individual decisions and individual actions. If an accident happens, our whole legal system, our whole accountability framework says, “Well, tell me exactly what happened that time,” and I want to get some accountability based on that, and I want to see something improve based on that. Whether it’s a plane crash or a car crash, or let’s say there’s corruption in a Fortune 500 company — we want to see the CFO fired and we want to see a new person hired.

I think when you look at these algorithms, they’re driven by statistics, and the statistics that drive these models are really not well suited for individual accountability. It’s very hard to establish the validity of a particular answer or classification or something that comes out of the algorithm. Rather, we’re really starting to look at the performance of these algorithms over a period of time. It’s hard to say, “Okay, this AI-enabled system: tell me what happened on Wednesday,” or, “Let me hold you accountable for what happened on Wednesday.” And more so, “Let me hold you accountable for everything that you did during the month of April that resulted in this performance.”

And so, I think our notion of accountability is going to have to embrace this notion of ensemble validity: validity over a collection of activities, actions, and decisions. Because right now, if you look at the underlying mathematical frameworks for these algorithms, they don’t really support this notion of individual accountability for individual decisions.
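Here is a minimal sketch of what that ensemble view could look like in practice: accountability framed as a system’s error rate over a month of decisions, with a confidence interval, rather than a verdict on any single decision. The counts are invented, and the normal approximation to the binomial is just one simple way to put error bars on such a rate.

```python
# A minimal sketch of "ensemble validity": an aggregate error rate with a
# rough 95% confidence interval, using invented counts.

import math

decisions_in_april = 12_000   # hypothetical volume of automated decisions
errors_in_april = 180         # hypothetical number later judged to be wrong

p = errors_in_april / decisions_in_april
z = 1.96  # ~95% confidence, normal approximation to the binomial
half_width = z * math.sqrt(p * (1 - p) / decisions_in_april)

print(f"April error rate: {p:.3%} "
      f"(95% CI roughly {p - half_width:.3%} to {p + half_width:.3%})")
# The accountability question becomes whether this interval sits within the
# rate the system's operators committed to, not whether decision #4,812
# in isolation was correct.
```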

Francesca: Accountability is very important, and it needs a lot more discussion. This is one of the topics we have been discussing in this initiative by the European Commission to define the AI Ethics Guidelines for Europe, and accountability is one of the seven requirements. But it’s not easy to define what it means. What Ashley said is one possibility: change our idea of accountability from one specific instance to performance over several instances. That’s one possibility, but I think that’s something that needs a lot more discussion with several stakeholders.

Ariel: You’ve both mentioned some things that sound like we’re starting to move in the right direction. Francesca, you talked about getting developers to think about some of the issues like fairness and bias before they start to develop things. You talked about trying to get policy makers more involved. Ashley, you mentioned the live virtual simulations. Looking at where we are today, what are some of the things that you think have been most successful in moving towards a world where we’re considering AI safety more regularly, or completely regularly?

Francesca: First of all, we’ve gone a really long way in a relatively short period of time, and the Future of Life Institute has been instrumental in building the community, and everybody understands that the only approach to address this issue is a multidisciplinary, multi-stakeholder approach. The Future of Life Institute, with the first Puerto Rico conference, showed very clearly that this is the approach to follow. So I think that in terms of building the community that discusses and identifies the issues, I think we have done a lot.

I think that at this point, what we need is greater coordination, and also the removal of redundancy among all these different initiatives. I think we have to find, as a community, the main issues and the main principles and guidelines that we think are needed for the development of more advanced forms of AI, starting from the current state of the art. If you look at the values, at these guidelines or lists of principles around AI ethics from the various initiatives, they are of course different from each other, but they have a lot in common. So we really were able to identify these issues, and this identification of the main issues is important as we move forward to more advanced versions of AI.

And then, I think another thing that we are doing in a rather successful, though not complete, way is trying to move from research to practice: from high-level principles to concretely developing and deploying products that embed these principles and guidelines, not just in the scientific papers that are published, but also in the platforms, the services, and the toolkits that companies use with their clients. We needed an initial phase where there were high-level discussions about guidelines and principles, but now we are in a second phase where these percolate down to the business units and to how products are built and deployed.

Ashley: Yeah, just building on some of Francesca’s comments, I’ve been very inspired by the work of the Future of Life Institute and the burgeoning, I’ll say, emerging AI safety community. Similar to Francesca’s comment, I think that the real frontier here is now taking a lot of that energy, a lot of that academic exploration, research, and analysis and starting to find the intersections of a lot of those explorations with the real systems that we’re building.

You’re definitely seeing within IBM, as Francesca mentioned, within Microsoft, within more applied R & D organizations like Johns Hopkins APL, where I am, internal efforts to try to bridge the gap. And what I really want to try to work to catalyze in the coming years is a broader, more community-wide intersection between the academic research community looking out over the coming centuries and the applied research community that’s looking out over the coming decades, and find the intersection there. How do we start to pose a lot of these longer term challenge problems in the context of real systems that we’re developing?

And maybe we get to examples. Let’s say, for ethics, beyond the trolley problem and into posing problems that are more real-world or closer, better analogies to the kinds of systems we’re developing, the kinds of situations they will find themselves in, and start to give structure to some of the underlying uncertainty. Having our debates informed by those things.

Ariel: I think that transitions really nicely to the next question I want to ask you both, and that is, over the next 5 to 10 years, what do you want to see out of the AI community that you think will be most useful in implementing safety and ethics?

Ashley: I’ll probably sound repetitive, but I really think we should focus in on characterizing — I like the way Francesca put it — the error landscape of a system as a function of the complex internal states and workings of the system, and of the complex and uncertain real-world environments, whether cyber or physical, that the system will be operating in, and really get deeper there. It’s probably clear to anyone who works in the space that we really need to fundamentally advance the science and the technology. I’ll introduce the word now: trust, as it pertains to AI-enabled systems operating in these complex and uncertain environments. And again, we should better ground some of our longer-term thinking about AI being beneficial for humanity, grounding those conversations in the realities of the technologies as they stand today and as we hope to develop and advance them over the next few decades.

Francesca: Trust means building trust in the technology itself — and so the things that we already mentioned like making sure that it’s fair, value aligned, robust, explainable — but also building trust in those that produce the technology. But then, I mean, this is the current topic: How do we build trust? Because without trust we’re not going to adopt the full potential of the beneficial effect of the technology. It makes sense to also think in parallel, and more in the long-term, what’s the right governance? What’s the right coordination of initiatives around AI and AI ethics? And this is already a discussion that is taking place.

And then, after governance and coordination, it’s also important with more and more advanced versions of AI, to think about our identity, to think about the control issues, to think in general about this vision of the future, the wellbeing of the people, of the society, of the planet. And how to reverse engineer, in some sense, from a vision of the future to what it means in terms of a behavior of the technology, behavior of those that produce the technology, and behavior of those that regulate the technology, and so on.

We need a lot more of this reverse engineering approach. One approach is to start from the current state of the art of the technology and say, “Okay, these are the properties that I think I want in this technology: fairness, robustness, transparency, and so on, because otherwise I don’t want this technology to be deployed without these properties,” and then see what happens in the next, more advanced version of the technology, and think about possibly new properties, and so on. But the other approach is to say, “Okay, this is the vision of life, I don’t know, 50 years from now. How do I go from that to the kind of technology, to the direction that I want to push the technology towards, to achieve that vision?”

Ariel: We are getting a little bit short on time, and I did want to follow up with Ashley about his other job. Basically, Ashley, as far as I understand, you essentially have a side job as a hip hop artist. I think it would be fun to just talk a little bit, in the last couple of minutes that we have, about how both you and Francesca see artificial intelligence impacting these more creative fields. Is this something that you see as enhancing artists’ abilities to do more? Do you think there’s a reason for artists to be concerned that AI will soon be competition for them? What are your thoughts on the future of creativity and AI?

Ashley: Yeah. It’s interesting. As you point out, over the last decade or so, in addition to furthering my career as an engineer, I’ve also been a hip hop artist, and I’ve toured around the world and put out some albums. I think where we see the biggest impact of technology on music and creativity is, one, in the democratization of access to creation. Technology is a lot cheaper. Having a microphone and a recording setup or something like that, from the standpoint of somebody who does vocals like me, is much more accessible to many more people. And then, you see advances — you know, when I started doing music I would print CDs and press vinyl. There was no iTunes. And iTunes has revolutionized how music is accessed by people, and more generally how creative products are accessed by people in streaming, etc. So I think, looking backward, we’ve seen most of the impact of technology on those two things: access to the creation and then access to the content.

Looking forward, will those continue to be the dominant factors in terms of how technology is influencing the creation of music, for example? Or will there be something more? Will AI start to become more of a creative partner? We’ll see that and it will be evolutionary. I think we already see technology being a creative partner more and more so over time. A lot of the things that I studied in school — digital signal processing, frequency, selective filtering — a lot of those things are baked into the tools already. And just as we see AI helping to interpret other kinds of signal processing products like radiology scans, we’ll see more and more of that in the creation of music where an AI assistant — for example, if I’m looking for samples from other music — an AI assistant that can comb through a large library of music and find good samples for me. Just as we do with Instagram filters — an AI suggesting good filters for pictures I take on my iPhone — you can see in music AI suggesting good audio filters or good mastering settings or something, given a song that I’m trying to produce or goals that I have for the feel and tone of the product.

And so, already I think as an evolutionary step, not even a revolutionary step, AI is becoming more present in the creation of music. I think maybe, as in other application areas, we may see, again, AI being more of a teammate, not only in the creation of the music, but in the playing of the music. I heard an article or a podcast on NPR about a piano player who developed an AI accompaniment for himself. And so, as he played in a live show, for example, there would be an AI accompaniment, and you could dial back the settings on it in terms of how aggressive it was in rhythm and time, and where it situated itself with respect to the lead performer. Maybe in hip hop we’ll see AI hype men or AI DJs. It’s expensive to travel overseas, and so somebody like me goes overseas to do a show, and instead of bringing a DJ with me, I have an AI program that can select my tracks and add cuts at the right places and things like that. So that was a long-winded answer, but there’s a lot there. Hopefully that was addressing your question.

Ariel: Yeah, absolutely. Francesca, did you have anything you wanted to add about what you think AI can do for creativity?

Francesca: Yeah. I mean, of course I’m less familiar with what AI is already doing right now, but I am aware of many systems from companies in the space of delivering content or music and so on, systems where the AI part is helping humans develop their own creativity even further. And as Ashley said, I hope that in the future AI can help us be more creative, even people who are maybe less able than Ashley to be creative themselves. And I hope that this will enhance everybody’s creativity, because it will enhance creativity not just in hip hop or in making songs or in other things; I think it will also help to solve some very fundamental problems, because a population which is more creative is, of course, more creative in everything.

So in general, I hope that AI will help us human beings be more creative in all aspects of our life, besides entertainment — which is of course very, very important for all of us, for wellbeing and so on — but also in all the other aspects of our life. And this goes back to the beginning, where I said AI’s purpose should be to enhance our own capabilities. And of course, creativity is a very important capability that human beings have.

Ariel: Alright. Well, thank you both so much for joining us today. I really enjoyed the conversation.

Francesca: Thank you.

Ashley: Thanks for having me. I really enjoyed it.

Ariel: For all of our listeners, if you have been enjoying this podcast, please take a moment to like it or share it and maybe even give us a good review. And we will be back again next month.

FLI Podcast: The Unexpected Side Effects of Climate Change With Fran Moore and Nick Obradovich

It’s not just about the natural world. The side effects of climate change remain relatively unknown, but we can expect a warming world to impact every facet of our lives. In fact, as recent research shows, global warming is already affecting our mental and physical well-being, and this impact will only increase. Climate change could decrease the efficacy of our public safety institutions. It could damage our economies. It could even impact the way that we vote, potentially altering our democracies themselves. Yet even as these effects begin to appear, we’re already growing numb to the changing climate patterns behind them, and we’re failing to act.

In honor of Earth Day, this month’s podcast focuses on these side effects and what we can do about them. Ariel spoke with Dr. Nick Obradovich, a research scientist at the MIT Media Lab, and Dr. Fran Moore, an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. They study the social and economic impacts of climate change, and they shared some of their most remarkable findings.

Topics discussed in this episode include:

  • How getting used to climate change may make it harder for us to address the issue
  • The social cost of carbon
  • The effect of temperature on mood, exercise, and sleep
  • The effect of temperature on public safety and democratic processes
  • Why it’s hard to get people to act
  • What we can all do to make a difference
  • Why we should still be hopeful

Publications discussed in this episode include:


Ariel: Hello, and a belated happy Earth Day to everyone. I’m Ariel Conn, your host of The Future of Life podcast. And in honor of Earth Day this month, I’m happy to have two climate-related scientists joining the show. We’ve all heard about the devastating extreme weather that climate change will trigger; We’ve heard about melting ice caps, rising ocean levels, warming oceans, flooding, wildfires, hurricanes, and so many other awful natural events.

And it’s not hard to imagine how people living in these regions will be negatively impacted. But climate change won’t just affect us directly. It will also impact the economy, agriculture, our mental health, our sleep patterns, how we exercise, food safety, the effectiveness of policing, and more.

So today, I have two scientists joining me to talk about some of those issues. Doctor Nick Obradovich is a research scientist at the MIT Media Lab. He studies the way that climate change is likely impacting humanity now and into the future. And Doctor Fran Moore is an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. Her work sits at the intersection of climate science and environmental economics and is focused on understanding how climate change will affect the social and natural systems that people value.

So Nick and Fran, thank you so much for joining us.

Nick: Thanks for having us.

Fran: Thank you.

Ariel: Now, before we get into some of the topics that I just listed, I want to first look at a paper you both published recently called “Rapidly Declining Remarkability of Temperature Anomalies May Obscure Public Perception of Climate Change.” And essentially, as you describe in the paper, we’re like frogs in boiling water. As long as the temperatures continue to increase, we forget that it used to be cooler and we recalibrate what we consider to be normal for weather. So what may have been considered extreme 15 years ago, we now think of as normal.

Among other things, this can make trying to address climate change more difficult. I want both of you now to talk more about what the study was and what it means for how we address climate change. But first, if you could just talk about what prompted this study.

Fran: So I’ve been interested for a long time in the question of: as the climate changes and people are gradually exposed, in their everyday lives, to weather that used to be very unusual but is, because of climate change, becoming more and more typical, how do we think about defining things like extreme events under those kinds of conditions?

I think researchers have this intuition that there’s something about human perception and judgment that goes into that, or that there’s some kind of limit to how humans understand the weather that defines what we think of as normal and extreme, but no one had really been able to measure it. What I think is really cool about this study is that, working with Nick and our other coauthors, we were able to use data from Twitter to actually measure what people think of as remarkable, and then we could show that that changes quickly over time.

Ariel: I found this use of social media to be really interesting. Can you talk a little bit about how you used Twitter? And I was also curious: aside from being a new source of information, does it also present limitations in any way, or is it just exciting new information?

Nick: The crux of this insight was that we talk about the weather all the time. It’s sort of the way to pass time in casual conversation, to say hi to people, to awkwardly change the topic — if someone has said something a little awkward, you start talking about the weather. And we realized that Twitter is a great source for what people are talking about, and I had been collecting billions of tweets over the last number of years. And Fran and I met, and then we got talking about this idea and we were like, “Huh, you know, I bet you could use Twitter to measure how people are talking about the weather.” And then Fran had the excellent insight that you could also use it to get a metric of how remarkable people find the weather, by how much more than usual they’re talking about unusual weather. And so that was the crux of the insight there.

And then really what we did is we said, “Okay, what terms exist in the English language that might likely refer to weather when people are talking about the weather?” And we combed through the billions of tweets that I had in my store and found all of the tweets plausibly about the weather and used that for our analysis and then mapped that to the historical temperatures that people had experienced and also the rates of warming over time that the locations that people lived in had experienced.

Ariel: And what was the timeframe that you were looking at?

Fran: So it’s about three years: from March of 2014 to the end of 2016. But then we’re able to combine that with weather data that goes back to 1980. So we can match the tweeting behavior going on in this relatively recent time period, but we can look at how that behavior is explained by all the patterns of temperature change across these counties.

So what we found is, firstly, maybe exactly what you would expect, right, which is that the rate at which people tweet about particular temperatures depends on what is typical for that location for that time of year. And so if you have very cold weather, but that very cold weather is basically what you should be expecting, you’re going to tweet about it less than if that very cold weather is atypical.

But then what we were able to show is that what people think of as “usual,” which defines this tweeting behavior, changes really quickly, so that if you have these unusual temperatures multiple years in a row, the tweeting response quickly starts to decline. What that indicates is that people are adjusting their ideas of normal weather very quickly. And we’re actually able to use the tweets to directly estimate the rate at which this updating happens, and our best estimate is that people are using approximately the last two to eight years as a baseline for establishing normal temperatures for that location for that time of year. When people look at the weather outside and evaluate whether it’s hot or cold, the reference point they’re using is set by the fairly recent past.
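To see what a shifting baseline does to a temperature anomaly, here is a minimal sketch, not the paper’s actual estimation procedure: it compares each year’s temperature against the average of only the last few years versus a fixed early-period reference. The temperatures are invented, and the five-year window is just one value inside the two-to-eight-year range Fran describes.

```python
# A minimal, illustrative shifting-baseline calculation on invented data.

import numpy as np

BASELINE_YEARS = 5  # roughly in the 2-8 year range people seem to use

# Hypothetical mean March temperature (deg C) for one county, 2000-2016,
# with a gradual warming trend baked in.
temps = np.array([3.0, 3.1, 2.9, 3.4, 3.6, 3.5, 3.9, 4.1, 4.0,
                  4.4, 4.6, 4.8, 5.1, 5.0, 5.4, 5.6, 5.9])
years = np.arange(2000, 2017)

for i in range(BASELINE_YEARS, len(temps)):
    recent_baseline = temps[i - BASELINE_YEARS:i].mean()   # shifting reference
    fixed_baseline = temps[:BASELINE_YEARS].mean()         # 2000-2004 reference
    print(f"{years[i]}: anomaly vs recent past = "
          f"{temps[i] - recent_baseline:+.2f} C, "
          f"vs fixed 2000-2004 baseline = {temps[i] - fixed_baseline:+.2f} C")
# The "vs recent past" anomaly stays small even as warming accumulates,
# which is the normalization effect the tweet data picks up.
```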

Ariel: What does this mean as we’re trying to figure out ways to address climate change?

Nick: When we saw this result, we were a bit troubled, because it was faster than we would perhaps have hoped. I’m a political scientist by training, and I saw this and I said, “This is not ideal,” because if you have people getting used to a climate that is changing on geologically rapid scales, but perhaps on human time scales somewhat slowly — if people get used to that as it changes — then we lose one of the things that we know helps to drive political action, policy, and political attention, which is just awareness of a problem. And so if people’s expectations adapt pretty quickly to climate change, then all of a sudden a hundred-degree day in North Dakota that would have been very unusual in 2000 is maybe fairly normal in 2030. And so as a result, people aren’t as aware of the signal that climate change is producing. And that could have some pretty troubling political implications.

Fran: My takeaway from this is that it certainly points to the risk that these conditions that are geologically or even historically very, very unusual are not perceived as such. We’re really limited by our human perception, and that’s even within individuals, right — what we’re estimating is something that happens within an individual’s lifetime.

So what it means is that you can’t just assume that as climate change gets worse, it’s going to automatically rise to the top of the political agenda in terms of urgency. Like a lot of other chronic, serious social problems we have, it takes a lot of work on the part of activists and norm entrepreneurs to do something about climate change. And just because it’s happening, and it’s becoming, at least statistically or scientifically, increasingly clear that it’s happening, that won’t necessarily translate into people wanting to do something about it.

Ariel: And so you guys were looking more at what we might consider sort of abnormalities in relatively normal weather: if it’s colder in May than we’d expect or it’s hotter in January than we’d expect. But that’s not the same as some of the extreme weather events that we’ve also seen. I don’t know if this is sort of a speculative question, but do you think the extreme weather events could help counter our normalization of just changing temperatures or do you think we would eventually normalize the extreme weather events as well?

Nick: That’s a great question. So one of the things we didn’t look at is, for example, giant hurricanes, big wildfires, and things like that that are all likely to increase in frequency and severity in the future. So it could certainly be the case that the increase in frequency and intensity of those events offsets the adaptation, as you suggest. We actually are trying to think about ways to measure how people might adapt to other climate-driven phenomena aside from just regular, day-to-day temperature.

I hope that’s the case, right? Because if we’re also adapting to sea level rise pretty rapidly as it goes along and we’re also adapting to increased frequency of wildfires and things like that, a few things might happen; one being that if we’re getting used to semi-regular flooding, for example, we don’t move as quickly as we need to — up to the point where basically cities start getting inundated, and that could be very problematic. So I hope that what you suggest actually turns out to be the case.

Fran: I think that this is a question we get a lot, like, “Oh, well temperature is one thing, but really the thing that’s really going to spur people is these hurricanes or floods or these wildfires.” And I think that’s a hypothesis, but I would say it’s as yet untested. And sure, a hurricane is an extreme event, but when they start happening frequently, is that going to be subject to the same kind of normalization phenomenon that we show here? I would say I don’t know, and it’s possible it would look really different.

But I think it’s also possible that it wouldn’t, and that when you start seeing these happen on a very regular basis, that they become normalized in a very similar way to what you see here. And it might be that they spur some kind of adaptation or response policy, but the idea that they would automatically spur a lot of mitigation policy I think is something that people seem to think might be true, but I would say that we need some more empirical evidence.

Nick: I like to think of humans as an incredibly adaptable species. I think we’re a great species for that reason. We’re arguably the most successful ever. But our adaptability in this instance may perhaps prove to be part of our undoing, just in normalizing worsening conditions as they deteriorate around us. I hope that the hypothesis that Fran lays out ends up being the case: that, as the climate gets weirder and weirder, there is enough signal that people become concerned enough to do something about it. But it is just an empirical hypothesis at this point.

Fran: A really neat thing that we were able to do in this paper was to ask: are people just not talking about these conditions because they've normalized them and they're no longer interesting, or have people actually been able to take action to reduce the negative consequences of these conditions? And so to do that we used sentiment analysis. This is something that Nick and our other author Patrick Baylis have used before: just based on the words that are being used in the tweets, you can measure the overall mood being conveyed, the kind of emotional state of the people sending those tweets. We find that very hot and very cold temperatures have negative effects on sentiment, and that those effects persist even if people stop talking about these unusual temperatures.

What that’s saying is that this is not a good news story of effective adaptation, that people are able to reduce the negative consequences of these temperatures. Actually, they’re still being very negatively affected by them — and they’re just not talking about them anymore. And that’s kind of the worst of both worlds.

Ariel: So I want to actually follow up with that, because I had a question about that paper that you just referenced. If I was reading it correctly, it sort of seemed like you're saying that we basically get crankier as the weather moves toward either extreme of our preferred comfort zone. Is that right? Are we just going to be crankier as the climate gets worse?

Nick: So that was the paper that Patrick Baylis and I had with a number of other co-authors, and the key point about that paper is that we were looking at historical, contemporaneous weather, and we weren't looking for adaptation over time with that analysis. What we found is that at certain levels of temperature, for example when it's really hot outside, people's sentiment goes down — their mood is worsened. When it's really cold outside, we also found that people's sentiment was worsened; and we found that, for example, lots of precipitation made people unhappy as well.

But what we didn't do in that paper was examine the degree to which people got used to changes in the weather over time. That's what we were able to do in this paper with Fran, and what we saw was, as Fran points out, troubling: people weren't substantially adapting to these temperature shocks over time, or to longer-term changes in climate — they just weren't talking about them as much.

So if you think though that there is no adaptation, then yeah, if the world becomes much hotter, on the hot end of things — so in the summer, in the northern hemisphere for example — people will probably be a bit grumpier. Importantly though, on the other side of things, in the wintertime, if you have warming, you might expect that people are in somewhat better moods because they’re able to enjoy nicer weather outside. So it is a little bit of a double-edged sword in that way, but again important that we don’t see that people are adapting, which is pretty critical.

Ariel: Okay. So we can potentially expect at least the possibility of decrease in life satisfaction just because of weather, without us even really appreciating that it’s the weather that’s doing it to us?

Nick: Yes, during hotter periods. The converse is that during the wintertime, in the northern hemisphere, we would have to say that warming temperatures, people would probably enjoy for the most part. If it was supposed to be 35 degrees Fahrenheit outside and it’s now 45 Fahrenheit, that’s a bit more pleasant. Now you can go with a lighter jacket.

So there will be those small positive benefits — although, as Fran is probably going to talk about here in a little bit, there are other big countervailing negatives that we need to consider too.

Fran: What I like about this earlier paper on sentiment that Nick and Patrick wrote is that they have these comparisons to it being a Monday or to a home team loss. Sometimes it's hard to put these measures in perspective, and so: Mondays on average make people miserable, and it being very, very hot out also makes people miserable, in kind of similar ways to it being a Monday.

Nick: Yeah. We found that particularly cold temperatures, for example, had a similar magnitude of effect on positive sentiment: they reduced positive sentiment by an amount equivalent to a small earthquake in your location, things like that. So the magnitude of the effects of the weather is much larger than we necessarily thought it would be, which we thought was interesting. But also there was a whole big literature from psychology and economics and political science that had looked at weather and various outcomes and found that sometimes the effect sizes were very large and sometimes the effect sizes were effectively zero. So we tried to basically just provide the answer to that question in that paper: the weather matters.

Ariel: I want to go back to the idea of whether or not extreme events will be normalized, because I tend to be slightly cynical — and maybe this is hopeful for once — in thinking that the economic cost of extreme events is not something we would normalize to, that we would not get used to having to spend billions of dollars a year, whatever it is, to rebuild cities.

And Fran, I think that touches on some of your work if I’m correct, in that you look at what some of these costs of climate change would be. So first, is that correct? Is that one of the things that you look at?

Fran: Yeah. A large component of my work has been on improving the representation of climate change damages: what we know from the physical sciences about how climate change affects the things that we care about, and including that in the thing called the social cost of carbon, which is a measure that's very relevant for the regulatory and policy analysis of climate change.

Ariel: Can you explain what the social cost of carbon is? What is being measured?

Fran: So think about when we emit a ton of CO2: that ton of CO2 goes off into the atmosphere and it's going to affect the climate, that change in the climate is going to have consequences around the world in many different sectors, and the CO2 is going to stay in the atmosphere for a long time. And so those effects are going to persist far out into the future.

What the social cost of carbon is, really, is an accounting exercise that tries to quantify all of those impacts, add them up together, put them in common units, and assign that as the cost of the ton of CO2 that you emitted. You can see from that description why this is an ambitious exercise: theoretically we're talking about all these climate change impacts, around the world, for all time. And then there's another step, which is that in order to aggregate these impacts, to add them up, you need to put everything into common units. The units that we use are dollars, so there's a critical economic valuation step: these things happen in agriculture, or along coastlines, or they affect mortality risk, and how do you take all of them, put them into some kind of common unit, and value them all?

And so depending on what type of impact you're talking about, that's more or less challenging. But it's an important number, because at least in the United States we have a requirement that all regulations have to pass a cost-benefit analysis. So in order to do a cost-benefit analysis of a climate regulation, you need to understand the benefits of not emitting CO2. Pretty much any policy that affects emissions needs to account for these damages in some way. That's why this is very directly relevant to policy.
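In symbols, the accounting Fran describes is often summarized roughly as a discounted sum of marginal damages; the notation below is generic rather than taken from any particular model:

\[
\mathrm{SCC} \;=\; \sum_{t=0}^{T} \frac{1}{(1+r)^{t}}\,\frac{\partial D_t}{\partial E_0}
\]

Here \(D_t\) is the monetized value of global damages in year \(t\) (agriculture, coastlines, health, and so on), \(E_0\) is emissions today (the derivative is the extra damage from one more ton of CO2), \(r\) is the discount rate, and \(T\) is the time horizon.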

Ariel: I want to keep looking at what this means. One of your papers has a sentence that reads, “impacts on agriculture increase from net benefits of $2.7 per ton of CO2 to net costs of $8.5 per ton of CO2.” That seemed like a really good example for you to explain what these costs actually mean.

Fran: Yeah. This was an exercise I did a couple of years ago with coauthors Tom Hertel and Uris Baldos and Delavane Diaz. The idea was that we know now a lot about how climate change affects crop yields. There’s been an awful lot of work on that in economics and agricultural sciences. But that was essentially not represented in the social cost of carbon, where our estimates of climate change damages really came from studies that were either in the late 80s or the early 90s, and really our understanding of how climate change will affect agriculture has really changed since then.

What those numbers represent: the benefit of $2.7 per ton is what is currently represented in the models that calculate the social cost of carbon. The fact that it's negative indicates that these models were assuming that agriculture on net is going to benefit from climate change, largely because of a combination of CO2 fertilization and a fair bit of assumption that in most of the world crops are going to benefit from higher temperatures. Now we know that's more or less not the case.

When we look at how we think temperature and CO2 are going to affect the major crops around the world, we use these estimates from the IPCC, and then we introduce those into an economic model. This is a valuation step. That economic model will account for the fact that countries can shift what they grow, they can change their consumption patterns, they can change their trading partners: a lot of economic adjustments that we know can be made, and this modeling accounts for all of that. We find a fairly large negative effect of climate change on agriculture, which amounts to about $9 per ton of CO2, and that's a discounted value. So you emit a ton of CO2 today, and that's the dollar value today of all the future damages that ton of CO2 will cause via the agricultural sector.

Ariel: As a reminder, how many tons of CO2 were emitted, say, last year, or the year before? Something that we know?

Fran: We do know that. I'm not sure I can tell you that off the top of my head. I would caution you that you also don't want to take this number and just multiply it by the total tons emitted, because this is a marginal value. This is merely about whether we emit this one additional ton or not. It's really not a value that can be used for saying, “Okay, well the total damages from climate change are X.” There's a distinction between total damages and marginal damages, and the social cost of carbon number is very much about marginal damages.

So it’s like at the margin, how much should we tax CO2? It’s really not going to tell you, should we be on a two-degree pathway, or should we be on a four-degree pathway, or should we be on a 1.5-degree pathway? That you need a really different analysis for.

Ariel: I want to ask one more follow-up question to this, and then I want to get onto some of the other papers of Nick’s. What are the cost estimates that we’re looking at right now? What are you comfortable saying that we’re, I don’t know, losing this much money, we’re going to pay this much money, we’re going to negatively be impacted by X number of dollars?

Fran: The Obama administration went through a fairly comprehensive exercise to take the existing models and standardize them in certain ways, to try and say, “What is the social cost of carbon value that we should use?” They came up with a number that's around $40 per ton of CO2. If you take that number as a benchmark, there's obviously a lot of uncertainty around it, and I think it's fair to say a lot of that uncertainty is on the high end rather than on the low end. So if you think about a probability distribution around that existing number, I would say there are a lot of reasons why it might be higher than $40 per ton, and there are a few, but not a ton, of reasons why it might be lower.

Ariel: Nick, was there anything you wanted to add to what Fran has just been talking about?

Nick: Yeah. The only thing I would say is I totally agree that the uncertainty is on the upper bound of the estimate of the social cost of carbon, and possibly on the extreme upper bound. So there are unknowns that we can’t estimate from the historical data in terms of being able to figure out what happens in the natural system and how that translates through to the social system and the social costs. We and Fran are basically just doing the best we can with the historical evidence that we can bring to bear on the question, but there are giant “unknown unknowns,” to quote Donald Rumsfeld.

Ariel: I want to sort of quantify this ever so slightly. I Googled it, and it looks like we are emitting in the tens of billions of tons of carbon each year? Does that sound right?

Fran: Check that it’s carbon and not CO2. I think it’s eight to nine gigatons of carbon.

Ariel: Okay.

Nick: CO2 equivalence.

Ariel: Anyway, it’s a lot.

Nick: It’s a lot, yeah.

Ariel: That’s the point.

Nick: It’s a lot; It’s increasing. I think 2018 was an increased blip in terms of the rate of emissions. We need to be decreasing, and we’re still increasing. Not great.

Ariel: All right. We’ll take a quick break from the economic side of things and what this will financially cost us, and look at some of the human impacts that we many not necessarily be thinking about, but which Nick has been looking into. I’m just going to go through a list of very quick questions that I asked about a few papers that I looked at.

The first one I looked at found that — and this makes sense when I think about it — climate change is going to impact our physical activity, because it's too hot in places, or things like that. I was wondering if you could talk a little bit about the research you did into that and what you think the health implications are.

Nick: Yeah, totally. So I like to think about the climate impacts that are not necessarily easily and readily and immediately translated into dollar value because I think really we live in a pretty complex system, and when you turn up the temperature on that complex system, it’s probably going to affect basically everything. The question is what’s going to be affected and how much are the important things going to be affected? And so a lot of my work has focused on identifying things that we hadn’t yet thought about as social scientists in doing the social impact estimates in the cost of carbon and just raising questions about those areas.

Physical activity was one. The idea to look at that actually came from back in 2015 — there was a big heat wave in San Diego when I was living there, and I was in a regular running regimen. I would go running at 4:00 or 5:00 PM, but there were a number of weeks, definitely strings of days, where it was 100 degrees or more in October in San Diego, which is very unusual. At 4:00 PM it would be 100 degrees and kind of humid, so I just didn’t run as much for a couple of weeks, and that threw off my whole exercise schedule. I was like, “Huh, that’s an interesting impact of heat that I hadn’t really heard about.”

So I was like, “Well, I know this big data set that collects people's reported physical activity over time, and has a decade's worth of data on randomly sampled US citizens, I think over a million of them.” So I had those data, and I was like, “Well, I wonder: if you look at the weather and the climate that these people are living in, does that influence their exercise patterns?” What we found was a little bit surprising to me, because I had thought about it on the hot end: “Oh, I stopped running because it was too hot.” But the reality is that temperature, and also rainfall, impact our physical activity patterns across the full distribution.

When it’s really cold outside, people don’t report being very physically active and one of the main reasons for that is one of the primary ways Americans get physical activity is by going outside for a run or a jog or a walk. When it’s very nasty outside, people report not being as physically active. We saw on the cold end of the distribution that as temperatures warmed up, people exercised more. That was actually up to a relatively high peak in that function. It was an inverted U shape, and the peak was relatively high in terms of temperature. It was somewhere around 84 degrees fahrenheit.

What we realized actually is that at least in the US, at least in some of the northern latitudes in the US, people might exercise more as temperatures warm up to a point. They might exercise more in the wintertime, for example. That was this small little silver lining in what is otherwise, from my research and from Fran’s research and most research on this topic, a cascade of negative news that is likely to result from climate change. But the health impacts of being more physically active are positive. It’s one of the most important things we can do for our health. So a small, positive impact of warming temperatures offset by all the other things that we’ve found.
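Purely as an illustration of the inverted-U relationship Nick describes (made-up numbers, not the study's data), a simple quadratic fit captures the shape and locates its peak:

```python
import numpy as np

# Made-up example data: reported activity (minutes/day) vs. daily high (°F).
temp = np.array([20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
activity = np.array([15, 22, 30, 38, 44, 48, 50, 46, 38], dtype=float)

# Fit activity = a*temp**2 + b*temp + c; an inverted U corresponds to a < 0.
a, b, c = np.polyfit(temp, activity, deg=2)

# The fitted curve peaks where its derivative 2*a*temp + b equals zero.
peak_temp = -b / (2 * a)
print(f"fitted peak at roughly {peak_temp:.0f} °F")
```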

Ariel: I know from personal experience I definitely don’t like to run in the winter. I don’t like ice, so that makes sense.

Nick: Ice, frostbite.

Ariel: Yeah.

Nick: All these things are … yeah. So just observationally, if I look out my window, and there’s a running path near me, I see dramatically more people on a sunny, mild day than I do during the middle of the winter. That’s how most people get their exercise. A lot of people, we know from the public health literature, if they’re not going out for a walk or a stroll, they’re not really getting any physical activity at all.

Ariel: Okay. So potential good news.

Nick: A little bit. Just a little bit.

Fran: Yeah. Nick moved from San Diego to Boston, so I think he’s got a better appreciation of the benefits of warmer wintertime temperatures.

Nick: I do! Although, and this is an important limitation of that study, we didn't really, again, look at adaptation over time. And what I found moving to Boston was that I got used to the cold winters much faster than I thought I would coming from San Diego, and now I do go running in the wintertime here, though I thought I would barely be able to go outside. So perhaps that's a positive thing in terms of our ability to adapt on the hotter end as well, and perhaps it undercuts a little bit the degree to which warming during the winter might increase physical activity.

This is a broader and more general point. A lot of these studies — it’s pretty hard to look at long-term adaptation over time because some of the data sets that we have just don’t give us enough span of time to really see people adapt their behaviors within person. So, many of the studies are kind of estimating the direct effect of temperature, for example, on physical activity, and not estimating how much long-term warming has changed people’s physical activity patterns. There are some studies that do that with respect to some outcomes — for example, agricultural yields. But it’s less common to do that with some of the public health-related outcomes and psychological-related outcomes.

Ariel: I want to ask about some of these other studies you’ve done as well, but do you think starting these studies now will help us get more research into this in the future?

Nick: Yeah. I think the more and the better data that we have, the better we're going to be able to answer some of these questions. For example, in the physical activity paper, and also in a sleep paper we did, the data we used are just self-report data. But we've been able to get access to what are called actigraph data, data that come from monitors like Fitbit that actually track people's sleep and physical activity. We're working on those follow-up studies, and the more data that we have and the longer that we have those data, the more we can identify potential adaptation over time.

Ariel: The sleep study was actually where I was going to go next. It seemed nicely connected to the physical activity one. Basically we’ve been told for years to get eight hours of sleep and to try to set the temperatures in our rooms to be cooler so that our quality of sleep is better. But it seems that increasing temperatures from climate change might affect that. So I was hoping you could weigh in on that too.

Nick: Yeah. I think you said it pretty well. The results in that paper basically indicate that higher nighttime temperatures outside, higher ambient temperatures outside, increase the frequency that people report a bad night of sleep. Basically what we say is absent adaptation, climate change might worsen human sleep in the future.

Now, one of the primary ways you adapt, as you just mentioned, is by turning the AC on, keeping it cooler in the room in the summertime, and trying to fight the fact that — as it was in San Diego — it's 90 degrees and humid at 12:00 AM. The problem with that is that a lot of our electricity grid is currently still on carbon. Until we decarbonize the grid, if we're using more air conditioning to make it cooler and make it comfortable in our rooms in the summers, we are emitting more carbon.

That poses something else that Fran and I have talked about and others are starting to work on: the idea that it's not a one-way street. In other words, if the climate system is changing, and it's changing our behaviors in order to adapt to it, or just frankly changing our behaviors, then we are potentially altering the amount of carbon that we put back into the system, in a positive feedback loop that's driven by humans this time, as opposed to permafrost and things like that. So it's a big, complex equation, and that makes estimating the social cost of carbon all the harder, because it's no longer just a one-way street. If emitting carbon changes our behavior, and those behavioral changes cause the emission of more carbon, then you have a harder-to-estimate function.
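One stylized way to see why this behavioral feedback complicates the accounting: if every ton emitted induces an additional fraction \(f\) of a ton through adaptation behavior such as extra air conditioning, the effective emissions traced back to an initial ton form a geometric series. This is purely illustrative; \(f\) here is not an estimate of the real feedback strength.

\[
E_{\text{effective}} \;=\; 1 + f + f^{2} + \cdots \;=\; \frac{1}{1-f}, \qquad 0 \le f < 1 .
\]

Even a modest \(f\) raises the damages attributable to the original ton, which is part of what makes the social cost of carbon harder to pin down.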

Fran: Yeah, you’re right, and it is hard. I often get questions of like, “Oh, is this in the social cost of carbon? Is this?” And usually the answer is no.

Ariel: Yeah. I guess I’ve got another one sort of like that. I mean, I think studies indicate pretty well right now that if you don’t get enough sleep, you’re not as productive at work, and that’s going to cost the economy as well. Is stuff like that also being considered or taken into account?

Fran: I think in general, researchers' idea a few decades ago was very much that there was a very limited set of pathways by which a developed economy could be affected by climate. We could enumerate those, and they were things like agriculture, or forestry, or coastlines affected by sea level rise. The newer work that's being done now, like Nick's papers that we just talked about and a lot of other work, is showing that actually we seem to be very sensitive to temperature on a number of fronts, and that has quite pervasive economic effects.

And so, yeah, the sleep question is a huge one, right? If you don't get a good night's sleep, that affects how much you can learn in school the next day; it affects your productivity at work the next day. So we do see evidence that temperature affects labor productivity in developed countries, even in sectors that you'd think should be relatively well insulated from these effects, say because the work is being done inside. There's evidence too that high temperatures affect how well students can learn in school and their test scores, and that has a potentially very long-term effect on their educational trajectory in life, their ability to accumulate human capital, and their earning potential in the future.

And so these newer findings, I think, are suggesting that even developed economies are sensitive to climate change in ways that we're only beginning to learn, and pretty much none of that is currently represented in our current estimates of the social cost of carbon.

Nick: Yeah, that’s a great point. And to add an example to that, I did a study last year in which I looked at government productivity, so government workers’ productivity. Because we had seen a number of these studies, as Fran mentioned, that private sector productivity was declining, and I was wondering if government workers that are tasked with overseeing our safety, especially in times of heat stress and other forms of stress, if those workers themselves were affected by heat stress and other forms of environmental stress.

We indeed found that they were. We found that police officers were less likely to stop people in traffic stops, even though the risk of traffic fatalities increases with higher temperatures and crime increases as well. We found that food safety inspectors were less likely to do inspections: the probability of an inspection declined as the temperature increased, even though the risk of a violation, conditional on an inspection happening, increased. So it's more likely that there's a food safety problem when it's hot out, but food safety inspectors were less likely to go out and do inspections.

That’s another thing that fits into, “Okay, we’re affected in really complex ways.” Maybe it’s the case that the food safety inspectors were less likely to go do their job because they were really tired because they didn’t sleep well the night before, or perhaps because they were grumpy because it was really hot outside. We don’t know exactly, but these systems are indeed really complicated and probably a lot of things are in play all at once.

Ariel: Another one that you have looked at, which I think is also important to consider in this whole complex system that's being impacted by climate change, is democratic processes.

Nick: Yeah, yeah. I’m a political scientist by training, and what we political scientists do is think a lot about politics, the democratic process, voting, turnout, and one of things that we know best in political science is this thing called retrospective voting or perhaps economic voting — basically the idea that people vote largely based on either how well they individually are doing, or how well they perceive their society is doing under the current incumbent. So in the US for example, if the economy is doing well the incumbent faces better prospects than if the economy is doing poorly. If individuals perceive that they are doing well, the incumbent faces better prospects.

I basically just sat down and thought for a while: you know, climate change across all these dimensions is likely to worsen both economic well-being and personal, psychological, and physiological well-being. I wondered whether it might somewhat disrupt the way that democracies function, and the way that elections function in democracies. For example, if you're exposed to hotter temperatures, there are lots of reasons to suspect that you might perceive yourself as being less well-off — and whoever's in office, you might just be a little bit less likely to vote for them in the next election.

So I put together a bunch of election results from a variety of countries and democratic institutions around the world, and looked at the effect of hotter temperatures on incumbent politicians' prospects in the upcoming elections: what were the effects of the temperatures prior to an election on the electoral success of the incumbent? And what I found was that when you had unusual increases in temperature the year prior to an election, and especially in hotter places, the incumbent's prospects declined in that election. Incumbent politicians were more likely to get thrown out of office when temperatures were unusually warm, especially in hotter places.

And that, as a political scientist, is a little bit troubling to me, because it could be two things. It could be the case that politicians are being thrown out of office because they don't respond well to the stressors associated with added temperature. For example, if there was a heat wave and it caused some crop losses, maybe those politicians didn't do a good enough job helping the people who lost those crops. But it also might just be the case that people are grumpier and not feeling as good, and there's really no way the politician can respond, or the politician has limited resources and can only respond so much.

And if that’s the driving function then what you see is this exogenous shock leading to an ouster of a democratically elected politician, perhaps not directly related to the performance of that politician. And that can lead to added electoral churn; If you see increased rates of electoral churn where politicians are losing office with increasing frequency, it can shorten the electoral time horizons that politicians have. If they think that every election they stand a real good chance of losing office they may be less likely to pursue policies that have benefits over two or three election cycles. That was the crux of that paper.

Ariel: Fran, did you have anything you wanted to add to that?

Fran: I think it’s a really really fascinating question. This is one of my favorite of Nick’s papers. We think about how these really fundamental institutions that we think when we go to the ballot box, and we do our election, there’s a lot of factors that go into that, right? Even the very fact that you can pick up any kind of temperature signal on that is surprising to me, and I think it’s a really important finding. And then trying to pin down these mechanisms I think is interesting for trying to play out the scenarios of how does climate change proceed in terms of the effects of changing the political environment in which we’re operating, and having, like Nick said, these potentially long term effects on the types of issues politicians are willing to work on. It’s really important, and I think it’s something that needs more work.

Nick: Fran makes an excellent point embedded in there, which is the understanding of what we call causal mediation. In other words, if you see that hot temperatures lead to a reduction in GDP growth, why is that? What exactly is causing that? GDP growth is this huge aggregate of all of these different things. Why might temperature be causing that? Or even, for example, if you see that temperature is affecting people’s sleep quality, why is that the case? Is it because it’s influencing the degree to which people are stressed out during the day because they’re grumpier, they’re having more negative interactions, and then they’re thinking about that before they fall asleep? Is it due to purely physiological reasons, circadian rhythm and sleep cascades?

The short of it is, we don’t actually have very good answers to most of these questions for most of the climates impacts that we’ve looked at, and it’s pretty critical to have better answers, largely because if you want to adapt to coming climate changes, you’d like to spend your policy money on the things that are most important in those equations for reducing GDP growth or causing mental health outcomes or worsening people’s mood. You’d like to really be able to tell people precisely what they can do to adapt, and also spend money precisely where it’s needed, and it’s just strictly difficult science to be able to do that well.

Ariel: I want to actually go back real quick to something that you said earlier, too: the idea that if politicians know that they're unlikely to get elected in the next cycle, they're also unlikely to plan long term. Especially when we're looking at a situation like climate change, where we need politicians who can plan long term, could this actually exacerbate our short-term thinking?

Nick: Yeah. That’s what I was concerned about, and still something that I am concerned about. As you get more and more extremes that are occurring more and more regularly and politicians are either responding well or not responding well to those extremes it may be somewhat like our weather and expectations paper — similar underlying psychological dynamics — which is just that people become more and more focused on their recent past, and their recent experience in history, and what’s going on now.

And if that’s the case then if you’re a politician, and you’ve had a bunch of hurricanes, or you’re dealing with the aftermath of hurricanes in your district, really should you be spending your policy efforts on carbon mitigation, or should you be trying to make sure that all of your constituents right now are housed and fed? That’s a little bit of a false dichotomy there, but it isn’t fully a false dichotomy because politicians only have so many resources, and they only have so much time. So as their risk of losing election goes up due to something that is more immediate, politicians will tend to focus on those risks as opposed to longer-term risks.

Ariel: I feel like in that example, too, in defense of the politicians, if you actually have to deal with people who are without homes and without food, that is sort of the higher priority.

Nick: Totally. I mean, I did a bunch of field work in Sub-Saharan Africa for my graduate studies and spent a lot of time in Malawi and South Africa, talking to politicians there about how they felt about climate change, and specifically climate change mitigation policy. And half the time that I asked them, they just looked at me as if I was crazy, and would explicitly say, “You must be crazy if you think that we have a time horizon that gives us 20 years to worry about how our people are doing 20 years from now, when they can't feed themselves, and don't have running water, and don't have electricity right now. We're working on the day-to-day things; the long-term perspective just gets thrown out the window.” I think to a lesser degree that operates in every democratic polity.

Fran: This gets back to that question that we were talking about earlier: are extreme events kind of fundamentally different in motivating action to reduce emissions? And this is exactly the reason why I'm not convinced that's the case: when you have repeated extreme events, yes, there's a lot of focus on rebuilding or restoring or kind of recovering from those events — potentially to the detriment of longer-term, less immediate action that would affect the long-term probability of getting those events in the future, which is reducing emissions.

And so I think it’s a very complex, causal argument to make in the face of a hurricane or a catastrophe that you need to be reducing emissions to address that, right, and that’s why I’m not convinced that just getting more and more disasters is going to automatically lead to more action on climate change. I think it’s actually almost this kind of orthogonal process that generates the political will to do something about climate change.

Having these disasters and operating in this very resource-constrained world — that's a world in which action on climate change might be less likely, right? Doing things that are quite costly involves a lot of political will and political leadership, and doing that in an environment where people are feeling vulnerable and feeling kind of exposed to natural disasters I think is actually going to be more difficult.

Nick: Yeah. So that’s an excellent point, Fran. I think you could see both things operating, which is I think you could see that people aren’t necessarily adapting their expectations to giant wildfires every single summer, that they realize that something is off and weird about that, but that they just simply can’t direct that attention to doing something about climate change because literally their house just burnt down. So they’re not going to be out in the streets lobbying their politicians as directly because they have more things to worry about. That is troubling to me, too.

Ariel: So that, I think, is a super, super important point, and now I have something new to worry about. It makes sense that the local communities that are being directly impacted by these horrific events have to deal with what’s just happened to them, but do we see an increase in external communities looking at what’s happening and saying, “Oh, we’ve got to stop this, and because we weren’t directly impacted we actually can do something?”

Nick: Anecdotally, somewhat yes. I mean, for example, if you look at the last couple of summers and the wildfire season, when there are big wildfire outbreaks the news media does a better than average job at linking that extreme weather to climate change, and starting to talk about climate change.

So if it is the case that people consume that news media and are now thinking about climate change more, that is good. And I think actually from some of the more recent surveys we’ve actually seen an uptick in awareness about climate change, worry about climate change, and willingness to list it as a top priority. So there are some positive trends on that front.

The bigger question is still an empirical one, though, which is what happens when you have 10 years of wildfires every summer. Maybe people are now not talking about it as much as they did in the very beginning.

Ariel: So I have two final questions for both of you. The first is: is there something that you think is really important for people to know or understand that we didn’t touch on?

Nick: I would say this, and this is maybe more extreme than Fran would say, but we are in really big trouble. We are in really, really big trouble. We are emitting more and faster than we were previously. We are probably dramatically underestimating the social cost of carbon because of all the reasons that we noted here and for many more, and the one thing that I kind of always tell people is don’t be lulled by the relatively banal feeling of your sleep getting disrupted, because if your sleep is disrupted it’s because everything is being disrupted, and it’s going to get worse.

We’ve not seen even a small fraction of  the likely total cost of climate change, and so yeah, be worried, and ideally use that worry in a productive way to lobby your politicians to do something about it.

Fran: I would say we talked about the social cost of carbon and the way it’s used, and I think sometimes it does get criticized because we know there’s a lot of things that it doesn’t capture, like what Nick’s been talking about, but I also know that we’re very confident that it’s greater than zero at this point, and substantially greater than zero, right? So the question of, should it be 40 dollars a ton, or should it be 100 dollars a ton, or should it be higher than that, is frankly quite irrelevant when right now we’re really not putting any price on carbon, we’re not doing any kind of ambitious climate policy.

Sometimes I think people get bogged down in these arguments of, is it bad, or is it catastrophic, and frankly either way we should be doing something to reduce our emissions, and they shouldn’t be going up, they should be going down, and we should be doing more than we’re doing right now. And arguing about where we end that process, or when we end that process of reducing our emissions is really not a relevant discussion to be having right now because right now everyone can agree that we need to start the process.

And so I think not getting too hung up on should it be two degrees, should it be 1.5, but just really focused on let’s do more, and let’s do it now, and let’s start that, and see where that gets us, and once we start that process and can begin to learn from it, that’s going to take us a long way to being where we want to be. I think these questions of, “Why aren’t we doing more than we’re doing now?” are the most important and some of the most interesting around climate change right now.

Nick: Yeah. Let’s do everything we can to avoid four or five degrees Celsius, and we can quibble over 1.5 or two later. Totally agree.

Ariel: Okay. So I’m going to actually add a question. So we’ve got two more questions for real this time I think. What do we do? What do you suggest we do? What can a listener right now do to help?

Fran: Vote. Make climate change your priority when you’re thinking about candidates, when you’re engaged in the democratic process, and when you’re talking to your elected representative — reach out to them, and make sure they know that this is the priority for you. And I would also say talk to your friends and family, right? Like these scientists or economists talking about this, that’s not something that’s going to reach everyone, right, but reaching out to your network of people who value your opinion, or just talking about this, and making sure people realize this is a critical issue for our generation, and the decisions we take now are going to shape the future of the planet in very real ways, and collectively we do have agency to do something about it.

Nick: Yes. I second all of that. I think the key is that no one can convince your friends and family that climate change is a threat perhaps better than you, the listener, can. Certainly Fran and I are not going to be able to convince your friends, and that’s just the way that humans work. We trust those that we are close to and trust. So if we want to get a collective movement to start doing something about carbon, it’s going to have to happen via the political process, and it’s also just going to have to happen in our social networks, by actually going out there and talking to people about it. So let’s do that.

Ariel: All right. So final question, now that we’ve gone through all these awful things that are going to happen: what gives you hope?

Fran: If we think about a world that solves this problem, that is a world that has come together to work on a truly global problem. The reason why we’ll solve this problem is because we recognize that we value the future, that we value people living in other countries, people around the world, and that we value nature and nonhuman life on the planet, and that we’ve taken steps to incorporate those values into how we organize our life.

When we think about that, that is a very big ask, right? We shouldn’t underestimate just how difficult this is to do, but we should also recognize that it’s going to be a really amazing world to live in. It’s going to provide a kind of foundation for all kinds of cooperation and collective action I think on other issues to build a better world.

Recognizing that that’s what we’re working towards, these are the values that we want to reflect in our society, and that is a really positive place to be, and a place that is worth working towards — that’s what’s giving me hope.

Nick: That’s a beautiful answer, Fran. I agree with that. It would be a great world to live in. The thing that I would say is giving me hope is actually if I had looked forward in 2010 and said, “Okay, where do I think that renewables are going to be? Where do I think that the electrification of vehicles is going to be?” I would have guessed that we would not be anywhere close to where we are right now on those fronts.

We are making much more progress on getting certain aspects of the economy and our lives decarbonized than I thought we would have been, even without any real carbon policy on those fronts. So that’s pretty hopeful for me. I think that as long as we can continue that trend we won’t have everything go poorly, but I also hesitate to hinge too much of our fate on the hope that technological advances from the past will continue at the same rate into the future. At the end of the day we probably really do need some policy, and we need to get together and engage in collective action to try and solve this problem. I hope that we can.

Ariel: I hope that we can, too. So Nick and Fran, thank you both so much for joining us today.

Nick: Thanks for having me.

Fran: Thanks so much for the interesting conversation.

Ariel: Yeah. I enjoyed this, thank you.

As always, if you’ve been enjoying the show, please take a moment to like it, share it, and follow us no your preferred podcast platform.

 

FLI Podcast: Why Ban Lethal Autonomous Weapons?

Why are we so concerned about lethal autonomous weapons? Ariel spoke to four experts — one physician, one lawyer, and two human rights specialists — all of whom offered their most powerful arguments on why the world needs to ensure that algorithms are never allowed to make the decision to take a life. The episode was even recorded at the United Nations Convention on Conventional Weapons, where a ban on lethal autonomous weapons was under discussion.

Dr. Emilia Javorsky is a physician, scientist, and Founder of Scientists Against Inhumane Weapons; Bonnie Docherty is Associate Director of Armed Conflict and Civilian Protection at Harvard Law School’s Human Rights Clinic and Senior Researcher at Human Rights Watch; Ray Acheson is Director of The Disarmament Program of the Women’s International League for Peace and Freedom; and Rasha Abdul Rahim is Deputy Director of Amnesty Tech at Amnesty International.

Topics discussed in this episode include:

  • The role of the medical community in banning other WMDs
  • The importance of banning LAWS before they’re developed
  • Potential human bias in LAWS
  • Potential police use of LAWS against civilians
  • International humanitarian law and the law of war
  • Meaningful human control

Once you’ve listened to the podcast, we want to know what you think: What is the most convincing reason in favor of a ban on lethal autonomous weapons? We’ve listed quite a few arguments in favor of a ban, in no particular order, for you to consider:

  • If the AI community can’t even agree that algorithms should not be allowed to make the decisions to take a human life, then how can we find consensus on any of the other sticky ethical issues that AI raises?
  • If development of lethal AI weapons continues, then we will soon find ourselves in the midst of an AI arms race, which will lead to cheaper, deadlier, and more ubiquitous weapons. It’s much harder to ensure safety and legal standards in the middle of an arms race.
  • These weapons will be mass-produced, hacked, and fall onto the black market, where anyone will be able to access them.
  • These weapons will be easier to develop, access, and use, which could lead to a rise in destabilizing assassinations, ethnic cleansing, and greater global insecurity.
  • Taking humans further out of the loop will lower the barrier for entering into war.
  • Greater autonomy increases the likelihood that the weapons will be hacked, making it more difficult for military commanders to ensure control over their weapons.
  • Because of the low cost, these will be easy to mass-produce and stockpile, making AI weapons the newest form of Weapons of Mass Destruction.
  • Algorithms can target specific groups based on sensor data such as perceived age, gender, ethnicity, facial features, dress code, or even place of residence or worship.
  • Algorithms lack human morality and empathy, and therefore they cannot make humane context-based kill/don’t kill decisions.
  • By taking the human out of the loop, we fundamentally dehumanize warfare and obscure who is ultimately responsible and accountable for lethal force.
  • Many argue that these weapons are in violation of the Geneva Conventions, the Martens Clause, the International Covenant on Civil and Political Rights, etc. Given the disagreements about whether lethal autonomous weapons are covered by these pre-existing laws, a new ban would help clarify what are acceptable uses of AI with respect to lethal decisions — especially for the military — and what aren't.
  • It’s unclear who, if anyone, could be held accountable and/or responsible if a lethal autonomous weapon causes unnecessary and/or unexpected harm.
  • Significant technical challenges exist which most researchers anticipate will take quite a while to solve, including: how to program reasoning and judgement with respect to international humanitarian law, how to distinguish between civilians and combatants, how to understand and respond to complex and unanticipated situations on the battlefield, how to verify and validate lethal autonomous weapons, how to understand external political context in chaotic battlefield situations.
  • Once the weapons are released, contact with them may become difficult if people learn that there’s been a mistake.
  • By their very nature, we can expect that lethal autonomous weapons will behave unpredictably, at least in some circumstances.
  • They will likely be more error-prone than conventional weapons.
  • They will likely exacerbate current human biases, putting innocent civilians at greater risk of being accidentally targeted.
  • Current psychological research suggests that keeping a “human in the loop” may not be as effective as many hope, given human tendencies to be over-reliant on machines, especially in emergency situations.
  • In addition to military uses, lethal autonomous weapons will likely be used for policing and border control, again putting innocent civilians at greater risk of being targeted.

So which of these arguments resonates most with you? Or do you have other reasons for feeling concern about lethal autonomous weapons? We want to know what you think! Please leave a response in the comments section below.

Publications discussed in this episode include:

For more information, visit autonomousweapons.org.

FLI Podcast (Part 2): Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark

In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in arms control, working with the US government to renounce the development and possession of biological weapons and halt the use of Agent Orange and other herbicides in Vietnam. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.   

Part Two focuses on three major incidents in the history of biological weapons: the 1979 anthrax outbreak in Russia, the use of Agent Orange and other herbicides in Vietnam, and the Yellow Rain controversy in the early 80s. Dr. Meselson led the investigations into all three and solved some perplexing scientific mysteries along the way.

Topics discussed in this episode include:

  • The value of verification, regardless of the challenges
  • The 1979 Sverdlovsk anthrax outbreak
  • The use of “rainbow” herbicides during the Vietnam War, including Agent Orange
  • The Yellow Rain Controversy

Publications and resources discussed in this episode include:

  • “The Sverdlovsk Anthrax Outbreak of 1979,” Matthew Meselson, Jeanne Guillemin, Martin Hugh-Jones, Alexander Langmuir, Ilona Popova, Alexis Shelokov, and Olga Yampolskaya, Science, 18 November 1994, Vol. 266, pp. 1202-1208.
  • “Preliminary Report: Herbicide Assessment Commission of the American Association for the Advancement of Science,” Matthew Meselson, A. H. Westing, J. D. Constable, and Robert E. Cook, 30 December 1970, private circulation, 8 pp. Reprinted in the Congressional Record, U.S. Senate, Vol. 118, part 6, 3 March 1972, pp. 6806-6807.
  • “Background Material Relevant to Presentations at the 1970 Annual Meeting of the AAAS,” Herbicide Assessment Commission of the AAAS, with A. H. Westing and J. D. Constable, December 1970, private circulation, 48 pp. Reprinted in the Congressional Record, U.S. Senate, Vol. 118, part 6, 3 March 1972, pp. 6807-6813.
  • “The Yellow Rain Affair: Lessons from a Discredited Allegation,” with Julian Perry Robinson, in Terrorism, War, or Disease?, eds. A. L. Clunan, P. R. Lavoy, and S. B. Martin, Stanford University Press, Stanford, California, 2008, pp. 72-96.
  • “Yellow Rain,” Thomas D. Seeley, Joan W. Nowicke, Matthew Meselson, Jeanne Guillemin, and Pongthep Akratanakul, Scientific American, September 1985, Vol. 253, pp. 128-137.

Click here for Part 1: From DNA to Banning Biological Weapons with Matthew Meselson and Max Tegmark

Four-ship formation on a defoliation spray run. (U.S. Air Force photo)

Ariel: Hi everyone. Ariel Conn here with the Future of Life Institute. And I would like to welcome you to part two of our two-part FLI podcast with special guest Matthew Meselson and special guest/co-host Max Tegmark. You don’t need to have listened to the first episode to follow along with this one, but I do recommend listening to the other episode, as you’ll get to learn about Matthew’s experiment with Franklin Stahl that helped prove Watson and Crick’s theory of DNA and the work he did that directly led to US support for a biological weapons ban. In that episode, Matthew and Max also talk about the value of experiment and theory in science, as well as how to get some of the world’s worst weapons banned. But now, let’s get on with this episode and hear more about some of the verification work that Matthew did over the years to help determine if biological weapons were being used or developed illegally, and the work he did that led to the prohibition of Agent Orange.

Matthew, I’d like to ask about a couple of projects that you were involved in that I think are really closely connected to issues of verification, and those are the Yellow Rain Affair and the Russian Anthrax incident. Could you talk a little bit about what each of those was?

Matthew: Okay, well in 1979, there was a big epidemic of anthrax in the Soviet city of Sverdlovsk, just east of the Ural mountains, in the beginning of Siberia. We learned about this epidemic not immediately but eventually, through refugees and other sources, and the question was, “What caused it?” Anthrax can occur naturally. It’s commonly a disease of bovids, that is cows or sheep, and when they die of anthrax, the carcass is loaded with the anthrax bacteria, and when the bacteria see oxygen, they become tough spores, which can last in the earth for a long, long time. And then if another bovid comes along and manages to eat something that’s got those spores, he might get anthrax and die, and the meat from these animals who died of anthrax, if eaten, can cause gastrointestinal anthrax, and that can be lethal. So, that’s one form of anthrax. You get it by eating.

Now, another form of anthrax is inhalation anthrax. In this country, there were a few cases of men who worked in leather factories with leather that had come from anthrax-affected animals, usually imported, which had live anthrax spores on the leather that got into the air of the shops where people were working with the leather. Men would breathe this contaminated air and the infection in that case was through the lungs.

The question here was, what kind of anthrax was this: inhalational or gastrointestinal? And because I was by this time known as an expert on biological weapons, the man who was dealing with this issue at the CIA in Langley, Virginia — a wonderful man named Julian Hoptman, a microbiologist by training — asked me if I’d come down and work on this problem at the CIA. He had two daughters who were away at college, and so he had a spare bedroom, so I actually lived with Julian and his wife. And in this way, I was able to talk to Julian night and day, both at the breakfast and dinner table, but also in the office. Of course, we didn’t talk about classified things except in the office.

Now, we knew from the textbooks that the incubation period for inhalation anthrax was thought to be four, five, six, seven days: if you hadn’t come down with it within four or five days of inhaling it, you probably wouldn’t. Well, we knew from classified sources that people were dying of this anthrax over a period of six weeks, April all the way into the middle of May 1979. So, if the incubation period was really that short, you couldn’t explain how that would be airborne, because a cloud goes by right away. Once it’s gone, you can’t inhale it anymore. So that made the conclusion that it was airborne difficult to reach. You could still say, well, maybe it got stirred up again by people cleaning up the site, maybe the incubation period is longer than we thought, but there was a problem there.

And so the conclusion of our working group was that it was probable that it was airborne. In the CIA, at that time at least, in a conclusion that goes forward to the president, you couldn’t just say, “Well maybe, sort of like, kind of like, maybe if …” Words like that just didn’t work, because the poor president couldn’t make heads or tails of it. Every conclusion had to be called “possible,” “probable,” or “confirmed.” Three levels of confidence.

So, the conclusion here was that it was probable that it was inhalation, and not ingestion. The Soviets said that it was bad meat, but I wasn’t convinced, mainly because of this incubation period thing. So I decided that the best thing to do would be to go and look. Then you might find out what it really was. Maybe by examining the survivors or maybe by talking to people — just somehow, if you got over there, with some kind of good luck, you could figure out what it was. I had no very clear idea, but when I would meet any high level Soviet, I’d say, “Could I come over there and bring some colleagues and we would try to investigate?”

The first time that happened was with a very high-level Soviet who I met in Geneva, Switzerland. He was a member of what’s called the Military Industrial Commission in the Soviet Union. They decided on all technical issues involving the military, and that would have included their biological weapons establishments, and we knew that they had a big biological laboratory in the city of Sverdlovsk, there was no doubt about that. So, I told them, “I want to go in and inspect. I’ll bring some friends. We’d like to look.” And he said, “No problem. Write to me.”

So, I wrote to him, and I also went to the CIA and said, “Look, I got to have a map because maybe they’d let me go there and take me to the wrong place, and I wouldn’t know it’s the wrong place, and I wouldn’t learn anything.” So, the CIA gave me a map — which turned out to be wrong, by the way — but then I got a letter back from this gentleman saying no, actually they couldn’t let us go because of the shooting down of the Korean jet #007, if any of you remember that. A Russian fighter plane shot down a Korean jet — a lot of passengers on it and they all got killed. Relations were tense. So, that didn’t happen.

Then the second time, an American and the Russian Minister of Health got a Nobel prize. The winner over there was the minister of health named Chazov, and the fellow over here was Bernie Lown in our medical school, who I knew. So, I asked Bernie to take a letter when he went next time to see his friend Chazov in Moscow, to ask him if he could please arrange that I could take a team to Sverdlovsk, to go investigate on site. And when Bernie came back from Moscow, I asked him and he said, “Yeah. Chazov says it’s okay, you can go.” So, I sent a telex — we didn’t have email — to Chazov saying, “Here’s the team. We want to go. When can we go?” So, we got back a telex saying, “Well, actually, I’ve sent my right-hand guy who’s in charge of international relations to Sverdlovsk, and he looked around, and there’s really no evidence left. You’d be wasting your time,” which means no, right? So, I telexed back and said, “Well, scientists always make friends and something good always comes from that. We’d like to go to Sverdlovsk anyway,” and I never heard back. And then, the Soviet Union collapses, and we have Yeltsin now, and it’s the Russian Republic.

It turns out that a group of — I guess at that time they were still Soviets — Soviet biologists came to visit our Fort Detrick, and they were the guests of our Academy of Sciences. So, there was a welcoming party, and I was on the welcoming party, and I was assigned to take care of one particular one, a man named Mr. Yablokov. So, we got to know each other a little bit, and at that time we went to eat crabs in a Baltimore restaurant, and I told him I was very interested in this epidemic in Sverdlovsk, and I guess he took note of that. He went back to Russia and that was that. Later, I read in a journal that the CIA produced (abstracts from the Russian press) that Yeltsin had ordered his minister, or his assistant for Environment and Health, to investigate the anthrax epidemic back in 1979, and the guy who he appointed to do this investigation for him was my Mr. Yablokov, who I knew.

So, I sent a telex to Mr. Yablokov saying, “I see that President Yeltsin has asked for you to look into this old epidemic and decide what really happened, and that’s great, I’m glad he did that, and I’d like to come and help you. Could I come and help you?” So, I got back a telex saying, “Well, it’s a long time ago. You can’t bring skeletons out of the closet, and anyway, you’d have to know somebody there.” Basically it was a letter that said no. But then my friend Alex Rich of Cambridge Massachusetts, a great molecular biologist and X-ray crystallographer at MIT, had a party for a visiting Russian. Who is the visiting Russian but a guy named Sverdlov, like Sverdlovsk, and he’s staying with Alex. And Alex’s wife came over to me and said, “Well, he’s a very nice guy. He’d been staying with us for several days. I make him breakfast and lunch. I make the bed. Maybe you could take him for a while.”

So we took him into our house for a while, and I told him that I had been given a turn down by Mr. Yablokov, and this guy whose name is Sverdlov, which is an immense coincidence, said, “Oh, I know Yablokov very well. He’s a pal. I’ll talk to him. I’ll get it fixed so you can go.” Now, I get a letter. In this letter, handwritten by Mr. Yablokov, he said, “Of course, you can go, but you’ve got to know somebody there to invite you.” Oh, who would I know there?

Well, there had been an American solid-state physicist named Don Ellis who was there on a United States National Academy of Sciences–Russian Academy of Sciences exchange agreement, doing solid-state physics with a Russian solid-state physicist there in Sverdlovsk. So, I called Don Ellis and I asked him, “That guy who you cooperated with in Sverdlovsk — whose name was Gubanov — I need someone to invite me to go to Sverdlovsk, and you probably still maintain contact with him over there, and you could ask him to invite me.” And Don said, “I don’t have to do that. He’s visiting me today. I’ll just hand him the telephone.”

So, Mr. Gubanov comes on the telephone and he says, “Of course I’ll invite you, my wife and I have always been interested in that epidemic.” So, a few days later, I get a telex from the rector of the university there in Sverdlovsk, who was a mathematical physicist. And he says, “The city is yours. Come on. We’ll give you every assistance you want.” So we went, and I formed a little team, which included a pathologist, thinking maybe we’ll get ahold of some information from autopsies that could decide whether it was inhalation or gastrointestinal. And we need someone who speaks Russian; I had a friend who was a virologist who spoke Russian. And we need a guy who knows a lot about anthrax, and veterinarians know a lot about anthrax, so I got a veterinarian. And we need an anthropologist who knows a lot about how to work with people, and that happened to be my wife, Jeanne Guillemin.

So, we all go over there, we were assigned a solid-state physicist, a guy named Borisov, to take us everywhere. He knew how to fix everything. Cars that wouldn’t work, and also the KGB. He was a genius, and became a good friend. It turns out that he had a girlfriend, and she, by this time, had been elected to be a member of the Duma. In other words, she’s a congresswoman. She’s from Sverdlovsk. She had been a friend of Yeltsin. She had written Yeltsin a letter, which my friend Borisov knew about, and I have a photocopy of the letter. What it says is, “Dear Boris Nikolayevich” (that’s Yeltsin), “My constituents here at Sverdlovsk want to know if that anthrax epidemic was caused by a government activity or not. Because if it was, the families of those who died — they’re entitled to double pension money, just like soldiers killed in war.” So, Yeltsin writes back, “We will look into it.” And that’s why my friend Yablokov got asked to look into it. It was decided eventually that it was the result of government activity — by Yeltsin, he decided that — and so he had to have a list of the people who were going to get the extra pensions. Because otherwise everybody would say, “I’d like to have an extra pension.” So there had to be a list.

So she had this list with 68 names of the people who had died of anthrax during this time period in 1979. The list also had the address where they lived. So, now my wife, Jeanne Guillemin, Professor of Anthropology at Boston College, goes door-to-door — with two Russian women who were professors at the university and who knew English so they could communicate with Jeanne — knocks on the doors: “We would like to talk to you for a little while. We’re studying health, we’re studying the anthrax epidemic of 1979. We’re from the university.”

Everybody let them in except one lady who said she wasn’t dressed, so she couldn’t let anybody in. So in all the other cases, they did an interview and there were lots of questions. Did the person who died have TB? Was that person a smoker? One of the questions was where did that person work, and did they work in the day or the night? We asked that question because we wanted to make a map. If it had been inhalation anthrax, it had to be windborne, and depending on the wind, it might have been blown in a straight line if the wind was of a more or less unchanging direction.

If, on the other hand, it was gastrointestinal, people get bad meat from black market sellers all over the place, and the map of where they were wouldn’t show anything important; they’d just be all over the place. So, we were able to make a map when we got back home. We went back there a second time to get more interviews done, and Jeanne went back a third time to get even more interviews done. So, finally we had interviews with families of nearly all of those 68 people, and so we had 68 map locations: where they lived, where they worked, and whether it was day or night. Nearly all of them were daytime workers.

When we plotted where they lived, they lived all over the southern part of the city of Sverdlovsk. When we plotted where they likely would have been in the daytime, they all fell into one narrow zone with one point at the military biological lab. The lab was inside the city. The other point was at the city limit: the last case was at the edge of the city limit, the southern part. We also had meteorological information, which I had brought with me from the United States. We knew the wind direction every three hours, and there was only one day when the wind was constantly blowing in the same direction, and that same direction was exactly the direction along which the people who died of anthrax lived.

Well, bad meat does not blow around in straight lines. Clouds of anthrax spores do. It was rigorous: We could conclude from this, with no doubt whatsoever, that it had been airborne, and we published this in Science magazine. It was really a classic of epidemiology; you couldn’t ask for anything better. Also, the autopsy records were inspected by the pathologist on our trip, and he concluded from the autopsy specimens that it was inhalation. So, there was that evidence, too, and that was published in the PNAS. So, that really ended the mystery. The Soviet explanation was just wrong, and the CIA conclusion, which had only been rated probable, was now confirmed.

Max: Amazing detective story.

Matthew: I liked going out in the field, using whatever science I knew to try and deal with questions of importance to arms control, especially chemical and biological weapons arms control. And that happened to me on three occasions, one I just told you. There were two others.

Ariel: So, actually real quick before you get into that. I just want to mention that we will share or link to that paper and the map. Because I’ve seen the map that shows that straight line, and it is really amazing, thank you.

Matthew: Oh good.

Max: I think at the meta level this is also a wonderful example of what you mentioned earlier there, Matthew, about verification. It’s very hard to hide big programs because it’s so easy for some little thing to go wrong or not as planned and then something like this comes out.

Matthew: Exactly. By the way, that’s why having a verification provision in the treaty is worth it even if you never inspect. Let’s say that the guys who are deciding whether or not to do something which is against the treaty, they’re in a room and they’re deciding whether or not to do it. Okay? Now it is prohibited by a treaty that provides for verification. Now they’re trying to make this decision and one guy says, “Let’s do it. They’ll never see it. They’ll never know it.” Another guy says, “Well, there is a provision for verification. They may ask for a challenge inspection.” So, even the remote possibility that, “We might get caught,” might be enough to make that meeting decide, “Let’s not do it.” If it’s not something that’s really essential, then there is a potential big price.

If, on the other hand, there’s not even a treaty that allows the possibility of a challenge inspection, if the guy says, “Well, they might find it,” the other guy is going to say, “How are they going to find it? There’s no provision for them going there. We can just say, if they say, ‘I want to go there,’ we say, ‘We don’t have a treaty for that. Let’s make a treaty, then we can go to your place, too.’” It makes a difference: Even a provision that’s never used is worth having. I’m not saying it’s perfection, but it’s worth having. Anyway, let’s go on to one of these other things. Where do you want me to go?

Ariel: I’d really love to talk about the Agent Orange work that you did. So, I guess if you could start with the Agent Orange research and the other rainbow herbicides research that you were involved in. And then I think it would be nice to follow that up with, sort of another type of verification example, of the Yellow Rain Affair.

Matthew: Okay. The American Association for the Advancement of Science, the biggest organization of science in the United States, became, as the Vietnam War was going on, more and more concerned that the spraying of herbicides in Vietnam might cause ecological or health harm. And so at successive national meetings, there were resolutions to have it looked into. And as a result of one of those resolutions, the AAAS asked a fellow named Fred Tschirley to look into it. Fred was at the Department of Agriculture, but he was one of the people who developed the military use of herbicides. He did a study, and he concluded that there was no great harm. Possibly to the mangrove forest, but even then they would regenerate.

But at the next annual meeting, there were more appeals from the membership, and now they wanted the AAAS to do its own investigation, and the compromise was that they’d do their own study to design an investigation, and they had to have someone to lead that. So, they asked a fellow named John Cantlon, who was provost of Michigan State University, would he do it, and he said yes. And after a couple of weeks, John Cantlon said, “I can’t do this. I’m being pestered by the left and the right and the opponents on all sides and it’s just, I can’t do it. It’s too political.”

So, then they asked me if I would do it. Well, I decided I’d do it. The reason was that I wanted to see the war. Here I’d been very interested in chemical and biological weapons; very interested in war, because that’s the place where chemical and biological weapons come into play. If you don’t know anything about war, you don’t know what you’re talking about. I taught a course at Harvard for over two years on war, but that wasn’t like being there. So, I said I’d do it.

I formed a little group to do it. A guy named Arthur Westing, who had actually worked with herbicides and who was a forester himself and had been in the army in Korea, and I think had a battlefield promotion to captain. Just the right combination of talents. Then we had a chemistry graduate student, a wonderful guy named Bob Baughman. So, to design a study, I decided I couldn’t do it sitting here in Cambridge, Massachusetts. I’d have to go to Vietnam and do a pilot study in order to design a real study. So, we went to Vietnam — by the way, via Paris, because I wanted to meet the Vietcong people, I wanted them to give me a little card we could carry in our boots that would say, if we were captured, “We’re innocent scientists, don’t imprison us.” And we did get such little cards that said that. We were never captured by the Vietcong, but we did have some little cards.

Anyway, we went to Vietnam and we found, to my surprise, that the military assistance command, that is the United States Military in Vietnam, very much wanted to help our investigation. They gave us our own helicopter. That is, they assigned a helicopter and a pilot to me. And anywhere we wanted to go, I’d just call a certain number the night before and then go to Tan Son Nhut Air Base, and there would be a helicopter waiting with a pilot instructed FAD — fly as directed.

So, one of the things we did was to fly over a valley on which herbicides had been sprayed to kill the rice. John Constable, the medical member of our team, and I did two flights of that so we could take a lot of pictures. And the man who had designed this mission, a chemical corps captain named Franz, had requested it and gotten permission through a series of review processes on the grounds that it was really an enemy crop production area, not an area of indigenous Montagnard people growing food for their own eating, but rather enemy soldiers growing it for themselves.

So we took a lot of pictures and as we flew, Colonel Franz said, “See down there, there are no houses. There’s no civilian population. It’s just military down there. Also, the rice is being grown on terraces on the hillsides. The Montagnard people don’t do that. They just grow it down in the valley. They don’t practice terracing. And also, the extent of the rice fields down there — that’s all brand new. Fields a few years ago were much, much smaller in area. So, that’s how we know that it’s an enemy crop production area.” And he was a very nice man, and we believed him. And then we got home, and we had our films developed.

Well, we had very good cameras and although you couldn’t see from the aircraft, you could certainly see in the film: The valley was loaded with little grass shacks with yellow roofs — meaning that they were built recently, because you have to replace the roofs every once in a while with straw and if it gets too old, it turns black, but if there’s yellow, it means that somebody is living in those. And there were hundreds and hundreds of them.

We got from the Food and Agriculture Organization in Rome how much rice you need to stay alive for one year, and what area in hectares of dry rice — because this isn’t paddy rice, it’s dry rice — you’d need to make that much rice, and we measured the area that was under cultivation from our photographs, and the area was just enough to support that entire population, if we assumed that there were five people who needed to be fed in every one of the houses that we counted.

Also, we could get the French aerial photography that had been done in the late 1940s, and it turns out that the rice fields had not expanded. They were exactly the same. So it wasn’t that the military had moved in and made bigger rice fields: they were the same. So, everything that Colonel Franz said was just wrong. I’m sure he believed it, but it was wrong.

So, we made great big color enlargements of our photographs — we took photographs all up and down this valley, 15 kilometers long — and we made one set for Ambassador Bunker; one copy for General Abrams — Creighton Abrams was the head of our military assistance command; and one set for Secretary of State Rogers; along with a letter saying that this one case that we saw may not be typical, but in this one case, this crop destruction program was achieving the opposite of what it intended. It was denying food to the civilian population and not to the enemy. It was completely mistaken. So, as a result of that, I think (though I have no proof, only the time connection), right after we’d sent the stuff in early November, Ambassador Bunker and General Abrams ordered a new review of the crop destruction program. Was it in response to our photographs and our letter? I don’t know, but I think it was.

The result of that review was a recommendation by Ambassador Bunker and General Abrams to stop the herbicide program immediately. They sent this recommendation back in a top secret telegram to Washington. Well, the top-secret telegram fell into the hands of the Washington Post, and they published it. Well, now here are the Ambassador and the General on the spot, saying to stop doing something in Vietnam. How on earth can anybody back in Washington gainsay them? Of course, President Nixon had to stop it right away. There’d be no grounds. How could he say, “Well, my guys here in Washington, in spite of what the people on the spot say, tell us we should continue this program.”

So that very day, he announced that the United States would stop all herbicide operations in Vietnam in a rapid and orderly manner. That very day happened to be the day that I, John Constable, and Art Westing were on the stage at the AAAS annual meeting in Chicago, reporting on our trip to Vietnam. And the president of the AAAS ran up to me to tell me this news, because it had just come in while I was talking, giving our report. So, that’s how it got stopped, and thanks to General Abrams.

By the way, the last day I was in Vietnam, General Abrams had just come back from Japan — he’d had a gallbladder operation, and he was still convalescing. We spent all morning talking with each other. And he asked me at one point, “What about the military utility of the herbicides?” And of course, I said I had no idea whether it had any or not. And he said, “Do you want to know what I think?” I said, “Yes, sir.” He said, “I think it’s shit.” I said, “Well, why are we doing it here?” He said, “You don’t understand anything about this war, young man. I do what I’m ordered to do from Washington. It’s Washington who tells me to use this stuff, and I have to use it, because if I didn’t have those 55-gallon drums of herbicides offloaded on the decks at Da Nang and Saigon, then they’d make walls. I couldn’t offload the stuff I need over those walls. So, I do let the chemical corps use this stuff.” He said, “Also, my son, who is a captain up in I Corps, agrees with me about that.”

I wrote something about this recently, which I sent to you, Ariel. I want to be sure my memory was right about the conversation with General Abrams — who, by the way, was a magnificent man. He is the man who broke through at the Battle of the Bulge in World War II. He’s the man about whom General Patton, the great tank general, said, “There’s only one tank officer greater than me, and it’s Abrams.”

Max: Is he the one after whom the Abrams tank is named?

Matthew: Yes, it was named after him. Yes. He had four sons; they all became generals, and I think three of them became four-stars. One of them who did become a four-star is still alive in Washington. He has a consulting company. I called him up and I said, “Am I right, is this what your dad thought and what you thought back then?” He said, “Hell, yes. It’s worse than that.” Anyway, that’s what stopped the herbicides. They may have stopped anyway; it was dwindling down, no question. Now, as for the question of whether dioxin and the herbicides have caused a lot of health effects, I just don’t know. There’s an immense literature about this, and it’s nothing I can say we ever studied. If I read all the literature, maybe I’d have an opinion.

I do know that dioxin is very poisonous, and there’s a prelude to this order from President Nixon to stop the use of all herbicides. That’s what caused the United States to stop the use of Agent Orange specifically. That happened first, before I went to Vietnam. That happened for a funny reason. A Harvard student, a Vietnamese boy, came to my office one day with a stack of newspapers from Saigon in Vietnamese. I couldn’t read them, of course, but they all had pictures of deformed babies, and this student claimed that this was because of Agent Orange, that the newspaper said it was because of Agent Orange.

Well, deformed babies are born all the time and I appreciated this coming from him, but there was nothing I could do about it. But then I got something from a graduate student here — Bill Haseltine, who has since become a very wealthy man. He had a girlfriend who was working for Ralph Nader one summer, and she somehow got a purloined copy of a study that had been ordered by the NIH on the possible teratogenic, mutagenic, and carcinogenic effects of common herbicides, pesticides, and fungicides.

This company, called the Bionetics company, had this huge contract to test all these different compounds, and they concluded from this that there was only one of these chemicals that did anything that might be dangerous for people. That was 2,4,5-T: 2,4,5-trichlorophenoxyacetic acid. Well, that’s what Agent Orange is made out of. So, I had this report, not yet released to the public, saying that this could cause birth defects in humans if it did the same thing in us as it did in guinea pigs and mice. I thought the White House had better know about this. That’s pretty explosive: claims in the newspapers in Saigon and scientific suggestions that this stuff might cause birth defects.

So, I decided to go down to Washington and see President Nixon’s science advisor. That was Lee DuBridge, physicist. Lee DuBridge had been the president of Caltech when I was a graduate student there and so he knew me, and I knew him. So, I went down to Washington with some friends, and I think one of the friends was Arthur Galston from Yale. He was a scientist who worked on herbicides, not on the phenoxyacetic herbicides but other herbicides. So we went down to see the President’s science advisor, and I showed them these newspapers and showed him the Bionetics report. He hadn’t seen it, it was at too low a level of government for him to see it and it had not yet been released to the public. Then he did something amazing, Lee DuBridge: He picked up the phone and he called David Packard, who was the number two at the Defense Department. Right then and there, without consulting anybody else, without asking the permission of the President, they canceled Agent Orange.

Max: Wow.

Matthew: That was the end of Agent Orange. Now, not exactly the end. I got a phone call from Lee DuBridge a couple of days later when I was back at Harvard. He says, “Matt, the Dow people have come to me. It’s not Agent Orange itself, it’s an impurity in Agent Orange called dioxin. They know that dioxin is very toxic, and the Agent Orange that they make has very little dioxin in it, because they know it’s bad and they make the stuff at low temperature, where dioxin, a by-product, is made in only a very small amount. These other companies, like Diamond Shamrock and Monsanto, who make Agent Orange for the military: it must be their Agent Orange. It’s not our Agent Orange.”

So, in other words, the question was: if we just use the Dow Agent Orange, maybe that’s safe. But does the Dow Agent Orange cause defects in mice? So, a whole new series of experiments was done with Agent Orange containing much less dioxin in it. It still made birth defects. And since it still made birth defects in one species of rodent, you could hardly say, “Well, it’s okay then for humans.” So, that really locked it, closed it down, and then even the Department of Agriculture prohibited its use in the United States, except on land where it would be unlikely to get into the human food chain. So, that ended the use of Agent Orange.

That had happened already, before we went to Vietnam. They were then using only Agent White and Agent Blue, two other herbicides, because Agent Orange had been knocked out ahead of time. But that was the end of the whole herbicide program. It was two things: on the one hand, the dioxin concern, which stopped Agent Orange; and then the decision of President Nixon, after Bunker and Abrams had said that militarily, “It’s no use, we want to get it stopped, it’s doing more harm than good. It’s getting the civilian population against us.”

Max: One reaction I have to these fascinating stories is how amazing it is that back in those days politicians really trusted scientists. You could go down to Washington, and there would be a science advisor. You know, we didn’t even have a presidential science advisor for a while during this administration. Do you feel that the climate has changed somehow in the way politicians view scientists?

Matthew: Well, I don’t have a big broad view of the whole thing. I just get the impression, like you do, that there are more politicians who don’t pay attention to science than there used to be. There are still some, but not as many, and not in the White House.

Max: I would say we shouldn’t just point fingers at any particular administration; I think there has been a general downward trend in people’s respect for scientists overall. If you go back to when you were born, Matthew, and when I was born, I think people generally thought a lot more highly of scientists as contributing very valuable things to society, and they were very interested in them. If you ask the average person today how many famous movie stars they can name, or how many billionaires they can name, versus how many Nobel laureates they can name, the answer is going to be kind of different from the way it was a long time ago. It’s very interesting to think about what we can do to help people appreciate that the things they do care about, like living longer and having technology and so on, are things that they, to a large extent, owe to science. It isn’t just nerdy stuff that isn’t relevant to them.

Matthew: Well, I think movie stars were always at the top of the list. Way ahead of Nobel Prize winners and even of billionaires, but you’re certainly right.

Max: The second thing that really strikes me, which you did so wonderfully there, is that you never antagonized the politicians and the military, but rather went to them in a very constructive spirit and said look, here are the options. And based on the evidence, they came to your conclusion.

Matthew: That’s right. Except for the people who actually were doing these programs — that was different, you couldn’t very well tell them that. But for everybody else, yes, it was a help. You need to offer help, not hindrance.

The last thing was the Yellow Rain. That, too, involved the CIA. I was contacted by the CIA. They had become aware of reports from Southeast Asia, particularly from Thailand, of Hmong tribespeople who had been living in Laos coming out across the Mekong into Thailand and telling stories of being poisoned by stuff dropped from airplanes. Stuff that they called kemi or yellow rain.

At first, I thought maybe there was something to this; there are some nasty chemicals that are yellow. Not that lethal, but who knows, maybe there was exaggeration in their stories. One of them is called adamsite: it’s yellow, it’s an arsenical. So we decided we’d have a conference, because there was a mystery: What is this yellow rain? We had a conference. We invited people from the intelligence community, from the State Department. We invited anthropologists. We invited a bunch of people to ask, what is this yellow rain?

By this time, we knew that the samples that had been turned in contained pollen. One reason we knew that was that the British had samples of this yellow rain and they had shown that it contained pollen. They had looked at samples of the yellow rain brought in by the Hmong tribespeople and given to British officers — or maybe Americans, I don’t know — which found their way into the hands of British intelligence, who brought these samples back to Porton, where they were examined in various ways, including under the microscope. And the fellow who looked at them under the microscope happened to be a beekeeper. He knew just what pollen grains look like. And he knew that there was pollen, and then they sent this information to the United States, and we looked at the samples of yellow rain we had, and all these yellow samples contained pollen.

The question was, what is it? It’s got pollen in it. Maybe it’s very poisonous. The Montagnard people say it falls from the sky. It lands on leaves and on rocks. The spots were about two millimeters in diameter. It’s yellow or brown or red, different colors. What is it? So, we had this meeting in Cambridge, and one of the people there, Peter Ashton, is a great botanist; his specialty is the trees of Southeast Asia, and in particular the great dipterocarp trees, which are like the oaks in our part of the world. And he was interested in the fertilization of these dipterocarps, and the fertilization is done by bees. They collect pollen, though, like other bees.

And so the hypothesis we came to at the end of this day-long meeting was that maybe this stuff is poisonous, and the bees get poisoned by it because it falls on everything, including flowers that have pollen, and the bees get sick, and these yellow spots, they’re the vomit of the bees. These bees are smaller individually than the yellow spots, but maybe several bees get together and vomit on the same spot. Really a crazy idea. Nevertheless, it was the best idea we could come up with that explained why something could be toxic but have pollen in it. It could be little drops, associated with bees, and so on.

A couple of days later, both Peter Ashton, the botanist, and I noticed on the rear windshields of our cars yellow spots loaded with pollen. These were being dropped by bees; they were the natural droppings of bees, and that gave us the idea that maybe there was nothing poisonous in this stuff. Maybe it was the natural droppings of bees that the people in the villages thought were poisonous, but that weren’t. So, we decided we had better go to Thailand and find out what was happening.

So, a great bee biologist named Thomas Seeley, who’s now at Cornell — he was at Yale at that time — and I flew over to Thailand, and went up into the forest to see if bees defecate in showers. Now why did we do that? It’s because friends here said, “Matt, this can’t be the source of the yellow rain that the Hmong people complained about, because bees defecate one by one. They don’t go out in a great armada of bees and defecate all at once. Each bee goes out and defecates by itself. So, you can’t explain the showers — they’d only get tiny little driblets, and the Hmong people say they’re real showers, with lots of drops falling all at once.”

So, Tom Seeley and I went to Thailand, where they also had this kind of bee. So, we went there, and it turns out that they defecate all at once, unlike the bees here. Now they do defecate in showers here too, but they’re small showers. That’s because the number of bees in a nest here is rather small, but they do come out on the first warm days of spring, when there’s now pollen and nectar to be harvested, but those showers are kind of small. Besides that, the reason that there are showers at all even in New England is because the bees are synchronized by winter. Winter forces them to stay in their nest all winter long, during which they’re eating the stored-up pollen and getting very constipated. Now, when they fly out, they all fly out, they’re all constipated, and so you get a big shower. Not as big as the natives in Southeast Asia reported, but still a shower.

But in Southeast Asia, there are no seasons. Too near the equator. So, there’s nothing that would synchronize the defecation of bees, and that’s why we had to go to Thailand to see whether — even though there’s no winter to synchronize their defecation flights — they nevertheless do go out in huge numbers and all at once.

So, we’re in Thailand and we go up into Khao Yai National Park and find places where there are clearings in the forest where you can see up into the sky, where, if there were bees defecating, their feces would fall to the ground and not get caught up in the trees. And we put down big pieces of white paper, one meter square, anchored them with rocks, went walking around in the forest some more, and came back to look at our pieces of white paper every once in a while.

And then suddenly we saw a large number of spots on the paper, which meant that they had defecated all at once. They weren’t going around defecating one by one by one. There were great showers then. That’s still a question: Why don’t they go out one by one? And there are some good ideas why; I won’t drag you into that. It’s the convoy principle, to avoid getting picked off one by one by birds. That’s why people think that they go out in great armadas of constipated bees.

So, this gave us a new hypothesis. The so-called yellow rain is all a mistake. It’s just bees defecating, which people confuse and think is poisonous. Now, that still doesn’t prove that there wasn’t a poison. What was the evidence for poison? The evidence was that the Defense Intelligence Agency was sending samples of this yellow rain and also samples of human blood and other materials to a laboratory in Minnesota that knew how to analyze for the particular toxin that the Defense establishment thought was the poison. These are toxins called trichothecene mycotoxins; there’s a whole family of them. And this lab reported positive findings in the samples from Thailand but not in controls. So that seemed to be real proof that there was poison.

Well, this lab is a lab that also produced trichothecene mycotoxins, and the way they analyzed for them was by mass spectrometry, and everybody knows that if you’re going to do mass spectrometry, you’re going to be able to detect very, very, very tiny amounts of stuff, and so you shouldn’t both make large quantities and try to detect small quantities in the same room, because there’s the possibility of cross-contamination. I have an internal report from the Defense Intelligence Agency saying that that laboratory did have numerous false positives, and that probably all of their results were bedeviled by contamination from the trichothecenes that were in the lab, and also that there may have been some false reading of the mass spec data.

The long and short of it is that when other laboratories tried to find trichothecenes in their samples, they couldn’t: the US Army looked at at least 80 samples and found nothing. The British looked at at least 60 samples and found nothing. The Swedes looked at some number of samples, I don’t know the number, but found nothing. The French looked at a very few samples at their military analytical lab, and they found nothing. No lab could confirm it. There was one lab at Rutgers that thought it could confirm it, but I believe that they were suffering from contamination also, because they were a lab that worked with trichothecenes too.

So, the long and short of it is that the chemical evidence was no good, and finally the ambassador there, Ambassador Dean, decided that we should have another look, and that the military should send out a team that was properly equipped to check up on these stories, because up until then there was no dedicated team. There were teams that would come up briefly, listen to the refugees’ stories, collect samples, and go back. So Ambassador Dean requested a team that would stay there. So out comes a team from Washington, and it stays there longer than a year. Not just a week, but longer than a year, and they tried to find again the Hmong people in the refugee camps who had told these stories.

They couldn’t find a single one who would tell the same story twice. Either because they weren’t telling the same story twice, or because the interpreter interpreted the same story differently. So, whatever it was. Then they did something else. They tried to find people who had been in the same location at the same time as the claimed attacks, and those people never confirmed the attacks. They could never find any confirmation by interrogating people.

Then also, there was a CIA unit out there in that theater questioning captured prisoners of war and also people who had surrendered from the North Vietnamese Army: the people who were presumably behind the use of this toxic stuff. They interrogated hundreds of people, and one of these interrogators wrote an article in an intelligence agency journal, but an open journal, saying that he doubted that there was anything to the yellow rain: they had interrogated so many people, including chemical corps people from the North Vietnamese Army, that he couldn’t believe that there really was anything going on.

So we did some more investigating of various kinds, not just going to Thailand, but doing some analysis of various things. We looked at the samples — we found bee hairs in the samples. We found that the bee pollen in the samples of the alleged poison had no protein inside. You can stain pollen grains with something called Coomassie brilliant blue, and these pollen grains that were in the samples handed in by the refugees, that were given to us by the army and by the Canadians, by the Australians, they didn’t stain blue. Why not? Because if a pollen grain passes through the gut of a bee, the bee digests out all of the good protein that’s inside the pollen grain, as its nutrition.

So, you’d have to believe that the Soviets were collecting pollen not from plants, which would be hard enough, but pollen that had been regurgitated by bees. Well, that’s insane. You could never get enough to be a weapon by collecting bee vomit. So the whole story collapsed, and we’ve written a longer account of this. The United States government has never said we were right, but a few years ago it said that maybe it was wrong. So that’s at least something.

So in one case we were right, and the Soviets were wrong. In another case, we were wrong, and the Soviets were right. And in the third case, the herbicides, nobody was right or wrong. It was just that, in my view, it was useless militarily. I’ll tell you why.

If you spray the deep forest, hoping to find a military installation that you can now see because there are no more leaves, it takes four or five weeks for the leaves to fall off. So, you might as well drop little courtesy cards that say, “Dear enemy. We have now sprayed where you are with herbicide. In four or five weeks we will see you. You may choose to stay there, in which case, we will shoot you. Or, you have four or five weeks to move somewhere else, in which case, we won’t be able to find you. You decide.” Well, come on, what kind of a brain came up with that?

The other use was along roadsides, for convoys to be safer from snipers who might be hidden in the woods. You knock the leaves off the trees and you can see deeper into the woods. That’s right, but you have to realize the fundamental law of physics, which is that if you can see from A to B, B can see back to A, right? If there’s a clear light path from one point to another, there’s a clear light path in the other direction.

Now think about it. You are a sniper in the woods, and the leaves have not been sprayed. They grow right up to the edge of the forest and a convoy is coming down the road. You can stick your head out a little bit but not for very long. They have long-range weapons; when they’re right opposite you, they have huge firepower. If you’re anywhere nearby, you could get killed.

Now, if we get rid of all the leaves, now I can stand way back into the forest, and still sight you between the trunks. Now, that’s a different matter. A very slight move on my part determines how far up the road and down the road I can see. By just a slight movement of my eye and my gun, I can start putting you under fire a couple kilometers up the road — you won’t even know where it’s coming from. And I can keep you under fire a few kilometers down the road, when you pass me by. And you don’t know where I am anymore. I’m not right up by the roadside, because the leaves would otherwise keep me from seeing anything. I’m back in there somewhere. You can pour all kinds of fire, but you might not hit me.

So, for all these reasons, the leaves are not the enemy. The leaves are the enemy of the enemy, not of us. We’d like to get rid of the trunks — that’s different, we do that with bulldozers. But getting rid of the leaves leaves a kind of terrain which is advantageous to the enemy, not to us. So, on all these grounds, my hunch is that by embittering the civilian population — and after all, our whole strategy was to win their hearts and minds — by wiping out their crops with drifting herbicide, the herbicides helped us lose the war, not win it. We didn’t win it. But they helped us lose it.

But anyway, the herbicides got stopped in two steps. First Agent Orange, because of dioxin and the report from the Bionetics Company, and second because Abrams and Bunker said, “Stop it.” We now have a treaty, by the way, the ENMOD treaty, that makes it illegal under international law to do any kind of large-scale environmental modification as a weapon of war. So, that’s about everything I know.

And I should add: you might ask, how could they interpret something that’s common in that region as a poison? Well, in China, in 1970, I believe it was, the same sort of thing happened, but the situation was very different. People believed that the yellow spots falling from the sky were fallout from nuclear weapons tests being conducted by the Soviet Union, and that they were poisonous.

Well, the Chinese government asked a geologist from a nearby university to go investigate, and he figured out — completely out of touch with us, he had never heard of us, we had never heard of him — that it was bee feces that were being misinterpreted by the villagers as fallout from nuclear weapons test done by Russians.

It was exactly the same situation, except that in this case there was no reason whatsoever to believe that there was anything toxic there. And why was it that people didn’t recognize bee droppings for what they were? After all, there’s lots of bees out there. There are lots of bees here, too. And if in April, or near that part of spring, you look at the rear windshield of your car, if you’ve been out in the countryside or even here in midtown, you will see lots of these spots, and that’s what those spots are.

When I was trying to find out what kinds of pollen were in the samples of the yellow rain — the so-called yellow rain — that we had, I went down to Washington. The greatest United States expert on pollen grains and where they come from was at the Smithsonian Institution, a woman named Joan Nowicke. I told her that bees make spots like this all the time and she said, “Nonsense. I never see it.” I said, “Where do you park your car?” Well, there’s a big parking lot by the Smithsonian, we go down there, and her rear windshield was covered with these things. We see them all the time. They’re part of what we see but don’t take any account of.

Here at Harvard there’s a funny story about that. One of our best scientists here, Ed Wilson, studies ants — but also bees, though mostly ants — and he knows a lot about bees. Well, he has an office in the museum building, and lots of people come to visit the museum at Harvard, a great museum, and there’s a parking lot for them. Now, there was a graduate student who had, in those days, bee nests up on top of the museum building. He was doing some experiments with bees. But these bees defecate, of course. And some of the nice people who come to see the Harvard museum park their cars there, and some of them are very nice new cars, and they come back out from seeing the museum and there’s this stuff on their windshields. So, they go to find out who it is that they can blame for this, and maybe do something about it or pay to get it fixed or I don’t know what — anyway, to make a complaint. So, they come to Ed Wilson’s office.

Well, this graduate student is a graduate student of Ed Wilson’s, and of course, he knows that he’s got bee nests up there, and so Ed Wilson’s secretary knows what this stuff is. And the graduate student has the job of taking a rag with alcohol on it and going down and gently wiping the bee feces off the windshields of these distressed drivers, so there’s never any harm done. But now, when I had some of this stuff that I’d collected in Thailand, I took two people to lunch at the faculty club here at Harvard and brought some leaves with these spots on them under a plastic petri dish, just to see if they would know.

Now, one of these guys, Carroll Williams, knew all about insects, lots of things about insects, and Wilson did of course; and we’re having lunch and I bring out this petri dish with the leaves covered with yellow spots and ask them, two professors who are great experts on insects, what the stuff is, and they hadn’t the vaguest idea. They didn’t know. So, there can be things around us that we see every day, and even if we’re experts, we don’t know what they are. We don’t notice them. They’re just part of the environment. I’m sure that these Hmong people were getting shot at, they were getting napalmed, they were getting everything else, but they were not getting poisoned. At least not by bee feces. It was all a big mistake.

Max: Thank you so much, both for this fascinating conversation and all the amazing things you’d done to keep science a force for good in the world.

Ariel: Yes. This has been a really, really great and informative discussion, and I have loved learning about the work that you’ve done, Matthew. So, Matthew and Max, thank you so much for joining the podcast.

Max: Well, thank you.

Matthew: I enjoyed it. I’m sure I enjoyed it more than you did.

Ariel: No, this was great. It’s truly been an honor getting to talk with you.

If you’ve enjoyed this interview, let us know! Please like it, share it, or even leave a good review. I’ll be back again next month with more interviews with experts.  

 

FLI Podcast (Part 1): From DNA to Banning Biological Weapons With Matthew Meselson and Max Tegmark

In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in arms control, working with the US government to renounce the development and possession of biological weapons and halt the use of Agent Orange and other herbicides in Vietnam. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.  

In Part One, Dr. Meselson describes how he designed the experiment that helped prove Watson and Crick’s hypothesis, and he explains why this type of research is uniquely valuable to the scientific community. He also recounts his introduction to biological weapons, his reasons for opposing them, and the efforts he undertook to get them banned. Dr. Meselson was a key force behind the U.S. ratification of the Geneva Protocol, a 1925 treaty banning biological warfare, as well as the conception and implementation of the Biological Weapons Convention, the international treaty that bans biological and toxin weapons.

Topics discussed in this episode include:

  • Watson and Crick’s double helix hypothesis
  • The value of theoretical vs. experimental science
  • Biological weapons and the U.S. biological weapons program
  • The Biological Weapons Convention
  • The value of verification
  • Future considerations for biotechnology

Publications and resources discussed in this episode include:

Click here for Part 2: Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark

Ariel: Hi everyone and welcome to the FLI podcast. I’m your host, Ariel Conn with the Future of Life Institute, and I am super psyched to present a very special two-part podcast this month. Joining me as both a guest and something of a co-host is FLI president and MIT physicist Max Tegmark. And he’s joining me for these two episodes because we’re both very excited and honored to be speaking with Dr. Matthew Meselson. Matthew not only helped prove Watson and Crick’s hypothesis about the structure of DNA in the 1950s, but he was also instrumental in getting the U.S. to ratify the Geneva Protocol, in getting the U.S. to halt its Agent Orange Program, and in the creation of the Biological Weapons Convention. He is currently Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University where, among other things, he studies the role of sexual reproduction in evolution. Matthew and Max, thank you so much for joining us today.

Matthew: A pleasure.

Max: Pleasure.

Ariel: Matthew, you’ve done so much and I want to make sure we can cover everything, so let’s just dive right in. And maybe let’s start first with your work on DNA.

Matthew: Well, let’s start with my being a graduate student at Caltech.

Ariel: Okay.

Matthew: I had been a freshman at Caltech but I didn't like it. The teaching at that time was by rote except for one course, which was Linus Pauling's course, General Chemistry. I took that course and I did a little research project for Linus, but I decided, much later, to go to graduate school at the University of Chicago because there was a program there called Mathematical Biophysics. In those days, before the structure of DNA was known, what could a young man do who liked chemistry and physics but wanted to find out how you could put together the atoms of the periodic chart and make something that's alive?

There was a unit there called Mathematical Biophysics and the head of it was a man with a great big black beard, and that all seemed very attractive to a kid. So, I decided to go there but because of my freshman year at Caltech I got to know Linus’ daughter, Linda Pauling, and she invited me to a swimming pool party at their house in Sierra Madre. So, I’m in the water. It’s a beautiful sunny day in California, and the world’s greatest chemist comes out wearing a tie and a vest and looks down at me in the water like some kind of insect and says, “Well, Matt, what are you going to do next summer?”

I looked up and I said, “I’m going to the University of Chicago to Nicolas Rashevsky” that’s the man with the black beard. And Linus looked down at me and said, “But Matt, that’s a lot of baloney. Why don’t you come be my graduate student?” So, I looked up and said, “Okay.” That’s how I got into graduate school. I started out in X-ray crystallography, a project that Linus gave me to do. One day, Jacques Monod from the Institut Pasteur in Paris came to give a lecture at Caltech, and the question then was about the enzyme beta-galactosidase, a very important enzyme because studies of the induction of that enzyme led to the hypothesis of messenger RNA, also how genes are turned on and off. A very important protein used for those purposes.

The question of Monod’s lecture was: is this protein already lurking inside of cells in some inactive form? And when you add the chemical that makes it be produced, which is lactose (or something like lactose), you just put a little finishing touch on the protein that’s lurking inside the cells and this gives you the impression that the addition of lactose (or something like lactose) induces the appearance of the enzyme itself. Or the alternative was maybe the addition to the growing medium of lactose (or something like lactose) causes de novo production, a synthesis of the new protein, the enzyme beta-galactosidase. So, he had to choose between these two hypotheses. And he proposed an experiment for doing it — I won’t go into detail — which was absolutely horrible and would certainly not have worked, even though Jacques was a very great biologist.

I had been taking Linus’ course on the nature of the chemical bond, and one of the key take-home problems was: calculate the ratio of the strength of the Deuterium bond to the Hydrogen bond. I found out that you could do that in one line based on the — what’s called the quantum mechanical zero point energy. That impressed me so much that I got interested in what else Deuterium might have about it that would be interesting. Deuterium is heavy Hydrogen, with a neutron in the nucleus. So, I thought: what would happen if you exchange the water in something alive with Deuterium? And I read that there was a man who tried to do that with a mouse, but that didn’t work. The mouse died. Maybe because the water wasn’t pure, I don’t know.
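
A rough sketch of the kind of one-line argument being recalled here (the exact problem Pauling set is not spelled out, so treat this reconstruction as an assumption): in a harmonic-oscillator picture, a bond's effective strength is its well depth minus its quantum mechanical zero-point energy, and substituting deuterium for hydrogen roughly doubles the reduced mass, which lowers that zero-point energy and makes the deuterium bond slightly stronger.

```latex
% Assumed reconstruction, not the original problem statement.
% D_0 = dissociation energy from the ground vibrational state, D_e = well depth,
% omega = sqrt(k/mu); for a bond to a much heavier atom, mu_D is roughly 2 mu_H.
\[
  D_0 = D_e - \tfrac{1}{2}\hbar\omega, \qquad
  \omega_D \approx \frac{\omega_H}{\sqrt{2}}, \qquad
  \frac{D_0^{(\mathrm{D})}}{D_0^{(\mathrm{H})}}
  = \frac{D_e - \tfrac{1}{2}\hbar\omega_H/\sqrt{2}}{D_e - \tfrac{1}{2}\hbar\omega_H} > 1 .
\]
```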

But I had found a paper showing that you could grow bacteria, Escherichia coli, in pure heavy water with other nutrients added but no light water. So I knew you could do that, and I knew that you could probably make DNA, or also beta-galactosidase, a little heavier by having it be made out of heavy Hydrogen rather than light. There are some intermediate details here, but at some point I decided to go see the famous biophysicist Max Delbrück. I was in the Chemistry Department and Max was in the Biology Department.

And there was, at that time, a certain — I would say not a barrier, but a three-foot fence between these two departments. Chemists looked down on the biologists because they worked just with squiggly, gooey things. Then the physicists naturally looked down on the chemists and the mathematicians looked down on the physicists. At least that was the impression of us graduate students. So, I was somewhat fearful about going to meet Max Delbrück, and he also had a fearsome reputation for not tolerating any kind of nonsense. But finally I went to see him — he was a lovely man actually — and the first thing he said when I sat down was, "What do you think about these two new papers of Watson and Crick?" I said I'd never heard about them. Well, he jumped out of his chair and grabbed a heap of reprints that Jim Watson had sent to him, and threw them all at me, and yelled at me, and said, "Read these and don't come back until you read them."

Well, I heard the words “come back.” So I read the papers and I went back, and he explained to me that there was a problem with the hypothesis that Jim and Francis had for DNA replication. The idea of theirs was that the two strands come apart by unwinding the double helix. And if that meant that you had to unwind the entire parent double helix along its whole length, the viscous drag would have been impossible to deal with. You couldn’t drive it with any kind of reasonable biological motor.

So Max thought that you don't actually unwind the whole thing: You make breaks, and then with little pieces you can unwind those and then seal them up. This gives you a kind of dispersive replication in which the two daughter molecules each have some pieces of the parent molecule but no complete strand from the parent molecule. Well, when he told me that, I almost immediately — I think it was almost immediately — realized that density separation would be a way to test this, because the Watson and Crick hypothesis predicted the finding of half-heavy DNA after one generation: that is, one old strand together with one new strand forming one new duplex of DNA.

So I went to Linus Pauling and said, “I’d like to do that experiment,” and he gently said, “Finish your X-ray crystallography.” So, I didn’t do that experiment then. Instead I went to Woods Hole to be a teaching assistant in the Physiology course with Jim Watson. Jim had been living at Caltech that year in the faculty club, the Athenaeum, and so had I, so I had gotten to know Jim pretty well then. So there I was at Woods Hole, and I was not really a teaching assistant — I was actually doing an experiment that Jim wanted me to do — but I was meeting with the instructors.

One day we were on the second floor of the Lily building and Jim looked out the window and pointed down across the street. Sitting on the grass was a fellow, and Jim said, “That guy thinks he’s pretty smart. His name is Frank Stahl. Let’s give him a really tough experiment to do all by himself.” The Hershey–Chase Experiment. Well, I knew what that experiment was, and I didn’t think you could do it in one day, let alone just single-handedly. So I went downstairs to tell this poor Frank Stahl guy that they were going to give him a tough assignment.

I told him about that, and I asked him what he was doing. And he was doing something very interesting with bacteriophages. He asked me what I was doing, and I told him that I was thinking of finding out if DNA replicates semi-conservatively the way Watson and Crick said it should, by a method that would have something to do with density measurements in a centrifuge. I had no clear idea how to do that, just something by growing cells in heavy water and then switching them to light water and see what kind of DNA molecules they made in a density gradient in a centrifuge. And Frank made some good suggestions, and we decided to do this together at Caltech because he was coming to Caltech himself to be a postdoc that very next September.

Anyway, to make a long story short we made the experiment work, and we published it in 1958. That experiment said that DNA is made up of two subunits and when it replicates its subunits come apart, and each one becomes associated with a new subunit. Now anybody in his right mind would have said, "By subunit you really mean a single polynucleotide chain. Isn't that what you mean?" And we would have answered at that time, "Yes of course, that's what we mean, but we don't want to say that because our experiment doesn't say that. Our experiment says that some kind of subunits do that — the subunits almost certainly are the single polynucleotide chains — but we want to confine our written paper to only what can be deduced from the experiment itself, and not go one inch beyond that." It was only later that a fellow named John Cairns proved that the subunits were really the single polynucleotide chains of DNA.
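
As a small illustration of the density-label bookkeeping behind that result (a toy sketch, assuming parental strands are labeled heavy, "H", and every newly synthesized strand is light, "L"; this is not the published analysis), semi-conservative replication predicts that after one generation every duplex is half-heavy, while a conservative scheme would instead leave one fully heavy duplex among fully light ones:

```python
from collections import Counter

def semi_conservative(duplexes):
    # Each duplex separates; every daughter pairs one parental strand with a new light strand.
    return [(strand, "L") for duplex in duplexes for strand in duplex]

def conservative(duplexes):
    # Each duplex stays intact and templates one entirely new light duplex.
    return [d for duplex in duplexes for d in (duplex, ("L", "L"))]

def density_counts(duplexes):
    # Classify each duplex by how many heavy strands it contains.
    labels = {2: "heavy", 1: "hybrid", 0: "light"}
    return Counter(labels[duplex.count("H")] for duplex in duplexes)

semi, cons = [("H", "H")], [("H", "H")]  # start with fully heavy-labeled DNA
for generation in (1, 2, 3):
    semi, cons = semi_conservative(semi), conservative(cons)
    print(f"gen {generation}: semi-conservative {dict(density_counts(semi))}, "
          f"conservative {dict(density_counts(cons))}")
```

Running it gives all-hybrid DNA at generation one and a growing light fraction afterwards, which is the banding pattern the experiment looked for in the centrifuge.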

Ariel: So just to clarify, those were the strands of DNA that Watson and Crick had predicted, is that correct?

Matthew: Yes, it’s the result that they would have predicted, exactly so. We did a bunch of other experiments at Caltech, some on mutagenesis and other things, but this experiment, I would say, had a big psychological value. Maybe its psychological value was more than anything else.

In the year 1954, the year after Watson and Crick had published the structure of DNA and their speculations as to its biological meaning, we were at Woods Hole: Jim was there and Francis was there. I was there, as I mentioned. Rosalind Franklin was there. Sydney Brenner was there. It was very interesting because a good number of people there didn't believe their structure for DNA, or that it had anything to do with life and genes, on the grounds that it was too simple, and life had to be very complicated. And the other group of people thought it was too simple to be wrong.

So two views: everyone agreed that the structure that they had proposed was a simple one. Some people thought simplicity meant truth, and others thought that in biology, truth had to be complicated. What I'm trying to get at here is that after the structure was published it was just a hypothesis. It wasn't proven by any methods of, for example, crystallography, to show — it wasn't until much later that crystallography and a certain other kind of experiment actually proved that the Watson and Crick structure was right. At that time, it was a proposal based on model building.

So why was our experiment, the experiment showing the semi-conservative replication, of psychological value? It was because this was the first time you could actually see something. Namely, bands in an ultracentrifuge gradient. So, I think the effect of our experiment in 1958 was to give the DNA structure proposal of 1953 a certain reality. Jim, in his book The Double Helix, actually says that he was greatly relieved when that came along. I'm sure he believed the structure was right all the time, but this certainly was a big leap forward in convincing people.

Ariel: I’d like to pull Max into this just a little bit and then we’ll get back to your story. But I’m really interested in this idea of the psychological value of science. Sort of very, very broadly, do you think a lot of experiments actually come down to more psychological value, or was your experiment unique in that way? I thought that was just a really interesting idea. And I think it would be interesting to hear both of your thoughts on this.

Matthew: Max, where are you?

Max: Oh, I'm just fascinated by what you've been telling us about here. I think of course, the sciences — we see again and again that experiments without theory and theory without experiments, neither of them would be anywhere near as amazing as when you have both. Because when there's a really radical new idea put forth, half the time people at the time will dismiss it and say, "Oh, that's obviously wrong," or whatnot. And only when the experiment comes along do people start taking it seriously and vice versa. Sometimes a lot of theoretical ideas are just widely held as truths — like Aristotle's idea of how the laws of motion should be — until somebody much later decides to put them to the experimental test.

Matthew: That’s right. In fact, Sir Arthur Eddington is famous for two things. He was one of the first ones to find experimental proof of the accuracy of Einstein’s theory of general relativity, and the other thing for which Eddington was famous was having said, “No experiment should be believed until supported by theory.”

Max: Yeah. Theorists and experiments have had this love-hate relationship throughout the ages, which I think, in the end, has been a very fruitful relationship.

Matthew: Yeah. In cosmology the amazing thing to me is that the experiments now cost billions or at least hundreds of millions of dollars. And that this is one area, maybe the only one, in which politicians are willing to spend a lot of money for something that’s so beautiful and theoretical and far off and scientifically fundamental as cosmology.

Max: Yeah. Cosmology is also a reminder again of the importance of experiment, because the big questions there — such as where did everything come from, how big is our universe, and so on — those questions have been pondered by philosophers and deep thinkers for as long as people have walked the earth. But for most of those eons all you could do was speculate with your friends over some beer about this, and then you could go home, because there was no further progress to be made, right?

It was only more recently when experiments gave us humans better eyes: where with telescopes, et cetera, we could start to see things that our ancestors couldn’t see, and with this experimental knowledge actually start to answer a lot of these things. When I was a grad student, we argued about whether our universe was 10 billion years old or 20 billion years old. Now we argue about whether it’s 13.7 or 13.8 billion years old. You know why? Experiment.

Matthew: And now is a more exciting time than any previous time, I think, because we’re beginning to talk about things like multi-universes and entanglement, things that are just astonishing and really almost foreign to the way that we’re able to think — that there’s other universes, or that there could be what’s called quantum mechanical entanglement: that things influence each other very far apart, so far apart that light could not travel between them in any reasonable time, but by a completely weird process, which Einstein called spooky action at a distance. Anyway, this is an incredibly exciting time about which I know nothing except from podcasts and programs like this one.

Max: Thank you for bringing this up, because I think the examples you gave right now actually are really, really linked to these breakthroughs in biology that you were telling us about, because I think we’ve been on this intellectual journey all along where we humans kept underestimating our ability to understand stuff. So for the longest time, we didn’t even really try our best because we assumed it was futile. People used to think that the difference between a living bug and a dead bug was that there was some sort of secret sauce, and the living bug has some sort life essence or something that couldn’t be studied with the tools of science. And then by the time people started to take seriously that maybe actually the difference between that living bug and the dead bug is that the mechanism is just broken in one of them, and you can study the mechanism — then you get to these kind of experimental questions that you were talking about. I think in the same way, people had previously shied away from asking questions about, not just about life, but about the origin of our universe for example, as being always hopelessly beyond where we were ever even able to do anything about, so people didn’t ask what experiments they could make. They just gave up without even trying.

And then gradually I think people were emboldened by breakthroughs in, for example, biology, to say, “Hey, what about — let’s look at some of these other things where people said we’re hopeless, too?” Maybe even our universe obeys some laws that we can actually set out to study. So hopefully we’ll continue being emboldened, and stop being lazy, and actually work hard on asking all questions, and not just give up because we think they’re hopeless.

Matthew: I think the key to making this process begin was to abandon supernatural explanations of natural phenomena. So long as you believe in supernatural explanations, you can’t get anywhere, but as soon as you give them up and look around for some other kind of explanation, then you can begin to make progress. The amazing thing is that we, with our minds that evolved under conditions of hunter-gathering and even earlier than that — that these minds of ours are capable of doing such things as imagining general relativity or all of the other things.

So is there any limit to it? Is there going to be a point beyond which we will have to say we can’t really think about that, it’s too complicated? Yes, that will happen. But we will by then have built computers capable of thinking beyond. So in a sense, I think once supernatural thinking was given up, the path was open to essentially an infinity of discovery, possibly with the aid of advanced artificial intelligence later on, but still guided by humans. Or at least by a few humans.

Max: I think you hit the nail on the head there. Saying, “All this is supernatural,” has been used as an excuse to be lazy over and over again, even if you go further back, you know, hundreds of years ago. Many people looked at the moon, and they didn’t ask themselves why the moon doesn’t fall down like a normal rock because they said, “Oh, there’s something supernatural about it, earth stuff obeys earth laws, heaven stuff obeys heaven laws, which are just different. Heaven stuff doesn’t fall down.”

And then Newton came along and said, “Wait a minute. What if we just forget about the supernatural, and for a moment, explore the hypothesis that actually stuff up there in the sky obeys the same laws of physics as the stuff on earth? Then there’s got to be a different explanation for why the moon doesn’t fall down.” And that’s exactly how he was led to his law of gravitation, which revolutionized things of course. I think again and again, there was again the rejection of supernatural explanations that led people to work harder on understanding what life really is, and now we see some people falling into the same intellectual trap again and saying, “Oh yeah, sure. Maybe life is mechanistic but intelligence is somehow magical, or consciousness is somehow magical, so we shouldn’t study it.”

Now, artificial intelligence progress is really, again, driven by people willing to let go of that and say, "Hey, maybe intelligence is not supernatural. Maybe it's all about information processing, and maybe we can study what kind of information processing is intelligent and maybe even conscious as in having experiences." There's a lot to learn at this meta level from what you're saying there, Matthew, that if we resist excuses to not do the work by saying, "Oh, it's supernatural," or whatever, there's often real progress we can make.

Ariel: I really hate to do this because I think this is such a great discussion, but in the interest of time, we should probably get back to the stories at Harvard, and then you two can discuss some of these issues — or others — a little more later in the interview. So yeah, let's go back to Harvard.

Matthew: Okay, Harvard. So I came to Harvard. I thought I'd stay only five years. I thought it was kind of a duty for an American who'd grown up in the West to find out a little bit about what the East was like. But I never left. I've been here for 60 years. When I had been here for about three years, my friend Paul Doty, a chemist, no longer living, asked me if I'd like to go work at the United States Arms Control and Disarmament Agency in Washington DC. He was on the general advisory board of that agency, and it was embedded in the State Department building on 21st Street in Washington, but it was quite independent, it could report directly to the White House, and it was the first year of its existence, and it was trying to find out what it should be doing.

And one of the ways it tried to find out what it should be doing was to hire six academics to come just for the summer. One of them was me, one of them was Freeman Dyson, the physicist, and there were four others. When I got there, they said, “Okay, you’re going to work on theater nuclear weapons arms control,” something I knew less than zero about. But I tried, and I read things and so on, and very famous people came to brief me — like Llewellyn Thompson, our ambassador to Moscow, and Paul Nitze, the deputy secretary of defense.

I realized that I knew nothing about this and although scientists often have the arrogance to think that they can say something useful about nearly anything if they think about it, here was something that so many people had thought about. So I went to my boss and said, "Look, you're wasting your time and your money. I don't know anything about this. I'm not gonna produce anything useful. I'm a chemist and a biologist. Why don't you have me look into the arms control of that stuff?" He said, "Yeah, you could do whatever you want. We had a guy who did that, and he got very depressed and he killed himself. You could have his desk."

So I decided to look into chemical and biological weapons. In those days, the arms control agency was almost like a college. We all had to have very high security clearances, and that was because Congress was worried that maybe there would be some leakers amongst the people doing this suspicious work in arms control, and therefore, we had to be in possession of the highest level of security clearance. This had, in a way, the unexpected effect that you could talk to your neighbor about anything. Ordinarily, you might not have clearance for what your neighbor, in a different office, a different room, or at a different desk, was doing, but we had, all of us, such security clearances that we could all talk to each other about what we were doing. So it was like a college in that respect. It was a wonderful atmosphere.

Anyway, I decided I would just focus on biological weapons, because the two together would be too much for a summer. I went to the CIA, and a young man there showed me everything we knew about what other countries were doing with biological weapons, and the answer was we knew very little. Then I went to Fort Detrick to see what we were doing with biological weapons, and I was given a tour by a quite good immunologist who had been a faculty member at the Harvard Medical School, whose name was Leroy Fothergill. And we came to a big building, seven stories high. From a distance, you would think it had windows, but when you got up close, they were phony windows. And I asked Dr. Fothergill, "What do we do in there?" He said, "Well, we have a big fermentor in there and we make anthrax." I said, "Well, why do we do that?" He said, "Well, biological weapons are a lot cheaper than nuclear weapons. It will save us money."

I don’t think it took me very long, certainly by the time I got back to my office in the State Department Building, to realize that hey, we don’t want devastating weapons of mass destruction to be really cheap and save us money. We would like them to be so expensive that no one can afford them but us, or maybe no one at all. Because in the hands of other people, it would be like their having nuclear weapons. It’s ridiculous to want a weapon of mass destruction that’s ultra-cheap.

So that dawned on me. My office mate was Freeman Dyson, and I talked with him a little bit about it and he encouraged me greatly to pursue this. The more I thought about it, two things motivated me very strongly. Not just the illogic of it. The illogic of it motivated me only in the respect that it made me realize that any reasonable person could be convinced of this. In other words, it wouldn't be a hard job to get this thing stopped, because anybody who's thoughtful would see the argument against it. But there were two other aspects. One, it was my science: biology. It's hard to explain, but there was the thought that my science would be perverted in that way. But there's another aspect, and that is the difference between war and peace.

We've had wars and we've had peace. Germany fights Britain, Germany is aligned with Britain. Britain fights France, Britain is aligned with France. There's war. There's peace. There are things that go on during war that might advance knowledge a little bit, but certainly, it's during times of peace that the arts, the humanities, and science, too, make great progress. What if you couldn't tell the difference and all the time it's both war and peace? By that I mean, war up until now has been very special. There are rules of it. Basically, it starts with hitting a guy so hard that he's knocked out or killed. Then you pick up a stone and hit him with that. Then you make a spear and spear him with that. Then you make a bow and arrow and shoot him with that. Then later on, you make a gun and you shoot a bullet at him. Even a nuclear weapon: it's all like hitting with an arm, and furthermore, when it stops, it's stopped, and you know when it's going on. It makes sounds. It makes blood. It makes a bang.

Now biological weapons, they could be responsible for a kind of war that’s totally surreptitious. You don’t even know what’s happening, or you know it’s happening but it’s always happening. They’re trying to degrade your crops. They’re trying to degrade your genetics. They’re trying to introduce nasty insects to you. In other words, it doesn’t have a beginning and an end. There’s no armistice. Now today, there’s another kind of weapon. It has some of those attributes: It’s cyber warfare. It might over time erase the distinction between war and peace. Now that really would be a threat to the advance of civilization, a permanent science fiction-like, locked in, war-like situation, never ending. Biological weapons have that potentiality.

So for those two reasons — my science, and it could erase the distinction between war and peace, could even change what it means to be human. Maybe you could change what the other guy's like: change his genes somehow. Change his brain by maybe some complex signaling, who knows? Anyway, I felt a strong philosophical desire to get this thing stopped. Fortunately, I was at Harvard University, and so was Jack Kennedy. And although by that time he had been assassinated, he had left behind lots of people in the key cabinet offices who were Kennedy appointees. In particular, people who came from Harvard. So I could knock on almost any door.

So I went to Lyndon Johnson’s national security adviser, who had been Jack Kennedy’s national security adviser, and who had been the dean at Harvard who hired me, McGeorge Bundy, and said all these things I’ve just said. And he said, “Don’t worry, Matt, I’ll keep it out of the war plans.” I’ve never seen a war plan, but I guess if he said that, it was true. But that didn’t mean it wouldn’t keep on being developed.

Now here I should make an aside. Does that mean that the Army or the Navy or the Air Force wanted these things? No. We develop weapons in a kind of commercial way within the military. In this case, the Army Materiel Command works out all kinds of things: better artillery pieces, communication devices, and biological weapons. It doesn't belong to any service. Then, in this case with biological weapons, if the laboratories develop what they think is a good biological weapon, they still have to get one of the services — Air Force, Army, Navy, Marines — to say, "Okay, we'd like that. We'll buy some of that."

There was always a problem here. Nobody wanted these things. The Air Force didn't want them because you couldn't calculate how many planes you needed to kill a certain number of people. You couldn't calculate the human dose response, and beyond that you couldn't calculate the dose that would reach the humans. There were too many unknowns. The Army didn't like it, not only because they, too, wanted predictability, but because their soldiers are there, maybe getting infected by the same bugs. Maybe there are vaccines and all that, but it also seemed dishonorable. The Navy didn't want it because the one thing that ships have to be is clean. So oddly enough, biological weapons were kind of a stepchild.

Nevertheless, there was a dedicated group of people who really liked the idea and pushed hard on it. These were the people who were developing the biological weapons, and they had their friends in Congress, so they kept getting it funded. So I made a kind of a plan, like a protocol for doing an experiment, to get us to stop all this. How do you do that? Well, first you ask yourself: who can stop it? There’s only one person who can stop it. That’s the President of the United States.

The next thing is: what kind of advice is he going to get, because he may want to do something, but if all the advice he gets is against it, it takes a strong personality to go against the advice you’re getting. Also, word might get out, if it turned out you made a mistake, that they told you all along it was a bad idea and you went ahead anyway. That makes you a super fool. So the answer there is: well, you go to talk to the Secretary of Defense, and the Secretary of State, and the head of the CIA, and all of the senior people, and their people who are just below them.

Then what about the people who were working on the biological weapons? You have to talk to them, but not so much privately, because they really are dedicated. There were some people who were caught up in this and really didn't want to be doing it, but there were other people who were really pushing it, and it wasn't possible, really, to tell them to quit their jobs and get out of it. But what you could do was talk with them in public, and by knowing more than they knew about their own subject — which meant studying up a lot — show that they were wrong.

So I literally crammed, trying to understand everything there was to know about aerobiology, diffusion of clouds, pathogenicity, history of biological weapons, the whole bit, so that I could sound more knowledgeable. I know that’s a sort of slightly underhanded way to win an argument, but it’s a way of convincing the public that the guys who are doing this aren’t so wise. And then you have to get public support.

I had a pal here who told me I had to go down to Washington and meet a guy named Howard Simons, who was the managing editor of the Washington Post. He had been a science journalist at The Post and that's why some scientists up here at Harvard knew him. So, I went down there — Howie by that time was managing editor — and I told him, "I want to get newspaper articles all over the country about the problem of biological weapons." He took out a big yellow pad and he wrote down about 30 names. He said, "These are the science journalists at San Francisco Chronicle, Baltimore Sun, New York Times, et cetera, et cetera." He put down the names of all the main science journalists. And he said to me, "These guys have to have something once a week to give their editor for the science columns, or the science pages. They're always on the lookout for something, and biological weapons is a nice subject — they'd like to write about that, because it grabs people's attention."

So I arranged to either meet, or at least talk to all of these guys. And we got all kinds of articles in the press, and mainly reflecting the views that I had that this was unwise for the United States to pioneer this stuff. We should be in the position to go after anybody else who was doing it even in peacetime and get them to stop, which we couldn’t very well do if we were doing it ourselves. In other words, that meant a treaty. You have to have a treaty, which might be violated, but if it’s violated and you know, at least you can go after the violators, and the treaty will likely stop a lot of countries from doing it in the first place.

So what are the treaties? There’s an old treaty, a 1925 Geneva Protocol. The United States was not a party to it, but it does prohibit the first use of bacteriological or other biological weapons. So the problem was to convince the United States to get on board that treaty.

The very first paper I wrote for the President was about the Geneva Protocol of 1925. I never met President Nixon, but I did know Henry Kissinger: He'd been my neighbor at Harvard, in the building next door to mine. There was a good lunch room on the third floor. We both ate there. He had started an arms control seminar, which met every month. I went to that, all the meetings. We traveled a little bit in Europe together. So I knew him, and I wrote papers for Henry knowing that those would get to Nixon. The first paper that I wrote, as I said, was "The United States and the Geneva Protocol." It made all these arguments that I'm telling you now about why the United States should not be in this business. Now, the Protocol also prohibits the first use of chemical weapons.

Now, I should say something about writing papers for Presidents. You don't want to write a paper that's saying, "Here's what you should do." You have to put yourself in their position. There are all kinds of options on what they should do. So, you have to write a paper from the point of view of a reader who's got to choose between a lot of options. He hasn't made a choice to start with. So that's the kind of paper you need to write. You've got to give every option a fair trial. You've got to do your best, both to defend every option and to argue against every option. And you've got to do it in no more than a very few pages. That's no easy job, but you can do it.

So eventually, as you know, the United States renounced biological weapons in November of 1969. There was an off-the-record press briefing that Henry Kissinger gave to the journalists about this. And one of them, I think it was the New York Times guy, said, "What about toxin weapons?"

Now, toxins are poisonous things made by living things, like Botulinum toxin made by bacteria or snake venom, and those could be used as weapons in principle. You can read in this briefing, Henry Kissinger says, “What are toxins?” So what this meant, in other words, is that a whole new review, a whole new decision process had to be cranked up to deal with the question, “Well, do we renounce toxin weapons?” And there were two points of view. One was, “They are made by living things, and since we’re renouncing biological warfare, we should renounce toxins.”

The other point of view is, “Yeah, they’re made by living things, but they’re just chemicals, and so they can also be made by chemists in laboratories. So, maybe we should renounce them when they’re made by living things like bacteria or snakes, but reserve the right to make them and use them in warfare if we can synthesize them in chemical laboratories.” So I wrote a paper arguing that we should renounce them completely. Partly because it would be very confusing to argue that the basis for renouncing or not renouncing is who made them, not what they are. But also, I knew that my paper was read by Richard Nixon on a certain day on Key Biscayne in Florida, which was one of the places he’d go for rest and vacation.

Nixon was down there, and I had written a paper called “What Policy for Toxins.” I was at a friend’s house with my wife the night that the President and Henry Kissinger were deciding this issue. Henry called me, and I wasn’t home. They couldn’t find their copy of my paper. Henry called to see if I could read it to them, but he couldn’t find me because I was at a dinner party. Then Henry called Paul Doty, my friend, because he had a copy of the paper. But he looked for his copy and he couldn’t find it either. Then late that night Kissinger called Doty again and said, “We found the paper, and the President has made up his mind. He’s going to renounce toxins no matter how they’re made, and it was because of Matt’s paper.”

I had tried to write a paper that steered clear of political arguments — just scientific ones and military ones. However, there had been an editorial in the Washington Post by one of their editorial writers, Steve Rosenfeld, in which he wrote the line, “How can the President renounce typhoid only to embrace Botulism?”

I thought it was so gripping, I incorporated it under the topic of the authority and credibility of the President of the United States. And what Henry told Paul on the telephone was: that’s what made up the President’s mind. And of course, it would. The President cares about his authority and credibility. He doesn’t care about little things like toxins, but his authority and credibility… And so right there and then, he scratched out the advice that he’d gotten in a position paper, which was to take the option, “Use them but only if made by chemists,” and instead chose the option to renounce them completely. And that’s how that decision got made.

Ariel: That all ended up in the Biological Weapons Convention, though, correct?

Matthew: Well, the idea for that came from the British. They had produced a draft paper to take to the arms control talks with the Russians and other countries in Geneva, suggesting a treaty to prohibit biological weapons — not just their use in war, the way the Geneva Protocol did, but even their production and possession. In his renunciation for the United States, what Richard Nixon did was threefold. He got the United States out of the biological weapons business and decreed that Fort Detrick and other installations that had been doing that work would henceforward be doing only peaceful things; Detrick, for example, was partly converted to a cancer research institute. And all the biological weapons that had been stockpiled were to be destroyed, and they were.

The other thing he did was renounce toxins. Another thing he decided to do was to resubmit the Geneva Protocol to the United States Senate for its advice and consent. And the last thing was to support the British initiative, and that was the Biological Weapons Convention. But you could only get it if the Russians agreed. Eventually, after a lot of negotiation, we got the Biological Weapons Convention, which is still in force. A little later we even got the Chemical Weapons Convention, but not right away, because in my view, and in the view of a lot of people, we did need chemical weapons until we could be pretty sure that the Soviet Union was going to get rid of its chemical weapons, too.

If there are chemical weapons on the battlefield, soldiers have to put on gas masks and protective clothing, and this really slows down the tempo of combat action, so that if you could simply put the other side into that restrictive clothing, you have a major military accomplishment. Chemical weapons in the hands of only one side would give that side the option of slowing down the other side, reducing the mobility on the ground of the other side. So we waited until we could get a treaty that had inspection provisions, which the chemical treaty does and the biological treaty does not — well, it has a kind of challenge inspection, but no one's ever done that, and it's very hard to make it work. The chemical treaty's inspection provisions were obligatory, and they have been extensive: the Russians visiting our chemical production facilities, our guys visiting theirs, and all kinds of verification. So that's how we got the Chemical Weapons Convention. That was quite a bit later.

Max: So, I’m curious, was there a Matthew Meselson clone on the British side, thanks to whom the British started pushing this?

Matthew: Yes. There were, of course, numerous clones. And there were numerous clones on this side of the Atlantic, too. None of these things could ever be done by just one person. But my pal Julian Robinson, who was at the University of Sussex in Brighton, was a real scholar of chemical and biological weapons: he knows everything about them and their whole history, and has written all of the very best papers on this subject. He's just an unbelievably accurate and knowledgeable historian and scholar. People would go to Julian for advice. He was a Mycroft. He's still in Sussex.

Ariel: You helped start the Harvard Sussex Program on chemical and biological weapons. Is he the person you helped start that with, or was that separate?

Matthew: We decided to do that together.

Ariel: Okay.

Matthew: It did several things, but one of the main things it did was to publish a quarterly journal, which had a dispatch from Geneva — progress towards getting the Chemical Weapons Convention — because when we started the bulletin, the Chemical Convention had not yet been achieved. There were all kinds of news items in the bulletin; We had guest articles. And it finally ended, I think, only a few years ago. But I think it had a big impact; not only because of what was in it, but because, also, it united people of all countries interested in this subject. They all read the bulletin, and they all got a chance to write in the bulletin as well, and they occasionally met each other, so it had the effect of bringing together a community of people interested in safely getting rid of chemical weapons and biological weapons.

Max: This Biological Weapons Convention was a great inspiration for subsequent treaties, first the ban on chemical weapons, and then bans on various other kinds of weapons, and today, we have a very vibrant debate about whether there should also be a ban on lethal autonomous weapons, and inhumane uses of A.I. So, I'm curious to what extent you got lots of push-back back in those days from people who said, "Oh this is a stupid idea," or, "This is never going to work," and what the lessons are that could be learned from that.

Matthew: I think that with biological weapons, and also, to a lesser extent, with chemical weapons, the first point was that we didn't need them. We had never really accepted them: in World War I, when we were involved in the use of chemical weapons, that use had already been started, and it was never something that the military liked. They didn't want to fight a war by encumbrance. Biological weapons we certainly did not need, once we realized that making cheap weapons of mass destruction, weapons that could get into the hands of people who couldn't afford nuclear weapons, was idiotic. And even chemical weapons are relatively cheap and have the possibility of covering fairly large areas at a low price, and also of getting into the hands of terrorists. Now, terrorism wasn't much on anybody's radar until more recently, but once that became a serious issue, that was another argument against both biological and chemical weapons. So those two weapons really didn't have a lot of boosters.

Max: You make it sound so easy though. Did it never happen that someone came and told you that you were all wrong and that this plan was never going to work?

Matthew: Yeah, but that was restricted to the people who were doing it, and a few really eccentric intellectuals. As evidence of this: in the military, in the office which dealt with chemical and biological weapons, the highest rank you could find would be a colonel. No general, just a colonel. You don't get to be a general in the chemical corps. There were a few exceptions, basically in older times, as kind of a leftover from World War I. If you're a part of the military that never gets to have a general or even a full colonel, you ain't got much influence, right?

But if you talk about the artillery or the infantry, my goodness, I mean there are lots of generals — including four star generals, even five star generals — who come out of the artillery and infantry and so on, and then Air Force generals, and fleet admirals in the Navy. So that’s one way you can quickly tell whether something is very important or not.

Anyway, we do have these treaties, but it might be very much more difficult to get treaties on war between robots. I don’t know enough about it to really have an opinion. I haven’t thought about it.

Ariel: I want to follow up with a question I think is similar, because one of the arguments that we hear a lot with lethal autonomous weapons, is this fear that if we ban lethal autonomous weapons, it will negatively impact science and research in artificial intelligence. But you were talking about how some of the biological weapons programs were repurposed to help deal with cancer. And you’re a biologist and chemist, but it doesn’t sound like you personally felt negatively affected by these bans in terms of your research. Is that correct?

Matthew: Well, the only technically really important thing — that would have happened anyway — that’s radar, and that was indeed accelerated by the military requirement to detect aircraft at a distance. But usually it’s the reverse. People who had been doing research in fundamental science naturally volunteered or were conscripted to do war work. Francis Crick was working on magnetic torpedoes, not on DNA or hemoglobin. So, the argument that a war stimulates basic science is completely backwards.

Newton, he was Master of the Mint. Nothing about the British military as it was at the time helped Newton realize that if you shoot a projectile fast enough, it will stay in orbit; He figured that out by himself. I just don't believe the argument that war makes science advance. It's not true. If anything, it slows it down.

Max: I think it’s fascinating to compare the arguments that were made for and against a biological weapons ban back then with the arguments that are made for and against a lethal autonomous weapons ban today, because another common argument I hear for why people want lethal autonomous weapons today is because, “Oh, they’re going to be great. They’re going to be so cheap.” That’s like exactly what you were arguing is a very good argument against, rather than for, a weapons class.

Matthew: There are some similarities and some differences. Another similarity is that even one autonomous weapon in the hands of a terrorist could do things that are very undesirable — even one. On the other hand, we're already doing something like it with drones. There's a kind of continuous path that might lead to this, and I know that the military and DARPA are actually very interested in autonomous weapons, so I'm not so sure that you could stop it, partly because it's continuous; It's not like a real break.

Biological weapons are really different. Chemical weapons are really different. Autonomous weapons, on the other hand, still work on the ancient, primitive analogy of hitting a man with your fist or shooting a bullet, so long as those autonomous weapons are still using guns, bullets, things like that, and not something that is not native to our biology, like poison. From the striking of a blow you can draw a continuous line all the way through stones, and bows and arrows, and bullets, to drones, and maybe autonomous weapons. So the discontinuity is different.

Max: That's an interesting point; deciding where exactly one draws the line seems to be more challenging in this case. Another very interesting analogy, I think, between biological weapons and lethal autonomous weapons is the business of verification. You mentioned earlier that there was a strong verification protocol for the Chemical Weapons Convention, and there have been verification protocols for nuclear arms reduction treaties also. Some people say, "Oh, it's a stupid idea to ban lethal autonomous weapons because you can't think of a good verification system." But couldn't people have said that also as a critique of the Biological Weapons Convention?

Matthew:  That’s a very interesting point, because most people who think that verification can’t work have never been told what’s the basic underlying idea of verification. It’s not that you could find everything. Nobody believes that you could find every missile that might exist in Russia. Nobody ever would believe that. That’s not the point. It’s more subtle. The point is that you must have an ongoing attempt to find things. That’s intelligence. And there must be a heavy penalty if you find even one.

So it’s a step back from finding everything, to saying if you find even one then that’s a violation, and then you can take extreme measures. So a country takes a huge risk that another country’s intelligence organization, or maybe someone on your side who’s willing to squeal, isn’t going to reveal the possession of even one prohibited object. That’s the point. You may have some secret biological production facility, but if we find even one of them, then you are in violation. It isn’t that we have to find every single blasted one of them.

That was especially an argument that came from the nuclear treaties. It was the nuclear people who thought that up. People like Douglas McEachin at the CIA, who realized that there’s a more sophisticated argument. You just have to have a pretty impressive ability to find one thing out of many, if there’s anything out there. This is not perfect, but it’s a lot different from the argument that you have to know where everything is at all times.
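
To put toy numbers on that incentive structure (the figures here are made up for illustration, not drawn from the conversation): if each of n hidden facilities has some modest, independent chance p of being uncovered by intelligence or by an insider, the probability that at least one comes to light, which is all it takes to establish a violation, climbs quickly toward certainty.

```python
def prob_at_least_one_detection(n_sites: int, p_per_site: float) -> float:
    # Chance that at least one of n hidden sites is found, assuming each is
    # detected independently with probability p.
    return 1 - (1 - p_per_site) ** n_sites

# Illustrative, made-up numbers: a 10% per-site chance of discovery.
for n in (1, 5, 20, 50):
    print(n, round(prob_at_least_one_detection(n, 0.10), 3))
# roughly: 1 -> 0.1, 5 -> 0.41, 20 -> 0.878, 50 -> 0.995
```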

Max: So, if I can paraphrase, is it fair to say that you simply want to give the parties to the treaty a very strong incentive not to cheat, because even if they get caught off base one single time, they’re in violation, and moreover, those who don’t have the weapons at that time will also feel that there’s a very, very strong stigma? Today, for example, I find it just fascinating how biology is such a strong brand. If you go ask random students here at MIT what they associate with biology, they will say, “Oh, new cures, new medicines.” They’re not going to say bioweapons. If you ask people when was the last time you read about a bioterrorism attack in the newspaper, they can’t even remember anything typically. Whereas, if you ask them about the new biology breakthroughs for health, they can think of plenty.

So, biology has clearly very much become a science that’s harnessed to make life better for people rather than worse. So there’s a very strong stigma. I think if I or anyone else here at MIT tried to secretly start making bioweapons, we’d have a very hard time even persuading any biology grad student to want to work with them because of the stigma. If one could create a similar stigma against lethal autonomous weapons, the stigma itself would be quite powerful, even absent the ability to do perfect verification. Does that make sense?

Matthew: Yes, it does, perfect sense.

Ariel: Do you think that these stigmas have any effect on the public’s interest or politicians’ interest in science?

Matthew: I think people still have a great fascination with science. Take the exploration of space, for example: lots of people, not just kids — but especially kids — are fascinated by it. Elon Musk says that pretty soon, in 2022, he's going to have some people walking around on Mars. He's just tested that BFR rocket of his that's going to carry people to Mars. I don't know if he'll actually get it done, but people are getting fascinated by the exploration of space, are getting fascinated by lots of medical things, are getting desperate about the need for a cure for cancer. I myself think we need to spend a lot more money on preventing — not curing but preventing — cancer, and I think we know how to do it.

I think the public still has a big fascination with, respect for, and excitement about science. With the politicians, it's because, see, they have other interests. It's not that they're not interested or don't like science. It's because they have big money interests, for example. Coal and oil, these are gigantic. Harvard University is heavily invested in companies that deal with fossil fuels. Our whole world runs on fossil fuels mainly. You can't fool around with that stuff. So it becomes a problem of which is going to win out: your scientific arguments, which are almost certain to be right, but not absolutely, like one and one makes two — but almost — or the whole economy and big financial interests. It's not easy. It will happen, we'll convince people, but maybe not in time. That's the sad part. Once it gets bad enough, it's going to be bad. You can't just turn around on a dime and take care of disastrous climate change.

Max: Yeah, this is very much the spirit, of course, of the Future of Life Institute, which Ariel's podcast is run by. Technology, what it really does, is empower us humans to do more, either more good things or more bad things. And technology in and of itself isn't evil, nor is it morally good; It's a tool, simply. And the more powerful it becomes, the more crucial it is that we also develop the wisdom to steer the technology for good uses. And I think what you've done with your biology colleagues is such an inspiring role model for all of the other sciences, really.

We physicists still feel pretty guilty about giving the world nuclear weapons, but we've also given the world a lot of good stuff, from lasers to smartphones and computers. Chemists gave the world a lot of great materials, but they also gave us, ultimately, the internal combustion engine and climate change. Biology, I think more than any other field, has clearly ended up very solidly on the good side. Everybody loves biology for what it does, even though it could have gone very differently, right? We could have had a catastrophic arms race, a race to the bottom, with one superpower outdoing the other in bioweapons, and eventually these cheap weapons being everywhere, and on the black market, and bioterrorism every day. That future didn't happen, and that's why we all love biology. And I am very honored to get to be on this call here with you, so I could personally thank you for your role in making it this way. We should not take it for granted that it'll be this way with all sciences, the way it's become for biology. So, thank you.

Matthew: Yeah. That’s all right.

I'd like to end with one thought. We're learning how to change the human genome. It won't really get going for a while, and there are some problems that very few people are thinking about. Not the so-called off-target effects, which are a well-known problem, but another problem that I won't go into, called epistasis. Nevertheless, 10 years from now, 100 years from now, 500 years from now, sooner or later we'll be changing the human genome on a massive scale, making people better in various ways, so-called enhancements.

Now, a question arises. Do we know enough about the genetic basis of what makes us human to be sure that we can keep the good things about being human? What are those? Well, compassion is one. I’d say curiosity is another. Another is the feeling of needing to be needed. That sounds kind of complicated, I guess, but if you don’t feel needed by anybody — there’s some people who can go through life and they don’t need to feel needed. But doctors, nurses, parents, people who really love each other: the feeling of being needed by another human being, I think, is very pleasurable to many people, maybe to most people, and it’s one of the things that’s of essence of what it means to be human.

Now, where does this all take us? It means that if we’re going to start changing the human genome in any big time way, we need to know, first of all, what we most value in being human, and that’s a subject for the humanities, for everybody to talk about, think about. And then it’s a subject for the brain scientists to figure out what’s the basis of it. It’s got to be in the brain. But what is it in the brain? And we’re miles and miles and miles away in brain science from being able to figure out what is it in the brain — or maybe we’re not, I don’t know any brain science, I shouldn’t be shooting off my mouth — but we’ve got to understand those things. What is it in our brains that makes us feel good when we are of use to someone else?

We don’t want to fool around with whatever those genes are — do not monkey with those genes unless you’re absolutely sure that you’re making them maybe better — but anyway, don’t fool around. And figure out in the humanities, don’t stop teaching humanities. Learn from Sophocles, and Euripides, and Aeschylus: What are the big problems about human existence? Don’t make it possible for a kid to go through Harvard — as is possible today — without learning a single thing from Ancient Greece. Nothing. You don’t even have to use the word Greece. You don’t have to use the word Homer or any of that. Nothing, zero. Isn’t that amazing?

Before President Lincoln, everybody who wanted to enter Harvard had to already know Ancient Greek and Latin. Even though these guys were mainly boys of course, and they were going to become clergymen. They also, by the way — there were no electives — everyone had to take fluxions, which is differential calculus. Everyone had to take integral calculus. Everyone had to take astronomy, chemistry, physics, as well as moral philosophy, et cetera. Well, there’s nothing like that anymore. We don’t all speak the same language because we’ve all had such different kinds of education, and also the humanities just get short shrift. I think that’s very short-sighted.

MIT is pretty good in humanities, considering it’s a technical school. Harvard used to be tops. Harvard is at risk of maybe losing it. Anyway, end of speech.

Max: Yeah, I want to just agree with what you said, and also rephrase it the way I think about it. What I hear you saying is that it’s not enough to just make our technology more powerful. We also need the humanities, and our humanity, for the wisdom of how we’re going to manage our technology and what we’re trying to use it for, because it does no good to have a really powerful tool if you aren’t wise enough to use it for the right things.

Matthew: If we’re going to change, we might even split into several species. Almost all other species have very close neighboring species. Especially if you can get populations separated — there’s a colony on Mars and they don’t travel back and forth much — species will diverge. It takes a long, long, long, long time, but the idea, like the Bible says, that we are fixed, that nothing will change, is of course wrong. Human evolution is going on as we speak.

Ariel: We’ll end part one of our two-part podcast with Matthew Meselson here. Please join us for the next episode, which serves as a reminder that weapons bans don’t just magically work; rather, there are often scientific mysteries that need to be solved in order to verify whether a group has used a weapon illegally. In the next episode, Matthew will talk about three such scientific mysteries he helped solve, including the anthrax incident in Russia, the yellow rain affair in Southeast Asia, and the research he did that led immediately to the prohibition of Agent Orange. So please join us for part two of this podcast, which is also available now.

As always, if you’ve been enjoying this podcast, please take a moment to like it, share it, and maybe even leave a positive review. It’s a small action on your part, but it’s tremendously helpful for us.

FLI Podcast: AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy

Every January, we like to look back over the past 12 months at the progress that’s been made in the world of artificial intelligence. Welcome to our annual “AI breakthroughs” podcast, 2018 edition.

Ariel was joined for this retrospective by researchers Roman Yampolskiy and David Krueger. Roman is an AI safety researcher and professor at the University of Louisville. He also recently published the book Artificial Intelligence Safety & Security. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He’s also worked with safety teams at the Future of Humanity Institute and DeepMind and has volunteered with 80,000 Hours.

Roman and David shared their lists of 2018’s most promising AI advances, as well as their thoughts on some major ethical questions and safety concerns. They also discussed media coverage of AI research, why talking about “breakthroughs” can be misleading, and why there may have been more progress in the past year than it seems.

Topics discussed in this podcast include:

  • DeepMind progress, as seen with AlphaStar and AlphaFold
  • Manual dexterity in robots, especially QT-Opt and Dactyl
  • Advances in creativity, as with Generative Adversarial Networks (GANs)
  • Feature-wise transformations
  • Continuing concerns about DeepFakes
  • Scaling up AI systems
  • Neuroevolution
  • Google Duplex, the AI assistant that sounds human on the phone
  • The General Data Protection Regulation (GDPR) and AI policy more broadly

Publications discussed in this podcast include:

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi everyone, welcome to the FLI podcast. I’m your host, Ariel Conn. For those of you who are new to the podcast, at the end of each month, I bring together two experts for an in-depth discussion on some topic related to the fields that we at the Future of Life Institute are concerned about, namely artificial intelligence, biotechnology, climate change, and nuclear weapons.

For the last couple of years on our January podcast, I’ve brought on two AI researchers to talk about what the biggest AI breakthroughs were in the previous year, and this January is no different. To discuss the major developments we saw in AI in 2018, I’m pleased to have Roman Yampolskiy and David Krueger joining us today.

Roman is an AI safety researcher and professor at the University of Louisville, his new book Artificial Intelligence Safety and Security is now available on Amazon and we’ll have links to it on the FLI page for this podcast. David is a PhD candidate in the Mila Lab at the University of Montreal, where he works on deep learning and AI safety. He’s also worked with teams at the Future of Humanity Institute and DeepMind, and he’s volunteered with 80,000 Hours to help people find ways to contribute to the reduction of existential risks from AI. So Roman and David, thank you so much for joining us.

David: Yeah, thanks for having me.

Roman: Thanks very much.

Ariel: So I think that one thing that stood out to me in 2018 was that the AI breakthroughs seemed less about surprising breakthroughs that really shook the AI community as we’ve seen in the last few years, and instead they were more about continuing progress. And we also didn’t see quite as many major breakthroughs hitting the mainstream press. There were a couple of things that made big news splashes, like Google Duplex, which is a new AI assistant program that sounded incredibly human on phone calls it made during the demos. And there was also an uptick in government policy and ethics efforts, especially with the General Data Protection Regulation, also known as the GDPR, which went into effect in Europe earlier this year.

Now I’m going to want to come back to Google and policy and ethics later in this podcast, but I want to start by looking at this from the research and development side of things. So my very first question for both of you is: do you agree that 2018 was more about impressive progress, and less about major breakthroughs? Or were there breakthroughs that really were important to the AI community that just didn’t make it into the mainstream press?

David: Broadly speaking I think I agree, although I have a few caveats for that. One is just that it’s a little bit hard to recognize always what is a breakthrough, and a lot of the things in the past that have had really big impacts didn’t really seem like some amazing new paradigm shift—it was sort of a small tweak that then made a lot of things work a lot better. And the other caveat is that there are a few works that I think are pretty interesting and worth mentioning, and the field is so large at this point that it’s a little bit hard to know if there aren’t things that are being overlooked.

Roman: So I’ll agree with you, but I think the pattern is more important than any specific breakthrough. We kind of got used to getting something really impressive every month, so relatively it doesn’t sound as good: all the AlphaStar, AlphaFold, AlphaZero results happening almost every month. And it used to be that it took 10 years to see something like that.

It’s likely it will happen even more frequently. We’ll conquer a new domain once a week or something. I think that’s the main pattern we have to recognize and discuss. There are significant accomplishments in terms of teaching AI to work in completely novel domains. I mean now we can predict protein folding, now we can have multi-player games conquered. That never happened before so frequently. Chess was impressive because it took like 30 years to get there.

David: Yeah, so I think a lot of people were kind of expecting or at least hoping for StarCraft or Dota to be solved—to see, like we did with AlphaGo, AI systems that are beating the top players. And I would say that it’s actually been a little bit of a let down for people who are optimistic about that, because so far the progress has been kind of unconvincing.

So AlphaStar, which was a really recent result from last week, for instance: I’ve seen criticism of it that I think is valid, that it was making more actions than a human could within a very short interval of time. So they carefully controlled the actions-per-minute that AlphaStar was allowed to take, but they didn’t prevent it from doing really short bursts of actions that really helped its micro-game, and that means that it can win without really being strategically superior to its human opponents. And I think the Dota results that OpenAI has had were also criticized as being sort of not the hardest version of the problem, and still the AI sort of is relying on some crutches.

Ariel: So before we get too far into that debate, can we take a quick step back and explain what both of those are?

David: So these are both real-time strategy games that are, I think, actually the two most popular real-time strategy games in the world that people play professionally, and make money playing. I guess that’s about all there is to say about them.

Ariel: So a quick question that I had too about your description then, when you’re talking about AlphaStar and you were saying it was just making more moves than a person can realistically make. Is that it—it wasn’t doing anything else special?

David: I haven’t watched the games, and I don’t play StarCraft, so I can’t say that it wasn’t doing anything special. I’m basing this basically on reading articles and reading the opinions of people who are avid StarCraft players, and I think the general opinion seems to be that it is more sophisticated than what we’ve seen before, but the reason that it was able to win these games was not because it was out-thinking humans, it’s because it was out-clicking, basically, in a way that just isn’t humanly possible.

Roman: I would agree with this analysis, but I don’t see it as a bug, I see it as a feature. That just shows another way machines can be superior to people. Even if they are not necessarily smarter, they can still produce superior performance, and that’s what we really care about. Right? We found a different way, a non-human approach to solving this problem. That’s impressive.

David: Well, I mean, I think if you have an agent that can just click as fast as it wants, then you can already win at StarCraft, before this work. There needs to be something that makes it sort of a fair fight in some sense.

Roman: Right, but think what you’re suggesting: We have to handicap machines to make them even remotely within being comparative to people. We’re talking about getting to superintelligent performance. You can get there by many ways. You can think faster, you can have better memory, you can have better reaction time—as long as you’re winning in whatever domain we’re interested in, you have superhuman performance.

David: So maybe another way of putting this would be if they actually made a robot play StarCraft and made it use the same interface that humans do, such as a screen and mouse, there’s no way that it could have beaten the human players. And so by giving it direct access to the game controls, it’s sort of not solving the same problem that a human is when they play this game.

Roman: I feel what you’re saying, I just feel that it is solving it in a different way, and we have a pro-human bias saying, well, that’s not how you play this game, you have an advantage. Human players usually rely on superior strategy, not just faster movements that may give an advantage for a few nanoseconds or a couple of seconds. But it’s not a long-term sustainable pattern.

One of the research projects I worked on was this idea of artificial stupidity, we called it—kind of limiting machines to human-level capacity. And I think that’s what we’re talking about here. Nobody would suggest limiting a chess program to just human-level memory, or human memorization of opening moves. But we don’t see it as a limitation. Machines have an option of beating us in ways humans can’t. That’s the whole point, and that’s why it’s interesting, that’s why we have to anticipate such problems. That’s where most of the safety and security issues will show up.

Ariel: So I guess, I think, Roman, your point earlier was sort of interesting that we’ve gotten so used to breakthroughs that stuff that maybe a couple of years ago would have seemed like a huge breakthrough is just run-of-the-mill progress. I guess you’re saying that that’s what this is sort of falling into. Relatively recently this would have been a huge deal, but because we’ve seen so much other progress and breakthroughs, that this is now interesting and we’re excited about it—but it’s not reaching that level of, oh my god, this is amazing! Is that fair to say?

Roman: Exactly! We get disappointed if the system loses one game. It used to be we were excited if it would match amateur players. Now it’s, oh, we played 100 games and you lost one? This is just not machine-level performance, you disappoint us.

Ariel: David, do you agree with that assessment?

David: I would say mostly no. I guess, I think what really impressed me with AlphaGo and AlphaZero was that it was solving something that had been established as a really grand challenge for AI. And then in the case of AlphaZero, I think the technique that they actually used to solve it was really novel and interesting from a research point of view, and they went on to show that this same technique can solve a bunch of other board games as well.

And my impression from what I’ve seen about how they did AlphaStar and AlphaFold is that there were some interesting improvements and the performance is impressive but I think it’s neither, like, quite at the point where you can say we’ve solved it, we’re better than everybody, or in the case of protein folding, there’s not a bunch more room for improvement that has practical significance. And it’s also—I don’t see any really clear general algorithmic insights about AI coming out of these works yet. I think that’s partially because they haven’t been published yet, but from what I have heard about the details about how they work, I think it’s less of a breakthrough on the algorithm side than AlphaZero was.

Ariel: So you’ve mentioned AlphaFold. Can you explain what that is real quick?

David: This is the protein folding project that DeepMind did, and I think there’s a competition called C-A-S-P or CASP that happens every three years, and they sort of dominated that competition this last year doing what was described as two CASPs in one, so basically doubling the expected rate of improvement that people have seen historically at these tasks, or at least at the one that is the most significant benchmark.

Ariel: I find the idea of the protein folding thing interesting because that’s something that’s actually relevant to scientific advancement and health as opposed to just being able to play a game. Are we seeing actual applications for this yet?

David: I don’t know about that, but I agree with you that that is a huge difference that makes it a lot more exciting than some of the previous examples. I guess one thing that I want to say about that, though, is that it does look a little bit more to me like continuation of progress that was already happening in the communities. It’s definitely a big step up, but I think a lot of the things that they did there could have really happened over the next few years anyways, even without DeepMind being there. So, one of the articles I read put it this way: If this wasn’t done by DeepMind, if this was just some academic group, would this have been reported in the media? I think the answer is sort of like a clear no, and that says something about the priorities of our reporting and media as well as the significance of the results, but I think that just gives some context.

Roman: I’ll agree with David—the media is terrible in terms of what it reports on, we can all agree on that. I think it was quite a breakthrough, I mean, that they didn’t just beat the competition, but actually kind of doubled the rate of improvement. That’s incredible. And I think anyone who got to that point would not be denied publication in a top journal; it would be considered very important in that domain. I think it’s one of the most important problems in medical research. If you can accurately predict this, the possibilities are really endless in terms of synthetic biology, in terms of curing diseases.

So this is huge in terms of the impact of being able to do it. As far as how applicable it is to other areas, whether it’s a great game-changer for AI research: all of these things can be combined, this ability to perform in the real-life environments of those multiplayer games and the ability to do this. Look at how those things can be combined. Right? You can do things in the real world you couldn’t do before, both in terms of strategy games, which are basically simulations of economic competition, of wars, and for quite a few applications where the impact would be huge.

So all of it is very interesting. It’s easy to say that, “Well, if they didn’t do it, somebody else maybe would have done it in a couple of years.” But that’s almost always true for all inventions. If you look at the history of inventions, things like, I don’t know, the telephone have been invented at the same time by two or three people; radio, two or three people. It’s just the point where science gets enough ingredient technologies that, yeah, somebody’s going to do it. But still, we give credit to whoever got there first.

Ariel: So I think that’s actually a really interesting point, because I think for the last few years we have seen sort of these technological advances but I guess we also want to be considering the advances that are going to have a major impact on humanity even if it’s not quite as technologically new.

David: Yeah, absolutely. I think the framing in terms of breakthroughs makes it a little bit unclear what we’re talking about when we talk about AI breakthroughs, and I think a lot of people in the field of AI kind of don’t like how much people talk about it in terms of breakthroughs, because a lot of the progress is gradual and builds on previous work, and it’s not like there was some sudden insight that somebody had that just changed everything, although that does happen in some ways.

And I think you can think of the breakthroughs both in terms of like what is the impact—is this suddenly going to have a lot of potential to change the world? You can also think of it, though, from the perspective of researchers as like, is this really different from the kind of ideas and techniques we’ve seen or seen working before? I guess I’m more thinking about the second right now in terms of breakthroughs representing really radical new ideas in research.

Ariel: Okay, well I will take responsibility for being one of the media people who didn’t do a good job with presenting AI breakthroughs. But I think both with this podcast and probably moving forward, I think that is actually a really important thing for us to be doing—is both looking at the technological progress and newness of something but also the impact it could have on either society or future research.

So with that in mind, you guys also have a good list of other things that did happen this year, so I want to start moving into some of that as well. So next on your list is manual dexterity in robots. What did you guys see happening there?

David: So this is something that’s definitely not my area of expertise, so I can’t really comment too much on it. But there are two papers that I think are significant and potentially representing something like a breakthrough in this application. In general robotics is really difficult, and machine learning for robotics is still, I think, sort of a niche thing, like most robotics is using more classical planning algorithms, and hasn’t really taken advantage of the new wave of deep learning and everything.

So there’s two works, one is QT-Opt, and the other one is Dactyl, and these are both by people from the Berkeley OpenAI crowd. And these both are showing kind of impressive results in terms of manual dexterity in robots. So there’s one that does a really good job at grasping, which is one of the basic aspects of being able to act in the real world. And then there’s another one that was sort of just manipulating something like a cube with different colored faces on it—that one’s Dactyl; the grasping one is QT-Opt.

And I think this is something that was paid less attention to in the media, because it’s been more of a story of kind of gradual progress I think. But my friend who follows this deep reinforcement learning stuff more told me that QT-Opt is the first convincing demonstration of deep reinforcement learning in the real world, as opposed to all these things we’ve seen in games. The real world is much more complicated and there’s all sorts of challenges with the noise of the environment dynamics and contact forces and stuff like this that have been really a challenge for doing things in the real world. And then there’s also the limited sample complexity where when you play a game you can sort of interact with the game as much as you want and play the game over and over again, whereas in the real world you can only move your robot so fast and you have to worry about breaking it, so that means in the end you can collect a lot less data, which makes it harder to learn things.

Roman: Just to kind of explain what they did: hardware is expensive and slow; it’s very difficult to work with, and things don’t go well in real life. It’s a lot easier to create simulations in virtual worlds, train your robot in there, and then just transfer the knowledge into a real robot in the physical world. And that’s exactly what they did, training that virtual hand to manipulate objects, and they could run through thousands, millions of situations, which is something you cannot do with an actual, physical robot at that scale. So I think that’s a very interesting approach, and it’s why lots of people try doing things in virtual environments. Some of the early AGI projects all concentrated on virtual worlds as the domain of learning.

David: Yeah, so this was for the Dactyl project, which was OpenAI. And that was really impressive I think, because people have been doing this sim-to-real thing—where you train in simulation and then try and transfer it to the real world—with some success for like a year or two, but this one I think was really kind of impressive in that sense, because they didn’t actually train it in the real world at all, and what they had learned managed to transfer to the real world.
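
To make the sim-to-real idea above a little more concrete, here is a minimal, self-contained sketch of domain randomization on a toy one-dimensional pushing task. It is purely illustrative and assumes a made-up simulator and a crude random-search "learner"; it is not OpenAI's Dactyl setup, just the general pattern of training across many randomized simulations and then testing on physics the learner was never tuned to.

```python
import random

# Toy 1-D "push an object to a target" task. The simulator's friction is
# re-randomized constantly (domain randomization), so the learned controller
# gain has to work across a whole family of physics, not just one setting.

def run_episode(gain, friction, steps=50):
    pos, vel, target = 0.0, 0.0, 1.0
    for _ in range(steps):
        force = gain * (target - pos)              # proportional "policy"
        vel = (1.0 - friction) * vel + 0.1 * force
        pos += 0.1 * vel
    return -abs(target - pos)                      # reward: negative final error

def average_score(gain, trials=20):
    # Evaluate one candidate policy across many randomized simulators.
    return sum(run_episode(gain, random.uniform(0.05, 0.5)) for _ in range(trials)) / trials

def train(candidates=500):
    # Crude random search standing in for a real RL algorithm.
    gains = [random.uniform(0.0, 5.0) for _ in range(candidates)]
    return max(gains, key=average_score)

if __name__ == "__main__":
    random.seed(0)
    gain = train()
    # The "real" system uses one fixed friction value the policy was never tuned to.
    print("reward on the 'real' system:", round(run_episode(gain, friction=0.3), 3))
```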

Ariel: Excellent. I’m going to keep going through your list. One thing that you both mentioned are GANs. So very quickly, if one of you, or both of you, could explain what a GAN is and what that stands for, and then we’ll get into what happened last year with those.

Roman: Sure, so this is a somewhat new way of generating creative visuals and audio. You have two neural networks competing: one is kind of creating fakes, and the other one is judging them, and you get to a point where they’re kind of 50/50. You can’t tell if it’s fake or real anymore. And it’s a great way to produce artificial faces, cars, whatever. Any type of input you can provide to the networks, they quickly learn to extract the essence of that image or audio and generate artificial data sets full of such images.

And there’s really exciting work on being able to extract properties from those, different styles. So if we talk about faces, for example: there could be a style for hair, a style for skin color, a style for age, and now it’s possible to manipulate them. So I can tell you things like, “Okay, Photoshop, I need a picture of a female, 20 years old, blonde, with glasses,” and it would generate a completely realistic face based on those properties. And we’re starting to see it show up not just in images but transferred to video, to generating whole virtual worlds. It’s probably the closest thing we ever had computers get to creativity: actually kind of daydreaming and coming up with novel outputs.

David: Yeah, I just want to say a little bit about the history of the research in GAN. So the first work on GANs was actually back four or five years ago in 2014, and I think it was actually kind of—didn’t make a huge splash at the time, but maybe a year or two after that it really started to take off. And research in GANs over the last few years has just been incredibly fast-paced and there’s been hundreds of papers submitted and published at the big conferences every year.

If you look just in terms of the quality of what is generated, this is, I think, just an amazing demonstration of the rate of progress in some areas of machine learning. The first paper had these sort of black and white pictures of really blurry faces, and now you can get giant images of faces—I think 256 by 256, or 512 by 512, or even bigger—really high resolution and totally indistinguishable from real photos, to the human eye anyway. So it’s really impressive, and we’ve seen really consistent progress on that, especially in the last couple years.

Ariel: And also, just real quick, what does it stand for?

David: Oh, generative adversarial network. So it’s generative, because it’s sort of generating things from scratch, or from its imagination or creativity. And it’s adversarial because there are two networks: the one that generates the things, and then the one that tries to tell those fake images apart from real images that we actually collect by taking photos in the world.
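
For readers who want to see the mechanics Roman and David describe, here is a minimal, hedged sketch of a GAN training loop on toy one-dimensional data. The architecture, data distribution, and hyperparameters are arbitrary illustrative choices rather than any published model; the point is just the generator/discriminator competition.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator maps noise to fake 1-D samples, the discriminator
# scores real vs. fake, and each network is trained against the other.
torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0            # "real" data: N(3, 0.5)
    fake = generator(torch.randn(64, 8))             # fakes from random noise

    # Discriminator step: push real samples toward label 1, fakes toward 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call its fakes real.
    g_loss = bce(discriminator(generator(torch.randn(64, 8))), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real data's mean.
print(generator(torch.randn(5, 8)).detach().squeeze())
```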

Ariel: This is an interesting one because it can sort of transition into some ethics stuff that came up this past year, but I’m not sure if we want to get there yet, or if you guys want to talk a little bit more about some of the other things that happened on the research and development side.

David: I guess I want to talk about a few other things that have been making, I would say, sort of steady progress, like GANs, with a lot of interest in ideas that are coming to fruition. Even though some of these are not exactly from the last year, they sort of really started to prove themselves and become widely used in the last year.

Ariel: Okay.

David: Something that I think is actually used in maybe the latest, greatest GAN paper is what’s called feature-wise transformations. So this is an idea that actually goes back as much as 40 years, depending on how you measure it, but has sort of been catching on in specific applications in machine learning in the last couple of years—starting with, I would say, style transfer, which is sort of like what Roman mentioned earlier.

So the idea here is that in a neural network, you have what are called features, which basically correspond to the activations of different neurons in the network. Like how much that neuron likes what it’s seeing, let’s say. And those can also be interpreted as representing different kinds of visual patterns, like different kinds of textures, or colors. And these feature-wise transformations basically just take each of those different aspects of the image, like the color or texture in a certain location, and then allow you to manipulate that specific feature, as we call it, by making it stronger or amplifying whatever was already there.

And so you can sort of view this as a way of specifying what sort of things are important in the image, and that’s why it allows you to manipulate the style of images very easily, because you can sort of look at a certain painting style for instance, and say, oh this person uses a lot of wide brush strokes, or a lot of narrow brush strokes, and then you can say, I’m just going to modulate the neurons that correspond to wide or narrow brush strokes, and change the style of the painting that way. And of course you don’t do this by hand, by looking in and seeing what the different neurons represent. This all ends up being learned end-to-end. And so you sort of have an artificial intelligence model that predicts how to modulate the features within another network, and that allows you to change what that network does in a really powerful way.

So, I mentioned that it has been applied in the most recent GAN papers, and I think they’re just using those kinds of transformations to help them generate images. But other examples where you can explain what’s happening more intuitively, or why it makes sense to try and do this, would be something like visual question answering. So there you can have the modulation of the vision network being done by another network that looks at a question and is trying to help answer that question. And so it can sort of read the question and see what features of images might be relevant to answering that question. So for instance, if the question was, “Is it a sunny day outside?” then it could have the vision network try and pay more attention to things that correspond to signs of sun. Or if it was asked something like, “Is this person’s hair combed?” then you could look for the patterns of smooth, combed hair and look for the patterns of rough, tangled hair, and have those features be sort of emphasized in the vision network. That allows the vision network to pay attention to the parts of the image that are most relevant to answering the question.

Ariel: Okay. So, Roman, I want to go back to something on your list quickly in a moment, but first I was wondering if you have anything that you wanted to add to the feature-wise transformations?

Roman: With all of it, you can ask, “Well, why is this interesting, what are the applications for it?” So you are able to generate inputs, inputs for computers, inputs for people: images, sounds, videos. A lot of times they can be adversarial in nature as well—what we call deep fakes. Right? You can make, let’s say, a video of a famous politician saying something, or doing something.

Ariel: Yeah.

Roman: And this has very interesting implications for elections, for forensic science, for evidence. As those systems get better and better, it becomes harder and harder to tell if something is real or not. And maybe it’s still possible to do some statistical analysis, but it takes time, and we talked about media being not exactly always on top of it. So it may take 24 hours before we realize if this video was real or not, but the election is tonight.

Ariel: So I am definitely coming back to that. I want to finish going through the list of the technology stuff, but yeah I want to talk about deep fakes and in general, a lot of the issues that we’ve seen cropping up more and more with this idea of using AI to fake images and audio and video, because I think that is something that’s really important.

David: Yeah, it’s hard for me to estimate these things, but I would say this is probably, in terms of the impact that this is going to have societally, sort of the biggest story of the last year. And it’s not like something that happened all of a sudden. Again, it’s something that has been building on a lot of progress in generative models and GANs and things like this. And it’s just going to continue, we’re going to see more and more progress like that, and probably some sort of arms race here where—I shouldn’t use that word.

Ariel: A competition.

David: A competition between people who are trying to use that kind of technology to fake things and people who are sort of doing forensics to try and figure out what is real and what is fake. And that also means that people are going to have to trust the people who have the expertise to do that, and believe that they’re actually doing that and not part of some sort of conspiracy or something.

Ariel: Alright, well are you guys ready to jump into some of those ethical questions?

David: Well, there are like two other broad things I wanted to mention, which I think are sort of interesting trends in the research community. One is just the way that people have been continuing to scale up AI systems. So a lot of the progress I think has arguably just been coming from more and more computation and more and more data. And there was a pretty great blog post by OpenAI about this last year that argued that the amount of computation being used to train the most advanced AI systems has been increasing by a factor of 10 every year for the last several years, which is just astounding. But it also suggests that this might not be sustainable for a long time, so to the extent that you think that using more computation is a big driver of progress, we might start to see that slow down within a decade or so.

Roman: I’ll add another one—what I think is also kind of building on existing technology, not so much a breakthrough, we’ve had it for a long time—but neuroevolution is something I’m starting to pay a lot more attention to. It’s kind of borrowing from biology, trying to evolve weights for neural networks, to optimize neural networks. And it’s producing very impressive results. It’s possible to run it in parallel really well, and it’s competitive with some of the leading alternative approaches.

So, the idea basically is you have this very large neural network, a brain-like structure, but instead of trying to train it with backpropagation of errors, teaching it in the standard neural network way, you just kind of have a population of those brains competing to see which is doing best on a particular problem, and they share weights between good parents, and after a while you evolve really well-performing solutions to some of the most interesting problems.

Additionally, you can kind of go meta-level on it and evolve architectures for the neural network itself—how many layers, how many inputs. This is nice because it doesn’t require much human intervention. You’re essentially letting the system figure out what the solutions are. We had some very successful results with genetic algorithms for optimization. We didn’t have much success with genetic programming, and now neuroevolution kind of brings it back, where you’re optimizing intelligent systems, and that’s very exciting.

Ariel: So you’re saying that you’ll have—to make sure I understand this correctly—there’s two or more neural nets trying to solve a problem, and they sort of play off of each other?

Roman: So you create a population of neural networks, and you give it a problem, and you see that this one is doing really well, and that one too. The others, maybe not so great. So you take weights from those two and combine them—like a mom and dad, parent situation that produces offspring. And so you have this simulation of evolution where unsuccessful individuals are taken out of the population, and successful ones get to reproduce and procreate, and provide their high-fitness weights to the next generation.
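
Here is a minimal, self-contained sketch of the population-based loop Roman outlines: evaluate a population of small networks, keep the fittest as parents, and build the next generation by recombining and mutating their weights, with no backpropagation anywhere. The tiny XOR task, network size, and hyperparameters are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])   # XOR targets

def forward(weights, x):
    # Tiny fixed architecture: 2 -> 4 -> 1, weights stored as one flat vector.
    w1 = weights[:8].reshape(2, 4)
    b1 = weights[8:12]
    w2 = weights[12:16].reshape(4, 1)
    b2 = weights[16]
    hidden = np.tanh(x @ w1 + b1)
    return (hidden @ w2).ravel() + b2

def fitness(weights):
    return -np.mean((forward(weights, X) - y) ** 2)   # higher is better

population = rng.normal(size=(50, 17))
for generation in range(300):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-10:]]     # keep the fittest
    children = []
    for _ in range(len(population)):
        mom, dad = parents[rng.integers(10, size=2)]
        mask = rng.random(17) < 0.5                    # uniform crossover
        child = np.where(mask, mom, dad) + rng.normal(scale=0.1, size=17)
        children.append(child)
    population = np.array(children)

best = max(population, key=fitness)
print(np.round(forward(best, X), 2))   # should approach [0, 1, 1, 0]
```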

Ariel: Okay. Was there anything else that you guys saw this year that you want to talk about, that you were excited about?

David: Well, I wanted to give a few examples of the kind of massive improvements in scale that we’ve seen. One of the most significant models and benchmarks in the community is ImageNet, and training image classifiers that can tell you what a picture is a picture of on this dataset. So the whole sort of deep learning revolution was arguably started, or at least really came into the eyes of the rest of the machine learning community, because of huge success on this ImageNet competition. And training the model there took something like two weeks, and this last year there was a paper where you can train a more powerful model in less than four minutes, and they do this by using something like 3,000 graphics cards in parallel.

And then DeepMind also had some progress on parallelism with this model called IMPALA, which was basically in the context of reinforcement learning as opposed to classification, and there they sort of came up with a way that allowed them to do updates in parallel, like learn on different machines and combine everything that was learned in a way that’s asynchronous. So in the past, with the sort of methods that they would use for these reinforcement learning problems, you’d have to wait for all of the different machines to finish their learning on the current problem or instance that they’re learning about, and then combine all of that centrally—whereas the new method allows you, as soon as you’re done computing or learning something, to communicate it to the rest of the system, the other computers that are learning in parallel. And that was really important for allowing them to scale to hundreds of machines working on the problem at the same time.

Ariel: Okay, and so that, just to clarify as well, goes back to this idea that right now we’re seeing a lot of success just from scaling up the computing, but at some point that could slow things down, essentially, if we hit a limit on how much computing is possible.

David: Yeah, and I guess one of my points is also doing this kinds of scaling of computing requires some amount of algorithmic insight or breakthrough if you want to be dramatic as well. So this DeepMind paper I talked about, they had to devise new reinforcement learning algorithms that would still be stable when they had this real-time asynchronous updating. And so, in a way, yeah, a lot of the research that’s interesting right now is on finding ways to make the algorithm scale so that you can keep taking advantage of more and more hardware. And the evolution stuff also fits into that picture to some extent.

Ariel: Okay. I want to start making that transition into some of the concerns that we have for misuse around AI and how easy it is for people to be deceived by things that have been created by AI. But I want to start with something that’s hopefully a little bit more neutral, and talk about Google Duplex, which is the program that Google came out with, I think last May. I don’t know the extent to which it’s in use now, but they presented it, and it’s an AI assistant that can essentially make calls and set up appointments for you. So their examples were it could make a reservation at a restaurant for you, or it could make a reservation for you to get a haircut somewhere. And it got sort of mixed reviews, because on the one hand people were really excited about this, and on the other hand it was kind of creepy because it sounded human, and the people on the other end of the call did not know that they were talking to a machine.

So I was hoping you guys could talk a little bit I guess maybe about the extent to which that was an actual technological breakthrough versus just something—this one being more one of those breakthroughs that will impact society more directly. And then also I guess if you agree that this seems like a good place to transition into some of the safety issues.

David: Yeah, no, I would be surprised if they really told us about the details of how that worked. So it’s hard to know how much of an algorithmic breakthrough or algorithmic breakthroughs were involved. It’s very impressive, I think, just in terms of what it was able to do, and of course these demos that we saw were maybe selected for their impressiveness. But I was really, really impressed personally, just to see a system that’s able to do that.

Roman: It’s probably built on a lot of existing technology, but it is more about impact than what you can do with it. And my background is cybersecurity, so I see it as a great tool for, like, automating spear-phishing attacks on a scale of millions. You’re getting what sounds like a real human calling you, talking to you, with access to your online data; pretty much everyone’s gonna agree and do whatever the system is asking of them, whether it’s credit card numbers or social security numbers. So, in many ways it’s going to be a game changer.

Ariel: So I’m going to take that as a definite transition into safety issues. So, yeah, let’s start talking about, I guess, sort of human manipulation that’s happening here. First, the phrase “deep fake” shows up a lot. Can you explain what those are?

David: So “deep fakes” is basically just: you can make a fake video of somebody doing something or saying something that they did not actually do or say. People have used this to create fake videos of politicians, they’ve used it to create porn using celebrities. That was one of the things that got it on the front page of the internet, basically. And Reddit actually shut down the subreddit where people were doing that. But, I mean, there’s all sorts of possibilities.

Ariel: Okay, so I think the Reddit example was technically the very end of 2017. But all of this sort of became more of an issue in 2018. So we’re seeing this increase in capability to both create images that seem real, create audio that seems real, create video that seems real, and to modify existing images and video and audio in ways that aren’t immediately obvious to a human. What did we see in terms of research to try to protect us from that, or catch that, or defend against that?

Roman: So here’s an interesting observation, I guess. You can develop some sort of a forensic tool to analyze it, and give you a percentage likelihood that it’s real or that it’s fake. But does it really impact people? If you see it with your own eyes, are you going to believe your lying eyes, or some expert statistician on CNN?

So the problem is it will still have tremendous impact on most people. We’re not very successful at convincing people about many scientific facts. They simply go outside, and it’s cold right now, so global warming must be false. I suspect we’ll see exactly that with, let’s say, fake videos of politicians, where a majority of people will easily believe anything they hear once or see once versus any number of peer-reviewed publications disproving it.

David: I kind of agree. I mean, I think, when I try to think about how we would actually solve this kind of problem, I don’t think a technical solution that just allows somebody who has technical expertise to distinguish real from fake is going to be enough. We really need to figure out how to build a better trust infrastructure in our whole society which is kind of a massive project. I’m not even sure exactly where to begin with that.

Roman: I guess the good news is it gives you plausible deniability. If a video of me comes out doing horrible things I can play it straight.

Ariel: That’s good for someone. Alright, so, I mean, you guys are two researchers, I don’t know how into policy you are, but I don’t know if we saw as many strong policies being developed. We did see the implementation of the GDPR, and for people who aren’t familiar with the GDPR, it’s essentially European rules about what data companies can collect from your interactions online, and the ways in which you need to give approval for companies to collect your data, and there’s a lot more to it than that. One of the things that I found most interesting about the GDPR is that it’s entirely European based, but it had a very global impact because it’s so difficult for companies to apply something only in Europe and not in other countries. And so earlier this year when you were getting all of those emails about privacy policies, that was all triggered by the GDPR. That was something very specific that happened and it did make a lot of news, but in general I felt that we saw a lot of countries and a lot of national and international efforts for governments to start trying to understand how AI is going to be impacting their citizens, and then also trying to apply ethics and things like that.

I’m sort of curious, before we get too far into anything: just as researchers, what is your reaction to that?

Roman: So I never got as much spam as I did that week when they released this new policy, so that kind of gives you a pretty good summary of what to expect. If you look at history, we have regulations against spam, for example. Computer viruses are illegal. So that’s a very expected result. It’s not gonna solve technical problems. Right?

David: I guess I like that they’re paying attention and they’re trying to tackle these issues. I think the way GDPR was actually worded, it has been criticized a lot for being either much too broad or demanding, or vague. I’m not sure—there are some aspects of the details of that regulation that I’m not convinced about, or not super happy about. I guess overall it seems like people who are making these kinds of decisions, especially when we’re talking about cutting edge machine learning, it’s just really hard. I mean, even people in the fields don’t really know how you would begin to effectively regulate machine learning systems, and I think there’s a lot of disagreement about what a reasonable level of regulation would be or how regulations should work.

People are starting to have that sort of conversation in the research community a little bit more, and maybe we’ll have some better ideas about that in a few years. But I think right now it seems premature to me to even start trying to regulate machine learning in particular, because we just don’t really know where to begin. I think it’s obvious that we do need to think about how we control the use of the technology, because it’s just so powerful and has so much potential for harm and misuse and accidents and so on. But I think how you actually go about doing that is a really unclear and difficult problem.

Ariel: So for me it’s sort of interesting, we’ve been debating a bit today about technological breakthroughs versus societal impacts, and whether 2018 actually had as many breakthroughs and all of that. But I would guess that all of us agree that AI is progressing a lot faster than government does.

David: Yeah.

Roman: That’s almost a tautology.

Ariel: So I guess as researchers, what concerns do you have regarding that? Like do you worry about the speed at which AI is advancing?

David: Yeah, I would say I definitely do. I mean, we were just talking about this issue with fakes and how that’s going to contribute to things like fake news and erosion of trust in media and authority and polarization of society. I mean, if AI wasn’t going so fast in that direction, then we wouldn’t have that problem. And I think the rate that it’s going, I don’t see us catching up—or I should say, I don’t see the government catching up on its own anytime soon—to actually control the use of AI technology, and do our best anyways to make sure that it’s used in a safe way, and a fair way, and so on.

I think in and of itself it’s maybe not bad that the technology is progressing fast. I mean, it’s really amazing; scientifically there’s gonna be all sorts of amazing applications for it. But there’s going to be more and more problems as well, and I don’t think we’re really well equipped to solve them right now.

Roman: I’ll agree with David, I’m very concerned about the relative rate of progress. AI development progresses a lot faster than anything we see in AI safety. AI safety is just trying to identify problem areas and propose some general directions, but we have very little to show in terms of solved problems.

If you look at our work in adversarial fields, maybe a little bit in cryptography, the good guys have always been a step ahead of the bad guys, whereas here you barely have any good guys as a percentage. You have, like, less than 1% of researchers working directly on safety full-time. Same situation with funding. So it’s not a very optimistic picture at this point.

David: I think it’s worth definitely distinguishing the kind of security risks that we’re talking about, in terms of fake news and stuff like that, from long-term AI safety, which is what I’m most interested in, and think is actually even more important, even though I think there’s going to be tons of important impacts we have to worry about already, and in the coming years.

And the long-term safety stuff is really more about artificial intelligence that becomes broadly capable and as smart or smarter than humans across the board. And there, there are maybe a few more signs of hope if I look at how the field might progress in the future, and that’s because there are a lot of problems that are going to be relevant for controlling or aligning or understanding these kinds of generally intelligent systems that are probably going to need to be solved anyways in terms of making systems that are more capable in the near future.

So I think we’re starting to see issues with trying to get AIs to do what we want, and failing to, because we just don’t know how to specify what we want. And that’s, I think, basically the core of the AI safety problem—is that we don’t have a good way of specifying what we want. An example of that is what are called adversarial examples, which sort of demonstrate that computer vision systems that are able to do a really amazing job at classifying images and seeing what’s in an image and labeling images still make mistakes that humans just would never make. Images that look indistinguishable to humans can look completely different to the AI system, and that means that we haven’t really successfully communicated to the AI system what our visual concepts are. And so even though we think we have done a good job of telling it what to do, it’s like, “tell us what this picture is of”—the way that it found to do that really isn’t the way that we would do it and actually there’s some very problematic and unsettling differences there. And that’s another field that, along with the ones that I mentioned, like generative models and GANs, has been receiving a lot more attention in the last couple of years, which is really exciting from the point of view of safety and specification.
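
One standard way to construct the adversarial examples David mentions is the fast gradient sign method: nudge every input pixel a small step in whichever direction increases the model's loss. The sketch below is purely illustrative, using an untrained toy classifier and a random stand-in "image", so it only demonstrates the mechanics rather than reproducing any published attack.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Tiny untrained classifier standing in for a real image model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28)   # stand-in for a real input image
label = torch.tensor([3])          # stand-in for the image's true label
image.requires_grad_(True)

# Gradient of the loss with respect to the input pixels, not the weights.
loss = loss_fn(model(image), label)
loss.backward()

epsilon = 0.05                     # perturbation size, small enough to be hard to see
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```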

Ariel: So, would it be fair to say that you think we’ve had progress or at least seen progress in addressing long-term safety issues, but some of the near-term safety issues, maybe we need faster work?

David: I mean I think to be clear, we have such a long way to go to address the kind of issues we’re going to see with generally intelligent and super intelligent AIs, that I still think that’s an even more pressing problem, and that’s what I’m personally focused on. I just think that you can see that there are going to be a lot of really big problems in the near term as well. And we’re not even well equipped to deal with those problems right now.

Roman: I’ll generally agree with David. I’m more concerned about long-term impacts. They are both more challenging and more impactful. It seems like short-term things may be problematic right now, but the main difficulty is that we didn’t start working on them in time. So problems like algorithmic fairness, bias, and technological unemployment are social issues which are quite solvable; they are not really that difficult from an engineering or technical point of view. Whereas long-term control of systems which are more intelligent than you are is very much unsolved at this point, even in any toy model. So I would agree with the part about bigger concerns, but I think the current problems we have today are already impacting people, and the good news is we know how to do better.

David: I’m not sure that we know how to do better exactly. Like, I think with a lot of these problems, it’s more a problem of willpower and developing political solutions, for the ones that you mentioned. But with the deep fakes, this is something that I think requires a little bit more than a technical solution, in the sense of how we organize our society so that people are either educated enough to understand this stuff, or so that people actually have someone they trust and have a reason to trust, whose word they can take on that.

Roman: That sounds like a great job, I’ll take it.

Ariel: It almost sounds like something we need to have someone doing in person, though.

So going back to this past year: were there, say, groups that formed, or research teams that came together, or just general efforts that, while maybe they didn’t produce something yet, you think could produce something good, either in safety or AI in general?

David: I think something interesting is happening in terms of the way AI safety is perceived and talked about in the broader AI and machine learning community. It’s a little bit like this phenomenon where once we solve something people don’t consider it AI anymore. So I think machine learning researchers, once they actually recognize the problem that the safety community has been sort of harping on and talking about and saying like, “Oh, this is a big problem”—once they say, “Oh yeah, I’m working on this kind of problem, and that seems relevant to me,” then they don’t really think that it’s AI safety, and they’re like, “This is just part of what I’m doing, making something that actually generalizes well and learns the right concept, or making something that is actually robust, or being able to interpret the model that I’m building, and actually know how it works.”

These are all things that people are doing a lot of work on these days in machine learning that I consider really relevant for AI safety. So I think that’s like a really encouraging sign, in a way, that the community is sort of starting to recognize a lot of the problems, or at least instances of a lot of the problems that are going to be really critical for aligning generally intelligent AIs.

Ariel: And Roman, what about you? Did you see anything sort of forming in the last year that maybe doesn’t have some specific result, but that seemed hopeful to you?

Roman: Absolutely. So I’ve mentioned that there are very few actual AI safety researchers compared to the number of AI developers, researchers directly creating more capable machines. But the growth rate is much better, I think. The number of organizations, the number of people who show interest in it, the number of papers: all of these, I think, are growing at a much faster rate, and it’s encouraging because, as David said, it’s kind of like this convergence, if you will, where more and more people realize, “I cannot say I built an intelligent system if it kills everyone.” That’s just not what an intelligent system is.

So safety and security become integral parts of it. I think Stuart Russell has a great example where he talks about bridge engineering. We don’t talk about safe bridges and secure bridges—there’s just bridges. If it falls down, it’s not a bridge. Exactly the same is starting to happen here: People realize, “My system cannot fail and embarrass the company, I have to make sure it will not cause an accident.”

David: I think that a lot of people are thinking about that way more and more, which is great, but there is a sort of research mindset, where people just want to understand intelligence, and solve intelligence. And I think that’s kind of a different pursuit. Solving intelligence doesn’t mean that you make something that is safe and secure, it just means you make something that’s really intelligent, and I would like it if people who had that mindset were still, I guess, interested in or respectful of or recognized that this research is potentially dangerous. I mean, not right now necessarily, but going forward I think we’re going to need to have people sort of agree on having that attitude to some extent of being careful.

Ariel: Would you agree though that you’re seeing more of that happening?

David: Yeah, absolutely, yeah. But I mean it might just happen naturally on its own, which would be great.

Ariel: Alright, so before I get to my very last question, is there anything else you guys wanted to bring up about 2018 that we didn’t get to yet?

David: So we were talking about AI safety and there’s kind of a few big developments in the last year. I mean, there’s actually too many I think for me to go over all of them, but I wanted to talk about something which I think is relevant to the specification problem that I was talking about earlier.

Ariel: Okay.

David: So, there are three papers in the last year, actually, on what I call superhuman feedback. The idea motivating these works is that even specifying what we want on a particular instance in some particular scenario can be difficult. So typically the way that we would think about training an AI that understands our intentions is to give it a bunch of examples, and say, “In this situation, I prefer if you do this. This is the kind of behavior I want,” and then the AI is supposed to pick up on the patterns there and sort of infer what our intentions are more generally.

But there can be some things that we would like AI systems to be competent at doing, ideally, that are really difficult to even assess individual instances of. An example that I like to use is designing a transit system for a large city, or maybe for a whole country, or the world or something. That’s something that right now is done by a massive team of people. Using that whole team to sort of assess a proposed design that the AI might make would be one example of superhuman feedback, because it’s not just a single human. But you might want to be able to do this with just a single human and a team of AIs helping them, instead of a team of humans. And there are a few proposals for how you could do that that have come out of the safety community recently, which I think are pretty interesting.

Ariel: Why is it called superhuman feedback?

David: Actually, this is just my term for it. I don’t think anyone else is using this term.

Ariel: Okay.

David: Sorry if that wasn’t clear. The reason I use it is because there are three different, like, lines of work here. So there are these two papers from OpenAI on what’s called amplification and debate, and then another paper from DeepMind on reward learning and recursive reward learning. And I like to view these as all kind of trying to solve the same problem: How can we assist humans and enable them to make good, informed judgements that actually reflect what their preferences are, when they’re not capable of doing that by themselves, unaided? So it’s superhuman in the sense that it’s better than a single human can do. And these proposals are also aspiring to do things, I think, that even teams of humans couldn’t do, by having AI helpers that sort of help you do the evaluation.

An example that Jan—who’s the lead author on the DeepMind paper, which I also worked on—gives is assessing an academic paper. So if you yourself aren’t familiar with the field and don’t have the expertise to assess this paper, you might not be able to say whether or not it should be published. But if you can decompose that task into things like: Is the paper valid? Are the proofs valid? Are the experiments following a reasonable protocol? Is it novel? Is it formatted correctly for the venue where it’s submitted? And if you got answers to all of those from helpers, then you could make the judgment. You’d just be like, okay, it meets all of the criteria, so it should be published. The idea would be to get AI helpers to do those sorts of evaluations for you across a broad range of tasks, and allow us to explain to AIs, or teach AIs, what we want across a broad range of tasks in that way.
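
To make that decompose-and-aggregate idea concrete, here is a minimal Python sketch of the paper-review example. The sub-questions, the helper function, and the file name are hypothetical stand-ins; in the actual amplification, debate, and recursive reward modeling proposals the helpers would themselves be trained models, not hand-written rules.

```python
# A toy sketch of decomposing one hard judgment into easier sub-judgments.
# Everything here (sub-questions, dummy_helper, the file name) is illustrative.
from typing import Callable, Dict

SUB_QUESTIONS = [
    "Are the proofs valid?",
    "Do the experiments follow a reasonable protocol?",
    "Is the contribution novel?",
    "Is it formatted correctly for the venue?",
]

def evaluate_paper(paper: str, helper: Callable[[str, str], bool]) -> bool:
    """Assemble a top-level judgment from helper answers to sub-questions."""
    answers: Dict[str, bool] = {q: helper(paper, q) for q in SUB_QUESTIONS}
    # The top-level judge only has to combine easy-to-check verdicts.
    return all(answers.values())

def dummy_helper(paper: str, question: str) -> bool:
    # Placeholder: a real system would replace this with trained AI assistants.
    return True

print(evaluate_paper("some-submission.pdf", dummy_helper))  # True
```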

Ariel: So, okay, and so then were there other things that you wanted to mention as well?

David: I do feel like I should talk about another thing that was, again, not developed last year, but really sort of took off last year: this new kind of neural network architecture called the transformer, which is basically being used in a lot of places where convolutional neural networks and recurrent neural networks were being used before. And those were kind of the two main driving factors behind the deep learning revolution: vision, where you use convolutional networks, and things that have a sequential structure, like speech or text, where people were using recurrent neural networks. And this architecture was actually motivated originally by the same sort of scaling consideration, because it allowed them to remove some of the most computationally heavy parts of running these kinds of models in the context of translation, and basically make it a hundred times cheaper to train a translation model. But since then it’s also been used in a lot of other contexts and has been shown to be a really good replacement for these other kinds of models for a lot of applications.

And I guess the way to describe what it’s doing is that it’s based on what’s called an attention mechanism, which is basically a way of giving a neural network the ability to pay more attention to some parts of an input than to others. So, for example, to look at the one word that is most relevant to the current translation step. If you’re imagining outputting words one at a time, then because different languages put words in different orders, it doesn’t make sense to just translate the input word by word in order. You want to look through the whole input sentence, like a sentence in English, and find the word that corresponds to whatever word should come next in your output sentence.

And that was sort of the original inspiration for this attention mechanism, but since then it’s been applied in a bunch of different ways, including paying attention to different parts of the model’s own computation, and paying attention to different parts of images. Basically, using this attention mechanism in place of the other sorts of neural architectures that people thought were really important for capturing temporal dependencies across something sequential, like a sentence that you’re trying to translate, turned out to work really well.
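
To make the mechanism David describes concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation in the transformer. The shapes, sizes, and variable names are illustrative only and not taken from any particular implementation.

```python
# A minimal sketch of scaled dot-product attention using plain NumPy.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """queries: (n_out, d); keys, values: (n_in, d) -> (n_out, d)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # relevance of each input position
    weights = softmax(scores, axis=-1)       # attention weights over the input
    return weights @ values                  # weighted mix of the input vectors

# Toy usage: one output position "attends" over a 5-word input sentence.
rng = np.random.default_rng(0)
sentence = rng.normal(size=(5, 16))          # 5 input word vectors
query = rng.normal(size=(1, 16))             # current decoding step
context = attention(query, sentence, sentence)
print(context.shape)                         # (1, 16)
```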

Ariel: So I want to actually pass this to Roman real quick. Did you have any comments that you wanted to add to either the superhuman feedback or the transformer architecture?

Roman: Sure, so superhuman feedback: I like the idea and I think people should be exploring that, but we can kind of look at similar examples previously. So, for a while we had a situation where teams of human chess players and machines did better than just unaided machines or unaided humans. That lasted about ten years. And then machines became so much better that humans didn’t really contribute anything; it was kind of just an additional bottleneck to consult with them. I wonder if, long term, this solution will face similar problems. It’s very useful right now, but I don’t know if it will scale.

David: Well I want to respond to that, because I think it’s—the idea here is, in my mind, to have something that actually scales in the way that you’re describing, where it can sort of out-compete pure AI systems. Although I guess some people might be hoping that that’s the case, because that would make the strategic picture better in terms of people’s willingness to use safer systems. But this is more about just how can we even train systems—if we have the willpower, if people want to build a system that has the human in charge, and ends up doing what the human wants—how can we actually do that for something that’s really complicated?

Roman: Right. And as I said, I think it’s a great way to get there. So this part I’m not concerned about. It’s a long-term game with that.

David: Yeah, no, I mean I agree that that is something to be worried about as well.

Roman: There is a possibility of manipulation if you have a human in the loop, and that itself makes it not safer but more dangerous in certain ways.

David: Yeah, one of the biggest concerns I have for this whole line of work is that the human needs to really trust the AI systems that are assisting it, and I just don’t see that we have good enough mechanisms for establishing trust and building trustworthy systems right now, to really make this scale well without introducing a lot of risk for things like manipulation, or even just compounding of errors.

Roman: But those approaches, like the debate approach, it just feels like they’re setting up humans for manipulation from both sides, and it becomes a contest over who’s better at breaking the human’s psychological model.

David: Yep, I think it’s interesting, and I think it’s a good line of work. But I think we haven’t seen anything that looks like a convincing solution to me yet.

Roman: Agreed.

Ariel: So, Roman, was there anything else that you wanted to add about things that happened in the last year that we didn’t get to?

Roman: Well, as a professor, I can tell you that students stop learning after about 40 minutes. So I think at this point we’re just being counterproductive.

Ariel: So for what it’s worth, our most popular podcasts have all exceeded two hours. So, what are you looking forward to in 2019?

Roman: Are you asking about safety or development?

Ariel: Whatever you want to answer. Just sort of in general, as you look toward 2019, what relative to AI are you most excited and hopeful to see, or what do you predict we’ll see?

David: So I’m super excited for people to hopefully pick up on this reward learning agenda that I mentioned, which Jan and I and other people at DeepMind worked on. I was actually pretty surprised how little work has been done on this. So the idea of this agenda, at a high level, is just: we want to learn a reward function—which is like a score that tells an agent how well it’s doing—learn reward functions that encode what we want the AI to do, and that’s the way that we’re going to specify tasks to an AI. And I think from a machine learning researcher’s point of view this is kind of the most obvious solution to specification problems and to safety: just learn a reward function. But very few people are really trying to do that, and I’m hoping that we’ll see more people trying to do that, and encountering and addressing some of the challenges that come up.
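
As a rough illustration of what “learn a reward function” can look like in practice, here is a toy Python sketch that fits a linear reward model from simulated pairwise preferences using a Bradley-Terry-style logistic loss. The data, features, and weights are all invented; this is one common instantiation of the idea, not the code behind the agenda David mentions.

```python
# Toy sketch: recover a hidden reward direction from pairwise preferences.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])                 # hidden "what we want"

def features(n):
    return rng.normal(size=(n, 3))                  # made-up trajectory features

# Simulated human preferences: prefer the option with higher true reward.
A, B = features(500), features(500)
prefs = (A @ true_w > B @ true_w).astype(float)     # 1 if A is preferred

w = np.zeros(3)                                     # learned reward weights
lr = 0.1
for _ in range(2000):
    logits = (A - B) @ w                            # predicted reward difference
    p = 1.0 / (1.0 + np.exp(-logits))               # P(A preferred | w)
    grad = (A - B).T @ (p - prefs) / len(prefs)     # logistic-loss gradient
    w -= lr * grad

# The learned direction approximately matches the hidden one.
print(np.round(w / np.linalg.norm(w), 2))
print(np.round(true_w / np.linalg.norm(true_w), 2))
```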

Roman: So I think, by definition, we cannot predict short-term breakthroughs. So what we’ll see is a lot of continuation of 2018 work, and previous work scaling up. So, if you have, let’s say, Texas hold ’em poker solved for two players, we’ll take it to six players, ten players, something like that. And you can make similar projections for other fields: the strategy games will be taken to new maps, involve more players, maybe additional handicaps will be introduced for the bots. But that’s all we can really predict, kind of gradual improvement.

Protein folding will be even more efficient in terms of predicting actual structures: Accuracy rates that were climbing from 80% to 90% will hit 95, 96. And this is a very useful way of predicting what we can anticipate, and I’m trying to do something similar with accidents. So if we can see historically what was going wrong with systems, we can project those trends forward. And I’m happy to say that there are now at least two or three different teams working on collecting those examples and trying to analyze them and create taxonomies for them. So that’s very encouraging.

David: Another thing that comes to mind is—I mentioned adversarial examples earlier, which are differences imperceptible to a human that change how the AI system perceives something like an image. And so far, for the most part, the field has been focused on really imperceptible changes. But I think now people are starting to move towards a broader idea of what counts as an adversarial example: basically any input that a human thinks clearly belongs to one class and the AI system thinks clearly belongs to another class, where the input has been constructed deliberately to create that kind of difference.

And I think it’s going to be really interesting and exciting to see how the field tries to move in that direction, because as I mentioned, I think it’s hard to define how humans decide whether or not something is a picture of a cat. And the way that we’ve done it so far is just by giving lots of examples of things that we say are cats. But it turns out that that isn’t sufficient, and so I think this is really going to push a lot of people in the mainstream machine learning community closer towards thinking about some of the really core safety challenges. So I think that’s super exciting.
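
For a concrete picture of an adversarial example, here is a toy Python sketch using a made-up linear “cat” scorer: nudging each input dimension slightly in the direction that most lowers the score flips the model’s decision, even though each value changes only a little. Real attacks like FGSM do the analogous thing with the gradients of a deep network; the model and numbers here are purely illustrative.

```python
# Toy adversarial perturbation against a made-up linear classifier.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=100), 0.0          # toy linear classifier weights

def predict(x):
    return int(x @ w + b > 0)             # 1 = "cat", 0 = "not cat"

x = rng.normal(size=100)
if predict(x) == 0:                       # start from an input labeled "cat"
    x = -x

# Move each dimension against the gradient of the score (here just sign(w)),
# using just enough per-dimension change to cross the decision boundary.
eps = (x @ w + b) / np.abs(w).sum() + 1e-3
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))         # 1 -> 0: the label flips
print(round(float(eps), 3))               # per-dimension change stays small
```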

Roman: It is a very interesting topic, and I am in particular looking at a side subject in that, which is adversarial inputs for humans: machines developing what I guess are kind of like optical illusions and audio illusions, where a human mislabels inputs in a predictable way, which allows for manipulation.

Ariel: Along very similar lines, I think I want to modify my questions slightly, and also ask: coming up in 2019, what are you both working on that you’re excited about, if you can tell us?

Roman: Sure, so there have been a number of publications looking at particular limitations, either through mathematical proofs or through well-known economic models, and at what is possible from a computational complexity point of view. And I’m trying to integrate those into a single model showing—in principle, not in practice, but even in principle—what can we do with the AI control problem? How solvable is it? Is it solvable? Is it not solvable? Because I don’t think there is a mathematically rigorous proof, or even a rigorous argument, either way. So I think that will be helpful, especially for arguing about the importance of the problem and about resource allocation.

David: I’m trying to think what I can talk about. I guess right now I have some ideas for projects that are not super well thought out, so I won’t talk about those. And I have a project that I’m trying to finish off which is a little bit hard to describe in detail, but I’ll give the really high level motivation for it. And it’s about something that people in the safety community like to call capability control. I think Nick Bostrom has these terms, capability control and motivation control. And so what I’ve been talking about most of the time in terms of safety during this podcast was more like motivation control, like getting the AI to want to do the right thing, and to understand what we want. But that might end up being too hard, or sort of limited in some respect. And the alternative is just to make AIs that aren’t capable of doing things that are dangerous or catastrophic.

A lot of people in the safety community sort of worry about capability control approaches failing, because if you have a very intelligent agent, it will view these attempts to control it as undesirable, and try to free itself from any constraints that we give it. And I think a way of trying to get around that problem is to look at capability control through the lens of motivation control. So, basically, make an AI that doesn’t want to influence certain things, and maybe doesn’t have some of these drives to influence the world, or to influence the future. And so in particular I’m trying to see how we can design agents that really don’t try to influence the future, and really only care about doing the right thing, right now. If we try to do that in a sort of naïve way, there are ways it can fail, and we can get some sort of emergent drive to still try to optimize over the long term, or to have some influence on the future. And I think to the extent we see things like that, that’s problematic from this perspective of: let’s just make AIs that aren’t capable of, or motivated toward, influencing the future.

Ariel: Alright! I think I’ve kept you both on for quite a while now. So, David and Roman, thank you so much for joining us today.

David: Yeah, thank you both as well.

Roman: Thank you so much.

FLI Podcast: Artificial Intelligence: American Attitudes and Trends with Baobao Zhang

Our phones, our cars, our televisions, our homes: they’re all getting smarter. Artificial intelligence is already inextricably woven into everyday life, and its impact will only grow in the coming years. But while this development inspires much discussion among members of the scientific community, public opinion on artificial intelligence has remained relatively unknown.

Artificial Intelligence: American Attitudes and Trends, a report published earlier in January by the Center for the Governance of AI, explores this question. Its authors relied on an in-depth survey to analyze American attitudes towards artificial intelligence, from privacy concerns to beliefs about U.S. technological superiority. Some of their findings—most Americans, for example, don’t trust Facebook—were unsurprising. But much of their data reflects trends within the American public that have previously gone unnoticed.

This month Ariel was joined by Baobao Zhang, lead author of the report, to talk about these findings. Zhang is a PhD candidate in Yale University’s political science department and research affiliate with the Center for the Governance of AI at the University of Oxford. Her work focuses on American politics, international relations, and experimental methods.

In this episode, Zhang spoke about her take on some of the report’s most interesting findings, the new questions it raised, and future research directions for her team. Topics discussed include:

  • Demographic differences in perceptions of AI
  • Discrepancies between expert and public opinions
  • Public trust (or lack thereof) in AI developers
  • The effect of information on public perceptions of scientific issues

Research and publications discussed in this episode include:

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi there. I’m Ariel Conn with the Future of Life Institute. Today, I am doing a special podcast, which I hope will be just the first in a continuing series, in which I talk to researchers about the work that they’ve just published. Last week, a report came out called Artificial Intelligence: American Attitudes and Trends, which is a survey that looks at what Americans think about AI. I was very excited when the lead author of this report agreed to come join me and talk about her work on it, and I am actually now going to just pass this over to her, and let her introduce herself, and just explain a little bit about what this report is and what prompted the research.

Baobao: My name is Baobao Zhang. I’m a PhD candidate in Yale University’s political science department, and I’m also a research affiliate with the Center for the Governance of AI at the University of Oxford. We conducted a survey of 2,000 American adults in June 2018 to look at what Americans think about artificial intelligence. We did so because we believe that AI will impact all aspects of society, and therefore, the public is a key stakeholder. We feel that we should study what Americans think about this technology that will impact them. In this survey, we covered a lot of ground. In the past, surveys about AI tend to have very specific focus, for instance on automation and the future of work. What we try to do here is cover a wide range of topics, including the future of work, but also lethal autonomous weapons, how AI might impact privacy, and trust in various actors to develop AI.

So one of the things we found is Americans believe that AI is a technology that should be carefully managed. In fact, 82% of Americans feel this way. Overall, Americans express mixed support for developing AI. 41% somewhat support or strongly support the development of AI, while there’s a smaller minority, 22%, that somewhat or strongly opposes it. And in terms of the AI governance challenges that we asked—we asked about 13 of them—Americans think all of them are quite important, although they prioritize preventing AI-assisted surveillance from violating privacy and civil liberties, preventing AI from being used to spread fake news online, preventing AI cyber attacks, and protecting data privacy.

Ariel: Can you talk a little bit about what the difference is between concerns about AI governance and concerns about AI development and more in the research world?

Baobao: In terms of the support for developing AI, we saw that as a general question in terms of support—we didn’t get into the specifics of what developing AI might look like. But in terms of the governance challenges, we gave quite detailed, concrete examples of governance challenges, and these tend to be more specific.

Ariel: Would it be fair to say that this report looks specifically at governance challenges as opposed to development?

Baobao: It’s a bit of both. I think we ask both about the R&D side, for instance we ask about support for developing AI and which actors the public trusts to develop AI. On the other hand, we also ask about the governance challenges. Among the 13 AI governance challenges that we presented to respondents, Americans tend to think all of them are quite important.

Ariel: What were some of the results that you expected, that were consistent with what you went into this survey thinking people thought, and what were some of the results that surprised you?

Baobao: Some of the results that surprised us is how soon the public thinks that high-level machine intelligence will be developed. We find that they think it will happen a lot sooner than what experts predict, although some past research suggests similar results. What didn’t surprise me, in terms of the AI governance challenge question, is how people are very concerned about data privacy and digital manipulation. I think these topics have been in the news a lot recently, given all the stories about hacking or digital manipulation on Facebook.

Ariel: So going back real quick to your point about the respondents expecting high-level AI happening sooner: how soon do they expect it?

Baobao: In our survey, we asked respondents about high-level machine intelligence, and we defined it as when machines are able to perform almost all tasks that are economically relevant today better than the median human today at each task. My co-author, Allan Dafoe, and some of my other team members, we’ve done a survey asking AI researchers—this was back in 2016—a similar question, and there we had a different definition of high-level machine intelligence that required a higher bar, so to speak. So that might have caused some difference. We’re trying to ask this question again to AI researchers this year. We’re doing continuing research, so hopefully the results will be more comparable. Even so, I think the difference is quite large.

I guess one more caveat—we have this in a footnote—is that in a pilot survey of the American public, we did ask using the same definition we gave AI experts in 2016, and we also found that the public thinks high-level machine intelligence will happen sooner than experts predict. So it might not just be driven by the definition itself; the public and experts seem to have different assessments. But to answer your question, the median respondent in our American public sample predicts that there’s a 54% probability of high-level machine intelligence being developed within the next 10 years, which is quite a high probability.

Ariel: I’m hesitant to ask this, because I don’t know if it’s a very fair question, but do you have thoughts on why the general public thinks that high-level AI will happen sooner? Do you think it is just a case that there’s different definitions that people are referencing, or do you think that they’re perceiving the technology differently?

Baobao: I think that’s a good question, and we’re doing more research to investigate these results and to probe at it. One thing is that the public might have a different perception of what AI is compared to experts. In future surveys, we definitely want to investigate that. Another potential explanation is that the public lacks understanding of what goes into AI R&D.

Ariel: Have there been surveys that are as comprehensive as this in the past?

Baobao: I’m hesitant to say that there are surveys that are as comprehensive as this. We certainly relied on a lot of past survey research when building our surveys. The Eurobarometer had a couple of good surveys on AI in the past, but I think we cover both sort of the long-term and the short-term AI governance challenges, and that’s something that this survey really does well.

Ariel: Okay. The reason I ask that is I wonder how much people’s perceptions or misperceptions of how fast AI is advancing would be influenced by just the fact that we have had significant advancements just in the last couple of years that I don’t think were quite as common during previous surveys that were presented to people.

Baobao: Yes, that certainly makes sense. One part of our survey tries to track responses over time, so I was able to dig up some surveys going all the way back to the 1980s that were conducted by the National Science Foundation on the question of automation—whether automation will create more jobs or eliminate more jobs. And we find that compared with the historical data, the percentage of people who think that automation will create more jobs than it eliminates—that percentage has decreased, so this result could be driven by people reading in the news about all these advances in AI and thinking, “Oh, AI is getting really good these days at doing tasks normally done by humans,” but again, you would need much more data to sort of track these historical trends. So we hope to do that. We just recently received a grant from the Ethics and Governance of AI Fund, to continue this research in the future, so hopefully we will have a lot more data, and then we can really map out these historical trends.

Ariel: Okay. We looked at those 13 governance challenges that you mentioned. I want to more broadly ask the same two-part question of: looking at the survey in its entirety, what results were most expected and what results were most surprising?

Baobao: In terms of the AI governance challenge question, I think we had expected some of the results. We’d done some pilot surveys in the past, so we were able to have a little bit of a forecast, in terms of the governance challenges that people prioritize, such as data privacy, cyber attacks, surveillance, and digital manipulation. These were also things that respondents in the pilot surveys had prioritized. I think some of the governance challenges that people still think of as important, but don’t view as likely to impact large numbers of people in the next 10 years, such as critical AI systems failure—these questions are sort of harder to ask in some ways. I know that AI experts think about it a lot more than, say, the general public.

Another thing that sort of surprised me is how much people think value alignment—which is sort of an abstract concept—is quite important, and also likely to impact large numbers of people within the next 10 years. It’s up there with safety of autonomous vehicles or biased hiring algorithms, so that was somewhat surprising.

Ariel: That is interesting. So if you’re asking people about value alignment, were respondents already familiar with the concept, or was this something that was explained to them and they just had time to consider it as they were looking at the survey?

Baobao: We explained to them what it meant, and we said that it means to make sure that AI systems are safe, trustworthy, and aligned with human values. Then we gave a brief paragraph definition. We think that maybe people haven’t heard of this term before, or it could be quite abstract, so therefore we gave a definition.

Ariel: I would be surprised if it was a commonly known term. Then looking more broadly at the survey as a whole, you looked at lots of different demographics. You asked other questions too, just in terms of things like global risks and the potential for global risks, or generally about just perception of AI in general, and whether or not it was good, and whether or not advanced AI was good or bad, and things like that. So looking at the whole survey, what surprised you the most? Was it still answers within the governance challenges, or did anything else jump out at you as unexpected?

Baobao: Another thing that jumped out at me is that respondents who have computer science or engineering degrees tend to think that the AI governance challenges are less important across the board than people who don’t have computer science or engineering degrees. These people with computer science or engineering degrees also are more supportive of developing AI. I suppose that result is not totally unexpected, but I suppose in the news there is a sense that people who are concerned about AI safety, or AI governance challenges, tend to be those who have a technical computer background. But in reality, what we see are people who don’t have a tech background who are concerned about AI. For instance, women, those with low levels of education, or those who are low-income, tend to be the least supportive of developing AI. That’s something that we want to investigate in the future.

Ariel: There’s an interesting graph in here where you’re showing the extent to which the various groups consider an issue to be important, and as you said, people with computer science or engineering degrees typically don’t consider a lot of these issues very important. I’m going to list the issues real quickly. There’s data privacy, cyber attacks, autonomous weapons, surveillance, autonomous vehicles, value alignment, hiring bias, criminal justice bias, digital manipulation, US-China arms race, disease diagnosis, technological unemployment, and critical AI systems failure. So as you pointed out, the people with the CS and engineering degrees just don’t seem to consider those issues nearly as important, but you also have a category here of people with computer science or programming experience, and they have very different results. They do seem to be more concerned. Now, I’m sort of curious what the difference was between someone who has experience with computer science and someone who has a degree in computer science.

Baobao: I don’t have a very good explanation for the difference between the two, except that I can say the experience category is a lower bar, so there are more people in the sample who have computer science or programming experience—in fact, there are 735 of them, compared to 195 people who have computer science or engineering undergraduate or graduate degrees. So those with CS or programming experience comprise a greater number of people. Going forward, in future surveys, we want to probe at this a bit more. We might look at what industries various people are working in, or how much experience they have either using AI or developing AI.

Ariel: And then I’m also sort of curious—I know you guys still have more work that you want to do—but I’m curious what you know now about how American perspectives are either different or similar to people in other countries.

Baobao: The most direct comparison that we can make is with respondents in the EU, because we have a lot of data based on the Eurobarometer surveys, and we find that Americans share similar concerns with Europeans about AI. So as I mentioned earlier, 82% of Americans think that AI is a technology that should be carefully managed, and that percentage is similar to what the EU respondents have expressed. Also, we find similar demographic trends, in that women, those with lower levels of income or lower levels of education, tend to be not as supportive of developing AI.

Ariel: I went through this list, and one of the things that was on it is the potential for a US-China arms race. Can you talk a little bit about the results that you got from questions surrounding that? Do Americans seem to be concerned about a US-China arms race?

Baobao: One of the interesting findings from our survey is that Americans don’t necessarily think the US or China is the best at AI R&D, which is surprising, given that these two countries are probably the best. That’s a curious fact that I think we need to be cognizant of.

Ariel: I want to interject there, and then we can come back to my other questions, because I was really curious about that. Is that a case of the way you asked it—it was just, you know, “Is the US in the lead? Is China in the lead?”—as opposed to saying, “Do you think the US or China are in the lead?” Did respondents seem confused by possibly the way the question was asked, or do they actually think there’s some other country where there’s even more research happening?

Baobao: We asked this question in the way that Pew Research Center has asked about general scientific achievements, so we set it up as a survey experiment where half of the respondents were randomly assigned to consider the US and half were randomly assigned to consider China. We wanted to ask the question in this manner so that we get a more specific distribution of responses. When you just ask who is in the lead, you’re only allowed to put down one answer, whereas we give respondents a number of choices, so a country can be rated best in the world, above average, et cetera.

In terms of people underestimating US R&D, I think this is reflective of the public underestimating US scientific achievements in general. Pew had a similar question in a 2015 survey, and while 45% of the scientists they interviewed think that scientific achievements in the US are the best in the world, only 15% of Americans expressed the same opinion. So this could just be reflecting this general trend.

Ariel: I want to go back to my questions about the US-China arms race, and I guess it does make sense, first, to just define what you are asking about with a US-China arms race. Is that focused more on R&D, or were you also asking about a weapons race?

Baobao: This is actually a survey experiment, where we present different messages to respondents about a potential US-China arms race, and we asked both about investment in AI military capabilities and about developing AI in a more peaceful manner, including cooperation between the US and China on general R&D. We found that Americans seem to support both positions: they support the US investing more in AI military capabilities, to make sure that it doesn’t fall behind China’s, even though that would exacerbate an AI military arms race. On the other hand, they also support the US working hard to cooperate with China to avoid the dangers of an AI arms race, and they don’t seem to understand that there’s a trade-off between the two.

I think this result is important for policymakers trying not to exacerbate an arms race, or trying to prevent one: when communicating with the public, they should communicate these trade-offs. We find that messages explaining the risks of an arms race tend to decrease respondent support for the US investing more in AI military capabilities, but the other information treatments don’t seem to change public perceptions.

Ariel: Do you think it’s a misunderstanding of the trade-offs, or maybe just hopeful thinking that there’s some way to maintain military might while still cooperating?

Baobao: I think this is a question that involves further investigation. I apologize that I keep saying this.

Ariel: That’s the downside to these surveys. I end up with far more questions than get resolved.

Baobao: Yes, and we’re one of the first groups who are asking these questions, so we’re just at the beginning stages of probing this very important policy question.

Ariel: With a project like this, do you expect to get more answers or more questions?

Baobao: I think in the beginning stages, we might get more questions than answers, although we are certainly getting some important answers—for instance that the American public is quite concerned about the societal impacts of AI. With that result, then we can probe and get more detailed answers hopefully. What are they concerned about? What can policymakers do to alleviate these concerns?

Ariel: Let’s get into some of the results that you had regarding trust. Maybe you could just talk a little bit about what you asked the respondents first, and what some of their responses were.

Baobao: Sure. We asked two questions regarding trust. We asked about trust in various actors to develop AI, and we also asked about trust in various actors to manage the development and deployment of AI. These actors include parts of the US government, international organizations, companies, and other groups such as universities or nonprofits. We found that among the actors that are most trusted to develop AI, these include university researchers and the US military.

Ariel: That was a rather interesting combination, I thought.

Baobao: I would like to give it some context. In general, trust in institutions is low among the American public. Particularly, there’s a lot of distrust in the government, and university researchers and the US military are the most trusted institutions across the board, when you ask about other trust issues.

Ariel: I would sort of wonder if there are political sides on which people are more likely to trust universities and researchers versus the military. Was it the case across the board that respondents on either side of the political aisle trusted both, or were there political demographics involved in that?

Baobao: That’s something that we can certainly look into with our existing data. I would need to check and get back to you.

Ariel: The other thing that I thought was interesting with that—and we can get into the actors that people don’t trust in a minute—but I know I hear a lot of concern that Americans don’t trust scientists. As someone who does a lot of science communication, I think that concern is overblown. I think there is actually a significant amount of trust in scientists; There are just certain areas where it’s less, and I was sort of wondering what you’ve seen in terms of trust in science, and if the results of this survey have impacted that at all.

Baobao: I would like to add that trust is relatively low for all of the actors we asked about who are currently building AI or planning to build AI.

Ariel: Okay.

Baobao: So, even with university scientists: 50% of respondents say that they have a great amount of confidence or a fair amount of confidence in university researchers developing AI in the interest of the public, so that’s better than some of these other organizations, but it’s not super high, and that is a bit concerning. And in terms of trust in science in general—I used to work in the climate policy space before I moved into AI policy, and trust in expertise with regard to climate change is a question we struggled with there. I found in my past research that communicating the scientific consensus on climate change is actually an effective messaging tool, so your sense that concerns about distrust in science are overblown could well be true. So I think going forward, in terms of effective scientific communication, having AI researchers deliver an effective message could be important in bringing the public to trust AI more.

Ariel: As someone in science communication, I would definitely be all for that, but I’m also all for more research to understand that better. I also want to go into the organizations that Americans don’t trust.

Baobao: I think in terms of tech companies, they’re not perceived as untrustworthy across the board. I think trust is still relatively high for tech companies, besides Facebook. People really don’t trust Facebook, and that could be because of all the recent coverage of Facebook violating data privacy, the Cambridge Analytica scandal, digital manipulation on Facebook, et cetera. We conducted this survey a few months after the Cambridge Analytica Facebook scandal had been in the news, but we’ve also run some pilot surveys before all that press coverage of the Cambridge Analytica Facebook scandal broke, and we also found that people distrust Facebook. So it might be something particular to the company, although that’s a cautionary tale for other tech companies: they should work hard to make sure that the public trusts their products.

Ariel: So I’m looking at this list, and under the tech companies, you asked about Microsoft, Google, Facebook, Apple, and Amazon. And I guess one question that I have—the trust in the other four, Microsoft, Google, Apple, and Amazon appears to be roughly on par, and then there’s very limited trust in Facebook. But I wonder, do you think it’s just—since you’re saying that Facebook also wasn’t terribly trusted beforehand—do you think that has to do with the fact that we have to give so much more personal information to Facebook? I don’t think people are aware of giving as much data to even Google, or Microsoft, or Apple, or Amazon.

Baobao: That could be part of it. So, I think going forward, we might want to ask more detailed questions about how people use certain platforms, or whether they’re aware that they’re giving data to particular companies.

Ariel: Are there any other reasons that you think could be driving people to not trust Facebook more than the other companies, especially as you said, with the questions and testing that you’d done before the Cambridge Analytica scandal broke?

Baobao: Before the Cambridge Analytica Facebook scandal, there was a lot of news coverage around the 2016 elections of vast digital manipulation on Facebook and on social media, so that could be driving the results.

Ariel: Okay. Just to be consistent and ask you the same question over and over again, with this, what did you find surprising and what was on par with your expectations?

Baobao: I suppose I don’t find the Facebook results that surprising, given the negative press coverage, and also given our pilot results. What I did find surprising is the high level of trust in the US military to develop AI, because I think some of us in the AI policy community are concerned about military applications of AI, such as lethal autonomous weapons. But on the other hand, Americans seem to place a high general level of trust in the US military.

Ariel: Yeah, that was an interesting result. So if you were going to move forward, what are some questions that you would ask to try to get a better feel for why the trust is there?

Baobao: I think I would like to ask some questions about particular uses or applications of AI these various actors are developing. Sometimes people aren’t aware that the US military is perhaps investing in this application of AI that they might find problematic, or that some tech companies are working on some other applications. I think going forward, we might do more of these survey experiments, where we give information to people and see if that increases or decreases trust in the various actors.

Ariel: What did Americans think of high-level machine intelligence and AI?

Baobao: What we found is that the public thinks, on balance, it will be more bad than good: So we have 15% of respondents who think it will be extremely bad, possibly leading to human extinction, and that’s a concern. On the other hand, only 5% think it will be extremely good. There’s a lot of uncertainty. To be fair, it is about a technology that a lot of people don’t understand, so 18% said, “I don’t know.”

Ariel: What do we take away from that?

Baobao: I think this also reflects on our previous findings that I talked about, where Americans expressed concern about where AI is headed: that there are people with serious reservations about AI’s impact on society. Certainly, AI researchers and policymakers should take these concerns seriously, invest a lot more research into how to prevent the bad outcomes and how to make sure that AI can be beneficial to everyone.

Ariel: Were there groups who surprised you by either being more supportive of high-level AI and groups who surprised you by being less supportive of high-level AI?

Baobao: I think the results for support of developing high-level machine intelligence versus support for developing AI, they’re quite similar. The correlation is quite high, so I suppose nothing is entirely surprising. Again, we find that people with CS or engineering degrees tend to have higher levels of support.

Ariel: I find it interesting that people who have higher incomes seem to be more supportive as well.

Baobao: Yes. That’s another result that’s pretty consistent across the two questions. We also performed analysis looking at these different levels of support for developing high-level machine intelligence, controlling for support of developing AI, and what we find there is that those with CS or programming experience have greater support of developing high-level machine intelligence, even controlling for support of developing AI. So there, it seems to be another tech optimism story, although we need to investigate further.

Ariel: And can you explain what you mean when you say that you’re analyzing the support for developing high-level machine intelligence with respect to the support for AI? What distinction are you making there?

Baobao: Sure. So we use a multiple linear regression model, where we’re trying to predict support for developing high-level machine intelligence using all these demographic characteristics, but also including respondents’ support for developing AI, to see if there’s something driving support for developing high-level machine intelligence even when we control for support for developing AI. And we find that, controlling for support for developing AI, having CS or programming experience is still positively correlated with support for developing high-level machine intelligence. I hope that makes sense.
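
As a rough sketch of what “controlling for” a covariate means here, the following Python snippet fits an ordinary least squares regression on simulated data, predicting support for high-level machine intelligence from a CS/programming-experience indicator while including support for AI in the design matrix. The data and coefficients are invented for illustration and are not the report’s actual model or estimates.

```python
# Illustrative OLS with a control covariate, on made-up data.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
cs_experience = rng.integers(0, 2, size=n)            # 1 = has CS experience
support_ai = rng.normal(0.3 * cs_experience, 1.0)     # general AI support
# Simulate an extra effect of CS experience beyond general AI support:
support_hlmi = 0.5 * support_ai + 0.4 * cs_experience + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), cs_experience, support_ai])  # with intercept
coef, *_ = np.linalg.lstsq(X, support_hlmi, rcond=None)

print(dict(zip(["intercept", "cs_experience", "support_ai"], coef.round(2))))
# The cs_experience coefficient stays positive even with support_ai included.
```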

Ariel: For the purposes of the survey, how do you distinguish between AI and high-level machine intelligence?

Baobao: We defined AI as computer systems that perform tasks or make decisions that usually require human intelligence. So that’s a more general definition, versus high-level machine intelligence defined in such a way where the AI is doing most economically relevant tasks at the level of the median human.

Ariel: Were there inconsistencies between those two questions, where you were surprised to find support for one and not support for the other?

Baobao: We can sort of probe it further, to see if there are people who answered differently on those two questions. We haven’t looked into it, but certainly that’s something we can do with our existing data.

Ariel: Were there any other results that you think researchers specifically should be made aware of, that could potentially impact the work that they’re doing in terms of developing AI?

Baobao: I guess here’s some general recommendations. I think it’s important for researchers or people working in an adjacent space to do a lot more scientific communication to explain to the public what they’re doing—particularly maybe AI safety researchers, because I think there’s a lot of hype about AI in the news, either how scary it is or how great it will be, but I think some more nuanced narratives would be helpful for people to understand the technology.

Ariel: I’m more than happy to do what I can to try to help there. So for you, what are your next steps?

Baobao: Currently, we’re working on two projects. We’re hoping to run a similar survey in China this year, so we’re currently translating the questions into Chinese and changing the questions to have more local context. So then we can compare our results—the US results with the survey results from China—which will be really exciting. We’re also working on surveying AI researchers about various aspects of AI, both looking at their predictions for AI development timelines, but also their views on some of these AI governance challenge questions.

Ariel: Excellent. Well, I am very interested in the results of those as well, so I hope you’ll keep us posted when those come out.

Baobao: Yes, definitely. I will share them with you.

Ariel: Awesome. Is there anything else you wanted to mention?

Baobao: I think that’s it.

Ariel: Thank you so much for joining us.

Baobao: Thank you. It’s a pleasure talking to you.

Podcast: Existential Hope in 2019 and Beyond

Humanity is at a turning point. For the first time in history, we have the technology to completely obliterate ourselves. But we’ve also created boundless possibilities for all life that could enable just about any brilliant future we can imagine. Humanity could erase itself with a nuclear war or a poorly designed AI, or we could colonize space and expand life throughout the universe: As a species, our future has never been more open-ended.

The potential for disaster is often more visible than the potential for triumph, so as we prepare for 2019, we want to talk about existential hope, and why we should actually be more excited than ever about the future. In this podcast, Ariel talks to six experts–Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark, and Anders Sandberg–about their views on the present, the future, and the path between them.

Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and creator of the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.

We hope you’ll come away feeling inspired and motivated–not just to prevent catastrophe, but to facilitate greatness.

Topics discussed in this episode include:

  • How technology aids us in realizing personal and societal goals.
  • FLI’s successes in 2018 and our goals for 2019.
  • Worldbuilding and how to conceptualize the future.
  • The possibility of other life in the universe and its implications for the future of humanity.
  • How we can improve as a species and strategies for doing so.
  • The importance of a shared positive vision for the future, what that vision might look like, and how a shared vision can still represent a wide enough set of values and goals to cover the billions of people alive today and in the future.
  • Existential hope and what it looks like now and far into the future.

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi everyone. Welcome back to the FLI podcast. I’m your host, Ariel Conn, and I am truly excited to bring you today’s show. This month, we’re departing from our standard two-guest interview format because we wanted to tackle a big and fantastic topic for the end of the year that would require insight from a few extra people. It may seem as if we at FLI spend a lot of our time worrying about existential risks, but it’s helpful to remember that we don’t do this because we think the world will end tragically: We address issues relating to existential risks because we’re so confident that if we can overcome these threats, we can achieve a future greater than any of us can imagine.

And so, as we end 2018 and look toward 2019, we want to focus on a message of hope, a message of existential hope.

I’m delighted to present Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark and Anders Sandberg, all of whom were kind enough to come on the show and talk about why they’re so hopeful for the future and just how amazing that future could be.

Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and she created the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.

Over the course of a few days, I interviewed all six of our guests, and I have to say, it had an incredibly powerful and positive impact on my psyche. We’ve merged these interviews together for you here, and I hope you’ll all also walk away feeling a bit more hope for humanity’s collective future, whatever that might be.

But before we go too far into the future, let’s start with Anthony and Max, who can talk a bit about where we are today.

Anthony: I’m Anthony Aguirre, I’m one of the founders of the Future of Life Institute. And in my day job, I’m a Physicist at the University of California at Santa Cruz.

Max: I am Max Tegmark, a professor doing physics and AI research here at MIT, and also the president of the Future of Life Institute.

Ariel: All right. Thank you so much for joining us today. I’m going to start with sort of a big question. That is, do you think we can use technology to solve today’s problems?

Anthony: I think we can use technology to solve any problem in the sense that I think technology is an extension of our capability: it’s something that we develop in order to accomplish our goals and to bring our will to fruition. So, sort of by definition, when we have goals that we want to achieve — problems that we want to solve — technology should in principle be part of the solution.

Max: Take, for example, poverty. It’s not like we don’t have the technology right now to eliminate poverty. But we’re steering the technology in such a way that there are people who starve to death, and even in America there are a lot of children who just don’t get enough to eat, through no fault of their own.

Anthony: So I’m broadly optimistic that, as it has over and over again, technology will let us do things that we want to do better than we were previously able to do them. Now, that being said, there are things that are more amenable to better technology, and things that are less amenable. And there are technologies that tend to, rather than functioning as kind of an extension of our will, will take on a bit of a life of their own. If you think about technologies like medicine, or good farming techniques, those tend to be sort of overall beneficial and really are kind of accomplishing purposes that we set. You know, we want to be more healthy, we want to be better fed, we build the technology and it happens. On the other hand, there are obviously technologies that are just as useful or even more useful for negative purposes — socially negative or things that most people agree are negative things: landmines, for example, as opposed to vaccines. These technologies come into being because somebody is trying to accomplish their purpose — defending their country against an invading force, say — but once that technology exists, it’s kind of something that is easily used for ill purposes.

Max: Technology simply empowers us to do good things or bad things. Technology isn’t evil, but it’s also not good. It’s morally neutral. Right? You can use fire to warm up your home in the winter or to burn down your neighbor’s house. We have to figure out how to steer it and where we want to go with it. I feel that there’s been so much focus on just making our tech powerful right now — because that makes money, and it’s cool — that we’ve neglected the steering and the destination quite a bit. And in fact, I see the core goal of the Future of Life Institute: Help bring back focus on the steering of our technology and the destination.

Anthony: There are also technologies that are really tricky in that they give us what we think we want, but then we sort of regret having later, like addictive drugs, or gambling, or cheap sugary foods, or-

Ariel: Social media.

Anthony: … certain online platforms that will go unnamed. We feel like this is what we want to do at the time; We choose to do it. We choose to eat the huge sugary thing, or to spend some time surfing the web. But later, with a different perspective maybe, we look back and say, “Boy, I could’ve used those calories, or minutes, or whatever, better.” So who’s right? Is it the person at the time who’s choosing to eat or play or whatever? Or is it the person later who’s deciding, “Yeah, that wasn’t a good use of my time”? Those technologies, I think, are very tricky, because in some sense they’re giving us what we want. So we reward them, we buy them, we spend money, the industries develop, the technologies have money behind them. At the same time, it’s not clear that they make us happier.

So I think there are certain social problems, and problems in general, that technology will be tremendously helpful in improving as long as we can act to sort of wisely try to balance the effects of technology that have dual use toward the positive, and as long as we can somehow get some perspective on what to do about these technologies that take on a life of their own, and tend to make us less happy, even though we dump lots of time and money into them.

Ariel: This sort of idea of technologies — that we’re using them and as we use them we think they make us happy and then in the long run we sort of question that — is this a relatively modern problem, or are there examples from further back in history that we can learn from?

Anthony: I think it goes fairly far back. Certainly drug use goes a fair ways back. I think there have been periods where drugs were used as part of religious or social ceremonies and in other kind of more socially constructive ways. But then, it’s been a fair amount of time where opiates and very addictive things have existed also. Those have certainly caused social problems back at least a few centuries.

I think a lot of these examples of technologies that give us what we seem to want but not really what we want are ones in which we’re applying the technology to a species — us — that developed in a very different set of circumstances, and that contrast between what’s available and what we evolutionarily wanted is causing a lot of problems. The sugary foods are an obvious example where we can now just supply huge plenitudes of something that was very rare and precious back in more evolutionary times — you know, sweet calories.

Drugs are something similar. We have a set of chemistry that helps us out in various situations, and then we’re just feeding those same chemical pathways to make ourselves feel good in a way that is destructive. And violence might be something similar. Violent technologies go way, way back. Those are another example of things that we clearly invented to further our will and accomplish our goals. They’re also things that may at some level be addictive to humans. I think it’s not entirely clear exactly how — there’s a strange mix there, but I think there’s certainly something compelling and built into at least many humans’ DNA that promotes fighting and hunting and all kinds of things that were evolutionarily useful way back when and perhaps less useful now. It had a clear evolutionary purpose with tribes that had to defend themselves, with animals that needed to be killed for food. But that desire to run around and hunt and shoot is still being fed: most people aren’t doing it in real life, but tons of people are doing it in video games. So there’s clearly some built-in mechanism that’s rewarding that behavior as being fun to do and compelling. Video games are obviously a better way to express that than running around and doing it in real life, but it tells you something about some circuitry that is still there and is left over from early times. So I think there are a number of examples like that — this connection between our biological evolutionary history and what technology makes available in large quantities — where we really have to think carefully about how we want to play that.

Ariel: So, as you look forward to the future, and sort of considering some of these issues that you’ve brought up, how do you envision us being able to use technology for good and maybe try to overcome some of these issues? I mean, maybe it is good if we’ve got people playing video games instead of going around shooting people in real life.

Anthony: Yeah. So there may be examples where some of that technology can fulfill a need in a less destructive way than it might otherwise be fulfilled. I think there are also plenty of examples where a technology can root out or sort of change the nature of a problem that would be enormously difficult to do something about without a technology. So for example, I think eating meat, when you analyze it from almost any perspective, is a pretty destructive thing for humanity to be doing. Ecologically, ethically in terms of the happiness of the animals, health-wise: so many things are destructive about it. And yet, you really have the sense that it’s going to be enormously difficult — it would be very unlikely for that to change wholesale over a relatively short period of time.

However, there are technologies — clean meat, cultured meat, really good tasting vegetarian meat substitutes — that are rapidly coming to market. And you could imagine if those things were to get cheap and widely available and perhaps a little bit healthier, that could dramatically change that situation relatively quickly. If a non-ecologically destructive, non-suffering-inducing, just-as-tasty, and even healthier product were cheaper, I don’t think people would be eating meat. Very few people, I think, intrinsically like the idea of having an animal suffer in order for them to eat. So I think that’s an example of something that would be really, really hard to change through social action alone, but could be jump-started quite a lot by technology — that’s one of the ones I’m actually quite hopeful about.

Global warming I think is a similar one — it’s on some level a social and economic problem. It’s a long-term planning problem, which we’re very bad at. It’s pretty clear how to solve the global warming issue if we really could think on the right time scales and weigh the economic costs and benefits over decades — it’d be quite clear that mitigating global warming now and doing things about it now might take some overall investment that would clearly pay itself off. But we seem unable to accomplish that.

On the other hand, you could easily imagine a really cheap, really power-dense, quickly rechargeable battery being invented and just utterly transforming that problem into a much, much more tractable one. Or feasible, small-scale nuclear fusion power generation that was cheap. You can imagine technologies that would just make that problem so much easier, even though it is ultimately kind of a social or political problem that could be solved. The technology would just make it dramatically easier to do that.

Ariel: Excellent. And so thinking more hopefully — even though the news usually focuses on all the bad things that have gone wrong — when you look around the world today, what makes you think, “Wow, technology has really helped us achieve this, and this is super exciting”?

Max: Almost everything I love about today is the result of technology. It’s because of technology that we’ve more than doubled the lifespan we humans had for most of human history. More broadly, I feel that technology is empowering us. Ten thousand years ago, we felt really, really powerless; We were these beings, you know, looking at this great world out there and having very little clue about how it worked — it was largely mysterious to us — and even less ability to actually influence the world in a major way. Then technology enabled science, and vice versa. So the sciences let us understand more and more how the world works, and let us build this technology which lets us shape the world to better suit us. Helping us produce much better and much more food, helping keep us warm in the winter, helping make hospitals that can take care of us, and schools that can educate us, and so on.

Ariel: Let’s bring on some of our other guests now. We’ll turn first to Gaia Dempsey. How do you envision technology being used for good?

Gaia: That’s a huge question.

Ariel: It is. Yes.

Gaia: I mean, at its essence I think technology really just means a tool. It means a new way of doing something. Tools can be used to do a lot of good — making our lives easier, saving us time, helping us become more of who we want to be. And I think technology is best used when it supports our individual development in the direction that we actually want to go — when it supports our deeper interests and not just the, say, commercial interests of the company that made it. And I think in order for that to happen, we need for our society to be more literate in technology. And to me that’s not just about understanding how computing platforms work, but also understanding the impact that tools have on us as human beings. Because they don’t just shape our behavior, they actually shape our minds and how we think.

So I think we need to be very intentional about the tools that we choose to use in our own lives, and also the tools that we build as technologists. I’ve always been very inspired by Douglas Engelbart’s work, and I think that — I was revisiting his original conceptual framework on augmenting human intelligence, which he wrote and published in 1962 — and I really think he had the right idea, which is that tools used by human beings don’t exist in a vacuum. They exist in a coherent system and that system involves language: the language that we use to describe the tools and understand how we’re using them; the methodology; and of course the training and education around how we learn to use those tools. And I think that as a tool maker it’s really important to think about each of those pieces of an overarching coherent system, and imagine how they’re all going to work together and fit into an individual’s life and beyond: you know, the level of a community and a society.

Ariel: I want to expand on some of this just a little bit. You mentioned this idea of making sure that the tool, the technology tool, is being used for people and not just for the benefit, the profit, of the company. And that that’s closely connected to making sure that people are literate about the technology. One, just to confirm that that is actually what you were saying. And, two, one of the reasons I want to confirm this is because that is my own concern — that technology is too focused on making profit and not enough people really understand what’s happening. My question to you is, then, how do we educate people? How do we get them more involved?

Gaia: I think for me, my favorite types of tools are the kinds of tools that support us in developing our thinking and that help us accelerate our ability to learn. But I think that some of how we do this in our society is not just about creating new tools or getting trained on new tools, but really doesn’t have very much to do with technology at all. And that’s in our education system, teaching critical thinking. And teaching, starting at a young age, to not just accept information that is given to you wholesale, but really to examine the motivations and intentions and interests of the creator of that information, and the distributor of that information. And I think these are really just basic tools that we need as citizens in a technological society and in a democracy.

Ariel: That actually moves nicely to another question that I have. Well, I actually think the sentiment might not be quite as strong as it once was, but I do still hear a lot of people who sort of approach technology as the solution to any of today’s problems. And I’m personally a little bit skeptical that technology alone is enough. I think, again, it comes back to what you were talking about: it’s a tool, so we can use it, but it just seems like there’s more that needs to be involved. I guess, how do you envision using technology as a tool while still incorporating some of these other aspects, like teaching critical thinking?

Gaia: You’re really hitting on sort of the core questions that are fundamental to creating the kind of society that we want to live in. And I think that we would do well to spend more time thinking deeply about these questions. I think technology can do really incredible, tremendous things in helping us solve problems and create new capabilities. But it also creates a new set of problems for us to engage with.

We’ve sort of coevolved with our technology. So it’s easy to point to things in the culture and say, “Well, this never would have happened without technology X.” And I think that’s true for things that are both good and bad. I think, again, it’s about taking a step back and taking a broader view, and really not just teaching critical thinking and critical analysis, but also systems level thinking. And understanding that we ourselves are complex systems, and we’re not perfect in the way that we perceive reality — we have cognitive biases, we cannot necessarily always trust our own perceptions. And I think that’s a lifelong piece of work that everyone can engage with, which is really about understanding yourself first. This is something that Yuval Noah Harari talked about in a couple of his recent books and articles that he’s been writing, which is: if we don’t do the work to really understand ourselves first and our own motivations and interests, and sort of where we want to go in the world, we’re much more easily co-opted and hackable by systems that are external to us.

There are many examples of recommendation algorithms and sentiment analysis — audience segmentation tools that companies are using to be able to predict what we want and present that information to us before we’ve had a chance to imagine that that is something we could want. And while that’s potentially useful and lucrative for marketers, the question is what happens when those tools are then utilized not just to sell us a better toothbrush on Amazon, but when it’s actually used in a political context. And so with the advent of these vast machine learning, reinforcement learning systems that can look at data and look at our behavior patterns and understand trends in our behavior and our interests, that presents a really huge issue if we are not ourselves able to pause and create a gap, and create a space between the information that’s being presented to us within the systems that we’re utilizing and really our own internal compass.

Ariel: You’ve said two things that I think are sort of interesting, especially when they’re brought together. And the first is this idea that we’ve coevolved with technology — which I actually hadn’t thought of in those terms before, and I think it’s a really, really good description. But then when we consider that we’ve coevolved with technology, what does that mean in terms of knowing ourselves? And especially knowing ourselves as biological bodies, with cognitive biases that limit us? I don’t know if that’s something that you’ve thought about much, but I think that combination of ideas is an interesting one.

Gaia: I mean, I know that I certainly already feel like I’m a cyborg. Part of knowing myself does involve understanding the tools that I use, the ones that feel like extensions of myself. That kind of comes back to the idea of technology literacy, and systems literacy, and being intentional about the kinds of tools that I want to use. For me, my favorite types of tools are the kind that I think are very rare: the kind that support us in developing the capacity for long-term thinking, and in being true to the long-term intentions and goals that we set for ourselves.

Ariel: Can you give some examples of those?

Gaia: Yeah, I’ll give a couple examples. One example that’s sort of probably familiar to a lot of people listening to this comes from the book Ready Player One. And in this book the main character is interacting with his VR system that he sort of lives and breathes in every single day. And at a certain point the system asks him: do you want to activate your health module? I forgot exactly what it was called. And without giving it too much thought, he kind of goes, “Sure. Yeah, I’d like to be healthier.” And it instantiates a process whereby he’s not allowed to log into the OASIS without going through his exercise routine every morning. To me, what’s happening there is: there is a choice.

And it’s an interesting system design because he didn’t actually do that much deep thinking about, “Oh yeah, this is a choice I really want to commit to.” But the system is sort of saying, “We’re thinking through the way that your decision making process works, and we think that this is something you really do want to consider. And we think that you’re going to need about three months before you make a final decision as to whether this is something you want to continue with.”

So that three month period or whatever, and I believe it was three months in the book, is what’s known as an akrasia horizon. Which is a term that I learned through a different tool that is sort of a real life version of that, which is called Beeminder. And the akrasia horizon is, really, it’s a time period that’s long enough that it will sort of circumvent a cognitive bias that we have to really prioritize the near term at the expense of the future. And in the case of the Ready Player One example, the near term desire that he would have that would circumvent the future — his long-term health — is, “I don’t feel like working out today. I just want to get into my email or I just want to play a video game right now.” And a very similar sort of setup is created in this tool Beeminder, which I love to use to support some goals that I want to make sure I’m really very motivated to meet.

So it’s a tool where you can put in your goals and you can track them either yourself by entering the data manually, or you can connect to a number of different tracking capabilities like RescueTime and others. And if you don’t stay on track with your goals, they charge your credit card. It’s a very effective sort of motivating force. And so I sort of have a nickname for these systems: I call them time bridges, which are really choices made by your long-term thinking self that in some way supersede the gravitational pull toward mediocrity inherent in your short-term impulses.
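For listeners who want a concrete sense of how a commitment device like this might work under the hood, here is a minimal sketch in Python. It only illustrates the mechanism Gaia describes; every name in it (Goal, charge_pledge, the seven-day AKRASIA_HORIZON_DAYS constant, the example goal) is hypothetical, invented for illustration, and is not Beeminder's actual API.

```python
# Minimal sketch of a Beeminder-style commitment device with an "akrasia horizon".
# All names are hypothetical; this is an illustration, not Beeminder's real API.

from dataclasses import dataclass, field
from datetime import date, timedelta

AKRASIA_HORIZON_DAYS = 7  # changes that make a goal easier only apply after this delay


def charge_pledge(amount: float) -> None:
    """Placeholder for the consequence step (e.g. charging a credit card)."""
    print(f"Off track: charging pledge of ${amount:.2f}")


@dataclass
class Goal:
    name: str
    units_per_day: float   # required daily progress (e.g. words written)
    pledge: float          # amount at stake if you derail
    start: date
    logged: float = 0.0    # total progress recorded so far
    pending_changes: list = field(default_factory=list)  # (effective_date, new_rate)

    def log(self, amount: float) -> None:
        # Logging progress is always allowed and takes effect immediately.
        self.logged += amount

    def relax_rate(self, new_rate: float, today: date) -> None:
        # Easing the goal is allowed, but only after the akrasia horizon,
        # so the short-term self can't quietly undo the long-term self's choice.
        self.pending_changes.append((today + timedelta(days=AKRASIA_HORIZON_DAYS), new_rate))

    def check(self, today: date) -> None:
        # Apply any rate changes whose waiting period has passed.
        for effective, rate in list(self.pending_changes):
            if today >= effective:
                self.units_per_day = rate
                self.pending_changes.remove((effective, rate))
        required = self.units_per_day * (today - self.start).days
        if self.logged < required:
            charge_pledge(self.pledge)


# Example: a daily writing goal that derails after one underperforming day.
goal = Goal(name="write", units_per_day=500, pledge=30.0, start=date(2019, 1, 1))
goal.log(400)
goal.relax_rate(250, today=date(2019, 1, 1))   # won't take effect until January 8th
goal.check(today=date(2019, 1, 2))             # 400 < 500, so the pledge is charged
```

The design point is simply that logging progress and charging the pledge happen immediately, while anything that makes the goal easier has to wait out the horizon, which is what makes it a bridge between your present and future selves.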

It’s about experimenting too. And this is one particular system that creates consequences and accountability. And I love systems. For me, if I don’t have systems in my life that help me organize the work that I want to do, I’m hopeless. That’s why I’m sort of an avid collector and taster of different systems: I’ll try anything and see what works. And I think that’s important. It’s a process of experimentation to see what works for you.

Ariel: Let’s turn to Allison Duettmann now, for her take on how we can use technology to help us become better versions of ourselves and to improve our societal interactions.

Allison: I think there are a lot of technological tools that we can use to aid our reasoning and sense making and coordination. So I think that technologies can be used to help with reasoning, for example, by mitigating trauma, or bias, or by augmenting our intelligence. That’s the whole point of creating AI in the first place. Technologies can also be used to help with collective sense-making, for example with truth-finding and knowledge management, and I think your hypertexts and prediction markets — something that Anthony’s working on — are really worthy examples here. I also think technologies can be used to help with coordination. Mark Miller, who I’m currently writing a book with, likes to say that if you lower the risks of cooperation, you’ll get a more cooperative world. I think that most cooperative interactions may soon be digital.

Ariel: That’s sort of an interesting idea, that there’s risks to cooperation. Can you maybe expand on that a little bit more?

Allison: Yeah, sure. I think that most of our interactions are already digital ones, for some of us at least, and they will be more and more so in the future. So I think that a first step to lowering the risks of cooperation is establishing cybersecurity, because this would decrease the risk of digital coercion. But I do think that’s only part of it, because rather than just freeing us from the restraints that keep us from cooperating, we also need to equip ourselves with the tools to cooperate, right?

Ariel: Yes.

Allison: I think some of those may be smart contracts to allow individuals to credibly commit, but there may be others too. I just think that we have to realize that the same technologies that we’re worried about in terms of risks are also the ones that may augment our abilities to decrease those risks.

Ariel: One of the things that came to mind as you were talking about this, using technology to improve cooperation — when we look at the world today, technology isn’t spread across the globe evenly. People don’t have equal access to these tools that could help. Do you have ideas for how we address various inequality issues, I guess?

Allison: I think inequality is a hot topic to address. I’m currently writing a book with Mark Miller and Christine Peterson on a few strategies to strengthen civilization. In this book we outline a few paths to do so, but also potential positive outcomes. One of the outcomes that we’re outlining is a voluntary world in which all entities can cooperate freely with each other to realize their interests. It’s kind of based on the premise that finding one utopia that works for everyone is hard, and perhaps impossible, but that in the absence of knowing what’s in everyone’s interest, we shouldn’t let any one entity — whether that’s an AI or an organization or a state — impose its interests; instead, we should try to create a framework in which different entities, with different interests, whether they’re human or artificial, can pursue their interests freely by cooperating. And I think if you look at that strategy, it has worked pretty well so far. If you look at society right now, it’s really not perfect, but by allowing humans to cooperate freely and engage in mutually beneficial relationships, civilization already serves our interests quite well. It’s really not perfect by far, I’m not saying that, but I think as a whole, our civilization at least tends, imperfectly, to plan for Pareto-preferred paths. We have survived so far, and in better and better ways.

So a few of the ways we propose to strengthen this highly involved process are offering general recommendations for solving coordination problems, and then a few more specific ideas on reframing certain risks. But I do think that enabling a voluntary world in which different entities can cooperate freely with each other is the best we can do, given our limited knowledge of what is in everyone’s interests.

Ariel: I find that interesting, because I hear lots of people focus on how great intelligence is, and intelligence is great, but it does often seem — and I hear other people say this — that cooperation is also one of the things that our species has gotten right. We fail at it sometimes, but it’s been one of the things, I think, that’s helped.

Allison: Yeah, I agree. I hosted an event last year at the Internet Archive on different definitions of intelligence. Because in the paper that we wrote last year, we have this very grand, or broad, conception of intelligence, which includes civilization as an intelligence. So you may be asking yourself the question of what it means to be intelligent, and if what we care about is problem-solving ability, then I think that civilization certainly qualifies as a system that can solve more problems than any individual within it alone. So I do think this is part of the cooperative nature of the individual parts within civilization, and so I don’t think that cooperation and intelligence are mutually exclusive at all. Marvin Minsky wrote this amazing book, Society of Mind, and much of it contains similar ideas.

Ariel: I’d like to take this idea and turn it around, and this is a question specifically for Max and Anthony: looking back at this past year, how has FLI helped foster cooperation and public engagement surrounding the issues we’re concerned about? What would you say were FLI’s greatest successes in 2018?

Anthony: Let’s see, 2018. What I’ve personally enjoyed the most, I would say, is seeing the technical researchers and the nonprofit community really starting to get more engaged with state and federal governments. So for example the Asilomar principles — which were generated at this nexus of business and nonprofit and academic thinkers about AI and related things — I think were great. But that conversation didn’t really include much from people in policy, and governance, and governments, and so on. So, starting to see that thinking, and those recommendations, and those aspirations of the community of people who know about AI and are thinking hard about it and what it should do and what it shouldn’t do — seeing that start to come into the political sphere, and the government sphere, and the policy sphere I think is really encouraging.

That seems to be happening in many places at some level. I think the local one that I’m excited about is the passage by the California legislature of a resolution endorsing the Asilomar principles. It felt really good to see that happen, and really encouraging that there were people in the legislature who — we didn’t go and lobby them to do that; they came to us and said, “This is really important. We want to do something.” And we worked with them to do that. That was super encouraging, because it really made it feel like there is a really open door, and there’s a desire in the policy world to do something. This thing is getting on people’s radar, that there’s a huge transformation coming from AI.

They see that their responsibility is to do something about that. They don’t intrinsically know what they should be doing, they’re not experts in AI, they haven’t been following the field. So there needs to be that connection and it’s really encouraging to see how open they are and how much can be produced with honestly not a huge level of effort; Just communication and talking through things I think made a significant impact. I was also happy to see how much support there continues to be for controlling the possibility of lethal autonomous weapons.

The thing we did this year, the lethal autonomous weapons pledge, I felt really good about its success. The idea was to get anybody who’s interested, but especially companies who are engaged in developing related technologies, drones, or facial recognition, or robotics, or AI in general, to take that step themselves of saying, “No, we want to develop these technologies for good, and we have no interest in developing things that are going to be weaponized and used in lethal autonomous weapons.”

I think having a large number of people and corporations sign on to a pledge like that is useful not so much because they were planning to do all those things and now they signed a pledge, so they’re not going to do it anymore. I think that’s not really the model so much as it’s creating a social and cultural norm that these are things that people just don’t want to have anything to do with, just like biotech companies don’t really want to be developing biological weapons, they want to be seen as forces for good that are building medicines and therapies and treatments and things. Everybody is happy for biotech companies to be doing those things.

If biotech companies were building biological weapons also, you really start to wonder, “Okay, wait a minute, why are we supporting this? What are they doing with my information? What are they doing with all this genetics that they’re getting? What are they doing with the research that’s funded by the government? Do we really want to be supporting this?” So keeping that distinction in the industry between all the things that we all support — better technologies for helping people — versus the military applications, particularly in this rather destabilizing and destructive way: I think that is more the purpose — to really make clear that there are companies that are going to develop weapons for the military, and that’s part of the reality of the world.

We have militaries; We need, at the moment, militaries. I think I certainly would not advocate that the US should stop defending itself, or shouldn’t develop weapons, and I think it’s good that there are companies that are building those things. But there are very tricky issues when the companies building military weapons are the same companies that are handling all of the data of all of the people in the world or in the country. I think that really requires a lot of thought, how we’re going to handle it. And seeing companies engage with those questions, thinking about how the technologies they’re developing are going to be used, for what purposes, and what purposes they don’t want them to be used for, is really, really heartening. It’s been very positive I think to see at least in certain companies those sorts of conversations go on, with our pledge or just in other ways.

You know, seeing companies come out with, “This is something that we’re really worried about. We’re developing these technologies, but we see that there could be major problems with them.” That’s very encouraging. I don’t think it’s necessarily a substitute for something happening at the regulatory or policy level, I think that’s probably necessary too, but it’s hugely encouraging to see companies being proactive about thinking about the societal and ethical implications of the technologies they’re developing.

Max: There are four things I’m quite excited about. One of them is that we managed to get so many leading companies and AI researchers and universities to pledge to not build lethal autonomous weapons, also known as killer robots. Second is that we were able to channel two million dollars, thanks to Elon Musk, to 10 research groups around the world to help figure out how to make artificial general intelligence safe and beneficial. Third is that the state of California decided to officially endorse the 23 Asilomar Principles. It’s really cool that these are getting more taken seriously now, even by policy makers. And the fourth is that we were able to track down the children of Stanislav Petrov in Russia, thanks to whom this year is not the 35th anniversary year of World War III, and actually give them the appreciation we feel that they deserve.

I’ll tell you a little more about this one because it’s something I think a lot of people still aren’t that aware of. But September 26th, 35 years ago, Stanislav Petrov was on shift and in charge of his Soviet early warning station, which showed five US nuclear missiles incoming, one after the other. Obviously, not what he was hoping would happen at work that day, and a really horribly scary situation where the natural response is to do what that system was built for: namely, warning the Soviet Union so that they would immediately strike back. And if that had happened, then thousands of mushroom clouds later, you know, you and I, Ariel, would probably not be having this conversation. Instead, he, mostly on gut instinct, came to the conclusion that there was something wrong and said, “This is a false alarm.” And we’re incredibly grateful for that level-headed action of his. He passed away recently.

His two children are living on very modest means outside of Moscow, and we felt that when someone does something like this, or in his case abstains from doing something, that future generations really appreciate, we should show our appreciation, so that others in his situation later on know that if they sacrifice themselves for the greater good, they will be appreciated. Or if they’re dead, their loved ones will. So we organized a ceremony in New York City, invited them to it, bought air tickets for them, and so on. And in a very darkly humorous illustration of how screwed up relationships are at the global level now, the US decided that the way to show appreciation for not having gotten nuked was to deny a visa to Stanislav’s son. So he could only join by Skype. Fortunately, his daughter was able to get a visa, even though the waiting period to even get a visa appointment in Moscow was 300 days. We had to fly her to Israel to get her the visa.

But she came, and it was her first time ever outside of Russia. She was super excited to come and see New York. It was very touching for me to see all the affection that the New Yorkers there directed at her, and to see her reaction and her husband’s reaction, and to get to give her this $50,000 award, which for them was actually a big deal. Although it’s of course nothing compared to the value for the rest of the world of what their father did. And it was a very sobering reminder that we’ve had dozens of near misses where we almost had a nuclear war by mistake. And even though the newspapers usually make us worry about North Korea and Iran, of course by far the most likely way in which we might get killed by a nuclear explosion is some other just plain stupid malfunction or error causing the US and Russia to start a war by mistake.

I hope that this ceremony, and the one we did the year before for the family of Vasili Arkhipov, can also help to remind people that, hey, you know, what we’re doing here, having 14,000 hydrogen bombs and just relying on luck year after year, isn’t a sustainable long-term strategy, and we should get our act together and reduce nuclear arsenals down to the level needed for deterrence and focus our money on more productive things.

Ariel: So I wanted to just add a quick follow-up to that because I had the privilege of attending the ceremony and I got to meet the Petrovs. And one of the things that I found most touching about meeting them was their own reaction to New York, which was in part just an awe of the freedom that they felt. And I think, especially, this is sort of a US centric version of hope, but it’s easy for us to get distracted by how bad things are because of what we see in the news, but it was a really nice reminder of how good things are too.

Max: Yeah. It’s very helpful to see things through other people’s eyes and in many cases, it’s a reminder of how much we have to lose if we screw up.

Ariel: Yeah.

Max: And how much we have that we should be really grateful for and cherish and preserve. It’s even more striking if you just look at the whole planet, you know, in a broader perspective. It’s a fantastic, fantastic place, this planet. There’s nothing else in the solar system even remotely this nice. So I think we have a lot to win if we can take good care of it and not ruin it. And obviously, the quickest way to ruin it would be to have an accidental nuclear war, which — it would be just by far the most ridiculously pathetic thing humans have ever done, and yet, this isn’t even really a major election issue. Most people don’t think about it. Most people don’t talk about it. This is, of course, the reason that we, with the Future of Life Institute, try to keep focusing on the importance of positive uses of technology, whether it be nuclear technology, AI technology, or biotechnology, because if we use it wisely, we can create such an awesome future, like you said: Take the good things we have, make them even better.

Ariel: So this seems like a good moment to introduce another guest, who just did a whole podcast series exploring existential risks relating to AI, biotech, nanotech, and all of the other technologies that could either destroy society or help us achieve incredible advances if we use them right.

Josh: I’m Josh Clark. I’m a podcaster. And I’m the host of a podcast series called the End of the World with Josh Clark.

Ariel: All right. I am really excited to have you on the show today because I listened to all of the End of the World. And it was great. It was a really, really wonderful introduction to existential risks.

Josh: Thank you.

Ariel: I highly recommend it to anyone who hasn’t listened to it. But now that you’ve just done this whole series about how things can go horribly wrong, I thought it would be fun to bring you on and talk about what you’re still hopeful for after having just done that whole series.

Josh: Yeah, I’d love that, because a lot of people are hesitant to listen to the series because they’re like, well, “it’s got to be such a downer.” And I mean, it is heavy and it is kind of a downer, but there’s also a lot of hope that just kind of emerged naturally from the series just researching this stuff. There is a lot of hope — it’s pretty cool.

Ariel: That’s good. That’s exactly what I want to hear. What prompted you to do that series, The End of the World?

Josh: Originally, it was just intellectual curiosity. I ran across a Bostrom paper in like 2005 or 6, my first one, and just immediately became enamored with the stuff he was talking about — it’s just baldly interesting. Like anyone who hears about this stuff can’t help but be interested in it. And so originally, the point of the podcast was, “Hey, everybody come check this out. Isn’t this interesting? There’s like, people actually thinking about this kind of stuff and talking about it.” And then as I started to interview some of the guys at the Future of Humanity Institute, started to read more and more papers and research further, I realized, wait, this isn’t just like, intellectually interesting. This is real stuff. We’re actually in real danger here.

And so as I was creating the series, I underwent this transition in how I saw existential risks, and then ultimately in how I saw humanity’s future, how I saw humanity, other people, and I kind of came to love the world a lot more than I did before. Not that I disliked the world or people or anything like that. But I really love people way more than I did before I started out, just because I see that we’re kind of close to the edge here. And so the point of why I made the series kind of underwent this transition, and you can kind of tell in the series itself, where it’s like information, information, information. And then, now that you have bought into this, here’s how we do something about it.

Ariel: So you have two episodes that go into biotechnology and artificial intelligence, which are two areas — especially artificial intelligence — that we work on at FLI. And in them, what I thought was nice is that you do get into some of the reasons why we’re still pursuing these technologies, even though we do see these existential risks around them. And so, I was curious: as you were doing your research for the series, what did you learn about where you thought, “Wow, that’s amazing, I’m so psyched that we’re doing this, even though there are these risks”?

Josh: Basically everything I learned about. I had to learn particle physics to explain what’s going on in the Large Hadron Collider. I had to learn a lot about AI. I realized when I came into it that my grasp of AI was beyond elementary. And it’s not like I could actually put together an AGI myself from scratch or anything like that now, but I definitely know a lot more than I did before. With biotech in particular, there was a lot that I learned that I found particularly jarring, like the number of accidents that are reported every year, and then more than that, the fact that not every lab in the world has to report accidents. I found that extraordinarily unsettling.

So kind of from start to finish, I learned a lot more than I knew going into it, which is actually one of the main reasons why it took me well over a year to make the series because I would start to research something and then I’d realized I need to understand the fundamentals of this. So I’d go understand, I’d go learn that, and then there’d be something else I had to learn first, before I could learn something the next level up. So I kept having to kind of regressively research and I ended up learning quite a bit of stuff.

But I think to answer your question, the thing that struck me the most was learning about physics, about particle physics, and how tenuous our understanding of our existence is, but also just how much we’ve learned so far in just the last century or so, when we really dove into quantum physics, particle physics, and just what we know about things. One of the things that just knocked my socks off was the idea that there’s no such thing as particles — like, particles as we think of them are just basically shorthand. But the rest of the world outside of particle physics has said, “Okay, particles, there’s like protons and neutrons and all that stuff. There’s electrons. And we understand that they kind of all fit into this model, like a solar system. And that’s how atoms work.”

That is not at all how atoms work: a particle is just a pack of energetic vibrations, and everything that we experience and see and feel, and everything that goes on in the universe, is just the interaction of these energetic vibrations in force fields that are everywhere at every point in space and time. And just to understand that, on a really fundamental level, actually changed my life; it changed the way that I see the universe and myself and everything.

Ariel: I don’t even know where I want to go next with that. I’m going to come back to that because I actually think it connects really nicely to the idea of existential hope. But first I want to ask you a little bit more about this idea of getting people involved more. I mean, I’m coming at this from something of a bubble at this point where I am surrounded by people who are very familiar with the existential risks of artificial intelligence and biotechnology. But like you said, once you start looking at artificial intelligence, if you haven’t been doing it already, you suddenly realize that there’s a lot there that you don’t know.

Josh: Yeah.

Ariel: I guess I’m curious, now that you’ve done that, to what extent do you think everyone needs to? To what extent do you think that’s possible? Do you have ideas for how we can help people understand this more?

Josh: Yeah, you know, that really ties into taking on existential risks in general: just being an interested, curious person who dives into the subject and learns as much as you can. But at this moment in time, as I’m sure you know, that’s easier said than done. You really have to dedicate a significant portion of your life to focusing on that one issue, whether it’s AI, biotech, particle physics, nanotech, whatever. You really have to immerse yourself in it, because the existential risks that we’re facing are not a general topic of national or global conversation, and certainly not the existential risks we’re facing from all the technology that everybody’s super happy we’re coming out with.

And I think that one of the first steps to actually taking on existential risks is for more and more people to start talking about it. Groups like yours, talking to the public, educating the public. I’m hoping that my series did something like that, just arousing curiosity in people, but also raising awareness of these things: these are real things, these aren’t crackpots talking about this stuff. These are real, legitimate issues that are coming down the pike, that are being pointed out by real, legitimate scientists and philosophers and people who have given great thought to this. This isn’t a Chicken Little situation; This is quite real. I think if you can pique someone’s curiosity just enough that they stop and listen, do a little research, it sinks in after a minute that this is real. And that, oh, this is something that they want to be a part of doing something about.

And so I think just getting people talking about that just by proxy will interest other people who hear about it, and it will spread further and further out. And I think that that’s step one, is to just make it so it’s an okay thing to talk about, so you’re not nuts to raise this kind of stuff seriously.

Ariel: Well, I definitely appreciate you doing your series for that reason. I’m hopeful that that will help a lot.

Ariel: Now, Allison — you’ve got this website where, as I understand it, you’re trying to get more people involved in this idea that if we focus on better ideals for the future, we stand a better shot at actually hitting them.

Allison: At ExistentialHope.com, I keep a map of reading, podcasts, organizations, and people that inspire an optimistic long-term vision for the future.

Ariel: You’re clearly doing a lot to try to get more people involved. What is it that you’re trying to do now, and what do you think we all need to be doing more of to get more people thinking this way?

Allison: I do think that it’s up to everyone, really, to try to, again, engage with the fact that we may not be doomed, and with what may be on the other side. What I’m trying to do with the website, at least, is generate common knowledge to catalyze more directed coordination toward beautiful futures. I think that there are a lot of projects out there that are really dedicated to identifying the threats to human existence, but very few offer guidance on how to influence that. So I think we should try to map the space of both peril and promise which lie before us, but we should really aim for this knowledge to empower each and every one of us to navigate toward a grand future.

For us, currently, on the website this involves orienting ourselves: collecting useful models, relevant broadcasts, and organizations that generate new insights, and then trying to synthesize a map of where we came from, in a really kind of long perspective, and where we may go, and then which lenses of science and technology and culture are crucial to consider along the way. Then finally we would like to publish a living document that summarizes those models that are published elsewhere, to outline possible futures, and the idea is that this is a collaborative document. Even already, currently, the website links to a host of different Google docs in which we’re trying to synthesize the current state of the art in the different focus areas. The idea is that this is collaborative. This is why it’s on Google docs, because everyone can just comment. And people do, and I think this should really be a collaborative effort.

Ariel: What are some of your favorite examples of content that, presumably, you’ve added to your website, that look at these issues?

Allison: There’s quite a host of things on there. I think a good start for people is to go on the website and look at the overview, because there I list kind of my top 10 lists of short pieces and long pieces. But as a starting ground, my personal picks: I really like the metaethics sequence by Eliezer Yudkowsky. It contains really good posts, like Existential Angst Factory and Reality as Fixed Computation. For me this is kind of like existentialism 2.0: you have to get your motivations and expectations right. What can I reasonably hope for? Then I think, relatedly, there’s also the Fun Theory sequence, also by Yudkowsky. That, together with, for example, Letter From Utopia by Nick Bostrom, or The Hedonistic Imperative by David Pearce, or Scott Alexander’s post on Raikoth, is a really nice next step, because they actually lay out a few compelling positive versions of utopia.

Then if you want to get into the more nitty-gritty, there’s a longer section on civilization, its past and its future — so, what’s wrong and how to improve it. Here Nick Bostrom wrote this piece on the future of human evolution, which lays out two suboptimal paths for humanity’s future, and interestingly enough they don’t involve extinction. A similar one, which probably many people are familiar with, is Scott Alexander’s Meditations On Moloch, and then some that people are less familiar with, like Growing Children For Bostrom’s Disneyland. They are really interesting because they are other pieces of this type, sketching out competitive and selective pressures that lead toward races to the bottom, as negative futures which don’t involve extinction per se. I think the really interesting thing, then, is that even those futures are only bad if you think that the bottom is bad.

Next to them I list books, for example Robin Hanson’s The Age of Em, which argues that living at subsistence may not be terrible, and in fact is pretty much what most of our past lives, outside of the current dreamtime, have always involved. So I think those are two really different lenses for making sense of the same reality, and I personally found this contrast so intriguing that I hosted a salon last year with Paul Christiano, Robin Hanson, Peter Eckersley, and a few others to try to map out where we may be racing towards, so how bad those competitive equilibria actually are. I also link to those from the website.

To me it’s always interesting to map out one potential future vision, and then try to find one that either contradicts or complements it. I think having a good overview of those gives you a good map, or at least a space of possibilities.

Ariel: What do you recommend to people who are interested in trying to do more? How do you suggest they get involved?

Allison: One thing, an obvious thing, would be commenting on the Google Docs, and I really encourage everyone to do that. Another one would be just to join the mailing list. You can indicate whether you want updates from me, or whether you want to collaborate, in which case we may reach out to you. Or if you’re interested in meetups, those would only be in San Francisco so far, but I’m hoping that there may be others. I do think that currently the project is really in its infancy. We are relying on the community to help with this, so there should be a kind of collaborative vision.

I think that one of the main things that I’m hoping that people can get out of it for now is just to give some inspiration on where we may end up if we get it right, and on why work toward better futures, or even work toward preventing existential risks, is both possible and necessary. If you go on the website on the first section — the vision section — that’s what that section is for.

Secondly, then, if you are already opted in, if you’re already committed, I’m hoping that perhaps the project can provide some orientation. If someone would like to help but doesn’t really know where to start, the focus areas are an attempt to map out the different areas that we need to make progress on for better futures. Each area comes with an introductory text, and organizations that are working in that area that one can join or support, and Future of Life is in a lot of those areas.

Then I think finally, just apart from inspiration or orientation, it’s really a place for collaboration. The project is in its infancy and everyone should contribute their favorite pieces to our better futures.

Ariel: I’m really excited to see what develops in the coming year for existentialhope.com. And, naturally, I also want to hear from Max and Anthony about 2019. What are you looking forward to for FLI next year?

Max: For 2019 I’m looking forward to more constructive collaboration on many aspects of this quest for a good future for everyone on earth. At the nerdy level, I’m looking forward to more collaboration on AI safety research, and also on ways of making the economy that keeps growing thanks to AI actually make everybody better off, rather than making some people poorer and angrier. And at the most global level, I’m really looking forward to working harder to get past this outdated us versus them attitude that we still have between the US and China and Russia and other major powers. Many of our political leaders are so focused on the zero-sum game mentality that they will happily run major risks of nuclear war and AI arms races and other outcomes where everybody would lose, instead of just realizing, hey, you know, we’re actually in this together. What does it mean for America to win? It means that all Americans get better off. What does it mean for China to win? It means that the Chinese people all get better off. Those two things can obviously happen at the same time as long as there’s peace, and technology just keeps improving life for everybody.

In practice, I’m very eagerly looking forward to seeing if we can get scientists from around the world — for example, AI researchers — to converge on certain shared goals that are really supported everywhere in the world, including by political leaders and in China and the US and Russia and Europe and so on, instead of just obsessing about the differences. Instead of thinking us versus them, it’s all of us on this planet working together against the common enemy, which is our own stupidity and the tendency to make bad mistakes, so that we can harness this powerful technology to create a future where everybody wins.

Anthony: I would say I’m looking forward to more of what we’re doing now, thinking more about the futures that we do want. What exactly do those look like? Can we really think through pictures of the future that makes sense to us that are attractive, that are plausible, and yet aspirational, and where we can identify things and systems and institutions that we can build now toward the aim of getting us to those futures? I think there’s been a lot of, so far, thinking about what are the major problems that might arise, and I think that’s really, really important, and that project is certainly not over, and it’s not like we’ve avoided all of those pitfalls by any means, but I think it’s important not to just not fall into the pit, but to actually have a destination that we’d like to get to — you know, the resort at the other end of the jungle or whatever.

I find it a bit frustrating when people do what I’m doing now: they talk about talking about what we should and shouldn’t do, but they don’t actually talk about what we should and shouldn’t do. I think the time has come to actually talk about it, in the same way that when… there was the first use of CRISPR in an embryo that came to term. So everybody’s saying, “Well, we need to talk about what we should and shouldn’t do with this. We need to talk about that, we need to talk about it.” Let’s talk about it already.

So I’m excited about upcoming events that FLI will be involved in that are explicitly thinking about: let’s talk about what that future is that we would like to have and let’s debate it, let’s have that discussion about what we do want and don’t want, try to convince each other and persuade each other of different visions for the future. I do think we’re starting to actually build those visions for what institutions and structures in the future might look like. And if we have that vision, then we can think of what are the things we need to put in place to have that.

Ariel: So one of the reasons that I wanted to bring Gaia on is because I’m working on a project with her — and it’s her project — where we’re looking at this process of what’s known as worldbuilding, to sort of look at how we can move towards a better future for all. I was hoping you could describe it, this worldbuilding project that I’m attempting to help you with, or work on with you. What is worldbuilding, and how are you modifying it for your own needs?

Gaia: Yeah. Worldbuilding is a really fascinating set of techniques. It’s a process that has its roots in narrative fiction. You can think of, for example, the entire complex world that J.R.R. Tolkien created for The Lord of the Rings series, for example. And in more contemporary times, some spectacularly advanced worldbuilding is occurring now in the gaming industry. So these huge connected systems of systems that underpin worlds in which millions of people today are playing, socializing, buying and selling goods, engaging in an economy. These are these vast online worlds that are not just contained on paper as in a book, but are actually embodied in software. And over the last decade, world builders have begun to formally bring these tools outside of the entertainment business, outside of narrative fiction and gaming, film and so on, and really into society and communities. So I really define worldbuilding as a powerful act of creation.

And one of the reasons that it is so powerful is that it really facilitates collaborative creation. It’s a collaborative design practice. And in my personal definition of worldbuilding, the way that I’m thinking of it and using it, it unfolds in four main stages. The first stage is: we develop a foundation of shared knowledge that’s grounded in science, and research, and relevant domain expertise. The second phase builds on that foundation of knowledge: we engage in an exercise where we predict how the interconnected systems that have emerged in this knowledge database will evolve, and we imagine the state of their evolution at a specific point in the future. Then the third phase is really about capturing that state in all its complexity, and making that information useful to the people who need to interface with it. And that can be in the form of interlinked databases, and particularly also in the form of visualizations, which help make these sort of abstract ideas feel more present and concrete. And then the fourth and final phase is utilizing that resulting world as a tool that can be used to support scenario simulation, research, and development in many different areas, including public policy, media production, education, and product development.
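As a rough way to see how those four phases hang together, here is a small Python sketch of one possible data model. Everything in it is hypothetical and invented for illustration (the class names, the example 2039 target year, the sample entries); it is only meant to make the structure of the process concrete, not to describe any tool actually used in the project.

```python
# Hypothetical sketch of the four worldbuilding phases as a simple data model.
# None of these classes correspond to real software from the summit.

from dataclasses import dataclass, field
from enum import Enum


class Phase(Enum):
    FOUNDATION = "shared knowledge grounded in research and domain expertise"
    PROJECTION = "predict how the interconnected systems evolve to a target year"
    CAPTURE = "record the projected state as linked databases and visualizations"
    APPLICATION = "use the world for scenario simulation, policy, media, education"


@dataclass
class SystemProjection:
    name: str               # e.g. "education", "energy", "governance"
    current_state: str
    projected_state: str    # the state imagined at the target year
    depends_on: list = field(default_factory=list)  # names of other systems


@dataclass
class WorldBuild:
    title: str
    target_year: int
    current_phase: Phase = Phase.FOUNDATION
    knowledge_base: dict = field(default_factory=dict)   # phase 1: topic -> sources and notes
    projections: list = field(default_factory=list)      # phase 2: SystemProjection entries
    artifacts: list = field(default_factory=list)        # phase 3: databases, visualizations
    scenarios: list = field(default_factory=list)        # phase 4: simulations run in the world


# Example: a toy entry for a world roughly 20 years out with advanced AI.
world = WorldBuild(title="Augmented Intelligence", target_year=2039)
world.knowledge_base["AI"] = "survey of current machine learning capabilities"
world.current_phase = Phase.PROJECTION
world.projections.append(SystemProjection(
    name="education",
    current_state="one-size-fits-all curricula",
    projected_state="AI-assisted personalized tutoring",
    depends_on=["AI"],
))
world.scenarios.append("How does a school district adopt the projected tutoring system?")
```

The point of a structure like this is only that each later phase builds on the one before it: projections reference the knowledge base, artifacts capture the projections, and scenarios are run against the captured world.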

I mentioned that these techniques are being brought outside of the realm of entertainment. So rather than just designing fantasy worlds for the sole purpose of containing narrative fiction and stories, these techniques are now being used with communities, and Fortune 500 companies, and foundations, and NGOs, and other places, to create plausible future worlds. It’s fascinating to me to see how these are being used. For example, they’re being used to reimagine the mission of an organization. They’re being used to plan for the future, and plan around a collective vision of that future. They’re very powerful for developing new strategies, new programs, and new products. And I think to me one of the most interesting things is really around informing policy work. That’s how I see worldbuilding.

Ariel: Are there any actual examples that you can give or are they proprietary?

Gaia: There are many examples that have created some really incredible outcomes. One of the first examples of worldbuilding that I ever learned about was a project that was done with a native Alaskan tribe. And the comments that came from the tribe about that experience were what really piqued my interest. Because they said things like, “This enabled us to sort of leapfrog over the barriers in our current thinking and imagine possibilities that were sort of beyond what we had considered.” This project brought together several dozen members of the community, again, to engage in this collaborative design exercise, and actually visualize and build out those systems and understand how they would be interconnected. And it ended up resulting in, I think, some really incredible things, like a partnership with MIT where they brought a digital fabrication lab onto their reservation, and created new education programs around digital design and digital fabrication for their youth. And there are a lot of other things still coming out of that particular worldbuild.

There are other examples where Fortune 500 companies are building out really detailed, long-term worldbuilds that are helping them stay relevant, and imagine how their business model is going to need to transform in order to adapt to really plausible, probable futures that are just around the corner.

Ariel: I want to switch now to what you specifically are working on. The project we’re working on looks roughly 20 years into the future. And you’ve sort of started walking through a couple systems yourself while we’ve been working on the project. And I thought that it might be helpful if you could walk through, with us, what those steps are, to help us understand how this process works.

Gaia: Maybe I’ll just take a quick step back, if that’s okay and just explain the worldbuild that we’re preparing for.

Ariel: Yeah. Please do.

Gaia: This is a project called Augmented Intelligence. The first Augmented Intelligence summit is happening in March 2019. And our goal with this project is really to engage with and shift the culture, and also our mindset, about the future of artificial intelligence. And to bring together a multidisciplinary group of leaders from government, academia, and industry, and to do a worldbuild that’s focused on this idea of: what does our future world look like with advanced AI deeply integrated into it? And to go through the process of really imagining and predicting that world in a way that’s just a bit further beyond the horizon that we normally see and talk about. And that exercise, that’s really where we’re getting that training for long-term thinking, and for systems-level thinking. And the world that results — our hope is that it will allow us to develop better intuitions, to experiment, to simulate scenarios, and really to have a more attuned capacity to engage in many ways with this future. And ultimately explore how we want to evolve our tools and our society to meet that challenge.

Gaia: What will come out of this process — it really is a generative process that will create assets and systems that are interconnected, that inhabit and embody a world. And this world should allow us to experiment, and simulate scenarios, and develop a more attuned capacity to engage with the future. And that means on both an intuitive level and also in a more formal structured way. And ultimately our goal is to use this tool to explore how we want to evolve as a society, as a community, and to allow ideas to emerge about what solutions and tools will be needed to adapt to that future. Our goal is to really bootstrap a steering mechanism that allows us to navigate more effectively toward outcomes that support human flourishing.

Ariel: I think that’s really helpful. I think an example to walk us through what that looks like would be helpful.

Gaia: Sure. You know, basically what would happen in a worldbuilding process is that you would have some constraints or some sort of seed information that you think is very likely — based on research, based on the literature, based on sort of the input that you’re getting from domain experts in that area. For example, you might say, “In the future we think that education is all going to happen in a virtual reality system that’s going to cover the planet.” Which I don’t think is actually the case, but just to give an example. You might say something like, “If this were true, then what are the implications of that?” And you would build a set of systems, because it’s very difficult to look at just one thing in isolation.

Because as soon as you start to do that — John Muir says, “As soon as you try to look at just one thing, you find that it is irreversibly connected to everything else in the universe.” And I apologize to John Muir for not getting that quote exactly correct, he says it much more eloquently than that. But the idea is there. And that’s sort of what we leverage in a worldbuilding process: where you take one idea and then you start to unravel all of the implications, and all of the interconnecting systems that would be logical, and also possible, if that thing were true. It really does depend on the quality of the inputs. And that’s something that we’re working really, really hard to make sure that our inputs are believable and plausible, but don’t put too much in terms of constraints on the process that unfolds. Because we really want to tap into the creativity in the minds of this incredible group of people that we’re gathering, and that is where the magic will happen.

Ariel: To make sure that I’m understanding this right: if we use your example of, let’s say all education was being taught virtually, I guess questions that you might ask or you might want to consider would be things like: who teaches it, who’s creating it, how do students ask questions, who would their questions be directed to? What other types of questions would crop up that we’d want to consider? Or what other considerations do you think would crop up?

Gaia: You also want to look at the infrastructure questions, right? So if that’s really something that is true all over the world, what do server farms look like in that future, and what’s the impact on the environment? Is there some complementary innovation that has happened in the field of computing that has made computing far more efficient? How have we been able to do this, given that there are certain physical limitations that just exist on our planet? If X is true in this interconnected system, then how have we shaped, and molded, and adapted everything around it to make that thing true? You can look at infrastructure, you can look at culture, you can look at behavior, you can look at, as you were saying, communication and representation in that system and who is communicating. What are the rules? I mean, I think a lot about the legal framework, and the political structure that exists around this. So who has power and agency? How are decisions made?

Ariel: I don’t know what this says about me, but I was just wondering what detention looks like in a virtual world.

Gaia: Yeah. It’s a good question. I mean, what are the incentives and what are the punishments in that society? And do our ideas of what incentives and punishments look like actually change in that context? There isn’t a place where you can come on a Saturday if there’s no physical school yard. How is detention even enforced when people can log in and out of the system at will?

Ariel: All right, now you have me wondering what recess looks like.

Gaia: So you can see that there are many different fascinating sort of rabbit holes that you could go down. And of course our goal is to make this process really useful to imagining the way that we want our policies, and our tools, and our education to evolve.

Ariel: I want to ask one more question about … Well, it’s sort of about this but there’s also a broader aspect to it. And that is, I hear a lot of talk — and I’m one of the people saying this because I think it’s absolutely true — that we need to broaden the conversation and get more diverse voices into this discussion about what we want our future to look like. But what I’m finding is that this sounds really nice in theory, but it’s incredibly hard to actually do in practice. I’m under the impression that that is some of what you’re trying to address with this project. I’m wondering if you can talk a little bit about how you envision trying to get more people involved in considering how we want our world to look in the future.

Gaia: Yeah, that’s a really important question. One of the sources of inspiration for me on this point was a conversation with Stuart Russell — an interview with Stuart Russell, I should say — that I listened to. We’ve been really fortunate and we are thrilled that he’s one of our speakers and he’ll be involved in the worldbuilding process. And he talks about this idea that the artificial intelligence researchers, the roboticists, even the few technologists who are building these amplifying tools that are just increasing in potency year over year, are not the only ones who need to have input into the conversation around how they’re utilized and their implications for all of us. And that’s really one of the core philosophies behind this particular project: we really want it to be a multidisciplinary group that comes together, and we’re already seeing that. We have a really wonderful set of collaborators who are thinking about ethics in this space, and who are thinking about a broader definition of ethics, and different cultural perspectives on ethics, and how we can create a conversation that allows space for those to simultaneously coexist.

Allison: I recently had a similar kind of question that arose in conversation, which was about: why are we lacking positive future visions so much? Why are we all kind of stuck in a snapshot of the current suboptimal macro situation? I do think it’s our inability to really think in larger terms. If you look at our individual human lives, clearly for most of us it’s pretty incredible — our ability to lead much longer and healthier lives than ever before. If we compare this to how well humans used to live, the difference is really unfathomable. I think Yuval Harari said it right: “You wouldn’t want to have lived 100 years ago.” I think that’s correct. On the other hand I also think that we’re not there yet.

I find it, for example, pretty peculiar that we say we value freedom of choice in everything we do, but in the one thing that’s kind of the basis of all of our freedoms, which is our very existence, we just leave it to slowly deteriorate through aging. That deterioration takes with it ourselves and everything we value. I think that every day, aging is burning libraries. We’ve come a long way, but we’re not safe, and we are definitely not there yet. I think the same holds true for civilization at large. I think thanks to a lot of technologies our living standards have been getting better and better, and the decline of poverty and violence are just a few examples.

We can share knowledge much more easily, and I think everyone who’s read Enlightenment Now will be kind of tired of those graphs, but again, I also think that we’re not there yet. I think even though we have fewer wars than ever before, the ability to wipe ourselves out as a species also really exists, and in fact this ability is now available to more people; as technologies mature, it may really only take a small and well-curated group of individuals to cause havoc with catastrophic consequences. If you let that sink in, it’s really absurd that we have no emergency plan for the use of technological weapons. We have no plans to rebuild civilization. We have no plans to back up human life.

I think that current news articles take too much of a short-term view. They’re more a snapshot. The long-term view, on the one hand, opens our eyes to “Hey, look how far we’ve come,” but also to “Oh man. We’re here, and we’ve made it so far, but there’s no feasible plan for safety yet.” I do think we need to change that, so I think the long view doesn’t only hand us rosy glasses, but also the realization that we ought to do more because we’ve come so far.

Josh: Yeah, one of the things that makes this time so dangerous is we’re at this kind of a fork in the road, where if we go this one way, like say, with figuring out how to develop friendliness in AI, we could have this amazing, astounding future for humanity that stretches for billions and billions and billions of years. One of the things that really opened my eyes was, I always thought that the heat death of the universe will spell the end of humanity. There’s no way we’ll ever make it past that, because that’s just the cessation of everything that makes life happen, right? And we will probably have perished long before that. But let’s say we figured out a way to just make it to the last second and humanity dies at the same time the universe does. There’s still an expiration date on humanity. We still go extinct eventually. But one of the things I ran across when I was doing research for the physics episode is that the concept of growing a universe from seed, basically, in a lab is out there. It’s done. I don’t remember who came up with it. But somebody has sketched out basically how to do this.

It’s 2018. If we think 100 or 200 or 500 or a thousand years down the road and that concept can be built upon and explored, we may very well be able to grow universes from seed in laboratories. Well, when our universe starts to wind down or something goes wrong with it, or we just want to get away, we could conceivably move to another universe. And so we suddenly lose that expiration date for humanity that’s associated with the heat death of the universe, if that is how the universe goes down. And so this idea that we have a future lifetime that spans into at least the multiple billions of years — at least a billion years if we just manage to stay alive on Planet Earth and never spread out but just don’t actually kill ourselves — when you take that into account the stakes become so much higher for what we’re doing today.

Ariel: So, we’re pretty deep into this podcast, and we haven’t heard anything from Anders Sandberg yet, and this idea that Josh brought up ties in with his work. Since we’re starting to talk about imagining future technologies, let’s meet Anders.

Anders: Well, I’m delighted to be on this. I’m Anders Sandberg. I’m a senior research fellow at The Future of Humanity Institute at the University of Oxford.

Ariel: One of the things that I love, just looking at your FHI page, you talk about how you try to estimate the capabilities of future technology. I was hoping you could talk a little bit about what that means, what you’ve learned so far, how one even goes about studying the capabilities of future technologies?

Anders: Yeah. It is a really interesting problem because technology is based on ideas. As a general rule, you cannot predict what ideas people will come up with in the future, because if you could, you would already kind of have that idea. So this means that, especially technologies that are strongly dependent on good ideas, are going to be tremendously hard to predict. This is of course why artificial intelligence is a little bit of a nightmare. Similarly, biotechnology is strongly dependent on what we discover in biology and a lot of that is tremendously weird, so again, it’s very unpredictable.

Meanwhile, other domains of life are advancing at a more sedate pace. It’s more like you incrementally improve things. So the ideas are certainly needed, but we don’t really change everything around. If you think about Moore’s law: microprocessors are getting better, and a lot of the improvements are small, incremental ones. Some of them require a lot of intelligence to come up with, but in the end it all sums together. It’s a lot of small things adding together. So you can see a relatively smooth development in the large.

Ariel: Okay. So what you’re saying is we don’t just have each year some major discovery, and that’s what doubles it. It’s lots of little incremental steps.

Anders: Exactly. But if you look at the performance of some software, quite often it goes up smoothly because the computers are getting better, and then somebody has a brilliant idea that can do it not just in 10% less time, but maybe in 10% of the time that it would have taken. For example, the fast Fourier transform that people invented in the 60s and 70s enables the compression we use today for video and audio, and enables multimedia on the internet. Without that speed-up, it would not be practical to do, even with current computers. This is true for a lot of things in computing. You get a surprising insight, and a problem that previously might have been impossible to do efficiently suddenly becomes quite convenient. So the problem is of course: what can we say about the abilities of future technology if these things happen?
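
To make the point about algorithmic speed-ups concrete, here is a minimal sketch (purely illustrative, not something discussed on the podcast) that compares a naive O(N^2) discrete Fourier transform with NumPy’s O(N log N) FFT on the same signal. The array size and function names are assumptions made for this example.

```python
# Illustrative sketch: a better algorithm can matter more than faster hardware.
# Compare a naive O(N^2) discrete Fourier transform with NumPy's O(N log N) FFT.
import time
import numpy as np

def naive_dft(x):
    """Direct O(N^2) evaluation of the discrete Fourier transform."""
    n = len(x)
    k = np.arange(n)
    # n x n matrix of complex exponentials exp(-2*pi*i*j*k/n)
    m = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return m @ x

x = np.random.rand(2048)  # an arbitrary test signal

t0 = time.perf_counter()
slow = naive_dft(x)
t1 = time.perf_counter()
fast = np.fft.fft(x)
t2 = time.perf_counter()

print(f"naive DFT: {t1 - t0:.4f} s, FFT: {t2 - t1:.6f} s")
print("results agree:", np.allclose(slow, fast))
```

Both transforms return the same result; the only difference is the idea behind the computation, which is why software performance can jump discontinuously even while hardware improves smoothly.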

One of the nice things you can do is lean on the laws of physics. There are good reasons not to think that perpetual motion machines can work, because we understand energy conservation and the laws of thermodynamics, which give very strong reasons why this cannot happen. We can be pretty certain that that’s not possible. We can analyze what would be possible if you had perpetual motion machines or faster-than-light transport, and you can see that some of the consequences are really weird. But it makes you suspect that this is probably not going to happen. So that’s one way of looking at it. But you can do the reverse: you can take laws of physics and engineering that you understand really well and make fictional machines — essentially work out all the details and say “okay, I can’t build this, but were I to build it, what properties would it have?” If I wanted to build, let’s say, a machine made out of atoms, could I make it work? And it turns out that this is possible to do in a rigorous way, and it tells you the capabilities of machines that don’t exist yet, and maybe we will never build, but it shows you what’s possible.

This is what Eric Drexler did for nanotechnology in the 80s and 90s. He basically worked out what would be possible if we could put atoms in the right place. He could demonstrate that this would produce machines of tremendous capability. We still haven’t built them, but he proved that these can be built — and we probably should build them because they are so effective, so environmentally friendly, and so on.

Ariel: So you gave the example of what he came up with a while back. What sort of capabilities have you come across that you thought were interesting that you’re looking forward to us someday pursuing?

Anders: I’ve been working a little bit on the questions about “is it possible to settle a large part of the universe?” I have been working out, together with my colleagues, a bit of the physical limitations of that. All in all, we found that a civilization doesn’t need to use an enormous, astronomical amount of matter and energy to settle a very large chunk of the universe. The total amount of matter corresponds to roughly a Mercury-sized planet in a solar system in each of the galaxies. Many people would say that if you want to settle the universe you need enormous spacecraft and an enormous amount of energy, so much that you would be able to see it across half of the universe. But we could demonstrate that actually, if you essentially use matter from a really big asteroid or a small planet, you can get enough solar collectors to launch small spacecraft to all the stars and all the galaxies within reach, and there you again use a bit of asteroid material to do it. The laws of physics allow intelligent life to spread across an enormous amount of the universe in a rather quiet way.

Ariel: So does that mean you think it’s possible that there is life out there and it’s reasonable for us not to have found it?

Anders: Yes. If we were looking at the stars, we would probably miss it if one or two stars in remote galaxies were covered with solar collectors. It’s rather easy to miss them among the hundreds of billions of other stars. This was actually the reason we did this paper: we demonstrate that much of the thinking about the Fermi paradox — that annoying question that there ought to be a lot of intelligent life out in the universe, given how large it is and that we tend to think life is relatively likely, yet we don’t see anything — many of those explanations are based on the possibility of colonizing just the Milky Way. In this paper, we demonstrate that actually you need to care about all the other galaxies too. In a sense, we made the Fermi paradox between a million and a billion times worse. Of course, this is all in a day’s work for us in the Philosophy Department, making everybody’s headaches bigger.

Ariel: And now it’s just up to someone else to figure out the actual way to do this technically.

Anders: Yeah, because it might actually be a good idea for us to do.

Ariel: So Josh, you’ve mentioned the future of humanity a couple of times, and humanity in the future, and now Anders has mentioned the possibility of colonizing space. I’m curious how you think that might impact humanity. How do you define humanity in the future?

Josh: I don’t know. That’s a great question. It could take any number of different routes. I think — Robin Hanson is an economist who came up with the Great Filter hypothesis, and I talked to him about that very question. His idea was — and I’m sure it’s not just his, but it’s probably a pretty popular idea — that once we spread out from Earth and start colonizing further and further out into the galaxy, and then into the universe, we’ll undergo speciation events: there will be multiple species of humans in the universe again, just like there were 50,000 years ago, when we shared Earth with multiple species of humans.

The same thing is going to happen as we spread out from Earth. I mean, I guess the question is, which humans are you talking about, in what galaxy? I also think there’s a really good chance — and this could happen among multiple human species — that at least some humans will eventually shed their biological form and upload themselves into some sort of digital format. I think if you just start thinking in efficiencies, that’s just a logical conclusion to life. And then there’s any number of routes we could take and change especially as we merge more with technology or spread out from Earth and separate ourselves from one another. But I think the thing that really kind of struck me as I was learning all this stuff is that we tend to think of ourselves as the pinnacle of evolution, possibly the most intelligent life in the entire universe, right? Certainly the most intelligent on Earth, we’d like to think. But if you step back and look at all the different ways that humans can change, especially like the idea that we might become post-biological, it becomes clear that we’re just a point along a spectrum that keeps on stretching out further and further into the future than it does even into the past.

We’re just at a current situation on that point right now. We’re certainly not like the end-all be-all of evolution. And ultimately, we may take ourselves out of evolution by becoming post-biological. It’s pretty exciting to think about all the different ways that it can happen, all the different routes we can take — there doesn’t have to just be one single one either.

Ariel: Okay, so, I kind of want to go back to some of the space stuff a little bit, and Anders is the perfect person for my questions. I think one of the first things I want to ask is, very broadly, as you’re looking at these different theories about whether or not life might exist out in the universe and that it’s reasonable for us not to have found it, do you connect the possibility that there are other life forms out there with an idea of existential hope for humanity? Or does it cause you concern? Or are they just completely unrelated?

Anders: The existence of extraterrestrial intelligence: if we knew they existed, that would in some sense be hopeful, because we would know the universe allows for more than our kind of intelligence and that intelligence might survive over long spans of time. If we discovered that we’re all alone among a lot of ruins from extinct civilizations, that would be very bad news for us. But we might also have this weird situation that we currently face, where we don’t see anybody. We don’t notice any ruins; Maybe we’re just really unique and should perhaps feel a bit proud or lucky, but also responsible for a whole universe. It’s tricky. It seems like we could learn something very important if we understood how much intelligence there is out there. Generally, I have been trying to figure out: is the absence of aliens evidence for something bad? Or might it actually be evidence for something very hopeful?

Ariel: Have you concluded anything?

Anders: Generally, our conclusion has been that the absence of aliens is not surprising. We tend to think that the Fermi Paradox implies “oh, there’s something strange here.” The universe is so big and if you multiply the number of stars with some reasonable probability, you should get loads of aliens. But actually, the problem here is reasonable probability. We normally tend to think of that as something like bigger than one chance in a million or so, but actually, there is no reason the laws of physics wouldn’t put a probability that’s one in a googol. It actually turns out that we’re uncertain enough about the origin of life and the origins of intelligence and other forms of complexity that it’s not implausible that maybe we are the only life within the visible universe. So we shouldn’t be too surprised about that empty sky.

One possible reason for the great silence is that life is extremely rare. Another possibility might be that life is not rare, but it’s very rare that it becomes the kind of life that evolves complex nervous systems. Another reason might be of course that once you get intelligence, well, it destroys itself relatively quickly, and Robin Hanson has called this the Great Filter. We know that one of the terms in the big equation for the number of civilizations in the universe needs to be very small; otherwise, the sky would be full of aliens. But is that one of the early terms, like the origin of life, or the origin of intelligence — or the late term, how long intelligence survives? Now, if there is an early Great Filter, this is rather good news for us. We are going to be very unique and maybe a bit lonely, but it doesn’t tell us anything dangerous about our own chances. Of course, we might still flub it and go extinct because of our own stupidity, but that’s kind of up to us rather than the laws of physics.

On the other hand, if it turns out that there is a late Great Filter, then the universe is dangerous and, even knowing that, we’re still likely to get wiped out — which is very scary. So, figuring out where the unlikely terms in the big equation are is actually quite important for making a guess about our own chances.

Ariel: Where are we now in terms of that?

Anders: Right now, in my opinion — I have a paper, not published yet but in the review process, where we try to apply proper uncertainty calculations to this. Many people make guesstimates about the probabilities of various things, admit that they’re guesstimates, and then get a number at the end that they also admit is a bit uncertain. But they haven’t actually done a proper uncertainty calculation, so quite a lot of these numbers end up surprisingly biased. So instead of saying that maybe there’s one chance in a million that a planet develops life, you should try to have a full range: what’s the lowest probability there could be for life, what’s the highest probability, and how do you think the probability is distributed between them? If you use that kind of proper uncertainty range and then multiply it all together and do the maths right, then you get the probability distribution for how many alien species there could be in the universe. Even if you start out as somebody who’s relatively optimistic about the mean value of all of this, you will still find that you get a pretty big chunk of probability that we’re actually pretty alone in the Milky Way or even the observable universe.

In some sense, this is just common sense. But it’s a very nice thing to be able to quantify the common sense, and then start saying: so what happens if we for example discover that there is life on Mars? What will that tell us? How will that update things? You can use the math to calculate that, and this is what we’ve done. Similarly, if we notice that there doesn’t seem to be any alien super civilizations around the visible universe, that’s a very weak update but you can still use that to see that this updates our estimates of the probability of life and intelligence much more than the longevity of civilizations.

Mathematically this gives us a reason to think that the Great Filter might be early. The absence of life might be rather good news for us, because it means that once you get intelligence, there’s no reason why it can’t persist for a long time and grow into something very flourishing. That is a really good cause for existential hope. It’s really promising, but we of course need to do our observations. We actually need to look for life, we need to look out into the sky and see. We may find alien civilizations. In the end, any amount of mathematics and armchair astrobiology can always be overturned by a single observation.
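
As a concrete illustration of the kind of uncertainty calculation described here, the following is a rough sketch of propagating wide ranges through the Drake equation by Monte Carlo sampling, instead of multiplying point guesses. The ranges below are purely illustrative assumptions chosen for this example, not numbers from the paper Anders mentions.

```python
# Illustrative uncertainty propagation through the Drake equation:
# sample each factor from a wide log-uniform range and look at the resulting
# distribution of civilizations, rather than multiplying single point guesses.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000  # number of Monte Carlo samples

def log_uniform(lo, hi, size):
    """Sample uniformly in log10 space between lo and hi."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

R_star = log_uniform(1, 100, N)      # star formation rate (stars per year)
f_p    = log_uniform(0.1, 1, N)      # fraction of stars with planets
n_e    = log_uniform(0.1, 10, N)     # habitable planets per planet-bearing star
f_l    = log_uniform(1e-30, 1, N)    # probability that life arises (hugely uncertain)
f_i    = log_uniform(1e-3, 1, N)     # probability that life becomes intelligent
f_c    = log_uniform(1e-2, 1, N)     # probability of a detectable civilization
L      = log_uniform(1e2, 1e10, N)   # longevity of such civilizations (years)

n_civs = R_star * f_p * n_e * f_l * f_i * f_c * L

print("median number of civilizations in the galaxy:", np.median(n_civs))
print("probability that we are alone in the Milky Way:", np.mean(n_civs < 1))
```

With ranges this wide, a substantial chunk of the probability mass ends up on “we are alone” even when the average expectation is many civilizations, which is the effect described above.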

Ariel: That comes back to a question that came to mind a bit earlier. As you’re looking at all of this, and especially as you’re looking at the capabilities of future technologies, once we figure out what could possibly be done, can you talk a little bit about what limitations keep us from actually doing it today? How impossible is it?

Anders: Well, impossible is a really tricky word. When I hear somebody say “it’s impossible,” I immediately ask “do you mean against the laws of physics and logic” or “we will not be able to do this for the foreseeable future” or “we can’t do it within the current budget”?

Ariel: I think maybe that’s part of my question. I’m guessing a lot of these things probably are physically possible, which is why you’ve considered them, but yeah, what’s the difference between what we’re technically capable of today and what, for whatever reason, we can’t budget into our research?

Anders: We have a domain of technologies that we already have been able to construct. Some of them are maybe too expensive to be very useful. Some of them still require a bunch of grad students holding them up and patching them as they are breaking all the time, but we can kind of build them. And then there are technologies that we are very robustly good at. We have been making cog wheels and combustion engines for decades now and we’re really good at that. Then there are the technologies where we can do exploratory engineering to demonstrate that if we actually had cog wheels made out of pure diamond, or a Dyson shell surrounding the sun collecting energy, they could do the following things.

So they don’t exist as practical engineering. You can work out blueprints for them and in some sense of course, once we have a complete enough blueprint, if you asked could you build the thing, you could do it. The problem is of course normally you need the tools and resources for that, and you need to make the tools to make the tools, and the tools to make those tools, and so on. So if we wanted to make atomically precise manufacturing today, we can’t jump straight to it. What we need to make is a tool that allows us to build things that are moving us much closer.

The Wright Brothers’ airplane was really lousy as an airplane but it was flying. It’s a demonstration, but it’s also a tool that allows you to make a slightly better tool. You would want to get through this and you’d probably want to have a roadmap and do experiments and figure out better tools to do that.

This is typically where scientists actually have to give way to engineers. Because engineers care about solving a problem rather than being the most elegant about it. In science, we want to have this beautiful explanation of how everything works; Then we do experiments to test whether it’s true and refine our explanation. But in the end, the paper that gets published is going to be the one that has the most elegant understanding. In engineering, the thing that actually sells and changes the world is not going to be the most elegant thing but the most useful thing. The AK-47 is in many ways not a very precise piece of engineering but that’s the point. It should be possible to repair it in the field.

The reason our computers are working so well is that we figured out the growth path where you use photolithography to etch silicon chips, and that allowed us to make a lot of them very cheaply. As we learned more and more about how to do that, they became cheaper and more capable, and we developed even better ways of etching them. So in order to build molecular nanotechnology, you would need to go through a somewhat similar chain. It might be that you start out with using biology to make proteins, then you use the proteins to make some kind of soft machinery, and then you use that soft machinery to make hard machinery, and eventually end up with something like the work of Eric Drexler.

Ariel: I actually want to step back to the present now. You mentioned computers, and that we’re doing them very well. But computers are also an example — or maybe software, I suppose, is more the example — of technology that works today but often fails. Especially when we’re considering things like AI safety in the future, what should we make of the fact that we’re not designing software to be more robust? I mean, I think especially if we look at something like airplanes, which are quite robust, we can see that it could be done, but we’re still choosing not to.

Anders: Yeah, nobody would want to fly with an airplane that crashed as often as a word processor.

Ariel: Exactly.

Anders: It’s true that the earliest airplanes were very crash prone — in fact most of them were probably as bad as our current software is. But the main reason we’re not making software better is that most of the time we’re not willing to pay for that quality. Also, there are some very hard engineering problems with engineering complexity. Making a very hard material is not easy, but in some sense it’s a straightforward problem. If, on the other hand, you have literally billions of moving pieces that all need to fit together, then it gets tricky to make sure that everything always works as it should. But it can be done.

People have been working on mathematical proofs that certain pieces of software are correct and secure. It’s just that up until recently, it’s been so expensive and tough that nobody really cared to do it except maybe some military groups. Now it’s starting to become more and more essential because we’ve built our entire civilization on a lot of very complex systems that are unfortunately very insecure, very unstable, and so on. Most of the time we get around it by making backup copies and whenever a laptop crashes, well, we reboot it, swear a bit and hopefully we haven’t lost too much work.

That’s not always a bad solution — a lot of biology is like that too. Cells in our bodies are failing all the time but they’re just getting removed and replaced and then we try again. But this, of course, is not enough for certain sensitive applications. If we ever want to have brain-to-computer interfaces, we certainly want to have good security so we don’t get hacked. If we want to have very powerful AI systems, we want to make sure that their motivations are constrained in such a way that they’re helpful. We also want to make sure that they don’t get hacked or develop weird motivations or behave badly because their owners told them to behave badly. Those are very complex problems: It’s not just like engineering something that’s simply safe. You’re going to need entirely new forms of engineering for that kind of learning system.

This is something we’re learning. We haven’t been building things like software for very long and when you think about the sheer complexity of a normal operating system, even a small one running on a phone, it’s kind of astonishing that it works at all.

Allison: I think that Eliezer Yudkowsky once said that the problem of our complex civilization is its complexity. It does seem that technology is outpacing our ability to make sense of it. But I think we have to remind ourselves again of why we developed those technologies in the first place, and of the tremendous promise if we get it right. Of course, on the one hand, solving the problems that are created by technologies (existential risks, for example, or at least some of them) requires some non-technological elements, especially human reasoning, sense-making, and coordination.

And I’m not saying that we have to focus on one conception of the good. There are many conceptions of the good. There are transhumanist futures, cosmist futures, extropian futures, and many, many more, and I think that’s fine. I don’t think we have to agree on a common conception just yet — in fact we really shouldn’t. But the point is not that we ought to settle soon; it’s that we have to allow into our lives again the possibility that things can be good, that good things are possible — not guaranteed, but possible. I think to use technologies for good we really need a change of mindset, from pessimism to at least conditional optimism. And we need a plethora of those positive visions, right? It’s not going to be just one of them.

I do think that in order to use technologies for good purposes, we really have to remind ourselves that they can be used for good, and that there are good outcomes in the first place. I genuinely think that often in our research, we put the cart before the horse in focusing solely on how catastrophic human extinction would be. I think this often misses the point that extinction is really only so bad because the potential value that could be lost is so big.

Josh: If we can just make it to this point — Nick Bostrom, whose ideas a lot of The End of the World is based on, calls it technological maturity. It’s kind of a play on something that Carl Sagan said about the point we’re at now: “technological adolescence” is what Sagan called it, which is this point where we’re starting to develop this really intense, amazingly powerful technology that will one day be able to guarantee a wonderful, amazing existence for humanity, if we can survive to the point where we’ve mastered it safely. That’s what stretches out ahead of us over the next hundred or 200 or maybe 300 years. That’s the challenge that we have in front of us. If we can make it to technological maturity, if we figure out how to make an artificial general intelligence that is friendly to humans, that basically exists to make sure that humanity is well cared for and taken care of, there’s just no telling what we’ll be able to come up with and just how vastly improved the life of the average human would be in that situation.

We’re talking — honestly, this isn’t like some crazy far out far future idea. This is conceivably something that we could get done as humans in the next century or two or three. Even if you talk out to 1000 years, that sounds far away. But really, that’s not a very long time when you consider just how far of a lifespan humanity could have stretching out ahead of it. The stakes: that makes me, almost gives me a panic attack when I think of just how close that kind of a future is for humankind and just how close to the edge we’re walking right now in developing that very same technology.

Max: The way I see the future of technology as we go towards artificial general intelligence, and perhaps beyond, is that it could totally make life the master of its own destiny, which makes this a very important time to stop and think about what we want this destiny to be. The clearer and more positive a vision we can formulate, the more likely it is, I think, that we’re going to get that destiny.

Allison: We often seem to think that rather than optimizing for good outcomes, we should aim for maximizing the probability of an okay outcome, but I think for many people it’s more motivational to act on a positive vision, rather than one that is steered by risks only. To be for something rather than against something. To work toward a grand goal, rather than an outcome in which survival is success. I think a good strategy may be to focus on good outcomes.

Ariel: I think it’s incredibly important to remember all of the things that we are hopeful for for the future, because these are the precise reasons that we’re trying to prevent the existential risks, all of the ways that the future could be wonderful. So let’s talk a little bit about existential hope.

Allison: The term existential hope was coined by Owen Cotton-Barratt and Toby Ord to describe the chance of something extremely good happening, as opposed to an existential risk, which is a chance of something extremely terrible occurring. Kind of like describing a eucatastrophe instead of a catastrophe. I personally really agree with this line, because I think for me really it means that you can ask yourself this question of: do you think you can save the future? I think this question may appear at first pretty grandiose, but I think it’s sometimes useful to ask yourself that question, because I think if your answer is yes then you’ll likely spend your whole life trying, and you won’t rest, and that’s a pretty big decision. So I think it’s good to consider the alternative, because if the answer is no then you perhaps may be able to enjoy the little bit of time that you have on Earth rather than trying to spend it on making a difference. But I am not sure if you could actually enjoy every blissful minute right now if you knew that there was just a slight chance that you could make a difference. I mean, could you actually really enjoy this? I don’t think so, right?

I think perhaps we fail — we do our best, but at the final moment something comes along that makes us go extinct anyway. But if we imagine the opposite scenario, in which we have not tried, and it turns out that we could have done something (an idea we might have had or a skill we might have contributed was missing, and now it’s too late), I think that’s a much worse outcome.

Ariel: Is it fair for me to guess, then, that you think for most people the answer is that yes, there is something that we can do to achieve a more existential hope type future?

Allison: Yeah, I think so. I think that for most people there is at least something that we can be doing if we are not solving the wrong problems. But I do also think that this question is a serious question. If the answer for yourself is no, then I think you can really try to focus on having a life that is as good as it could be right now. But I do think that if the answer is yes, and if you opt in, then I think that there’s no space any more to focus on how terrible everything is. Because we’ve just confessed to how terrible everything is, and we’ve decided that we’re still going to do it. I think that if you opt in, really, then you can take that bottle of existential angst and worries that I think is really pestering us, and put it to the side for a moment. Because that’s an area you’ve dealt with and decided we’re still going to do it.

Ariel: The sentiment that’s been consistent is this idea that the best way to achieve a good future is to actually figure out what we want that future to be like and aim for it.

Max: On one hand, it should be a no-brainer, because that’s how we think about life as individuals. Right? I often get students walking into my office at MIT for career advice, and I always ask them about their vision for the future, and they always tell me something positive. They don’t walk in there and say, “Well, maybe I’ll get murdered. Maybe I’ll get cancer. Maybe I’ll …” because they know that that’s a really ridiculous approach to career planning. Instead, they envision the positive future, the things they aspire to, so that we can constructively think about the challenges, the pitfalls to be avoided, and a good strategy for getting there.

Yet, as a species, we do exactly the opposite. We go to the movies and we watch Terminator, or Blade Runner, or yet another dystopic future vision that just fills us with fear and sometimes paranoia or hypochondria, when what we really need to do, as a species, is the same thing we need to do as individuals: envision a hopeful, inspiring future that we want to rally around. It’s a well-known historical fact, right, that the secret to getting more constructive collaboration is to develop a shared positive vision. Why is Silicon Valley in California and not in Uruguay or Mongolia? Well, it’s because in the 60s, JFK articulated this really inspiring vision — going to space — which led to massive investments in STEM research and ultimately gave the US the best universities in the world and these amazing high-tech companies. It came from a positive vision.

Similarly, why is Germany now unified into one country instead of fragmented into many? Or Italy? Because of a positive vision. Why are the US states working together instead of having more civil wars against each other? Because of a positive vision of how much greater we’ll be if we work together. And if we can develop a more positive vision for the future of our planet, where we collaborate and everybody wins by getting richer and better off, we’re again much more likely to get that than if everybody just keeps spending their energy and time thinking about all the ways they can get screwed by their neighbors and all the ways in which things can go wrong — causing basically a self-fulfilling prophecy, where we get a future with war and destruction instead of peace and prosperity.

Anders: One of the things I’m envisioning is that you can make a world where everybody’s connected, but also connected on their own terms. Right now, we don’t have a choice. My smartphone gives me a lot of things, but it also reports my location, and a lot of little apps are sending my personal information to companies and institutions I have no clue about and don’t trust. I think one important direction might actually be privacy-enhancing technologies. Many of the little near-field microchips we carry around are also indiscriminately reporting to nearby antennas what we’re doing. But you could imagine having a little personal firewall that actually blocks signals you don’t approve of. You could have firewalls and ways of controlling the information leaving your smartphone or your personal space. And I think we actually need to develop that, both for security purposes but also to feel that we actually are in charge of our private lives.

Some of that privacy is a social convention. We agree on what is private and not: This is why we have certain rules about what you are allowed to do with a cell phone in a restaurant. You’re not going to have a conversation with somebody — that’s rude. And others are not supposed to listen to your restaurant conversations that you have with people in the restaurant, even though technically of course, it’s trivial. I think we are going to develop new interesting rules and new technologies to help implement these social rules.

Another area I’m really excited about is the ability to capture energy, for example using solar collectors. Solar collectors are getting exponentially better and are becoming competitive with traditional energy sources in a lot of domains. But the most beautiful thing is that they can be made small and used in a distributed manner. You don’t need that big central solar farm, even though it might be very effective. You can actually have little solar panels on your house or even on gadgets, if they’re energy efficient enough. That means that you both reduce the risk of a collective failure and get a lot of devices that can now function independently of the grid.

Then I think we are probably going to be able to combine this to fight a lot of emergent biological threats. Right now, we still have this problem that it takes a long time to identify a new pathogen. But I think we’re going to see more and more distributed sensors that can help us identify it quickly, global networks that make the medical professional aware that something new has shown up, and hopefully also ways of very quickly brewing up vaccines in an automated manner when something new shows up.

My vision is that within one or two decades, if something nasty shows up, the next morning everybody could essentially have a little home vaccine machine manufacture the antibodies to make them resistant to that pathogen — whether that was a bioweapon or something nature accidentally brewed up.

Ariel: I never even thought about our own personalized vaccine machines. Is that something people are working on?

Anders: Not that much yet.

Ariel: Oh.

Anders: You need to manufacture antibodies cheaply and effectively. This is going to require some fairly advanced biotechnology or nanotechnology. But it’s very foreseeable. Basically, you want to have a specialized protein printer. This is something we’re moving in the direction of. I don’t think anybody’s right now doing it but I think it’s very clearly in the path where we’re already moving.

So right now, in order to make a vaccine, you need to go through this very time-consuming process: For example, in the case of flu vaccine, you identify the virus, you multiply the virus, you inject it into chicken eggs to get the antibodies and the antigens, you develop a vaccine, and if you did it all right, you have a vaccine out in a few months, just in time for the winter flu — and hopefully it was for the version of the flu that was actually making the rounds. If you were unlucky, it was a different one.

But what if you could instead take the antigen and sequence it — that’s just going to take you a few hours — generate all the proteins, run it through various software and biological screens to remove the ones that don’t fit, find the ones that are likely to be good targets for the immune system, automatically generate the antibodies, automatically screen out the ones that might be bad for patients, and then test them. Then you might be able to make a vaccine within weeks or days.

Ariel: I really like your vision for the near term future. I’m hoping that all of that comes true. Now, to end, as you look further out into the future — which you’ve clearly done a lot of — what are you most hopeful for?

Anders: I’m currently working on writing a book about what I call “Grand Futures.” Assuming humanity survives and gets its act together, however we’re supposed to do that, then what? How big could the future possibly be? It turns out that the laws of physics certainly allow us to do fantastic things. We might be able to spread literally over billions of light years. Settling space is definitely physically possible, but so is surviving, even as a normal biological species on earth, for literally hundreds of millions of years — and that’s already not stretching it. It might be that if we go post-biological, we can survive up until proton decay, somewhere north of 10^30 years in the future. And in terms of the amount of intelligence that could be generated, human brains are probably just the start.

We could probably develop ourselves or Artificial Intelligence to think enormously bigger, enormously much more deeply, enormously more profoundly. Again, this is stuff that I can analyze. There are questions about what the meaning of these thoughts would be, how deep the emotions of the future could be, et cetera, that I cannot possibly answer. But it looks like the future could be tremendously grand, enormously much bigger, just like our own current society would strike our stone age ancestors as astonishingly wealthy, astonishingly knowledgeable and interesting.

I’m looking at: what about the stability of civilizations? Historians have been going on a lot about the decline and fall of civilizations. Does that tell us an ultimate limit on what we can plan for? Eventually I got fed up reading historians and did some statistics and got some funny conclusions. But even if our civilization lasts long, it might become something very alien over time, so how do we handle that? How do you even make a backup of your civilization?

And then of course there are questions like “how long can we survive on earth?” and “when the biosphere starts failing in about a billion years, couldn’t we fix that?” What are the environmental ethics issues surrounding that? What about settling the solar system? How do you build and maintain your Dyson sphere? Then of course there’s stellar settlement, intergalactic settlement, and then the ultimate limits of physics. What can we say about them, in what ways could physics be really different from what we expect, and what does that do for our chances?

It all leads back to this question: so, what should we be doing tomorrow? What are the near term issues? Some of them are interesting like, okay, so if the future is super grand, we should probably expect that we need to safeguard ourselves against existential risk. But we might also have risks — not just going extinct, but causing suffering and pain. And maybe there are other categories we don’t know about. I’m looking a little bit at all the unknown super important things that we don’t know about yet. How do we search for them? If we discover something that turns out to be super important, how do we coordinate mankind to handle that?

Right now, this sounds totally utopian. Would you expect all humans to get together and agree on something philosophical? That sounds really unlikely. Then again, a few centuries ago the United Nations and the internet would also sound totally absurd. The future is big — we have a lot of centuries ahead of us, hopefully.

Max: When I look really far into the future, I also look really far into space, and I see this vast cosmos, which is 13.8 billion years old. And most of it, despite what the UFO enthusiasts say, is actually looking pretty dead: wasted opportunities. And if we can help life flourish not just on earth, but ultimately throughout much of this amazing universe, making it come alive and teeming with these fascinating and inspiring developments, that makes me feel really, really inspired.

This is something I hope we can contribute to, we denizens of this planet, right now, here, in our lifetime. Because I think this is the most important time and place, probably, in cosmic history. After 13.8 billion years, on this particular planet we’ve actually developed enough technology, almost, to either drive ourselves extinct or to create superintelligence, which can spread out into the cosmos and do either horrible things or fantastic things. More than ever, life has become the master of its own destiny.

Allison: For me this pretty specific vision would really be a voluntary world, in which different entities, whether they’re AIs or humans, can cooperate freely with each other to realize their interests. I do think that we don’t know where we want to end up. If you look back 100 years, it’s not only that you wouldn’t have wanted to live there, but also that many of the things that were regarded as moral back then are not regarded as moral anymore by most of us, and we can expect the same to hold true 100 years from now. I think rather than locking in any specific types of values, we ought to leave the space of possible values open.

Maybe right now you could try to do something like coherent extrapolated volition, a term coined in AI safety by Eliezer Yudkowsky to describe a goal function of a superintelligence that would execute your goals if you were more the person you wish you were, if we lived closer together, if we had more time to think and collaborate — so kind of a perfect version of human morality. I think that perhaps we could do something like that for humans, because we all come from the same evolutionary background. We all share a few evolutionary cornerstones, at least, that make us value family, or make us value a few others of those values, and perhaps we could do something like coherent extrapolated volition of some basic, very boiled down values that most humans would agree to. I think that may be possible, I’m not sure.

On the other hand, in a future where we succeed, at least in my version of that, we live not only with humans but with a lot of different mind architectures that don’t share our evolutionary background. For those mind architectures it’s not enough to try to do something like coherent extrapolated volition, because given that they have very different starting conditions, they will also end up valuing very different value sets. In the absence of us knowing what’s in their interests, I think really the only thing we can reasonably do is try to create a framework in which very different mind architectures can cooperate freely with each other, and engage in mutually beneficial relationships.

Ariel: Honestly, I really love that your answer of what you’re looking forward to is that it’s something for everybody. I like that.

Anthony: When you think about what life used to be for most humans, we really have come a long way. I mean, slavery was just fully accepted for a long time. Complete subjugation of women and sexism was just totally accepted for a really long time. Poverty was just the norm. Zero political power was the norm. We are in a place where, although imperfect, many of these things have dramatically changed, even if they’re not fully implemented. Our ideals and our beliefs about human rights and human dignity and equality have completely changed, and we’ve implemented a lot of that in our society.

So what I’m hopeful about is that we can continue that process, and that if we could see the way culture and society work 100 years from now, we would look at it and say, “Oh my God, they really have their shit together. They have figured out how to deal with differences between people, how to strike the right balance between collective desires and individual autonomy, between freedom and constraint, and how people can feel liberated to follow their own path while not trampling on the rights of others.” These are not in principle impossible things to do, and we fail to do them right now in large part, but I would like to see our technological development be leveraged into a cultural and social development that makes all those things happen. I think that really is what it’s about.

I’m much less excited about more fancy gizmos, more financial wealth for everybody, more power to have more stuff and accomplish more and higher and higher GDP. Those are useful things, but I think they’re things toward an end, and that end is the sort of happiness and fulfillment and enlightenment of the conscious living beings that make up our world. So, when I think of a positive future, it’s very much one filled with a culture that honestly will look back on ours now and say, “Boy, they really were screwed up, and I’m glad we’ve gotten better and we still have a ways to go.” And I hope that our technology will be something that will in various ways make that happen, as technology has made possible the cultural improvements we have now.

Ariel: I think as a woman I do often look back at the way technology enabled feminism to happen. We needed technology to sort of get a lot of household chores accomplished — to a certain extent, I think that helped.

Anthony: There are pieces of cultural progress that don’t require technology, as we were talking about earlier, but are just made so much easier by it. Labor-saving devices helped with feminism; industrialization, I think, helped with ending serfdom and slavery — we didn’t have to have a huge number of people working in abject poverty and total control in order for some to have a decent lifestyle; we could spread that around. I think something similar is probably true of animal suffering and meat. It could happen without that — I mean, I fully believe that 100 years from now, or 200 years from now, people will look back at eating meat as just a crazy thing that people used to do. I think that’s just the truth of what’s going to happen.

But it’ll be much, much easier if we have technologies that make that economically viable and easy, rather than pulling teeth and a huge cultural fight and everything, which I think will be hard and long. We should be thinking about, if we had some technological magic wand, what are the social problems that we would want to solve with it, and then let’s look for that wand once we identify those problems. If we could make some social problem much better if we only had such and such technology, that’s a great thing to know, because technologies are something we’re pretty good at inventing. If they don’t violate the laws of physics, and there’s some motivation, we can often generate those things. So let’s think about what they are. For example, what would it take to solve this sort of political and informational mess where nobody knows what’s true and everybody is polarized?

That’s a social problem. It has a social solution. But there might be technologies that would be enormously helpful in making those social solutions easier. So what are those technologies? Let’s think about them. So I don’t think there’s a kind of magic bullet for a lot of these problems. But having that extra boost that makes it easier to solve the social problem I think is something we should be looking for for sure.

And there are lots of technologies that really do help — worth keeping in mind, I guess, as we spend a lot of our time worrying about the ill effects of them, and the dangers and so on. There is a reason we keep pouring all this time and money and energy and creativity into developing new technologies.

Ariel: I’d like to finish with one last question for everyone, and that is: what does existential hope mean for you?

Max: For me, existential hope is hoping for and envisioning a really inspiring future, and then doing everything we can to make it so.

Anthony: It means that we really give ourselves the space and opportunity to continue to progress our human endeavor — our culture, our society — to build a society that really is backstopping everyone’s freedom and actualization, compassion, enlightenment, in a kind of steady, ever-inventive process. I think we don’t often give ourselves as much credit as we should for how much cultural progress we’ve really made in tandem with our technological progress.

Anders: My hope for the future is that we get this enormous open-ended future. It’s going to contain strange and frightening things, but I also believe that most of it is going to be fantastic. It’s going to be roaring onward far, far, far into the long term future of the universe, probably changing a lot of the aspects of the universe.

When I use the term “existential hope,” I contrast that with existential risk. Existential risks are things that threaten to curtail our entire future, to wipe it out, to make it too much smaller than it could be. Existential hope, to me, means that maybe the future is grander than we expect. Maybe we have chances we’ve never seen. And I think we are going to be surprised by many things in the future and some of them are going to be wonderful surprises. That is the real existential hope.

Gaia: When I think about existential hope, I think it’s sort of an unusual phrase. But to me it’s really about the idea of finding meaning, and the potential that each of us has to experience meaning in our lives. And I think that the idea of existential hope, and I should say, the existential part of that, is the concept that that fundamental capability is something that will continue in the very long-term and will not go away. You know, I think it’s the opposite of nihilism, it’s the opposite of the idea that everything is just meaningless and our lives don’t matter and nothing that we do matters.

If I’m feeling — if I’m questioning that, I like to go and read something like Viktor Frankl’s book Man’s Search for Meaning, which really reconnects me to these incredible, deep truths about the human spirit. That’s a book that tells the story of his time in a concentration camp at Auschwitz. And even in those circumstances, he found within himself, and saw within the people around him, the ability to be kind, and to persevere, and to really give of himself, and for others to give of themselves. And there’s just something impossible, I think, to capture in language. Language is a very poor tool, in this case, to try to encapsulate the essence of what that is. I think it’s something that exists on an experiential level.

Allison: For me, existential hope is really trying to choose to make a difference, knowing that success is not guaranteed, but making a difference anyway because we simply can’t do it any other way. Because not trying is really not an option. It’s the first time in history that we’ve created the technologies for our destruction and for our ascent. I think they’re both within our hands, and we have to decide how to use them. So I think existential hope is transcending existential angst, and transcending our current limitations, rather than trying to create meaning within them, and I think it’s the appropriate mindset for the time that we’re in.

Ariel: And I still love this idea that existential hope means that we strive toward everyone’s personal ideal, whatever that may be. On that note, I cannot thank my guests enough for joining the show, and I also hope that this episode has left everyone listening feeling a bit more optimistic about our future. I wish you all a happy holiday and a happy new year!

Podcast: Governing Biotechnology, From Avian Flu to Genetically-Modified Babies with Catherine Rhodes

A Chinese researcher recently made international news with claims that he had edited the first human babies using CRISPR. In doing so, he violated international ethics standards, and he appears to have acted without his funders or his university knowing. But this is only the latest example of biological research triggering ethical concerns. Gain-of-function research a few years ago, which made avian flu more virulent, also sparked controversy when scientists tried to publish their work. And there’s been extensive debate globally about the ethics of human cloning.

As biotechnology and other emerging technologies become more powerful, the dual-use nature of research — that is, research that can have both beneficial and risky outcomes — is increasingly important to address. How can scientists and policymakers work together to ensure regulations and governance of technological development will enable researchers to do good with their work, while decreasing the threats?

On this month’s podcast, Ariel spoke with Catherine Rhodes about these issues and more. Catherine is a senior research associate and deputy director of the Center for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance. She has particular expertise in international governance of biotechnology, including biosecurity and broader risk management issues.

Topics discussed in this episode include:

  • Gain-of-function research, the H5N1 virus (avian flu), and the risks of publishing dangerous information
  • The roles of scientists, policymakers, and the public to ensure that technology is developed safely and ethically
  • The controversial Chinese researcher who claims to have used CRISPR to edit the genome of twins
  • How scientists can anticipate whether the results of their research could be misused by someone else
  • To what extent does risk stem from technology, and to what extent does it stem from how we govern it?

Books and publications discussed in this episode include:

You can listen to this podcast above, or read the full transcript below. And feel free to check out our previous podcast episodes on SoundCloud, iTunes, Google Play and Stitcher.


Ariel: Hello. I’m Ariel Conn with the Future of Life Institute. Now I’ve been planning to do something about biotechnology this month anyways since it would go along so nicely with the new resource we just released which highlights the benefits and risks of biotech. I was very pleased when Catherine Rhodes agreed to be on the show. Catherine is a senior research associate and deputy director of the Center for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance, or a lack of it.

But she has particular expertise in international governance of biotechnology, including biosecurity and broader risk management issues. The timing of Catherine as a guest is also especially fitting given that just this week the science world was shocked to learn that a researcher out of China is claiming to have created the world’s first genetically edited babies.

Now neither she nor I have had much of a chance to look at this case too deeply but I think it provides a very nice jumping-off point to consider regulations, ethics, and risks, as they pertain to biology and all emerging sciences. So Catherine, thank you so much for being here.

Catherine: Thank you.

Ariel: I also want to add that we did have another guest scheduled to join us today who is unfortunately ill, and unable to participate, so Catherine, I am doubly grateful to you for being here today.

Before we get too far into any discussions, I was hoping to just go over some basics to make sure we’re all on the same page. In my readings of your work, you talk a lot about biorisk and biosecurity, and I was hoping you could just quickly define what both of those words mean.

Catherine: Yes, in terms of thinking about both biological risk and biological security, I think about the objects that we’re trying to protect. It’s about the protection of human, animal, and plant life and health, in particular. Some of that extends to protection of the environment. The risks are the risks to those objects and security is securing and protecting those.

Ariel: Okay. I’d like to start this discussion where we’ll talk about ethics and policy, looking first at the example of the gain-of-function experiments that caused another stir in the science community a few years ago. That was research done, I believe, on the H5N1 virus, also known as the avian flu, and I believe it made the virus more virulent. First, can you just explain what gain-of-function means? And then I was hoping you could talk a bit about what that research was, and what the scientific community’s reaction to it was.

Catherine: Gain-of-function’s actually quite a controversial term to have selected to describe this work, because a lot of what biologists do is work that would add a function to the organism that they’re working on, without that actually posing any security risk. In this context, it was a gain of a function that would make it perhaps more desirable for use as a biological weapon.

In this case, it was things like an increase in its ability to transmit between mammals. In particular, they were getting it to be transmissible between ferrets in a laboratory, and ferrets are a model for transmission between humans.

Ariel: You actually bring up an interesting point that I hadn’t thought about. To what extent does our choice of terminology affect how we perceive the ethics of some of these projects?

Catherine: I think in this case it was more that the use of that term, which came more from the security and policy community side, made the conversation with scientists more difficult, as it was felt to be mislabeling their research, pulling in research that shouldn’t really come into this kind of conversation about security. So I think that was where it maybe caused some difficulties.

But I think the understanding also needs to go the other way as well: it’s not necessarily the case that all policymakers are going to have that level of detail about what they mean when they’re talking about science.

Ariel: Right. What was the reaction then that we saw from the scientific community and the policymakers when this research was published?

Catherine: There was firstly a stage of debate about whether those papers should be published or not. There was some guidance given by what’s called the National Science Advisory Board for Biosecurity in the US, that those papers should not be published in full. So, actually, the first part of the debate was about that stage of ‘should you publish this sort of research where it might have a high risk of misuse?’

That was something that the security community had been discussing for at least a decade, that there were certain experiments where they felt that they would meet a threshold of risk, where they shouldn’t be openly published or shouldn’t be published with their methodological details in full. I think for the policy and security community, it was expected that these cases would arise, but this hadn’t perhaps been communicated to the scientific community particularly well, and so I think it came as a shock to some of those researchers, particularly because the research had been approved initially, so they were able to conduct the research, but suddenly they would find that they can’t publish the research that they’ve done. I think that was where this initial point of contention came about.

It then became a broader issue: more generally, how do we handle these sorts of cases? Are there times when we should restrict publication? Or is open publication actually going to be a better way of protecting ourselves, because we’ll all know about the risks as well?

Ariel: Like you said, these scientists had gotten permission to pursue this research, so it’s not like it was obviously questionable, or that they had any reason to think it was too questionable to begin with. And yet, I guess there is that issue of how scientists can think about some of these questions longer term and maybe recognize in advance that the public or policymakers might find their research concerning. Is that something that scientists should be trying to do more of?

Catherine: Yes, and I think that’s part of this point about the communication between the scientific and policy communities, so that these things don’t come as a surprise or a shock. Yes, I think there was something in this. If we’re allowed to do the research, should we not have had more conversation at the earlier stages? I think in general I would say that’s where we need to get to, because if you’re trying to intervene at the stage of publication, it’s probably already too late to really contain the risk of publication, because for example, if you’ve submitted a journal article online, that information’s already out there.

So yes, trying to take it further back in the process, so that the beginning stages of designing research projects these things are considered, is important. That has been pushed forward by funders, so there are now some clauses about ‘have you reviewed the potential consequences of your research?’ That is one way of triggering that thinking about it. But I think there’s been a broader question further back about education and awareness.

It’s all right if you’re being asked that question, but do you actually have information that helps you know what would be a security risk? And what elements might you be looking for in your work? So there’s this question more generally of how do we build awareness amongst the scientific community that these issues might arise, and train them to be able to spot some of the security concerns that may be there?

Ariel: Are we taking steps in that direction to try to help educate both budding scientists and also researchers who have been in the field for a while?

Catherine: Yes, there have been quite a lot of efforts in that area, probably over the last decade or so, done by academic groups and civil society. Education and awareness raising have been encouraged by states-parties to the Biological Weapons Convention, and also by the World Health Organization, which has a document on responsible life sciences research that encourages education and awareness-raising efforts.

I think that those have further to go, and I think some of the barriers to their being taken up are the familiar things: it’s very hard to find space in a scientific curriculum for that teaching, and more resources are needed in terms of where the materials are that you would go to. That is being built up.

We’re also talking mainly about the scientific curriculum at the undergraduate and postgraduate level, but how do you extend this throughout scientific careers as well? There needs to be a way of reaching scientists at all levels.

Ariel: We’re talking a lot about the scientists right now, but in your writings, you mention that there are three groups who have responsibility for ensuring that science is safe and ethical. Those are one, obviously the scientists, but then also you mention policymakers, and you mention the public and society. I was hoping you could talk a little bit about how you see the roles for each of those three groups playing out.

Catherine: I think these sorts of issues, they’re never going to be just the responsibility of one group, because there are interactions going on. Some of those interactions are important in terms of maybe incentives. So we talked about publication. Publication is of such importance within the scientific community and within their incentive structures. It’s so important to publish, that again, trying to intervene just at that stage, and suddenly saying, “No, you can’t publish your research” is always going to be a big problem.

It’s to do with the norms and the practices of science, but some of that, again, comes from the outside. Are there ways we can reshape those sorts of structures so that they would be more useful? That’s one way of thinking about it. I think we need clear signals from policymakers as well, about when to take threats seriously or not. If we’re not hearing from policymakers that there are significant security concerns around some forms of research, then why should we expect the scientist to be aware of it?

Yes, policy also has control and governance mechanisms within it, so it can be very useful. In terms of deciding what research can be done, that’s often done by funders and government bodies, and not by the research community themselves. Then there’s trying to think more broadly about how to bring in the public dimension. I think what I mean there is that it’s about all of us being aware of this. It shouldn’t be isolating one particular community and saying, “Well, if things go wrong, it was you.”

Socially, we’ve got decisions to make about how we feel about certain risks and benefits and how we want to manage them. In the gain-of-function case, the research that was done had the potential for real benefits for understanding avian influenza, which could produce a human pandemic, and therefore there could be great public health benefits associated with some of this research that also poses great risks.

Again, when we’re dealing with something that for society, could bring both risks and benefits, society should play a role in deciding what balance it wants to achieve.

Ariel: I guess I want to touch on this idea of how we can make sure that policymakers and the public – this comes down to a three-way communication. I guess my question is, how do we get scientists more involved in policy, so that policymakers are informed and there is more of that communication? I guess maybe part of the reason I’m fumbling over this question is that it’s not clear to me how much responsibility we should be putting specifically on scientists for this, versus how much responsibility goes to the other groups.

Catherine: On scientists becoming more involved in policy: that’s another part of thinking about the relationship between science and policy, and science and society. We expect policymakers to consider how to have regulation and governance that’s appropriate to scientific practice and to emerging technologies, and if that’s to keep up with advances in science and technology, then they need information from the scientific community about those things. There’s a responsibility of policymakers to seek some of that information, but also for scientists to be willing to engage in the other direction.

I think that’s the main answer to how they could be more informed and how there could be more communication. I think some of the useful ways that’s done at the moment is by having, say, meetings with a horizon scanning element, so that scientists can have input on where we might see advances going. But if you also include in the participation policymakers, and maybe people who know more about things like technology transfer, startups, and investments, they can see what’s going on in terms of where the money’s going. Bringing those groups together to look at where the future might be going is quite a good way of capturing some of those advances.

And it helps inform the whole group, so I think those sorts of processes are good, and there are some examples of those, and there are some examples where the international science academies come together to do some of that sort of work as well, so that they would provide information and reports that can go forward to international policy processes. They do that for meetings at the Biological Weapons Convention, for example.

Ariel: Okay, so I want to come back to this broadly in a little bit, but first I want to touch on biologists and ethics and regulation a little bit more generally. Because I guess I keep thinking of the famous Asilomar meeting from, I think, the mid-’70s, in which biologists got together, recognized some of the risks in their field, and chose to pause the work that they were doing, because there were ethical issues. I tend to credit them with being more ethically aware than a lot of other scientific fields.

But it sounds like maybe that’s not the case. Was that just a special example in which scientists were unusually proactive? I guess, should we be worried about scientists and biosecurity, or is it just a few bad apples like we saw with this recent Chinese researcher?

Catherine: I think in terms of ethical awareness, it’s not that I don’t think biologists are ethically aware, but there can be a lot of different things coming onto their agendas, and again, those can be pushed out by other practices within your daily work. So, for example, one of the things in biology is that often it’s quite close to medicine, and there’s been a lot over the last few decades about how we treat humans and animals in research.

There’s ethics and biomedical ethics; there’s practices to do with consent and the participation of human subjects that people are aware of. It’s just that sometimes you’ve got such an overload of all these different issues you’re supposed to be aware of and responding to (sustainable development and environmental protection is another one) that often things will fall off the agenda, or knowing which you should prioritize can be difficult.

I do think there’s a lack of awareness of the past history of biological warfare programs, and the fact that scientists have always been involved with them, and then, looking forward, of how much easier it may be, because of the trends in technology, for more actors to have access to such technologies, and the implications that might have.

I think that picks up on what you were saying about, are we just concerned about the bad apples? Are there some rogue people out there that we should be worried about? I think there’s two parts to that, because there may be some things that are more obvious, where you can spot, “Yeah, that person’s really up to something they shouldn’t be.” I think there are probably mechanisms where people do tend to be aware of what’s going on in their laboratories.

Although, as you mentioned, in the recent Chinese case of potentially CRISPR gene-edited babies, it seems clear that people within that researcher’s laboratory didn’t know what was going on, the funders didn’t know what was going on, and the government didn’t know what was going on. So yes, there will be some cases where someone is very obviously doing something bad.

I think that’s probably an easier thing to handle and to conceptualize. But we’re now getting these questions about scientific work and research that’s done for clear benefits, and you’re doing it for those beneficial purposes, but how do you work out whether the results of that could be misused by someone else? How do you frame whether you have any responsibility for how someone else would use it when they may well not be anywhere near you in a laboratory? They may be very remote, and you probably have no contact with them at all, so how can you judge and assess how your work may be misused, and then try and make some decision about how you should proceed with it? I think that’s a more complex issue.

That probably does, as you say, speak to whether there are things in scientific cultures and working practices that might assist with dealing with that, or might make it problematic. Again, as I think I’ve picked up on a few times, there’s a lot going on in terms of the sorts of incentive structures that scientists are working in, which more broadly meet up with global economic incentives. Again, not knowing the full details of the recent Chinese CRISPR case, there can often be almost racing dynamics between countries to have done some of this research and to be ahead in it.

I think that did happen with the gain-of-function experiments, so that when the US had a moratorium on doing them, China ramped up its experiments in the same area. There’s all these kinds of incentive structures going on as well, and I think those do affect wider scientific and societal practices.

Ariel: Okay. Quickly touching on some of what you were talking about, in terms of researchers who are doing things right, in most cases I think what happens is this case of dual use, where the research could go either way. I think I’m going to give scientists the benefit of the doubt and say most of them are actually trying to do good with their research. That doesn’t mean that someone else can’t come along later and then do something bad with it.

This is I think especially a threat with biosecurity, and so I guess, I don’t know that I have a specific question that you haven’t really gotten into already, but I am curious if you have ideas for how scientists can deal with the dual use nature of their research. Maybe to what extent does more open communication help them deal with it, or is open communication possibly bad?

Catherine: Yes. I think it’s possibly good and possibly bad. It’s a difficult question without putting their practice into context. Again, it shouldn’t be that just the scientist has to think through these issues of dual use and whether it can be misused, if there’s not really any new information coming to them about how serious a threat this might be. Do we know that this is being pursued by any terrorist group? Do we know why that might be of particular concern?

I think another interesting thing is that you might get combinations of technology that have developed in different areas, so you might get someone who does something that helps with the dispersal of an agent who is entirely disconnected from someone who might be working on an agent that would be useful to disperse. Knowing about the context of what else is going on in technological development, and not just within your own work, is also important.

Ariel: Just to clarify, what are you referring to when you say agent here?

Catherine: In this case, again, thinking of biology, so that might be a microorganism. If you were to be developing a biological weapon, you don’t just need to have a nasty pathogen. You would need some way of dispersing, disseminating that, for it to be weaponized. Those components may be for beneficial reasons going on in very different places. How would scientists be able to predict where those might combine and come together, and create a bigger risk than just their own work?

Ariel: Okay. And then I really want to ask you about the idea of the races, but I don’t have a specific question, to be honest. It’s a concerning idea, and it’s something that we look at in artificial intelligence, and it’s clearly a problem with nuclear weapons. I guess, what are the concerns we have when we look at races in biological research?

Catherine: It may not necessarily even be specific to races in biological research. And setting aside military uses of technology, it’s about how we have very strong drivers for economic growth, and how technological advances are seen as really important to innovation and economic growth.

So, I think this does provide a real barrier to collective state action against some of these threats, because if a country can see an advantage of not regulating an area of technology as strongly, then they’ve got a very strong incentive to go for that. It’s working out how you might maybe overcome some of those economic incentives, and try and slow down some of the development of technology, or application of technology perhaps, to a pace where we can actually start doing these things like working out what’s going on, what the risks might be, how we might manage those risks.

But that is a hugely controversial kind of thing to put forward, because the idea of slowing down technology, which is clearly going to bring us these great benefits and is linked to progress and economic progress is a difficult sell to many states.

Ariel: Yeah, that makes sense. I think I want to turn back to the Chinese case very quickly. I think this is an example of what a lot of people fear, in that you have this scientist who isn’t being open with the university that he’s working with, isn’t being open with his government about the work he’s doing. It sounds like even the people who are working for him in the lab, and possibly even the parents of the babies that are involved may not have been fully aware of what he was doing.

We don’t have all the information, but at the moment, at least what little we have sounds like an example of a scientist gone rogue. How do we deal with that? What policies are in place? What policies should we be considering?

Catherine: I think I share where the concerns in this are coming from, because it looks like there’s multiple failures of the types of layers of systems that should have maybe been able to pick this up and stop it, so yes, we would usually expect that a funder of the research, or the institution the person’s working in, the government through regulation, the colleagues of a scientist would be able to pick up on what’s happening, have some ability to intervene, and that doesn’t seem to have happened.

Knowing that these multiple things can all fall down is worrying. I think actually an interesting thing about how we deal with this that there seems to be a very strong reaction from the scientific community working around those areas of gene editing, to all come together and collectively say, “This was the wrong thing to do, this was irresponsible, this is unethical. You shouldn’t have done this without communicating more openly about what you were doing, what you were thinking of doing.”

I think it’s really interesting to see that community push back. In cases like this, if I were a scientist working in a similar area, I’d be really put off by that, thinking, “Okay, I should stay in line with what the community expects me to do.” I think that is important.

Where it also is going to kick in from the more top-down regulatory side as well, so whether China will now get some new regulation in place, do some more checks down through the institutional levels, I don’t know. Likewise, I don’t know whether internationally it will bring a further push for coordination on how we want to regulate those experiments.

Ariel: I guess this also brings up the question of international standards. It does look like we’re getting very broad international agreement that this research shouldn’t have happened. But how do we deal with cases where maybe most countries are opposed to some type of research and another country says, “No, we think it could be possibly ethical so we’re going to allow it?”

Catherine: I think this is, again, a challenging situation. It’s interesting to me; this picks up on the international debates about human cloning from, I’m trying to think, maybe 15-20 years ago, about whether there should be a ban on human cloning. There was a declaration made, a UN declaration against human cloning, but it fell down in terms of actually being more than a declaration, of having something stronger in terms of international law, because basically in that case it came down to the differences between states’ views of the status of the embryo.

Regulating human reproductive research at the international level is very difficult because of some of those issues where like you say, there can be quite significant differences in ethical approaches taken by different countries. Again, in this case, I think what’s been interesting is, “Okay, if we’re going to come across a difficulty in getting an agreement between states and the governmental level, is there things that the scientific community or other groups can do to make sure those debates are happening, and that some common ground is being found to how we should pursue research in these areas, when we should decide it’s maybe safe enough to go down some of these lines?”

I think another point about this case in China was that it’s just not known whether it’s safe to be doing gene editing on humans yet. That’s actually one of the reasons why people shouldn’t be doing it regardless. I hope that goes some way towards an answer. I think it is very problematic that we often find we can’t get broad international agreement on things, even when there seems to be some level of consensus.

Ariel: We’ve been talking a lot about all of these issues from the perspective of biological sciences, but I want to step back and also look at some of these questions more broadly. There’s two sides that I want to look at. One is just this question of how do we enable scientists to basically get into policy more? I mean, how can we help scientists understand how policymaking works and help them recognize that their voices in policy can actually be helpful? Or, do you think that we are already at a good level there?

Catherine: I would say we’re certainly not at an ideal level yet of science and policy. It does vary across different areas of course, so the thing that was coming up into my mind is in climate change, for example, having the intergovernmental panel doing their reports every few years. There’s a good, collaborative, international evidence base and good science policy process in that area.

But in other areas there’s a big deficit I would say. I’m most familiar with that internationally, but I think some of this scales down to the national level as well. Part of it is going in the other direction almost. When I spoke earlier about needs perhaps for education and awareness raising among scientists about some of these issues around how their research may be used, I think there’s also a need for people in policy to become more informed about science.

That is important. I’m trying to think what the ways are that scientists can do that. There are some attempts, when international negotiations are going on, to have … I think I’ve heard them described as mini universities: maybe a week’s worth of quick updates on where the science is at, before a negotiation goes on that’s relevant to that science.

I think one of the key things to say is that there are ways for scientists and the scientific community to have influence both on how policy develops and how it’s implemented, and a lot of this will go through intermediary bodies. In particular, the professional associations and academies that represent scientific communities. They will know, for example, thinking in the UK context, but I think this is similar in the US, there may be a consultation by parliament on how should we address a particular issue?

There was one in the UK a couple of years ago, how should we be regulating genetically modified insects? If a consultation like that’s going on and they’re asking for advice and evidence, there’s often ways of channeling that through academies. They can present statements that represent broader scientific consensus within their communities and input that.

The reason for mentioning them as intermediaries is, again, that it’s a lot of burden to put on individual scientists to say, “You should all be getting involved in policy and informing policy, as another part of what you should be doing as part of your role.” But realizing that you can do that as a collective, rather than it having to be an individual thing, I think is valuable.

Ariel: Yeah, there is the issue of, “Hey, in your free time, can you also be doing this?” It’s not like scientists have lots of free time. But I get the impression that scientists are sometimes a little concerned about getting involved with policymaking because they fear overregulation, and that it could harm their research and the good that they’re trying to do with their research. Is this fear justified? Are scientists hampered by policies? Are they helped by policies?

Catherine: Yeah, so it’s both. It’s important to know that the mechanisms of policy can play facilitative roles, they can promote science, as well as setting constraints and limits on it. Again, most governments are recognizing that the life sciences and biology and artificial intelligence and other emerging technologies are going to be really key for their economic growth.

They are doing things to facilitate and support that, and fund it, so it isn’t only about the constraints. However, I guess for a lot of scientists, the way you come across regulation, you’re coming across the bits that are the constraints on your work, or there are things that make you fill in a lot of forms, so it can just be perceived as something that’s burdensome.

But I would also say that certainly something I’ve noticed in recent years is that we shouldn’t think that scientists and technology communities aren’t sometimes asking for areas to be regulated, asking for some guidance on how they should be managing risks. Switching back to a biology example, but with gene drive technologies, the communities working on those have been quite proactive in asking for some forms of, “How do we govern the risks? How should we be assessing things?” Saying, “These don’t quite fit with the current regulatory arrangements, we’d like some further guidance on what we should be doing.”

I can understand that there might be this fear about regulation, but on what you said about whether this could be the source of the reluctance to engage with policy, I think an important thing to say is that if you’re not engaging with policy, it’s more likely that the regulation is going to end up, not intentionally, working in ways that restrict scientific practice. I think that’s really important as well: maybe the regulation is created in a very well intended way, and it just doesn’t match up with scientific practice.

I think at the moment, internationally this is becoming a discussion around how we might handle the digital nature of biology now, when most regulation is to do with materials. But if we’re going to start regulating the digital versions of biology, so gene sequencing information, that sort of thing, then we need to have a good understanding of what the flows of information are, in which ways they have value within the scientific community, whether it’s fundamentally important to have some of that information open, and we should be very wary of new rules that might enclose it.

I think that’s something again, if you’re not engaging with the processes of regulation and policymaking, things are more likely to go wrong.

Ariel: Okay. We’ve been looking a lot at how scientists deal with the risks of their research, how policymakers can help scientists deal with the risks of their research, et cetera, but it’s all about the risks coming from the research and from the technology, and from the advances. Something that you brought up in a separate conversation before the podcast is: to what extent does risk stem from technology, and to what extent does it stem from how we govern it? I was hoping we could end with that question.

Catherine: That’s a really interesting question to me, and I’m trying to work that out in my own research. One of the interesting and perhaps obvious things to say is it’s never down to the technology. It’s down to how we develop it, use it, implement it. The human is always playing a big role in this anyway.

But yes, I think a lot of the time governance mechanisms are perhaps lagging behind the development of science and technology, and I think some of the risk is coming from the fact that we may just not be governing something properly. I think this comes down to things we’ve been mentioning earlier. We need collectively both in policy, in the science communities, technology communities, and society, just to be able to get a better grasp on what is happening in the directions of emerging technologies that could have both these very beneficial and very destructive potentials, and what is it we might need to do in terms of really rethinking how we govern these things?

Yeah, I don’t have any answer for where the sources of risk are coming from, but I think it’s an interesting place to look, is that intersection between the technology development, and the development of regulation and governance.

Ariel: All right, well yeah, I agree. I think that is a really great question to end on, for the audience to start considering as well. Catherine, thank you so much for joining us today. This has been a really interesting conversation.

Catherine: Thank you.

Ariel: As always, if you’ve been enjoying the show, please take a moment to like it, share it, and follow us on your preferred podcast platform.

[end of recorded material]

Podcast: Can We Avoid the Worst of Climate Change? with Alexander Verbeek and John Moorhead

“There are basically two choices. We’re going to massively change everything we are doing on this planet, the way we work together, the actions we take, the way we run our economy, and the way we behave towards each other and towards the planet and towards everything that lives on this planet. Or we sit back and relax and we just let the whole thing crash. The choice is so easy to make, even if you don’t care at all about nature or the lives of other people. Even if you just look at your own interests and look purely through an economical angle, it is just a good return on investment to take good care of this planet.” – Alexander Verbeek

On this month’s podcast, Ariel spoke with Alexander Verbeek and John Moorhead about what we can do to avoid the worst of climate change. Alexander is a Dutch diplomat and former strategic policy advisor at the Netherlands Ministry of Foreign Affairs. He created the Planetary Security Initiative where representatives from 75 countries meet annually on the climate change-security relationship. John is President of Drawdown Switzerland, an act tank to support Project Drawdown and other science-based climate solutions that reverse global warming. He is a blogger at Thomson Reuters, The Economist, and sciencebasedsolutions.com, and he advises and informs on climate solutions that are economy, society, and environment positive.

Topics discussed in this episode include:

  • Why the difference between 1.5 and 2 degrees C of global warming is so important, and why we can’t exceed 2 degrees C of warming
  • Why the economy needs to fundamentally change to save the planet
  • The inequality of climate change
  • Climate change’s relation to international security problems
  • How we can avoid the most dangerous impacts of climate change: runaway climate change and a “Hothouse Earth”
  • Drawdown’s 80 existing technologies and practices to solve climate change
  • “Trickle up” climate solutions — why individual action is just as important as national and international action
  • What all listeners can start doing today to address climate change

Publications and initiatives discussed in this episode include:

You can listen to this podcast above, or read the full transcript below. And feel free to check out our previous podcast episodes on SoundCloud, iTunes, Google Play and Stitcher.


Ariel: Hi everyone, Ariel Conn here with the Future of Life Institute. Now, this month’s podcast is going live on Halloween, so I thought what better way to terrify our listeners than with this month’s IPCC report. If you’ve been keeping up with the news this month, you’re well aware that the report made very dire predictions about what a future warmer world will look like if we don’t keep global temperatures from rising more than 1.5 degrees Celsius. Then of course there were all of the scientists’ warnings that came out after the report about how the report underestimated just how bad things could get.

It was certainly enough to leave me awake at night in a cold sweat. Yet the report wasn’t completely without hope. The authors seem to still think that we can take action in time to keep global warming to 1.5 degrees Celsius. So to consider this report, the current state of our understanding of climate change, and how we can ensure global warming is kept to a minimum, I’m excited to have Alexander Verbeek and John Moorhead join me today.

Alexander is a Dutch environmentalist, diplomat, and former strategic policy advisor at the Netherlands Ministry of Foreign Affairs. Over the past 28 years, he has worked on international security, humanitarian, and geopolitical risk issues, and the linkage to the Earth’s accelerating environmental crisis. He created the Planetary Security Initiative held at The Hague’s Peace Palace where representatives from 75 countries meet annually on the climate change-security relationship. He spends most of his time speaking and advising on planetary change to academia, global NGOs, private firms, and international organizations.

John is President of Drawdown Switzerland, in addition to being a blogger at Thomson Reuters, The Economist, and sciencebasedsolutions.com. He advises and informs on climate solutions that are economy, society, and environment positive. He effects change by engaging on the solutions to global warming with youth, business, policy makers, investors, civil society, government leaders, et cetera. Drawdown Switzerland is an act tank to support Project Drawdown and other science-based climate solutions that reverse global warming, in Switzerland and internationally, by investment at scale in Drawdown Solutions. So John and Alexander, thank you both so much for joining me today.

Alexander: It’s a pleasure.

John: Hi Ariel.

Ariel: All right, so before we get too far into any details, I want to just look first at the overall message of the IPCC report. That was essentially: two degrees warming is a lot worse than 1.5 degrees warming. So, I guess my very first question is why did the IPCC look at that distinction as opposed to anything else?

Alexander: Well, I think it’s a direct follow-up from the negotiations on the Paris Agreement, where, after all the talk about two degrees, at a very late stage the text included a reference to aiming for 1.5 degrees. At that moment, it invited the IPCC to produce a report by 2018 about what the difference actually is between 1.5 and 2 degrees. Another major conclusion is that it is still possible to stay below 1.5 degrees, but then we really have to do a lot, urgently, and that is basically to cut our carbon pollution by 45% in the next 12 years. So that means we have no day to lose, and governments, business, and people, basically everybody, should get in action. The house is on fire. We need to do something right now.

John: In addition to that, we’re seeing a whole body of scientific study that’s showing just how difficult it would be if we were to get to 2 degrees, and what the differences are. That was also very important. Just for your US listeners, I wanted to clarify, because we’re going to be talking in degrees centigrade: for the sake of argument, if you just multiply by two, every time you hear one degree it’s two degrees Fahrenheit. I just wanted to add that.
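
(For reference, the exact conversion John is rounding here applies to temperature differences: a change of 1 °C equals a change of 1.8 °F, so a 1.5 °C rise is about 2.7 °F and a 2 °C rise is about 3.6 °F.)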

Ariel: Okay great, thank you. So before we talk about how to address the problem, I want to get more into what the problem actually is. And so first, what is the difference between 1.5 degrees Celsius and 2 degrees Celsius in terms of what impact that will have on the planet?

John: So far we’ve already seen a one degree C increase. The impacts that we’re seeing were all predicted by the science, but in many cases we’ve really been quite shocked at just how quickly global warming is happening and the impacts it’s having. I live here in Switzerland, and we’re just now actually experiencing another drought, but in the summer we had the worst drought in eastern Switzerland since 1847. Of course we’ve seen the terrible hurricanes hitting the United States this year and last. That’s one degree. For a 1.5 degree increase, I like to use the analogy of our body temperature: if you increase your body temperature by two degrees Fahrenheit, that’s already quite bad, but if you then increase it by three degrees Fahrenheit, or four, or five, or six, then you’re really ill. That’s really what happens with global warming. It’s not a straight line.

For instance, the difference between 1.5 degrees and two degrees is that heat waves are forecast to increase by over 40%. There was another study that showed that fresh water supply would decrease by 9% in the Mediterranean for 1.5 degrees, but it would decrease by 17% if we got to two degrees. So that’s practically doubling the impact for a change of 1.5 degrees. I can go on. If you look at wheat production, the difference between two and 1.5 degrees is a 70% loss in yield. Sea level rise would be 50 centimeters versus 40 centimeters, and 10 centimeters doesn’t sound like that much, but it’s a huge amount in terms of increase.

Alexander: Just to illustrate that a bit: if you have just a 10 centimeter increase, that means 10 million extra people will be on the move. Or to formulate it another way, I remember when Hurricane Sandy hit New York and the subway flooded. At that moment we had some 20 centimeters of sea level rise since the industrial revolution, which is more or less where we are now. If we didn’t have those 20 centimeters, the subways would not have flooded. So it sounds like nothing, but it has a lot of impact. I think another one that I saw that was really striking is the impact on nature, the impact on insects or on coral reefs. At two degrees, there’s hardly any coral reef left in the world, whereas at 1.5 degrees we would still lose 70-90%, but there could still be some coral reefs left.

John: That’s a great example, I would say, because currently, at a one degree increase, 50% of coral reefs have already died off. So at 1.5 degrees we could reach 90%, and at two degrees we will have wiped out practically all coral reefs.

Alexander: And the humanitarian aspects are massive. I mean, John just mentioned water. I think one of the things we will see in the next decade or two is a lot of water related problems. The number of people that will not have access to water is increasing rapidly. It may double in the next decade. So any indication we get in the report of how many more problems we will see with water if we have that half degree extra is a very good warning. Consider the impact of not having enough water on people’s quality of life, on people going on the move, on increased urbanization, and on tensions in the cities, because there they also have problems with having enough water; and of course water is related to energy and especially food production. So the humanitarian impact of just that half degree extra is massive.

Then one last thing here: we're talking about a global average. If, let's say, it gets two degrees warmer globally, in some areas, in landlocked countries for instance, it will go much faster, and in the Arctic it goes roughly twice as fast, with enormous impacts and the potential positive feedback loops that might come with that.

Ariel: That was something interesting for me to read. I’ve heard about how the global average will increase 1.5 to two degrees, but I hadn’t heard until I read this particular report that that can mean up to 3.5 degrees Celsius in certain places, that it’s not going to be equally distributed, that some places will get significantly hotter. Have models been able to predict where that’s likely to happen?

John: Yeah, and not only that, it's already happening. That's also one of the problems we face when we describe global warming with a single average number: it doesn't portray the big differences that we're seeing. For instance, Switzerland is already at a two degree centigrade increase, and that's had huge implications for us already. We're a landlocked country. We have beautiful mountains, as you know, and beautiful lakes as well, but we're currently seeing things we hadn't seen before: some of our lakes are starting to dry out in this current drought period. Lake levels have dropped very significantly, not in the major lakes that are fed by glaciers, but the glaciers themselves are suffering: out of the 80 glaciers that are tracked in Switzerland, 79 are retreating. They're losing mass.

That's having impacts, and in terms of extreme weather, just this last summer we saw these incredible events – what Al Gore calls water bombs – in Lausanne and Eschenz, two of our cities, where we saw centimeters of rain, months' worth, fall in the space of just a few minutes. This caused all sorts of damage as well.

Just one last point about temperature differences: in northern Europe this last summer we saw temperatures four or five degrees warmer, which caused so much drying out that we saw forest fires we hadn't seen before in places like Sweden and Finland. We also saw in February of this year what scientists call a temperature anomaly of 20 degrees, which meant that for a few days it was warmer at the North Pole than it was in Poland. Averages help us understand the overall trends, but they also hide differences that are important to consider as well.

Alexander: Maybe the term global warming is, let's say for the general public, not the right word, because it sounds a bit like "a little bit warmer," and if it's two degrees warmer than yesterday, I don't care so much. Maybe "climate weirding" or "climate chaos" are better words, because we will just get more extremes. Take, for instance, how the jet stream is moving. It used to move rather quickly around the planet at the height where jets like to fly, about 10 kilometers up. Now, because there's less temperature difference between the equator and the poles, it's getting slower. It's getting a bit lazy.

That means two things. On the one hand, once you have a certain weather pattern, it sticks around longer. On the other hand, this lazy jet stream, a bit like a river that enters the floodplains and starts to meander, makes bigger waves. If the jet stream used to bring cold air from Iceland to the Netherlands, where I'm from, now that it's wavier it brings cold air all the way from Greenland, and the same with warm air, which comes from further south. And it sticks longer in each pattern, so you get longer droughts, you get longer periods of rain; it all gets more extreme. So a country like the Netherlands, which is a delta where we always deal with too much water, like many other countries in the world now experiences drought, which is something we're not used to. We have to ask foreign experts how to deal with drought, because we always just tried to pump the water out.

John: Yeah, I think the French, as is often the case, have the best term for it: dérèglement climatique, this idea of climate disruption.

Ariel: I’d like to come back to some of the humanitarian impacts because one of the things that I see a lot is this idea that it’s the richer, mostly western but not completely western countries that are causing most of the problems, and yet it’s the poorer countries that are going to suffer the most. I was wondering if you guys could touch on that a little bit?

Alexander: Well, I think everything related to climate change comes back to the fact that it is unfair. It is created by countries that are generally less impacted, at least for now. It started, let's say, in western Europe with the industrial revolution, and then the US followed and took over; historically the US has produced the most. Then you have different groups of countries. Take a country in the Sahel like Burkina Faso, for instance. They contributed practically zero to the whole problem, but the impact falls much more heavily on them. Then there's a group of countries in between, say a country like China, which for a long time did not contribute much to the problem and is now rapidly catching up. Then you get this difficult "tragedy of the commons" behavior where everybody points at somebody else for their part, either because of what they did in the past or because of what they do now; everybody can use the statistics to their advantage, apart from these really, really poor countries that are getting the worst of it.

I mean, a country like Tuvalu is just disappearing. That's one of those low-lying island states in the Pacific. They contributed absolutely zero and their country is drowning. They can point at everybody else and nobody will point at them. So there is a huge need to recognize that this is an absolutely global problem that you can only solve by respecting each other, by cooperating, and by understanding that helping other countries is not only a moral obligation but also in your own interest.

John: Yeah. Your listeners are most likely also aware of the sustainable development goals, the objectives the UN set for 2030. There are 17 of them. They include things like no poverty, zero hunger, health, education, gender equality, et cetera. If you look at who is being impacted in a 2 degree versus a 1.5 degree world, you can see that it's particularly the developing and least developed countries that feel the impact most, and that these SDGs become much more difficult if not impossible to reach in a 2 degree world. Which again is why it's so important for us to stay within 1.5 degrees.

Ariel: And so looking at this from more of a geopolitical perspective, in terms of trying to govern and address… I guess this is going to be a couple questions. In terms of trying to prevent climate change from getting too bad, what do countries broadly need to be doing? I want to get into specifics about that question later, but broadly for now what do they need to be doing? And then, how do we deal with a lot of the humanitarian impacts at a government level if we don’t keep it below 1.5 degrees?

Alexander: The broad answer would be two things. First, get rid of the carbon pollution we're producing every day as soon as possible: phase out fossil fuels. The other broad answer parallels what John was just talking about. We have Agenda 2030, we have those 17 sustainable development goals. If we all really followed those and lived up to them, we'd actually get a much better world, because all of these things are integrated. If you just look at climate change in isolation, you are not going to get there. It's highly integrated with all those related problems.

John: Yeah, just in terms of what needs to be done broadly speaking, it's the adoption of renewable energy, massively scaling up the way we produce electricity using renewables. The IPCC suggests it should be 85%, and there are others who say we can even get to 100% renewables by 2050. The other side is everything to do with land use and food; our diet has a huge impact as well. On the one hand, as Alexander said very well, we need to cut down on the emissions caused by industry and fossil fuel use, but on the other hand what's really important is to preserve the natural ecosystems that protect us, and to add forest, not deforest. We need to naturally scale up the capture of carbon dioxide. Those are the two pieces of the puzzle.

Alexander: I don't want to go too much into details, but altogether it ultimately asks for a different kind of economy. In our latest elections, when I looked at the election programs, every party, whether left or right or in the middle, promised something like, "when we're in government, there will be something like 3% economic growth every year." But if you grow 3% every year, that means that roughly every 23 years you double your economy, and in under 50 years you quadruple it. That might be fine if it were only the services industry, but if you talk about production, we cannot keep growing the amount of resources we use and the amount of waste we produce when the Earth itself is not growing. So apart from moving to renewables, it is also about changing the way we use everything around us and the way we consume.

You don't have to grow when you already have it this good, but growth is so deeply built into the system we have used for the past 200, 250 years. Everything is based on growth. And as the Club of Rome said in the early '70s, there are limits to growth; unless our planet were something like a balloon that somebody keeps blowing air into so that it kept growing, you would need a different system. But as long as that is not the case, and as long as there are no other planets we can fly to, that is the question it's very hard to find an answer to. You can conclude that we cannot keep growing, but how do we change that? That's probably a completely different podcast debate, but it's something I wanted to flag here, because at the end of the day you always end up with this question.
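
For reference, the doubling time under steady compound growth follows from a standard bit of arithmetic; at 3% a year an economy doubles in roughly 23 years and quadruples in roughly 47:

```latex
T_{\text{double}} = \frac{\ln 2}{\ln(1+r)}, \qquad
T_{\text{double}}\big|_{r=0.03} = \frac{0.693}{0.0296} \approx 23\ \text{years}, \qquad
T_{\text{quadruple}} = 2\,T_{\text{double}} \approx 47\ \text{years}.
```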

Ariel: This is very much something that I wanted to come back to, especially in terms of what individuals can do; I think consuming less is one of the things we can do to help. So I want to come back to that idea. First, though, I want to talk a little bit more about some of the problems we face if we don't address the problem, and then come back to that. So, going back to the geopolitics of climate change: we've talked about some of the problems that can arise as a result of climate change, but climate change is also thought of as a threat multiplier, so it could trigger other problems. I was hoping you could talk a little bit about some of the threats that governments need to be aware of if they don't address climate change, both in terms of what climate change could directly cause and what it could indirectly cause.

Alexander: There's so much we can cover here. Let's start with security; it's maybe the first one you think of. You'll read in the papers about climate wars and water wars and those kinds of popular terms, which of course is too simplified. But there is a clear correlation between changing climates and security.

We've seen it in many places. You see it where we're seeing more extreme weather now, so let's say in the Sahel area or in the Middle East. There are a lot of examples where, because of rising temperatures and consistently less rainfall, things are getting worse; it's the combination that is worse. You get more periods of drought, so people go on the move. Where do they go? Normally, unlike what many populists in some countries like to claim, they're not immediately going to western countries. People don't want to move far, so they go to an area not too far away which is a little less hit by the drought; but by arriving there, they increase the pressure on the little water and food and other resources that area has. That creates, of course, tensions with the people who are already there.

So think, for instance, about the tension between nomadic herdsmen and agricultural farmers; they all need a little bit of water. There are a lot of examples. There's a well-known graph of world food prices over the past 10 years or so, with two big spikes where food prices, along with energy prices, rapidly went up. The best known is in late 2010. If you then plot the revolutions and uprisings and unrest in the world on that graph, you see that as soon as the world food price index gets above, let's say, 200, there is much more unrest. The 2010 spike was soon followed by the Arab Spring, which is not an automatic connection; some countries had the same drought and saw no unrest, so it's not a one-to-one connection.

So I think you used the right words in saying threat multiplier. On top of all the other problems countries have, with bad governance and fragile economies and all kinds of other development issues that you find in those same SDGs that were mentioned, if you add the climate change problem, you will get a lot of unrest.

But let me add one last thing here. It's not just about security. There's an example, for instance, from when Bangkok was flooding and a factory that produced chips was flooded. Chip prices worldwide suddenly rose something like 10%, and there was a factory in the UK that produced cars that were perfectly ready to sell; the only thing they were missing was this few-centimeters-big electronic chip that needed to go in the car. So they had to close the factory for something like six weeks because of a flood in Bangkok. That just shows that with the interconnected worldwide economy we have, nowhere in the world are you safe from the impacts of climate change.

Ariel: I’m not sure if it was the same flood, but I think Apple had a similar problem, didn’t they? Where they had a backlog of problems with hard drives or something because the manufacturer, I think in Thailand, I don’t remember, flooded.

But anyway, one more problem I want to bring up: at the moment we're talking about actually taking action. I mean, even if we only see global temperatures rise two degrees Celsius, that will be because we took action. But my understanding is that on our current path we will exceed two degrees Celsius. In fact, the US National Highway Traffic Safety Administration report that came out recently basically says that a 4 degree increase is inevitable. So I want to talk about what the world looks like at that level, and then also what runaway climate change is and whether you think we're on a path towards runaway climate change, or if that's still an extreme that hopefully won't happen.

John: There's a very important discussion going on about the point at which we reach a tipping point where, because of positive feedback loops, it just gets worse and worse and worse. There have been some very interesting publications lately trying to understand at what level that would happen, and it turns out the assessment is that it's probably around 2 degrees. At the moment, if you look at the Paris Agreement and what all the countries have committed to, take all those commitments, the actions you were mentioning that have already been started, and play them out until 2030, we would be on a track that ultimately takes us to a 3 degree increase.

Ariel: And to clarify, that’s still with us taking some level of action, right? I mean, when you talk about that, that’s still us having done something?

John: Yeah, if you add up all the plans that countries have committed to, even if they fully implement them, it's not sufficient. We would get to 3 degrees. That's just to say how much action is required; we really need to step up the effort dramatically. That's basically what the 1.5 degrees IPCC report tells us. Even if we only got to 2 degrees, let's not even talk about 3 degrees for the moment, what could happen is that we reach a tipping point into what scientists are describing as a "Hothouse Earth." What that means is that you get so much ice melting. Now, ice and snow serve an important protective function: because they're white, they reflect a lot of heat back out. If all that melts and is replaced by much darker land mass or ocean, that heat is going to be absorbed, not reflected. So that's one positive feedback loop that constantly makes it even warmer, which melts more ice, et cetera.

Another one is the permafrost, which, as its name suggests, is permanently frozen ground in the northern latitudes. The risk is that it starts to melt. The problem is not the permafrost itself, it's all the methane it contains, a very powerful greenhouse gas, which would then get released. That leads to warmer temperatures, which melts even more of the permafrost, et cetera.

That's the whole idea of runaway: we completely lose control, all the natural cooling systems, the trees and so on, start to die back as well, and so we get four, five, six degrees… But as I mentioned earlier, 4 could be 7 in some parts of the world and 2 or 3 in others. If you take it to the extreme of where it could all go, it would make large parts of the world basically uninhabitable.

Ariel: Do we have ideas of how long that could take? Is that something that we think could happen in the next 100 years or is that something that would still take a couple hundred years?

John: Whenever we talk about the temperature increases, we’re looking at the end of the century, so that’s 2100, but that’s less than 100 years.

Ariel: Okay.

Alexander: The problem with looking at the end of the century is that everything always comes back to "the end of the century." It sounds so far away, but it's just 82 years. If you flip back the same distance, you're in 1936; my father was a boy of 10 years old. It's not that far away. My daughter might still live in 2100, and by that time she'll have children and maybe grandchildren who have to live through the next century. It's not that once we reach the year 2100 the problem suddenly stops. We're talking about an accelerating problem. If you stay on the business-as-usual scenario and hardly mitigate anything, then it's 4 degrees at the end of the century, but the temperatures keep rising.

As we already said, 4 degrees at the end of the century is an average. In the worst case scenario it might well be 6. It could also be less. And in the Arctic it could be anywhere between, let's say, 6 and maybe even 11. It's precisely the Arctic where you have this methane John was just talking about, and we don't want to end up with some kind of Venus. That is the world we do not want, and that's what makes it so extremely important to take measures now, because anything you do now is a fantastic investment in the future.

Look at how we treat risks in other areas. Dick Cheney said a couple of years ago that if there's only a 1% chance that terrorists will get weapons of mass destruction, we should act as if they have them. Why don't we do that in this case? If there's only a 1% chance that we get complete destruction of the planet as we know it, we have to take urgent action. So why do we apply that logic to the one risk that, however bad terrorism is, kills relatively few people in terms of big numbers, while for a potential massive killer of millions of people we just say, "Yeah, well, you know, there's only a 50% chance that we end up in this scenario or that scenario"?

What would you do if you were sitting in a plane and at takeoff the pilot says, “Hi guys. Happy to be on board. This is how you buckle and unbuckle your belt. And oh by the way, we have 50% chance that we’re gonna make it today. Hooray, we’re going to take off.” Well you would get out of the plane. But you can’t get out of this planet. So we have to take action urgently, and I think the report that came out is excellent.

The problem is, if you read it a bit too much, and with everybody focusing on it now, you get into this energetic mood like, "Hey, we can do it!" We only talk about corals, we only talk about 1.5 versus 2, because suddenly we're not talking about the three or four or five degree scenarios, which is good for a change because it gives hope. I know that in talks like this I always try to give as much hope as I can and show the possibilities, but we shouldn't forget how serious the thing is that we're actually talking about. So now we can go back to the positive side.

Ariel: Well I am all for switching to the positive side. I find myself getting increasingly cynical about our odds of success, so let’s try to fix that in whatever time we have left.

John: Can I just add briefly, Alex, because I think that's a great comment. It's something I'm also sometimes confronted with by fellow climate change folk: they come up to me after they've heard me talk about what the solutions are, and they tell me, "Don't make it sound too easy either." I think it's a question of balance. When we do talk about the solutions, and we'll hear about them, do bear in mind just how much change is involved. It really is very significant change that we need to embark on to stay within 1.5 degrees, or anywhere close to it.

Alexander: There are basically two choices. Either we massively change everything we are doing on this planet, the way we work together, the actions we take, the way we run our economy, and the way we behave towards each other, towards the planet, and towards everything that lives on it. Or we sit back and relax and just let the whole thing crash. The choice is so easy to make, even if you don't care at all about nature or the lives of other people. Even if you just look at your own interests, purely through an economic lens, it is simply a good return on investment to take good care of this planet.

The only reason we don't is that those who have so much political power are so closely connected to the big corporations that look for short-term profits. Certainly not all of them, but the really influential ones, and I'm certainly thinking about the country of our host today. They have so much impact on the policies that are made, and their sole interest is the next quarterly financial report. That is not in the interest of the people of this planet.

Ariel: So this is actually a good transition to a couple of questions that I have. I actually did start looking at the book Drawdown, which talks about, what is it, 80 solutions? Is that what they discuss?

John: Yeah, 80 existing solutions, technologies, or practices, and then there are 20 of what they call coming attractions in addition to that. But it's the 80 we're talking about, yeah.

Ariel: Okay, so I started reading that, and I read the introduction and the first chapter and felt very, very hopeful. I started reading about some of the technologies and I still felt hopeful. Then, as I continued reading and began to fully appreciate just how many technologies have to be implemented, I started to feel less hopeful. So before we talk too much about the specific technologies, as someone who's in the US, one of the questions I have is: even if our federal government isn't going to take action, is it still possible for those of us who do believe climate change is an issue to take enough action to counter that?

John: That's an excellent question and a very apropos one as well. My take on this: I had the privilege of being at the Global Climate Action Summit in San Francisco. You're living it, but I think there are basically two worlds in the United States at the moment, at least two. What really impressed me, however, was that you had people of all political persuasions, you had indigenous people, you had union leaders, you had mayors and city leaders. You also had some national leaders there, particularly those whose countries are going to be most impacted by climate change. What really excited me was the number of commitments coming at us throughout those days: one city that's going to go completely renewable, and so on.

We had so many examples of those. In particular, if you're talking about the US, California, which if it were its own country would be the fifth largest economy I believe, is committed to achieving 100% renewable energy by 2050. There was also the mayor of Houston, for instance, who explained how quickly he wanted to achieve 100% renewables. That's very exciting, and that movement I think is very important. It would of course be much, much better to have national leaders fully back this as well, but I think there's a trickle-up aspect, and I don't know if this is the right time to talk about the exponential growth that can happen. Maybe when we talk about the specific solutions we can talk about just how quickly they can go, particularly when you have a popular movement around saving the climate.

A couple of weeks ago I was in Geneva. There was a protest there. Geneva is actually quite a conservative city; you've got some wonderful chocolate, as you know, but also a lot of banks and so on. At the march there were, according to the organizers, 7,000 people. It was really impressive to see that in Geneva, which is not that big a city. The year before, at the same march, there were 500. So we've increased the numbers by more than a factor of 10, and I think there are a lot of communities and citizens being affected who are saying, "I don't care what the federal government's doing. I'm going to put a solar panel on my roof. I'm going to change my diet, because it's cheaper, it saves me money, and it's also much healthier," and it gives much more resilience when a hurricane comes around, for instance.

Ariel: I think now is a good time to start talking about what some of the solutions are. I wanna come back to the idea of trickle up, because I’m still gonna ask you guys more questions about individual action as well, but first let’s talk about some of the things that we can be doing now. What are some of the technological developments that exist today that have the most promise that we should be investing more in and using more?

John: What I perhaps wanted to do is take a little step back, because the IPCC does talk about some very unpleasant things that could happen to our planet, but they also talk about the steps needed to stay within 1.5 degrees, and there are some other plans we can discuss that also achieve that. So what does the IPCC tell us? You mentioned it earlier. First of all, we need to cut carbon dioxide and other greenhouse gas emissions in half every decade. That's something called the Carbon Law. It's very convenient, because you can define your objective and say, okay, every 10 years I need to cut emissions in half. That's number one.

Number two is that we need to move dramatically to renewables. There's no other way: because of the emissions they produce, fossil fuels will no longer be an option. We have to go renewable as quickly as possible. It can be done by 2050. There's a professor at Stanford called Mark Jacobson who, with an international team, has mapped out the way to get to 100% renewables for 139 countries. It's called The Solutions Project. Number three has to do with fossil fuels: what the IPCC says is that there should be practically no coal being used in 2050. That's where there are some differences.

Basically, as I mentioned earlier, on the one hand you have your emissions and on the other hand you have capture, the sequestration of carbon by soils and vegetation. One puts CO2 into the air and the other takes it out, and the two need to come into balance, so we obviously need to favor sequestration. It's an area-under-the-curve problem: you have a certain budget associated with a given temperature increase. If you emit more, you need to absorb more. There's just no two ways about it.
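
To make that area-under-the-curve point concrete, here is a minimal numerical sketch of the Carbon Law trajectory John describes, compared with a flat-emissions path. The starting value of roughly 40 GtCO2 per year and the remaining budget of roughly 500 GtCO2 for 1.5 degrees are assumed, illustrative round numbers, not figures from the conversation:

```python
# Rough sketch: "Carbon Law" (halve emissions every decade) vs. flat emissions.
# The numbers are assumed, illustrative values: ~40 GtCO2/yr of current
# emissions and a remaining 1.5 C budget on the order of ~500 GtCO2.

START_YEAR = 2020
EMISSIONS_START = 40.0     # GtCO2 per year (assumed)
BUDGET_1P5 = 500.0         # GtCO2 (assumed)

def carbon_law(year: int) -> float:
    """Annual emissions if we halve them every decade from START_YEAR."""
    return EMISSIONS_START * 0.5 ** ((year - START_YEAR) / 10.0)

def cumulative(path, years) -> float:
    """Area under the curve: total emissions over the given years."""
    return sum(path(y) for y in years)

years = range(START_YEAR, 2101)
halving_total = cumulative(carbon_law, years)
flat_total = cumulative(lambda y: EMISSIONS_START, years)

print(f"Halving every decade, 2020-2100: {halving_total:6.0f} GtCO2")
print(f"Flat emissions, 2020-2100:       {flat_total:6.0f} GtCO2")
print(f"Years of flat emissions to spend the budget: {BUDGET_1P5 / EMISSIONS_START:.1f}")
```

Under these assumptions, halving every decade keeps the cumulative total in the same ballpark as the budget, while flat emissions spend it in about 12 years.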

The IPCC is actually quite conservative in that respect, because they're saying there will still be coal around, whereas there are other plans, such as Drawdown and the Exponential Climate Action Roadmap, as well as The Solutions Project which I just mentioned, that get us to 100% renewables by 2050, and so to zero emissions for the sake of argument.

The other difference with the IPCC, I would say, is that you are faced with this tremendous problem of all the carbon dioxide we need to take out of the atmosphere, which is where the name Drawdown comes from: the term means drawing the carbon dioxide out of the atmosphere. There's a technology around for this, basically called energy crops: you grow crops for energy. That gives us a bit of an issue, because it encourages politicians to think there's a magic wand we'll be able to wave in the future to suddenly remove the carbon dioxide. I'm not saying we won't ever need it; what I am saying is that we can get there with, for instance, Drawdown's 80 solutions.

Now, in terms of what's promising, the important thing, I think, is that our thinking has to evolve from the magic-bullet syndrome we all live with every day, always wanting to find the one magic solution that will solve everything, to thinking more holistically about the whole of the Earth's planetary systems, how they interact, and how we can achieve solutions that way.

Alexander: Can I ask something, John? Would you say that Drawdown, with its 80 technologies, relies completely on proven technology, whereas in the recent 1.5 report I have the impression that practically every solution they come up with relies on technologies that are still unproven, still on the drawing board, or tested only at a very small scale? Is there a difference between those two approaches?

John: Not exactly. I think there's actually a lot of overlap. A lot of the same solutions that are in Drawdown appear in all the climate plans, so we keep coming back to the same set, which is actually very reassuring, because that's the way science works: it empirically tests and models all the different solutions. So whenever I read different approaches, I always look back at Drawdown and say, "Okay, yes, that's in the 80 solutions." So there is a lot of overlap; a lot of the IPCC's solutions are Drawdown solutions. But the IPCC works a bit differently, because the scientists have to work with governments on the proposals, so there's a process of negotiating how far they can take things, which scientists such as the Project Drawdown scientists are unfettered by.

They just go out and look for what's best. They don't care whether it's politically sensitive or not; they will say what they need to say. But I think the big area of concern is this famous bio-energy with carbon capture and storage (BECCS): the energy crops that you grow and whose carbon dioxide you then capture, so you are actually removing carbon dioxide. On the one hand there's a moral hazard, because politicians will say, "Okay, I'm just going to wait until BECCS comes around and that will solve all our problems." On the other hand, it poses serious questions about competition for land between growing crops for food and growing crops for energy.

Ariel: I actually want to follow up on Alexander's question really quickly, because I've gotten a similar impression that some of the stuff in the IPCC report relies on technologies that are still in development. But my understanding is that the Drawdown solutions are, in theory at least if not in practice, ready to scale up.

John: They’re existing technologies, yeah.

Ariel: So when you say there’s a lot of overlap, is that me or us misunderstanding the IPCC report or are there solutions in the IPCC report that aren’t ready to be scaled up?

John: The approaches are a bit different. The approach Drawdown takes is bottom-up: they basically unleashed 65 scientists to go out and look for the best solutions in all the literature. And it just so happens that nuclear energy is one of them. It doesn't produce greenhouse gas emissions; it's a way of producing energy that doesn't cause climate change. A lot of people don't like that, of course, because of all the other problems we have with nuclear. But let me quickly reassure you that there are three scenarios for Drawdown. It goes from the so-called "Plausible" scenario, which I don't like as a name because it suggests the other ones might not be plausible, but it's the most conservative one; then the second one is "Drawdown"; then the third one is "Optimum."

Optimum doesn't include the solutions that are considered "regrets" solutions, such as nuclear. So when you go Optimum, it's basically 100% renewable; there's no nuclear energy in the mix either. That's very positive. In terms of the solutions, what the IPCC looks at is the trajectory you could achieve given existing technologies. So they talk about renewables, they talk about fossil fuels going down to net zero, they talk about natural climate solutions, but they don't talk about, for instance, educating girls, which is one of the most important Drawdown solutions, because of the approach Drawdown takes of looking at everything. Sorry, that's a bit of a long answer to your question.

Alexander: That's actually part of the beauty of Drawdown, that they look so broadly. Take educating girls: a girl leaving school at 12 has on average something like five children, while a girl who stays in school until 18 has on average about two, and they will have a better quality of life and put much less pressure on the planet. So I like this more holistic approach of Drawdown very much, and I think it's good to see so much overlap between Drawdown and the IPCC. But I was struck by how heavily the IPCC relies on still unproven technologies. I guess we have to bet on all our horses and treat this a bit like a wartime economy. Look at the creativity and innovation we saw during the Second World War, in technology as well as in government, by the way, or at the race to the moon and the amazing technology that was developed in such a short time.

Once you really dedicate all your knowledge and creativity and finances and political will to solving this, we can solve it. That is what Drawdown is saying and that is also what the IPCC 1.5 report is saying. We can do it, but we need the political will and we need to mobilize the strengths that we have. Unfortunately, when I look around worldwide, the trend in many countries is exactly the opposite. I think Brazil might soon be the latest one we should be worried about.

John: Yeah.

Ariel: So this is, I guess, where I'm most interested in what we can do, and also possibly the most cynical, and it comes back to the trickle-up idea you were talking about. We don't have the political will right now. So what do those of us who do have the will do? How do we make that transition from people caring to governments caring? Because, maybe this is me being optimistic, but I do think that if we can get enough people taking individual action, that will force governments to start taking action.

John: So trickle up, grassroots, I think we're in the same sort of idea. I think it's really important, before we get into the solutions, to talk about these not just as solutions to global warming but as solutions to a lot of other problems as well, such as air pollution, our health, and the pollution we see in the environment. Alexander, you were talking earlier about the huge transformation, but transformation does not necessarily have to mean sacrifice. Certain things are certainly a good idea: I think you were going to ask a question about flying, and flying less, there's no doubt about that, or perhaps not buying the 15th set of clothes, and so on and so forth.

So there certainly is an element of that, although the positive side of it is the circular economy. In fact, with these solutions it's not a question of no growth or less growth, but of different growth. I think one mistake we have made in the climate change discussion is emphasizing the "don't do this" too much. That's also what's really interesting about Drawdown: there are no real judgments in there. They're basically saying, "These are the facts." If you have a plant-based diet, you will have a huge impact on the climate versus if you eat steak every day, right? But it's not making a judgment. Rather than "don't eat meat," it's saying "eat plant-based foods."

Ariel: So instead of saying don’t drive your car, try to make it a competition to see who can bike the furthest each week or bike the most miles?

John: For example, yeah. Or consider buying an electric car if you absolutely have to have a car. I mean in the US it’s more indispensable than in Europe.

Alexander: In the US it also means that when you build new cities, try to build them in a cleverer way than the US has been doing up until now, because if you're in America and you want to buy, whatever, a new toothbrush, you have to get in your car to go get it. When I'm in Europe, I just walk out of the door and within 100 meters I can buy a toothbrush somewhere. I walk or I go by bicycle.

John: That might be a longer-term solution.

Alexander: Well, actually it's not. In the next 30 years, the amount of investment that will go into new cities is something like 90 trillion dollars. The city patterns we have in Europe, the centers of our cities, were laid out in the Middle Ages. So although it is urgent and we have to do a lot of things now, you should also think about the investments you make now that will be with us for hundreds of years. We shouldn't keep repeating the mistakes of the past. These are the kinds of things we should also talk about. But to come back to your question of what we can do individually, there is so much you can do that helps the planet.

Of course, you're only one out of seven billion people, although if you listen to this podcast it's likely that you are part of the elite within those seven billion that is consuming much more of the planet than the quota you should really be allowed. But it means, for instance, changing your diet. If you go to a plant-based diet, the perks are not only that it's good for the planet; it's good for you as well. You have less chance of developing cancer or heart disease or all kinds of other things you don't want to have. You will live longer, and you will have a healthy life for longer.

It also means that you discover all kinds of wonderful recipes you had never heard of before, when you were still eating steak every day, and it's a fantastic contribution for the animals that are tortured daily, on an unimaginable scale, all over the world, locked up in small cages. You don't see it when you buy meat at a butcher, but you are responsible, because they do it because you are the consumer. So stop doing that. Better for the planet, better for the animals, better for yourself. The same goes for using your bicycle and walking more. I still have a car. It is 21 years old, it's the only car I ever bought in my life, and I use it at most 20 minutes per month. I'm not even buying an electric vehicle because I've still got an old one. There's a lot you can do, and it has more advantages than just those for the planet.

John: Absolutely. Actually, walkable cities is one of the Drawdown solutions. Maybe I can mention something very quickly: out of the 80 solutions, there was a very interesting study showing that 30 of them could be put into place today, and that those add up to about 40% of the greenhouse gas reductions we'll be able to achieve.

I'll just list them quickly. The ones at the end are more relevant if you're in an agricultural setting, which is probably not the case for many of your listeners. But: reduced food waste, plant-rich diets, clean cookstoves, composting, electric vehicles which we talked about, ride sharing, mass transit, telepresence (basically video conferencing, where there's a lot of progress being made, which means we perhaps don't need to take that airplane), hybrid cars, bicycle infrastructure, walkable cities, electric bicycles, rooftop solar, solar water (heating your hot water using solar), methane digesters (more of an agricultural setting, where you use biomass to produce methane), then LED lighting, which is a 90% gain compared to incandescent, household water saving, smart thermostats, household recycling and recyclable paper, and micro wind (some people are putting a little wind turbine on their roof).

Now these next ones have to do with agriculture, so they're things like silvopasture, tropical staple trees, tree intercropping, regenerative agriculture, farmland restoration, managed grazing, farmland irrigation, and so on. If you add all those up, it's already 37% of the solution, and I suspect those first 20 probably account for a good 20%. Those are things you can do tomorrow, or rather today.

Ariel: Those are helpful, and we can find them all at drawdown.org, which also lists all 80. So, you've brought this up a couple of times: let's talk about flying. This was one of those things that really hit home for me. I've done the carbon footprint thing, and I have an excellent carbon footprint right up until I fly, and then it just explodes. As soon as I start adding the footprint from my flights, it's just awful. I find it frustrating that so many scientists especially have to… I mean, it's not even that they're flying, it's that they have to fly if they want to develop their careers. They have to go to conferences. They have to go speak places. I don't even know where the responsibility should lie, but it seems like maybe we need to be cutting back on all of this in some way, that people need to be trying to do more. I'm curious what you guys think about that.

Alexander: Well, start by paying tax, for instance. Why is it — well, I know why it is — but it's absurd that when you fly in an airplane you don't pay tax. You can fly all across Europe for something like 50 euros or 50 dollars. That is crazy. If you did the same by car, you'd pay tax on the petrol you buy. And worse, you are not charged for the pollution you cause. We know that airplanes are heavily polluting; it's not only the CO2 they produce, but where and how they produce it, so the warming works something like three to four times faster than the same amount of CO2 produced by driving your car. So we know how bad it is; then make people pay for it. Just make flying more expensive. Pay for the carbon you produce. When I produce waste at home, I pay my municipality because they pick it up and have to take care of my garbage, but if I put garbage in the atmosphere, somehow I don't pay. Actually, in all sorts of strange ways it's subsidized, because you don't pay tax on it; worldwide there is something like five or six times as much subsidy for fossil fuels as there is for renewables.

We have to completely change the system. Give people a budget, maybe; there could be many solutions. You could say that everybody has the right to a certain budget for flying or for carbon, and maybe you can trade it or swap it or whatever. There are some NGOs that do this. I think it's the World Wildlife Fund, but correct me if I'm wrong: all the people working there get not only a budget for their projects, they also get a carbon budget. You then have to choose, am I going to this conference or that conference, or should I take the train, and you keep track of what you're doing. That's something we should maybe roll out on a much bigger scale. And make flying more expensive.

John: Yeah, the whole idea of a carbon tax, I think, is key. I think that's really important. Some other thoughts: definitely reduce. Do you really, absolutely need to make that trip? Think about it. With webcasting and video conferencing we can now do a lot more without flying. The other thing I suggest is that when at some point you absolutely do have to travel, try to combine the trip with as many other things as possible, perhaps things that are not directly professional. And if you're already in the climate change field, then at least you're traveling for a reason. Then it's a question of the offsets: using calculators you can see what the emissions were and pay for what's called an offset. That's another option as well.
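
As a very rough sketch of what those flight calculators do, here is the back-of-the-envelope version; the emission factor, the uplift for non-CO2 effects at altitude, and the business-class multiplier are assumed, illustrative values, and real calculators use more detailed models:

```python
# Back-of-the-envelope flight footprint, in the spirit of the online
# calculators mentioned above. All factors are assumed, illustrative values.

ECONOMY_KG_CO2_PER_KM = 0.10   # assumed ~0.1 kg CO2 per passenger-km in economy
ALTITUDE_MULTIPLIER = 2.0      # assumed uplift for non-CO2 warming effects
BUSINESS_MULTIPLIER = 2.0      # business class takes roughly twice the space

def flight_footprint_kg(distance_km: float, business: bool = False,
                        round_trip: bool = True) -> float:
    """Estimated CO2-equivalent footprint for one passenger, in kilograms."""
    kg = distance_km * ECONOMY_KG_CO2_PER_KM * ALTITUDE_MULTIPLIER
    if business:
        kg *= BUSINESS_MULTIPLIER
    if round_trip:
        kg *= 2
    return kg

# Example: London to New York is roughly 5,600 km each way.
print(f"Economy round trip:  {flight_footprint_kg(5600):,.0f} kg CO2e")
print(f"Business round trip: {flight_footprint_kg(5600, business=True):,.0f} kg CO2e")
```

With these assumptions a transatlantic round trip comes out to roughly two tonnes of CO2-equivalent in economy and twice that in business, which is why a single flight can dominate an otherwise modest footprint.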

Ariel: I've heard mixed things about offsets. In some cases I see that yes, you should absolutely buy them, and if you fly, you should get them. But in a lot of cases they seem to be a bandaid, or they might make it seem like it's okay to fly when it's still not the solution. I'm curious what your thoughts on that are.

John: For me, something like an offset should as much as possible be a last resort: you absolutely have to make the trip, it's really important, and so you offset it. You pay for some trees to be planted in the rainforest, for instance; there are loads of different possibilities. But relying on offsets is not a good idea. Unfortunately, Switzerland's plan, for instance, includes a lot of getting others to reduce emissions. You can argue that it's cheaper to do it that way, that somebody else might do it more cheaply for you, so to speak: it's cheaper to plant a tree in the rainforest, and it will have more impact there than in Switzerland. But it's something I think we really have to avoid, also because in the end the green economy is where the future lies and what we need to transform into. If we're constantly getting others to do the decarbonization for us, we'll be stuck with an industry that will ultimately become very expensive. That's not a good idea either.

Alexander: I think the prices are also absolutely unrealistic. If you fly, let's say, from London to New York, your personal share, just the fact that you were on the plane, not counting all the other people, is responsible for about three square meters of Arctic ice melting. You can offset that by paying something like, what is it, 15 or 20 dollars. That makes ice in the Arctic extremely cheap: a square meter would be worth something like five to seven dollars. Well, I personally believe it's worth much more.
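
The implied price Alexander is objecting to is just the offset cost divided by the melt area:

```latex
\text{implied price per m}^2 \;=\; \frac{\text{offset cost}}{\text{ice melted}}
\;\approx\; \frac{\$15\text{--}\$20}{3\ \text{m}^2}
\;\approx\; \$5\text{--}\$7 \text{ per square meter of Arctic sea ice.}
```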

And then the thing is, they're going to plant a tree that takes a long time to grow. By the time it's big and pulling CO2 out of the air, if they cut it down and make newspapers out of it, which you then burn in a fireplace, the carbon is right back where it was. So you need to think really carefully about what you're doing. To me it feels very much like going to a priest and saying, "I have flown. Oh, I have sinned, but I can now do a few prayers and pay these $20 and now it's fine. I can book my next flight." That is not the way it should be. Make people pay up front, in the ticket price, for the pollution and for the harm they are causing to this planet and to their fellow citizens on it.

John: Couldn't agree more. But there are offset providers in the US; look them up, see which one you like best, and perhaps buy more offsets. Economy is half the carbon of business class, I hate to say.

Alexander: That touches something for me. I decided long ago, six or seven years ago, that I would never fly business again in my life. Even as somebody who has had a thrombosis and whose doctors advised me to fly business, I don't. I still fly, though. I'm very much like Ariel in that my footprint is okay until the moment I start adding flying, because I do that a lot for my job. In the next few weeks, for example, I have a meeting in the Netherlands and, only 20 days later, a meeting in England, so I stay in the Netherlands in between, and I do all my travel to Belgium and France and the UK by train. The only flight is going back from London to Stockholm, because I couldn't find any reasonable way to do that leg otherwise. I wonder why we don't have high-speed train connections all the way up to Stockholm.

Ariel: We talked a lot about taxing carbon. I had an interesting experience recently. I'm doing what I can to try not to drive when I'm in town; I'm trying to either bike or take the bus. What often happens is that works great until I'm running late for something, and then I just drive because it's easier. But the other week I was giving a little talk on the CU Boulder campus, and the parking at CU Boulder is just awful. No matter how late I'm running, there is absolutely no way it's more convenient for me to take my car. It never even once dawned on me to take the car; I took a bus. It's that much easier. I thought that was really interesting, because I don't care how expensive you make gas or parking, if I'm running late I'm probably going to pay for it. Whereas if you make driving so inconvenient that it just makes me later, I won't do it. I was wondering: how can we do more things like that, where there's also this inconvenience factor?

Alexander: Have a look at Europe. Coincidentally, I know CU Boulder and I know how difficult the parking is; that's part of the brilliance of Boulder, where I see a lot of brilliant things. It's what we do in Europe. One of the reasons I never use a car in Stockholm is that I have no clue how or where to park it, nor can I read the signs because my Swedish is so bad, and I'm afraid of a ticket. So I never use the car here, also because we have such good public transport. The latest thing they have here is the VOI, which came out just last month. I don't know the word; we call it a "step" in Dutch. I don't know what you call it in English, whether it's the same word or not, but it's one of those two-wheeled things that kids normally have. You know?

Here they are now electric. You download an app on your mobile phone, and you see them in the street because they're everywhere now. You type in a code and it unlocks, and then it starts counting your time: for every minute, you pay something like 15 cents. All these little electric things are all over the place; you just ride around town and drop them wherever you like, and when you need one, you look at the app and it shows you where the nearest one is. It's an amazing way of getting around. A month ago you saw just one or two; now they're everywhere. You step onto the street, you see one. It's an amazing new mode of transport, and it's very popular. It just runs on electricity, and it makes everywhere in the city so much easier to reach, because you go at least twice as fast as walking.

John: There was a really interesting article in The Economist about parking. Do you know how many parking spots The Shard, the brand new skyscraper in London, has? Eight. The point being made, in terms of what you were just asking about inconvenience, is that in Europe, in most cases it really doesn't make any sense at all to take a car into the city. It's a nightmare.

Before we talk more about personal solutions, I did want to make some points about the economics of all these solutions, because what's also really interesting about Drawdown is that they looked at both what you would save and what it would cost over the 30 years you'd be putting those solutions in place. They came up with figures that at first sight are really quite surprising: you would save 74.4 trillion dollars for an investment, or net cost, of 29.6 trillion.

Now, that's not for all the solutions, so it's not exactly that; for some of the solutions the value is very difficult to estimate. The value of educating girls, for instance, is inestimable. But the point is also made, if you look at The Solutions Project, Professor Jacobson's work, that they looked at savings too, and at other savings that I think are much more interesting and important as well. You would see a net increase of over 24 million long-term jobs, and four to seven million fewer air pollution deaths per year.

You would also see the stabilization of energy prices (think of how the price of oil jumps from one day to the next), and annual savings of over 20 trillion in health and climate costs. Which comes back to the point that when you implement these solutions, you are saving money, but more importantly you are also saving people's lives; it's the tragedy of the commons, right? So I think it's really important to think about the solutions that way. We know very well why we are still using fossil fuels: it's because of the massive subsidies and support they get, and the fact that vested interests are going to defend their interests.
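
Taking the Drawdown figures John quotes at face value, the headline economics work out to roughly a 2.5-to-1 return:

```latex
\frac{\text{operational savings}}{\text{net implementation cost}}
\;\approx\; \frac{\$74.4\ \text{trillion}}{\$29.6\ \text{trillion}} \;\approx\; 2.5,
\qquad
\text{net benefit} \;\approx\; \$44.8\ \text{trillion over 30 years.}
```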

I think that's really important to keep in mind about these solutions: they are becoming more and more feasible. Which leads me to the other point I'm always asked about: it's not going fast enough, we're not seeing enough renewables, why is that? Well, even though we don't tax fuel, as you mentioned Alexander, because we've now produced so many solar panels the cost has come way down, and it will keep getting cheaper and cheaper. That's linked to this whole idea of exponential growth, or tipping points, where all of a sudden all of us start to have a solar panel on our roof, where more and more of us become vegetarians.

I'll just tell you a quick anecdote on that. We had some out-of-town guests who absolutely wanted to go to a very good steakhouse in Geneva. So along we went; we didn't want to offend them and say, "No, no, no, we're certainly not going to go to a steakhouse." It was a group of seven of us. Imagine the surprise when they came to take our orders and three out of the seven of us said, "I'm afraid we're vegetarians." It was a bit of a shock. I think those types of things start to make others think as well, "Oh, why are you vegetarian?" and so on and so forth.

That sort of reflection means that certain business models are gonna go out of business, perhaps much faster than we think. On the more positive side, there are gonna be many more vegetarian restaurants, you can be sure, in the future.

Ariel: I want to ask about what we're all doing individually to address climate change. But Alexander, one of the things that you've done, which is probably not something the average person would do, is start the Planetary Security Initiative. So before we get into what individuals can do, I was hoping you could talk a little bit about what that is.

Alexander: That was not so much me as an individual. I was at Yale University for half a year when I started on this, and then when I came back to the Ministry of Foreign Affairs for one more year, I had some ideas and I got support from the ministers to bring together the experts around the world who work on the impact that climate change will have on security. The idea at the start was to create an annual meeting where all of these experts come together, because that didn't exist yet, and to energize more scientists and researchers to study how this relationship works. But more importantly, the idea was also to connect the knowledge and insights of these experts on how the changing climate, its impacts on water and food, and our changing planetary conditions are affecting geopolitics.

I have a background in both security and environment. Those used to be two completely separate tracks that weren't really interacting, but the more I worked on them, the more I saw that the changing environment is directly impacting our security situation. It's already happening, and you can be pretty sure the impact is going to be much bigger in the future. So we started with a meeting in the Peace Palace in The Hague. Some 75 countries were present the first time, along with the key experts in the world, and it's now an annual meeting. For anybody who's interested, contact me and I will put you in touch with the right people. It's now growing into all kinds of other initiatives, other involvement, and more studies.

So the issue is really taking off, and that is mainly because more and more people see the need for better insights into the impact that all of these changes we’ve been discussing will have on security, whether that’s the human security of individuals or geopolitical security. Imagine that when so much is changing, when economies are changing so rapidly, when people’s interests change and when people start going on the move, tensions will rise for a number of reasons, partly related to climate change. But it’s very much a situation where climate change takes an already fragile situation and makes it worse. So that is the Planetary Security Initiative. The government of the Netherlands has been very strong on this, working closely together with some other governments. Sweden, for instance, where I’m living, has in the past year been focusing very much on strengthening the United Nations, so that you would have experts at the relevant high level in New York who can connect the dots, the people, and the issues, not just to raise awareness, but to make sure that these issues are also taken into account in the policies that are made, because you’d better deal with them up front than repair the damage afterwards.

It’s a rapidly developing field. There are new approaches, for instance using AI and data. I think the World Resources Institute in Washington is very good at that: they combine geophysical data, say satellite and other data on increasing drought in the world, but also on deforestation and other resource issues, and they are now connecting that with the geopolitical impacts, using AI to combine all these completely different databases. You get much better insight into where the risks really are, and I believe that in the years to come, WRI in combination with several other think tanks can do brilliant work producing the kind of insights the world is really waiting for. International policies will be so much more effective if you know much better where the problems are really going to hit first.

Ariel: Thank you. All right, so we are starting to get a little bit short on time, and I want to finish the discussion with things that we’ve personally been doing. I’m gonna include myself in this one because I think the more examples the better. So, what have we personally been doing to change our lifestyles for the better, not as sacrifice but for the better, to address climate change? And also, to keep us all human, where are we failing and wish we were doing better?

I can go ahead and start. I am trying not to use my car in town; I’m trying to stick to biking or taking public transportation. I have dropped the temperature in our house by another degree, so I’m wearing more sweaters. I’m going to try to be stricter about flying: I’ll only fly if I feel I will actually be having a good impact on the world, or for a family emergency, things like that.

I’m pretty sure our house is on wind power. I work remotely, so I work from home and don’t have to travel for work. I think those are some of the big things, and as I said, flying is still a problem for me, so that’s something I’m working on. Food is also an issue for me. I have lots of food issues, so cutting out meat isn’t something that I can do. But I am trying to buy most of my food, and especially my meat, from local farms where they’re taking better care of the animals as well. So hopefully that helps a little bit. I’m also just trying to cut back on my consumption in general: trying not to buy as many things, and if I do buy things, trying to get them from companies that are more environmentally conscious. So I think food and flying are where I’m failing a little bit, but I think that’s everything on my end.

Alexander: I think one of the big changes I made is that I became vegetarian years ago already, for a number of good reasons. I am now practically vegan; sometimes when I travel it’s a bit too difficult. I hardly ever use the car. I guess it’s just five or six times a year that I actually use it. I use bicycles and public transport. The electricity at our home is all wind power; in the Netherlands that’s relatively easy to arrange nowadays, there are a lot of offers for it, so I deliberately buy wind power, and did so even in the times when wind power was still more expensive than other power. I also think about consumption: when I buy food, I try to buy more local food. There’s the occasional kiwi, which always makes me wonder how it arrives in Europe, but that’s another thing you can think of. Apart from flying, I really do my best with my footprint. Flying is the difficult thing, because with my work I need to fly. It is about personal contacts, it is about meeting a lot of people, it’s about teaching.

I do teach online: I use Skype for teaching to classrooms, and I do many Skype conferences all the time. But yes, I’m still flying. I refuse to fly business class; I started that some six, seven years ago. Just today a business class ticket was offered to me for a very long flight and I refused it; I said I will fly economy. But yes, the flying is what adds to my footprint. Still, I try to combine trips, I try to stay longer at a certain place, and then go by train to all kinds of other places. But when you’re stuck here in Stockholm, it’s quite difficult to get anywhere by other means than flying. Once I’m in, let’s say, the Netherlands or Brussels or Paris or London or Geneva, you can do all those things by train, but it gets a bit more difficult out here.

John: Pretty much the same as Alexander, except that I’m very local. I actually travel very little and I keep the travel down. If I do have to travel, I have managed to do seven-hour trips by train; that’s a possibility in Europe, though that only gets you to the middle of Germany. The other thing is that I’ve become vegetarian recently. I’m pretty close to vegan, although it’s difficult with such good cheese as we have in this country. The way it came about is interesting as well: it’s not just me, it’s myself, my wife, my daughter, and my son. The third child is never gonna become vegetarian, I don’t think. But that’s not bad, four out of five.

In terms of what I think you can do, it also points to things that we perhaps don’t think of as contributing: being a voice in our own communities and explaining why you do what you do in terms of biking and so on and so forth. I think that really encourages others to do the same; it can grow a lot like that. In that vein, I teach as much as I can to high school students. I talk to them about Drawdown, I talk to them about solutions and so on. They get it. They are very, very switched on about this, and I really enjoy that. You really see that it’s their future, it’s their generation. They don’t have very much choice, unfortunately. On a more positive note, I think they can really take it away in terms of a lot of actions which we haven’t done enough of.

Ariel: Well, I wanted to mention this stuff because, going back to your idea of this trickle-up effect, I’m still hopeful that if people take action, that will start to force governments to act. One final question on that note: did you guys find yourselves struggling with any of these changes, or did you find them pretty easy to make?

Alexander: I think all of them were easy: switching your energy to wind power, et cetera, buying more consciously. It comes naturally. I was already vegetarian, and then for moving to vegan, you just go online and read about it and how to do it. I remember when I was a kid that hardly anybody was vegetarian. I once discussed it with my mother and she said, “Oh, it’s really difficult because then you need to totally balance your food and be in touch with your doctor,” whatever. I’ve never spoken to any doctor. I just stopped eating meat, and years ago I swore off all dairy. I’ve never been ill, I don’t feel ill; actually I feel better. It is not complicated. The rather complicated thing is flying; sometimes I have to make difficult choices, like being away from home for a long time. I’ve saved quite a bit on that part. That’s sometimes more complicated, or, like soon, I’ll be on a nearly eight-hour train ride for something I could have flown in an hour.

John: I totally agree. I mean, I enjoy being in a train, being able to work and not be worried about some truck running into you or the other foibles of driving, which I find very, very … I’ve gotten to a point where I’m becoming actually quite a bad driver. I drive so little that, I hope not, but I might have an accident.

Ariel: Well, fingers crossed that doesn’t happen. And good, that’s been my experience so far too. The changes that I’ve been trying to make haven’t been difficult. I hope that’s an important point for people to realize. Anything else either of you want to add?

Alexander: I think there’s just one thing that we didn’t touch on regarding what you can do individually. That’s perhaps the most important one for those of us in democratic countries, and that is: vote. Vote for the party that actually takes care of our long-term future, a party that aims to take the right climate change measures rapidly, a party that wants to invest in a new economy and sees that if you invest now, you can be a leader later.

In some countries you have a lot of parties and there are all kinds of nuances. In other countries you have to deal with basically two parties, where one party is absolutely denying science, doing exactly the wrong things, and basically aiming to ruin the planet as soon as possible, whereas the other party is actually looking for solutions. Well, if you live in a country like that, and there happen to be elections coming up soon, vote for the party that takes the best positions on this, because it is about the future of your children. It is the single most influential thing that you can do, certainly if you live in a country whose emissions are still among the highest in the world. Vote. Take people with you to do it.

Ariel: Yeah, and to be more specific about that: as I mentioned at the start of this podcast, it’s coming out on Halloween, which means that in the US, elections are next week. Please vote.

John: Yeah. Perhaps something else is how you invest, where your money is going. That’s one that can have a lot of impact as well. I hate to come back to Drawdown, but go through Drawdown and think about your investments and say, okay, renewables, whether it’s LEDs or whatever technology it is: if it’s in Drawdown, make sure it’s in your investment portfolio. If it’s not, you might want to get out of it, particularly the ones that we already know are causing the problem in the first place.

Ariel: That’s actually a good reminder. That’s something that has been on my list of things to do. I know I’m guilty of not investing in the proper companies at the moment; that’s something I’ve been wanting to fix.

Alexander: And tell your pension funds: divest from fossil fuels and invest in renewables and all kinds of good things that we need in the new economy.

John: But do it not as a charitable cause; do it because these really are the businesses of the future. We talked earlier about the growth that these different businesses can achieve. Another factor that’s really important is efficiency. For instance, I’m sure you have heard of the Impossible Burger. It’s a plant-based burger. Now, what do you think is the difference in the amount of cropland required to produce a beef burger versus an Impossible Burger?

Alexander: I would say one in 25 or one in 35, somewhere in that range.

John: Yeah, it’s about one in 20. The thing is, when you look at that kind of gain in efficiency, it’s just a question of time. A cow simply can’t compete. You have to cut down the trees to grow the animal feed that you ship to the cow, which the cow then eats, and then you have to wait a number of years; that’s where that factor-of-20 difference in efficiency comes from. Our capitalist economic system doesn’t like inefficient systems. You can try to make that cow as efficient as possible, but you’re never going to be able to compete with a plant-based burger. Anybody who thinks that the plant-based burger isn’t going to displace the meat burger should really think again.

Ariel: All right, I think we’re ending on a nice hopeful note. So I want to thank you both for coming on today and talking about all of these issues.

Alexander: Thanks Ariel. It was nice to talk.

John: Thank you very much.

Ariel: If you enjoyed this podcast, please take a moment to like it and share it, and maybe even leave a positive review. And of course, if you haven’t already, please follow us. You can find the FLI podcast on iTunes, Google Play, SoundCloud, and Stitcher.

[end of recorded material]

Podcast: Martin Rees on the Prospects for Humanity: AI, Biotech, Climate Change, Overpopulation, Cryogenics, and More

How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks?

In this special podcast episode, Ariel speaks with Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity’s existential risks and the role that technology plays in determining our collective future. Martin is a cosmologist and space scientist based at the University of Cambridge. He has been director of the Institute of Astronomy and Master of Trinity College, and he was president of The Royal Society, which is the UK’s Academy of Science, from 2005 to 2010. In 2005 he was also appointed to the UK’s House of Lords.

Topics discussed in this episode include:

  • Why Martin remains a technical optimist even as he focuses on existential risks
  • The economics and ethics of climate change
  • How AI and automation will make it harder for Africa and the Middle East to economically develop
  • How high expectations for health care and quality of life also put society at risk
  • Why growing inequality could be our most underappreciated global risk
  • Martin’s view that biotechnology poses greater risk than AI
  • Earth’s carrying capacity and the dangers of overpopulation
  • Space travel and why Martin is skeptical of Elon Musk’s plan to colonize Mars
  • The ethics of artificial meat, life extension, and cryogenics
  • How intelligent life could expand into the galaxy
  • Why humans might be unable to answer fundamental questions about the universe

Books and resources discussed in this episode include

You can listen to the podcast above and read the full transcript below. Check out our previous podcast episodes on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel: Hello, I am Ariel Conn with The Future of Life Institute. Now, our podcasts lately have dealt with artificial intelligence in some way or another, and with a few focusing on nuclear weapons, but FLI is really an organization about existential risks, and especially x-risks that are the result of human action. These cover a much broader field than just artificial intelligence.

I’m excited to be hosting a special segment of the FLI podcast with Martin Rees, who has just come out with a book that looks at the ways technology and science could impact our future both for good and bad. Martin is a cosmologist and space scientist. His research interests include galaxy formation, active galactic nuclei, black holes, gamma ray bursts, and more speculative aspects of cosmology. He’s based in Cambridge where he has been director of The Institute of Astronomy, and Master of Trinity College. He was president of The Royal Society, which is the UK’s Academy of Science, from 2005 to 2010. In 2005 he was also appointed to the UK’s House of Lords. He holds the honorary title of Astronomer Royal. He has received many international awards for his research and belongs to numerous academies, including The National Academy of Sciences, the Russian Academy, the Japan Academy, and the Pontifical Academy.

He’s on the board of The Princeton Institute for Advanced Study, and has served on many bodies connected with international collaboration and science, especially threats stemming from humanity’s ever heavier footprint on the planet and the runaway consequences of ever more powerful technologies. He’s written seven books for the general public, and his most recent book is about these threats. It’s the reason that I’ve asked him to join us today. First, Martin thank you so much for talking with me today.

Martin: Good to be in touch.

Ariel: Your new book is called On the Future: Prospects for Humanity. In his endorsement of the book Neil deGrasse Tyson says, “From climate change, to biotech, to artificial intelligence, science sits at the center of nearly all decisions that civilization confronts to assure its own survival.”

I really liked this quote, because I felt like it sums up what your book is about. Basically science and the future are too intertwined to really look at one without the other. And whether the future turns out well, or whether it turns out to be the destruction of humanity, science and technology will likely have had some role to play. First, do you agree with that sentiment? Am I accurate in that description?

Martin: No, I certainly agree, and that’s truer of this century than ever before, because of the greater scientific knowledge we have and the greater power to use it for good or ill: these tremendously advanced technologies could be misused by a small number of people.

Ariel: You’ve written in the past about how you think we have essentially a 50/50 chance of some sort of existential risk. One of the things that I noticed about this most recent book is you talk a lot about the threats, but to me it felt still like an optimistic book. I was wondering if you could talk a little bit about, this might be jumping ahead a bit, but maybe what the overall message you’re hoping that people take away is?

Martin: Well, I describe myself as a technical optimist, but political pessimist because it is clear that we couldn’t be living such good lives today with seven and a half billion people on the planet if we didn’t have the technology which has been developed in the last 100 years, and clearly there’s a tremendous prospect of better technology in the future. But on the other hand what is depressing is the very big gap between the way the world could be, and the way the world actually is. In particular, even though we have the power to give everyone a decent life, the lot of the bottom billion people in the world is pretty miserable and could be alleviated a lot simply by the money owned by the 1,000 richest people in the world.

We have a very unjust society, and the politics is not optimizing the way technology is used for human benefit. My view is that it’s the politics which is an impediment to the best use of technology, and the reason this is important is that as time goes on we’re going to have a growing population which is ever more demanding of energy and resources, putting more pressure on the planet and its environment and its climate, but we are also going to have to deal with this if we are to allow people to survive and avoid some serious tipping points being crossed.

That’s the problem of the collective effect of us on the planet, but there’s another effect, which is that these new technologies, especially bio, cyber, and AI allow small groups of even individuals to have an effect by error or by design, which could cascade very broadly, even globally. This, I think, makes our society very brittle. We’re very interdependent, and on the other hand it’s easy for there to be a breakdown. That’s what depresses me, the gap between the way things could be, and the downsides if we collectively overreach ourselves, or if individuals cause disruption.

Ariel: You mentioned actually quite a few things that I’m hoping to touch on as we continue to talk. I’m almost inclined, before we get too far into some of the specific topics, to bring up an issue that I personally have. It’s connected to a comment that you make in the book. I think you were talking about climate change at the time, and you say that if we heard that there was 10% chance that an asteroid would strike in 2100 people would do something about it.

We wouldn’t say, “Oh, technology will be better in the future so let’s not worry about it now.” Apparently I’m very cynical, because I think that’s exactly what we would do. And I’m curious, what makes you feel more hopeful that even with something really specific like that, we would actually do something and not just constantly postpone the problem to some future generation?

Martin: Well, I agree. We might not act even in that case, but the reason I gave that as a contrast to our response to climate change is that there you could imagine a really sudden catastrophe happening if the asteroid does hit, whereas with climate change, first of all, the effect is mainly going to be felt several decades in the future. It’s started to happen, but the really severe consequences are decades away. But also there’s an uncertainty, and it’s not the sort of sudden event we can easily visualize. It’s not at all clear, therefore, how we are actually going to do something about it.

In the case of the asteroid, it would be clear what the strategy would be to try and deal with it, whereas in the case of climate there are lots of ways, and the problem is that the consequences are decades away, and they’re global. Most of the political focus obviously is on short-term worry, short-term problems, and on national or more local problems. Anything we do about climate change will have an effect which is mainly for the benefit of people in quite different parts of the world 50 years from now, and it’s hard to keep those issues up the agenda when there are so many urgent things to worry about.

I think you’re maybe right that even if there was a threat of an asteroid, there may be the same sort of torpor, and we’d fail to deal with it, but I thought that’s an example of something where it would be easier to appreciate that it would really be a disaster. In the case of the climate it’s not so obviously going to be a catastrophe that people are motivated now to start thinking about it.

Ariel: I’ve heard it go both ways that either climate change is yes, obviously going to be bad but it’s not an existential risk so therefore those of us who are worried about existential risk don’t need to worry about it, but then I’ve also heard people say, “No, this could absolutely be an existential risk if we don’t prevent runaway climate change.” I was wondering if you could talk a bit about what worries you most regarding climate.

Martin: First of all, I don’t think it is an existential risk, but it’s something we should worry about. One point I make in my book is that the debate, which makes it hard to have an agreed policy on climate change, stems not so much from differences about the science as from differences about ethics and economics. There are of course some people who completely deny the science, but most people accept that CO2 is warming the planet, and most people accept that there’s quite a big uncertainty, in fact a real uncertainty, about how much warmer you get for a given increase in CO2.

But even among those who accept the IPCC projections of climate change, and the uncertainties therein, I think there’s a big debate, and the debate is really between people who apply a standard economic discount rate, discounting the future at a rate of, say, 5%, and those who think we shouldn’t do that in this context. If you apply a 5% discount rate, as you would if you were deciding whether it’s worth putting up an office building or something like that, then of course you don’t give any weight to what happens after about 2050.

As Bjorn Lomborg, the well-known environmentalist, argues, we should therefore give a lower priority to dealing with climate change than to helping the world’s poor in other, more immediate ways. He is consistent, given his assumptions about the discount rate. But many of us would say that in this context we should not discount the future so heavily. We should care about the life chances of a baby born today as much as we care about the life chances of those of us who are now middle-aged and won’t be alive at the end of the century. We should also be prepared to pay an insurance premium now in order to remove or reduce the risk of the worst-case climate scenarios.

I think the debate about what to do about climate change is essentially about ethics. Do we want to discriminate on grounds of date of birth and not care about the life chances of those who are now babies, or are we prepared to make some sacrifices now in order to reduce a risk which they might encounter in later life?
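(A rough back-of-the-envelope illustration of what this discounting debate means in numbers; the specific figures below are illustrative additions, not Martin’s own. The present value of a climate damage $D$ suffered $t$ years from now, discounted at rate $r$, is

$$\mathrm{PV} = \frac{D}{(1+r)^{t}}.$$

With $r = 5\%$ and $t = 80$ years, roughly the year 2100, $\mathrm{PV} \approx D/1.05^{80} \approx 0.02\,D$: the future damage counts for about two cents on the dollar today. With $r = 1\%$, $\mathrm{PV} \approx D/1.01^{80} \approx 0.45\,D$, so nearly half of the damage still weighs on today’s decisions.)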

Ariel: Do you think the risks are only going to show up that much later? We are already seeing these really heavy storms striking. We’ve got Florence in North Carolina right now; there’s a super typhoon that hit southern China and the Philippines; we had Maria, and I’m losing track of all the hurricanes we’ve had over the last couple of years. We saw California and much of the west coast of the US just in flames this year. Do you think we really need to wait that long?

Martin: I think it’s generally agreed that extreme weather is now happening more often as a consequence of climate change and the warming of the ocean, and that this will become a more serious trend, but by the end of the century of course it could be very serious indeed. And the main threat is of course to people in the disadvantaged parts of the world. If you take these recent events, it’s been far worse in the Philippines than in the United States because they’re not prepared for it. Their houses are more fragile, etc.

Ariel: I don’t suppose you have any thoughts on how we get people to care more about others? Because it does seem to come down to that, worrying about myself versus worrying about other people. The richer countries are the ones who are causing more of the climate change, and it’s the poorer countries who seem to be suffering more. Then of course there’s the issue of the people who are alive now versus the people in the future.

Martin: That’s right, yes. Well, I think most people do care about their children and grandchildren, and so to that extent they do care about what things will be like at the end of the century, but as you say, the additional political problem is that the cause of the CO2 emissions is mainly what’s happened in the advanced countries, and the downside is going to be more seriously felt by those in remote parts of the world. It’s easy to overlook them, and hard to persuade people that we ought to make a sacrifice which will be mainly for their benefit.

I think incidentally that’s one of the other things that we have to ensure happens, is a narrowing of the gap between the lifestyles and the economic advantages in the advanced and the less advanced parts of the world. I think that’s going to be in everyone’s interest because if there continues to be great inequality, not only will the poorer people be more subject to threats like climate change, but I think there’s going to be massive and well-justified discontent, because unlike in the earlier generations, they’re aware of what they’re missing. They all have mobile phones, they all know what it’s like, and I think there’s going to be embitterment leading to conflict if we don’t narrow this gap, and this requires I think a sacrifice on the part of the wealthy nations to subsidize developments in these poorer countries, especially in Africa.

Ariel: That sort of ties into another question that I had for you, and that is, what do you think is the most underappreciated threat that maybe isn’t quite as obvious? You mentioned the fact that we have these people in poorer countries who are able to more easily see what they’re missing out on. Inequality is a problem in and of itself, but also just that people are more aware of the inequality seems like a threat that we might not be as aware of. Are there others that you think are underappreciated?

Martin: Yes. Just to go back, that threat is of course very serious because by the end of the century there might be 10 times as many people in Africa as in Europe, and of course they would then have every justification in migrating towards Europe with the result of huge disruption. We do have to care about those sorts of issues. I think there are all kinds of reasons apart from straight ethics why we should ensure that the less developed countries, especially in Africa, do have a chance to close the gap.

Incidentally, one thing which is a handicap for them is that they won’t have the route to prosperity followed by the so called “Asian tigers,” which were able to have high economic growth by undercutting the labor cost in the west. Now what’s happening is that with robotics it’s possible to, as it were, re-shore lots of manufacturing industry back to wealthy countries, and so Africa and the Middle East won’t have the same opportunity the far eastern countries did to catch up by undercutting the cost of production in the west.

This is another reason why it’s going to be a big challenge. That’s something which I think we don’t worry about enough, and need to worry about, because if the inequalities persist when everyone is able to move easily and knows exactly what they’re missing, then that’s a recipe for a very dangerous and disruptive world. I would say that is an underappreciated threat.

Another thing I would count as important is that we are as a society very brittle and very unstable because of high expectations. I’d like to give you an example. Suppose there were to be a pandemic, not necessarily a genetically engineered terrorist one, but a natural one. Contrast that with what happened in the 14th century, when the Bubonic Plague, the Black Death, killed nearly half the people in certain towns and the rest went on fatalistically. If we had some sort of plague which affected even 1% of the population of the United States, there’d be complete social breakdown, because it would overwhelm the capacity of hospitals, and people, unless they are wealthy, would feel they weren’t getting their entitlement of healthcare. And if that was a matter of life and death, that’s a recipe for social breakdown. I think given the high expectations of people in the developed world, we are far more vulnerable to the consequences of these breakdowns, and pandemics, and the failures of electricity grids, et cetera, than in the past, when people were more robust and more fatalistic.

Ariel: That’s really interesting. Is it essentially because we expect to be leading these better lifestyles, just that expectation could be our downfall if something goes wrong?

Martin: That’s right. And of course, if we know that there are cures available to some disease and there’s not the hospital capacity to offer it to all the people who are afflicted with the disease, then naturally that’s a matter of life and death, and that is going to promote social breakdown. This is a new threat which is of course a downside of the fact that we can at least cure some people.

Ariel: There’s two directions that I want to go with this. I’m going to start with just transitioning now to biotechnology. I want to come back to issues of overpopulation and improving healthcare in a little bit, but first I want to touch on biotech threats.

One of the things that’s been a little bit interesting for me is that when I first started at FLI three years ago we were very concerned about biotechnology. CRISPR was really big. It had just sort of exploded onto the scene. Now, three years later I’m not hearing quite as much about the biotech threats, and I’m not sure if that’s because something has actually changed, or if it’s just because at FLI I’ve become more focused on AI and therefore stuff is happening but I’m not keeping up with it. I was wondering if you could talk a bit about what some of the risks you see today are with respect to biotech?

Martin: Well, let me say I think we should worry far more about bio threats than about AI in my opinion. I think as far as the bio threats are concerned, then there are these new techniques. CRISPR, of course, is a very benign technique if it’s used to remove a single damaging gene that gives you a particular disease, and also it’s less objectionable than traditional GM because it doesn’t cross the species barrier in the same way, but it does allow things like a gene drive where you make a species extinct by making it sterile.

That’s good if you’re wiping out a mosquito that carries a deadly virus, but there’s a risk of some effect which distorts the ecology and has a cascading consequence. There are risks of that kind, but more importantly I think there is a risk of the misuse of these techniques, and not just CRISPR, but for instance the gain-of-function techniques that were used in 2011 in Wisconsin and in Holland to make influenza virus both more virulent and more transmissible, things like that, which I’m sure can be done in a more advanced way now.

These are clearly potentially dangerous: even if experimenters have a good motive, the viruses might escape, and of course they are the kinds of things which could be misused. There have, of course, been lots of meetings, you have been at some, to discuss among scientists what the guidelines should be. How can we ensure responsible innovation in these technologies? These are modeled on the famous Asilomar conference in the 1970s, when recombinant DNA was first being discussed, and the academics who worked in that area agreed on a sort of cautious stance, and a moratorium on some kinds of experiments.

But now they’re trying to do the same thing, and there are big differences. One is that these scientists are now more global: it’s not just a few people in North America and Europe. There are strong commercial pressures, and the techniques are far more widely understood. Bio-hacking is almost a student recreation. This means, in my view, that there’s a big danger, because even if we have regulations about certain things that can’t be done because they’re dangerous, enforcing those regulations globally is going to be as hopeless as it is now to enforce the drug laws or the tax laws globally. Something which can be done will be done by someone somewhere, whatever the regulations say, and I think this is very scary. The consequences could cascade globally.

Ariel: Do you think that the threat is more likely to come from something happening accidentally, or intentionally?

Martin: I don’t know. I think it could be either. Certainly it could be something accidental from gene drive, or releasing some dangerous virus, but I think if we can imagine it happening intentionally, then we’ve got to ask what sort of people might do it? Governments don’t use biological weapons because you can’t predict how they will spread and who they’d actually kill, and that would be an inhibiting factor for any terrorist group that had well-defined aims.

But my worst nightmare is some person, and there are some, who think that there are too many human beings on the planet, and if they combine that view with the mindset of extreme animal rights people, etc, they might think it would be a good thing for Gaia, for Mother Earth, to get rid of a lot of human beings. They’re the kind of people who, with access to this technology, might have no compunction in releasing a dangerous pathogen. This is the kind of thing that worries me.

Ariel: I find that interesting because it ties into the other question that I wanted to ask you about, and that is the idea of overpopulation. I’ve read it both ways, that overpopulation is in and of itself something of an existential risk, or a catastrophic risk, because we just don’t have enough resources on the planet. You actually made an interesting point, I thought, in your book where you point out that we’ve been thinking that there aren’t enough resources for a long time, and yet we keep getting more people and we still have plenty of resources. I thought that was sort of interesting and reassuring.

But I do think at some point that does become an issue. At then at the same time we’re seeing this huge push, understandably, for improved healthcare, and expanding life spans, and trying to save as many lives as possible, and making those lives last as long as possible. How do you resolve those two sides of the issue?

Martin: It’s true, of course, as you imply, that the population has doubled in the last 50 years, and there were doomsters in the 1960s and ’70s who predicted mass starvation by now, and there hasn’t been, because food production has more than kept pace. If there are famines today, as of course there are, it’s not because of overall food shortages; it’s because of wars, or maldistribution of the money to buy food. Up until now things have gone fairly well, but clearly there are limits to the food that can be produced on the earth.

All I would say is that we can’t really say what the carrying capacity of the earth is, because it depends so much on the lifestyle of people. As I say in the book, the world couldn’t sustainably have 2 billion people if they all lived like present day Americans, using as much energy, and burning as much fossil fuels, and eating as much beef. On the other hand you could imagine lifestyles which are very sort of austere, where the earth could carry 10, or even 20 billion people. We can’t set an upper limit, but all we can say is that given that it’s fairly clear that the population is going to rise to about 9 billion by 2050, and it may go on rising still more after that, we’ve got to ensure that the way in which the average person lives is less profligate in terms of energy and resources, otherwise there will be problems.

I think we should also do what we can to ensure that after 2050 the population turns around and goes down. The baseline scenario is that it goes on rising, as it may if people choose to have large families even when they have the choice. That could happen, and of course, as you say, life extension is going to have an effect on society generally, but obviously on the overall population too. I think it would be more benign if the population of 9 billion in 2050 were a peak and it started going down after that.

And it’s not hopeless, because the actual number of births per year has already started going down. The reason the population is still going up is because more babies survive, and most of the people in the developing world are still young, and if they live as long as people in advanced countries do, then of course that’s going to increase the population even for a steady birth rate. That’s why, unless there’s a real disaster, we can’t avoid the population rising to about 9 billion.

But I think policies can have an effect on what happens after that. I think we do have to try to make people realize that having large numbers of children has negative externalities, as it were, in economic jargon: it is going to put extra pressure on the world and affect our environment in a detrimental way.

Ariel: As I was reading this, especially the section about space travel, I wanted to ask for your take on whether we can just start sending people to Mars or somewhere like that to address issues of overpopulation. As I was reading your section on that, news came out that Elon Musk and SpaceX had their first passenger for a trip around the moon, which is now scheduled for 2023, and the timing was just entertaining to me, because, like I said, you have a section in your book about why you don’t actually agree with Elon Musk’s plan for some of this stuff.

Martin: That’s right.

Ariel: I was hoping you could talk a little bit about why you’re not as big a fan of space tourism, and what you think of humanity expanding into the rest of the solar system and universe?

Martin: Well, let me say that I think it’s a dangerous delusion to think we can solve the earth’s problems by escaping to Mars or elsewhere. Mass emigration, the idea promulgated by Elon Musk and Stephen Hawking, is not feasible: there’s nowhere in the solar system which is as comfortable to live in as the top of Everest or the South Pole. The world’s problems have to be solved here; dealing with climate change is a doddle compared to terraforming Mars. So I don’t think that’s realistic.

Now, two other things about space. The first is that the practical need for sending people into space is getting less as robots get more advanced. Everyone has seen pictures of the Curiosity Probe trundling across the surface of Mars, and maybe missing things that a geologist would notice, but future robots will be able to do much of what a human will do, and to manufacture large structures in space, et cetera, so the practical need to send people to space is going down.

On the other hand, some people may want to go simply as an adventure. It’s not really tourism, because tourism implies it’s safe and routine. It’ll be an adventure like Steve Fossett’s, or like the guy who fell supersonically from a high-altitude balloon. It’ll be crazy people like that, and maybe this Japanese tourist is in the same style, people who want to have a thrill, and I think we should cheer them on.

I think it would be good to imagine that there are a few people living on Mars, but it’s never going to be as comfortable as our Earth, and we should just cheer on people like this.

And I personally think it should be left to private money. If I were an American, I would not support the NASA space program. It’s very expensive, and it can be undercut by private companies, which can afford to take higher risks than NASA could inflict on publicly funded civilians. I don’t think NASA should be doing manned space flight at all. Of course, some people would say, “Well, it’s a national aspiration, a national goal to show superpower pre-eminence by a massive space project.” That was, of course, what drove the Apollo program, and the Apollo program cost about 4% of the US federal budget. Now NASA has 0.6% or thereabouts. I’m old enough to remember the Apollo moon landings, and of course if you had asked me back then, I would have expected that there might be people on Mars within 10 or 15 years of that time.

There would have been, had the program been funded, but of course there was no motive, because the Apollo program was driven by superpower rivalry. And having beaten the Russians, it wasn’t pursued with the same intensity. It could be that the Chinese will, for prestige reasons, want to have a big national space program, and leapfrog what the Americans did by going to Mars. That could happen. Otherwise I think the only manned space flight will, and indeed should, be privately funded by adventurers prepared to go on cut price and very risky missions.

But we should cheer them on. The reason we should cheer them on is that if in fact a few of them do provide some sort of settlement on Mars, then they will be important for life’s long-term future, because whereas we are, as humans, fairly well adapted to the earth, they will be in a place, Mars, or an asteroid, or somewhere, for which they are badly adapted. Therefore they would have every incentive to use all the techniques of genetic modification, and cyber technology to adapt to this hostile environment.

A new species, perhaps quite different from humans, may emerge as progeny of those pioneers within two or three centuries. I think this is quite possible. They, of course, may download themselves to be electronic. We don’t know how it’ll happen. We all know about the possibilities of advanced intelligence in electronic form. But I think this’ll happen on Mars, or in space, and of course if we think about going further and exploring beyond our solar system, then of course that’s not really a human enterprise because of human life times being limited, but it is a goal that would be feasible if you were a near immortal electronic entity. That’s a way in which our remote descendants will perhaps penetrate beyond our solar system.

Ariel: As you’re looking towards these longer term futures, what are you hopeful that we’ll be able to achieve?

Martin: You say “we”; I think we humans will mainly want to stay on the earth, but I think intelligent life, even if it’s not out there already in space, could spread through the galaxy as a consequence of what happens when a few people who go into space, away from the regulators, adapt themselves to that environment. Of course, one thing which is very important is to be aware of the different time scales.

Sometimes you hear people talk about humans watching the death of the sun in five billion years. That’s nonsense, because the timescale for biological evolution by Darwinian selection is about a million years, thousands of times shorter than the lifetime of the sun, but more importantly the time scale for this new kind of intelligent design, when we can redesign humans and make new species, that time scale is a technological time scale. It could be only a century.

It would only take one, or two, or three centuries before we have entities which are very different from human beings, if they are created by genetic modification or by downloading into electronic entities. They won’t be normal humans. I think this will happen, and this of course will be a very important stage in the evolution of complexity in our universe, because we will go from the kind of complexity which has emerged by Darwinian selection to something quite new. This century is very special: it is a century where we might be triggering or jump-starting a new kind of technological evolution which could spread from our solar system far beyond, on a timescale very short compared to the timescale of Darwinian evolution and the timescale of astronomical evolution.

Ariel: All right. In the book you spend a lot of time also talking about current physics theories and how those could evolve. You spend a little bit of time talking about multiverses. I was hoping you could talk a little bit about why you think understanding that is important for ensuring this hopefully better future?

Martin: Well, it’s only peripherally linked to it. I put that in the book because I was thinking about, what are the challenges, not just challenges of a practical kind, but intellectual challenges? One point I make is that there are some scientific challenges which we are now confronting which may be beyond human capacity to solve, because there’s no particular reason to think that the capacity of our brains is matched to understanding all aspects of reality any more than a monkey can understand quantum theory.

It’s possible that there be some fundamental aspects of nature that humans will never understand, and they will be a challenge for post-humans. I think those challenges are perhaps more likely to be in the realm of complexity, understanding the brain for instance, than in the context of cosmology, although there are challenges in cosmology which is to understand the very early universe where we may need a new theory like string theory with extra dimensions, et cetera, and we need a theory like that in order to decide whether our big bang was the only one, or whether there were other big bangs and a kind of multiverse.

It’s possible that in 50 years from now we will have such a theory, we’ll know the answers to those questions. But it could be that there is such a theory and it’s just too hard for anyone to actually understand and make predictions from. I think these issues are relevant to the intellectual constraints on humans.

Ariel: Is that something that you think, or hope, that things like more advanced artificial intelligence or however we evolve in the future, that that evolution will allow “us” to understand some of these more complex ideas?

Martin: Well, I think it’s certainly possible that machines could actually, in a sense, create entities based on physics which we can’t understand. This is perfectly possible, because obviously we know they can vastly out-compute us at the moment, so it could very well be, for instance, that there is a variant of string theory which is correct, and it’s just too difficult for any human mathematician to work out. But it could be that computers could work it out, so we get some answers.

But of course, you then come up against a more philosophical question about whether competence implies comprehension, whether a computer with superhuman capabilities is necessarily going to be self-aware and conscious, or whether it is going to be just a zombie. That’s a separate question which may not affect what it can actually do, but I think it does affect how we react to the possibility that the far future will be dominated by such things.

I remember when I wrote an article in a newspaper about these possibilities, the reaction was bimodal. Some people thought, “Isn’t it great there’ll be these even deeper intellects than human beings out there,” but others who thought these might just be zombies thought it was very sad if there was no entity which could actually appreciate the beauties and wonders of nature in the way we can. It does matter, in a sense, to our perception of this far future, if we think that these entities which may be electronic rather than organic, will be conscious and will have the kind of awareness that we have and which makes us wonder at the beauty of the environment in which we’ve emerged. I think that’s a very important question.

Ariel: I want to pull things back to a little bit more shorter term I guess, but still considering this idea of how technology will evolve. You mentioned that you don’t think it’s a good idea to count on going to Mars as a solution to our problems on Earth because all of our problems on Earth are still going to be easier to solve here than it is to populate Mars. I think in general we have this tendency to say, “Oh, well in the future we’ll have technology that can fix whatever issue we’re dealing with now, so we don’t need to worry about it.”

I was wondering if you could sort of comment on that approach. To what extent can we say, “Well, most likely technology will have improved and can help us solve these problems,” and to what extent is that a dangerous approach to take?

Martin: Well, clearly technology has allowed us to live much better, more complex lives than we could in the past, and on the whole the net benefits outweigh the downsides, but of course there are downsides, and they stem from the fact that we have some people who are disruptive, and some people who can’t be trusted. If we had a world where everyone could trust everyone else, we could get rid of about a third of the economy I would guess, but I think the main point is that we are very vulnerable.

We have huge advances, clearly, in networking via the Internet, and computers, et cetera, and we may have the Internet of Things within a decade, but of course people worry that this opens up a new kind of even more catastrophic potential for cyber terrorism. That’s just one example, and ditto for biotech which may allow the development of pathogens which kill people of particular races, or have other effects.

There are these technologies which are developing fast, and they can be used to great benefit, but they can be misused in ways that will provide new kinds of horrors that were not available in the past. It’s by no means obvious which way things will go. Will there be a continued net benefit of technology, as I think there has been up ’til now despite nuclear weapons, et cetera, or will at some stage the downside run ahead of the benefits?

I do worry about the latter being a possibility, particularly because of this amplification factor, the fact that it only takes a few people in order to cause disruption that could cascade globally. The world is so interconnected that we can’t really have a disaster in one region without its affecting the whole world. Jared Diamond has this book called Collapse where he discusses five collapses of particular civilizations, whereas other parts of the world were unaffected.

I think if we really had some catastrophe, it would affect the whole world; it wouldn’t just affect parts. That’s something which is a new downside. The stakes are getting higher as technology advances, and my book really aims to say that these developments are very exciting, but they pose new challenges, and I think particularly they pose challenges because a few dissidents can cause more trouble, and I think it’ll make the world harder to govern. It’ll make cities and countries harder to govern, and there will be a stronger tension between three things we want to achieve: security, privacy, and liberty. I think that’s going to be a challenge for all future governments.

Ariel: Reading your book I very much got the impression that it was essentially a call to action to address these issues that you just mentioned. I was curious: what do you hope that people will do after reading the book, or learning more about these issues in general?

Martin: Well, first of all I hope that people can be persuaded to think long term. I mentioned that religious groups, for instance, tend to think long term, and the papal encyclical in 2015 I think had a very important effect on the opinion in Latin America, Africa, and East Asia in the lead up to the Paris Climate Conference, for instance. That’s an example where someone from outside traditional politics would have an effect.

What’s very important is that politicians will only respond to an issue if it’s prominent in the press, and prominent in their inbox, and so we’ve got to ensure that people are concerned about this. Of course, I ended the book saying, “What are the special responsibilities of scientists,” because scientists clearly have a special responsibility to ensure that their work is safe, and that the public and politicians are made aware of the implications of any discovery they make.

I think that’s important, even though they should be mindful that their expertise doesn’t extend beyond their special area. That’s a reason why scientific understanding, in a general sense, is something which really has to be universal. This is important for education, because if we want to have a proper democracy where debate about these issues rises above the level of tabloid slogans, then given that the important issues that we have to discuss involve health, energy, the environment, climate, et cetera, which have scientific aspects, then everyone has to have enough feel for those aspects to participate in a debate, and also enough feel for probabilities and statistics to be not easily bamboozled by political arguments.

I think an educated population is essential for proper democracy. Obviously that’s a platitude. But the education needs to include, to a greater extent, an understanding of the scope and limits of science and technology. I make this point at the end and hope that it will lead to a greater awareness of these issues, and of course for people in universities, we have a responsibility because we can influence the younger generation. It’s certainly the case that students and people under 30, who may be alive towards the end of the century, are more mindful of these concerns than the middle-aged and old.

It’s very important that activities like the Effective Altruism movement, 80,000 Hours, and these other movements among students should be encouraged, because they are going to be important in spreading an awareness of long-term concerns. Public opinion can be changed. We can see the change in attitudes to drunk driving and things like that, which has happened over a few decades, and I think perhaps we can develop a greater environmental sensitivity, so that it comes to be regarded as rather naff or tacky to waste energy and to be extravagant in consumption.

I’m hopeful that attitudes will change in a positive way, but I’m concerned simply because the politics is getting very difficult, because with social media, panic and rumor can spread at the speed of light, and small groups can have a global effect. This makes it very, very hard to ensure that we can keep things stable given that only a few people are needed to cause massive disruption. That’s something which is new, and I think is becoming more and more serious.

Ariel: We’ve been talking a lot about things that we should be worrying about. Do you think there are things that we are currently worrying about that we probably can just let go of, that aren’t as big of risks?

Martin: Well, I think we need to ensure responsible innovation in all new technologies. We've talked a lot about bio, and we are very concerned about the misuse of cyber technology. As regards AI, of course there are a whole lot of concerns to be had. I personally think that the takeover by AI will be rather slower than many of the evangelists suspect, but of course we do have to ensure that humans are not victimized by some algorithm which they can't have explained to them.

I think there is an awareness of this, and I think that what's being done by your colleagues at MIT has been very important in raising awareness of the need for responsible innovation and ethical application of AI, and also what your group has recognized is that the order in which things happen is very important. If some computer is developed and goes rogue, that's bad news, whereas if we have a powerful computer which is under our control, then it may help us to deal with these other problems, the problems of the misuse of biotech, et cetera.

The order in which things happen is going to be very important, but I must say I don't completely share these concerns about machines running away and taking over, 'cause I think there's a difference in that, in biological evolution there's been a drive toward intelligence being favored, but so has aggression. In the case of computers, they may drive towards greater intelligence, but it's not obvious that that is going to be combined with aggression, because they are going to be evolving by intelligent design, not the struggle of the fittest, which is the way that we evolved.

Ariel: What about concerns regarding AI just in terms of being mis-programmed, and AI just being extremely competent? Poor design on our part, poor intelligent design?

Martin: Well, I think in the short term obviously there are concerns about AI making decisions that affect people, and I think most of us would say that we shouldn’t be deprived of our credit rating, or put in prison on the basis of some AI algorithm which can’t be explained to us. We are entitled to have an explanation if something is done to us against our will. That is why it is worrying if too much is going to be delegated to AI.

I also think that the development of self-driving cars, and things of that kind, is going to be constrained by the fact that these become vulnerable to hacking of various kinds. I think it'll be a long time before we will accept a driverless car on an ordinary road. Controlled environments, yes. In particular lanes on highways, yes. On an ordinary road in a traditional city, it's not clear that we will ever accept a driverless car. I think I'm frankly less bullish than maybe some of your colleagues about the speed at which the machines will really take over and be accepted, and at which we can trust ourselves to them.

Ariel: As I mentioned at the start, and as you mentioned at the start, you are a techno-optimist; for as much as the book is about things that could go wrong, it did feel to me like it was also sort of an optimistic look at the future. What are you most optimistic about? What are you most hopeful for, looking at both the short term and the long term, however you feel like answering that?

Martin: I’m hopeful that biotech will have huge benefits for health, will perhaps extend human life spans a bit, but that’s something about which we should feel a bit ambivalent. So, I think health, and also food. If you asked me, what is one of the most benign technologies, it’s to make artificial meat, for instance. It’s clear that we can more easily feed a population of 9 billion on a vegetarian diet than on a traditional diet like Americans consume today.

To take one benign technology, I would say artificial meat is one, and more intensive farming so that we can feed people without encroaching too much on the natural part of the world. I'm optimistic about that. If we think about very long term trends, then life extension is something which, if it happens too quickly, is obviously going to be hugely disruptive: multi-generation families, et cetera.

Also, even though we will have the capability within a century to change human beings, I think we should constrain that on earth and just let that be done by the few crazy pioneers who go away into space. But if this does happen, then as I say in the introduction to my book, it will be a real game changer in a sense. I make the point that one thing that hasn’t changed over most of human history is human character. Evidence for this is that we can read the literature written by the Greeks and Romans more than 2,000 years ago and resonate with the people, and their characters, and their attitudes and emotions.

It's not at all clear that on some scenarios, people 200 years from now will resonate in anything other than an algorithmic sense with the attitudes we have as humans today. That will be a fundamental, and very fast, change in the nature of humanity. The question is, can we do something to at least constrain the rate at which that happens, or at least constrain the way in which it happens? But it is going to be almost certainly possible to completely change human mentality, and maybe even human physique, over that time scale. One has only to listen to people like George Church to realize that it's not crazy to imagine this happening.

Ariel: You mentioned in the book that there’s lots of people who are interested in cryogenics, but you also talked briefly about how there are some negative effects of cryogenics, and the burden that it puts on the future. I was wondering if you could talk really quickly about that?

Martin: There are some people, I know some, who have a medallion around their neck which is an injunction that, if they drop dead, they should be immediately frozen, their blood drained and replaced by liquid nitrogen, and that they should then be stored (there's a company called Alcor in Arizona that does this) and allegedly revived at some stage when technology has advanced. I find it hard to take this seriously, but they say that, well, the chance may be small, but if they don't invest this way then the chance of a resurrection is zero.

But I actually think that even if it worked, even if the company didn’t go bust, and sincerely maintained them for centuries and they could then be revived, I still think that what they’re doing is selfish, because they’d be revived into a world that was very different. They’d be refugees from the past, and they’d therefore be imposing an obligation on the future.

We obviously feel an obligation to look after some asylum seeker or refugee, and we might feel the same if someone had been driven out of their home in the Amazonian forest for instance, and had to find a new home, but these refugees from the past, as it were, they’re imposing a burden on future generations. I’m not sure that what they’re doing is ethical. I think it’s rather selfish.

Ariel: I hadn’t thought of that aspect of it. I’m a little bit skeptical of our ability to come back.

Martin: I agree. I think the chances are almost zero. Even if they were stored properly, et cetera, one would like to see this technology tried on some animal first, to see if you could freeze an animal at liquid nitrogen temperatures and then revive it. I think it's pretty crazy. Then of course, the number of people doing it is fairly small, and some of the companies doing it (there's one in Russia) are real ripoffs I think, and won't survive. But as I say, even if these companies did keep going for a couple of centuries, or however long is necessary, then it's not clear to me that it's doing good. I also quoted this nice statement: "What happens if we clone and create a Neanderthal? Do we put him in a zoo or send him to Harvard?" said the professor from Stanford.

Ariel: Those are ethical considerations that I don't see very often. We're so focused on what we can do that sometimes we forget to ask, "Okay, once we've done this, what happens next?"

I appreciate you being here today. Those were my questions. Was there anything else that you wanted to mention that we didn’t get into?

Martin: One thing we didn’t discuss, which was a serious issue, is the limits of medical treatment, because you can make extraordinary efforts to keep people alive long before they’d have died naturally, and to keep alive babies that will never live a normal life, et cetera. Well, I certainly feel that that’s gone too far at both ends of life.

One should not devote so much effort to extremely premature babies, and we should allow people to die more naturally. Actually, if you asked me about predictions I'd make about the next 30 or 40 years: first, more vegetarianism; secondly, more euthanasia.

Ariel: I support both, vegetarianism, and I think euthanasia should be allowed. I think it’s a little bit barbaric that it’s not.

Martin: Yes.

I think we’ve covered quite a lot, haven’t we?

Ariel: I tried to.

Martin: I'd just like to mention that my book touches a lot of bases in a fairly short space. I hope it will be read not just by scientists. It's not really a science book, although it emphasizes how scientific ideas are what's going to determine how our civilization evolves. I'd also like to say that for those of us in universities, we know students are only with us for an interim period, but universities like MIT and my University of Cambridge have the convening power to gather people together to address these questions.

I think the value of the centers which we have in Cambridge, and you have at MIT, is that they are groups which are trying to address these very, very big issues, these threats and opportunities. The stakes are so high that if our efforts can really reduce the risk of a disaster by one part in 10,000, we've more than earned our keep. I'm very supportive of our Centre for Existential Risk in Cambridge, and also the Future of Life Institute which you have at MIT.

Given the huge numbers of people who are thinking about small risks like which foods are carcinogenic, and the threats of low radiation doses, et cetera, it’s not at all inappropriate that there should be some groups who are focusing on the more extreme, albeit perhaps rather improbable threats which could affect the whole future of humanity. I think it’s very important that these groups should be encouraged and fostered, and I’m privileged to be part of them.

Ariel: All right. Again, the book is On the Future: Prospects for Humanity by Martin Rees. I do want to add, I agree with what you just said. I think this is a really nice introduction to a lot of the risks that we face. I started taking notes about the different topics that you covered, and I don’t think I got all of them, but there’s climate change, nuclear war, nuclear winter, biodiversity loss, overpopulation, synthetic biology, genome editing, bioterrorism, biological errors, artificial intelligence, cyber technology, cryogenics, and the various topics in physics, and as you mentioned the role that scientists need to play in ensuring a safe future.

I highly recommend the book as a really great introduction to the potential risks, and the hopefully much greater potential benefits that science and technology can pose for the future. Martin, thank you again for joining me today.

Martin: Thank you, Ariel, for talking to me.

[end of recorded material]

Podcast: AI and Nuclear Weapons – Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

In 1983, Soviet military officer Stanislav Petrov prevented what could have been a devastating nuclear war by trusting his gut instinct that the algorithm in his early-warning system wrongly sensed incoming missiles. In this case, we praise Petrov for choosing human judgment over the automated system in front of him. But what will happen as the AI algorithms deployed in the nuclear sphere become much more advanced, accurate, and difficult to understand? Will the next officer in Petrov’s position be more likely to trust the “smart” machine in front of him?

On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official, and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is professor of political science at the University of Pennsylvania, and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.

Topics discussed in this episode include:

  • The sophisticated military robots developed by Soviets during the Cold War
  • How technology shapes human decision-making in war
  • “Automation bias” and why having a “human in the loop” is much trickier than it sounds
  • The United States’ stance on automation with nuclear weapons
  • Why weaker countries might have more incentive to build AI into warfare
  • How the US and Russia perceive first-strike capabilities
  • “Deep fakes” and other ways AI could sow instability and provoke crisis
  • The multipolar nuclear world of US, Russia, China, India, Pakistan, and North Korea
  • The perceived obstacles to reducing nuclear arsenals

Publications discussed in this episode include:

You can listen to the podcast above and read the full transcript below. Check out our previous podcast episodes on SoundCloud, iTunes, GooglePlay, and Stitcher.

Ariel: Hello, I am Ariel Conn with the Future of Life Institute. I am just getting over a minor cold and while I feel okay, my voice may still be a little off so please bear with any crackling or cracking on my end. I’m going to try to let my guests Paul Scharre and Mike Horowitz do most of the talking today. But before I pass the mic over to them, I do want to give a bit of background as to why I have them on with me today.

September 26th was Petrov Day. This year marked the 35th anniversary of the day that basically World War III didn't happen. On September 26th in 1983, Petrov, who was part of the Soviet military, got a notification from the automated early warning system he was monitoring that there was an incoming nuclear attack from the US. But Petrov thought something seemed off.

From what he knew, if the US were going to launch a surprise attack, it would be an all-out strike and not just the five weapons that the system was reporting. Without being able to confirm whether the threat was real or not, Petrov followed his gut and reported to his commanders that this was a false alarm. He later became known as “the man who saved the world” because there’s a very good chance that the incident could have escalated into a full-scale nuclear war had he not reported it as a false alarm.

Now this 35th anniversary comes at an interesting time as well because last month in August, the United Nations Convention on Conventional Weapons convened a meeting of a Group of Governmental Experts to discuss the future of lethal autonomous weapons. Meanwhile, also on September 26th, governments at the United Nations held a signing ceremony to add more signatures and ratifications to last year’s treaty, which bans nuclear weapons.

It does feel like we're at a bit of a turning point in military and weapons history. On one hand, we've seen rapid advances in artificial intelligence in recent years, and the combination of AI and weaponry has been referred to as the third revolution in warfare after gunpowder and nuclear weapons. On the other hand, despite the recent ban on nuclear weapons, the nuclear powers which have not signed the treaty are taking steps to modernize their nuclear arsenals.

This begs the question, what happens if artificial intelligence is added to nuclear weapons? Can we trust automated and autonomous systems to make the right decision as Petrov did 35 years ago? To consider these questions and many others, I have Paul Scharre and Mike Horowitz with me today. Paul is the author of Army of None: Autonomous Weapons in the Future of War. He is a former Army Ranger and Pentagon policy official, currently working as Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security.

Mike Horowitz is professor of political science and the Associate Director of Perry World House at the University of Pennsylvania. He’s the author of The Diffusion of Military Power: Causes and Consequences for International Politics, and he’s an adjunct Senior Fellow at the Center for a New American Security.

Paul and Mike first, thank you so much for joining me today.

Paul: Thank you, thanks for having us.

Mike: Yeah, excited for the conversation.

Ariel: Excellent, so before we get too far into this, I was hoping you could talk a little bit about just what the current status is of artificial intelligence in weapons, and in nuclear weapons; maybe more specifically, is AI being used in nuclear weapon systems today? In 2015, Russia announced a nuclear submarine drone called Status 6; I'm curious what the status of that is. Are other countries doing anything with AI in nuclear weapons? That's a lot of questions, so I'll turn that over to you guys now.

Paul: Okay, all right, let me jump in first and then Mike can jump right in and correct me. You know, I think if there’s anything that we’ve learned from science fiction from War Games to Terminator, it’s that combining AI and nuclear weapons is a bad idea. That seems to be the recurring lesson that we get from science fiction shows. Like many things, the sort of truth here is less dramatic but far more interesting actually, because there is a lot of automation that already exists in nuclear weapons and nuclear operations today and I think that is a very good starting point when we think about going forward, what has already been in place today?

The Petrov incident is a really good example of this. On the one hand, if the Petrov incident captures one simple point, it's the benefit of human judgment. One of the things that Petrov talks about is that when evaluating what to do in this situation, there was a lot of extra contextual information that he could bring to bear that was outside of what the computer system itself knew. The computer system knew that there had been some flashes that the Soviet satellite early warning system had picked up, that it interpreted as missile launches, and that was it.

But when he was looking at this, he was also thinking about the fact that it was a brand new system, they had just deployed Oko, the Soviet early warning satellite system, and it might be buggy, as all technology is, and as Soviet technology in particular was at the time. He knew that there could be lots of problems. But also, he was thinking about what the Americans would do, and from his perspective, as he said later (and we know what he concluded, because he did report a false alarm), he didn't think it made sense for the Americans to only launch five missiles. Why would they do that?

If you were going to launch a first strike, it would be overwhelming. From his standpoint, sort of this didn’t add up. That contributed to what he said ultimately was sort of 50/50 and he went with his gut feeling that it didn’t seem right to him. Of course, when you look at this, you can ask well, what would a computer do? The answer is, whatever it was programmed to do, which is alarming in that kind of instance. But when you look at automation today, there are lots of ways that automation is used and the Petrov incident illuminates some of this.

For example, automation is used in early warning systems, both radars and satellite, infrared and other systems to identify objects of interest, label them, and then cue them to human operators. That’s what the computer automated system was doing when it told Petrov there were missile launches; that was an automated process.

We also see in the Petrov incident the importance of the human-automation interface. He talks about there being a flashing red screen, it saying “missile launch” and all of these things being, I think, important factors. We think about how this information is actually conveyed to the human, and that changes the human decision-making as part of the process. So there were partial components of automation there.

In the Soviet system, there have been components of automation in the way the launch orders are conveyed, in terms of rockets that would be launched and then fly over the Soviet Union, now Russia, to beam down launch codes. This is, of course, contested, but as reportedly came out after the end of the Cold War, there was even some talk of, and according to some sources actual deployment of, a semi-automated Dead Hand system, called Perimeter. It could be activated by the Soviet leadership in a crisis, and then, if the leadership in Moscow was taken out and did not check in after a certain period of time to show that they were still communicating, launch codes would be passed down to a bunker with a Soviet officer in it, a human who would make the final call to convey automated launch orders for a retaliatory strike. There was still a human in the loop, but it was one human instead of the Soviet leadership, to launch a retaliatory strike if their leadership had been taken out.

Then there are certainly, when you look at some of the actual delivery vehicles, things like bombers, there’s a lot of automation involved in bombers, particularly for stealth bombers, there’s a lot of automation required just to be able to fly the aircraft. Although, the weapons release is controlled by people.

You're in a place today where all of the weapons decision-making is controlled by people, but they may be making decisions that are based on information that's been given to them through automated processes and filtered through automated processes. Then once humans have made these decisions, they may be conveyed and those orders passed along to other people or through other automated processes as well.

Mike: Yeah, I think that that's a great overview and I would add two things to give some additional context. First, in some ways the nuclear weapons enterprise is already among the most automated areas for the use of force, because the stakes are so high. When countries are thinking about using nuclear weapons, whether it's the United States or Russia or other countries, it's usually because they perceive an existential threat. Countries have already attempted to build in significant automation and redundancy to try to make their threats more credible.

The second thing is, I think Paul is absolutely right about the Petrov incident, but the other thing that it demonstrates to me, which I think we forget sometimes, is that we're fond of talking about technological change and the way that technology can shape how militaries act and shape the nuclear weapons complex, but it's organizations and people that make choices about how to use technology. They're not just passive actors, and different organizations make different kinds of choices about how to integrate technology depending on their standard operating procedures, their institutional history, and their bureaucratic priorities. It's important, I think, not to just look at something like AI in a vacuum but to try to understand the way that different nuclear powers, say, might think about it.

Ariel: I don’t know if this is fair to ask but how might the different nuclear powers think about it?

Mike: From my perspective, I think an interesting thing you're seeing now is the difference between how the United States has talked about autonomy in the nuclear weapons enterprise and how some other countries have. US military leaders have been very clear that they have no interest in autonomous systems armed with nuclear weapons, for example. Of all the things in the world that one might use autonomous systems for, it's one of the few areas where US military leaders have actually been very explicit.

I think in some ways, that’s because the United States is generally very confident in its second strike deterrent, and its ability to retaliate even if somebody else goes first. Because the United States feels very confident in its second strike capabilities, that makes the, I think, temptation of full automation a little bit lower. In some ways, the more a country fears that its nuclear arsenal could be placed at risk by a first strike, the stronger its incentives to operate faster and to operate even if humans aren’t available to make those choices. Those are the kinds of situations in which autonomy would potentially be more attractive.

In comparisons of nuclear states, it's generally the weaker one from a nuclear weapons perspective that I think will, all other things being equal, be more inclined to use automation, because they fear the risk of being disarmed through a first strike.

Paul: This is such a key thing: when you look at what is still a small number of countries that have nuclear weapons, they have very different strategic positions, different sizes of arsenals, different threats that they face, different degrees of survivability, and very different risk tolerances. Certainly within American thinking about nuclear stability, there's a clear strain of thought about what stability means. Many countries may see this very, very differently, and you can see this even during the Cold War, where you had approximate parity in the kinds of arsenals between the US and the Soviet Union, but the two sides still thought about stability very differently.

The semi-automated Dead Hand system, Perimeter, is a great example of this. When this came out afterwards, from sort of a US standpoint of thinking about risk, people were just aghast at it; it's a bit terrifying to think about something that is even semi-automated, that might have only sort of one human involved. But from the Soviet standpoint, this made an incredible amount of strategic sense. And not for sort of the Dr. Strangelove reason that you want to tell the enemy in order to deter them, which is how I think Americans might tend to think about this, because they didn't actually tell the Americans.

The real rationale on the Soviet side was to reduce the pressure on their leaders to make a use-it-or-lose-it decision with their arsenal. If there was something like a Petrov incident, where there were some indications of a launch and maybe some ambiguity about whether there was a genuine American first strike, but they were concerned that their leadership in Moscow might be taken out, they could activate this system and trust that if there was in fact an American first strike that took out the leadership, there would still be a sufficient retaliation, instead of feeling like they had to rush to retaliate.

Countries are going to see this very differently, and that's of course one of the challenges in thinking about stability: not to fall into the trap of mirror-imaging.

Ariel: This brings up actually two points that I have questions about. I want to get back to the stability concept in a minute but first, one of the things I’ve been reading a bit about is just this idea of perception and how one country’s perception of another country’s arsenal can impact how their own military development happens. I was curious if you could talk a little bit about how the US perceives Russia or China developing their weapons and how that impacts us and the same for those other two countries as well as other countries around the world. What impact is perception having on how we’re developing our military arsenals and especially our nuclear weapons? Especially if that perception is incorrect.

Paul: Yeah, I think the origins of the idea of nuclear stability really speak to this. The idea came out in the 1950s among American strategists when they were looking at the US nuclear arsenal in Europe, and they realized that it was vulnerable to a first strike by the Soviets, that American airplanes sitting on the tarmac could be attacked by a Soviet first strike and that might wipe out the US arsenal, and that, knowing this, they might in a crisis feel compelled to launch their aircraft sooner. That might actually incentivize them to use them or lose them, right? Use the aircraft, launch them, versus have them wiped out.

If the Soviets knew this, then that perception alone, that the Americans might launch their aircraft if things started to get heated, might incentivize the Soviets to strike first. Schelling has a quote about them striking us to prevent us from striking them and preventing them from striking us. This sort of gunslinger potential of everyone reaching for their guns to draw first, because someone else might do so, is not just a technical problem, it's also one of perception, and so I think it's baked right into this whole idea. It happens on slower time scales when you look at arms race stability and arms race dynamics, what countries invest in, building more missiles, more bombers, because of the concern about the threat from someone else. But also in a more immediate sense of crisis stability, the actions that leaders might take immediately in a crisis to anticipate and prepare for what they fear others might do as well.

Mike: I would add on to that, that I think it depends a little bit on how accurate you think the information that countries have is. If you imagine your evaluation of a country is based classically on their capabilities and then their intentions. Generally, we think that you have a decent sense of a country’s capabilities and intentions are hard to measure. Countries assume the worst, and that’s what leads to the kind of dynamics that Paul is talking about.

I think the perception of other countries’ capabilities, I mean there’s sometimes a tendency to exaggerate the capabilities of other countries, people get concerned about threat inflation, but I think that’s usually not the most important programmatic driver. There’s been significant research now on the correlates of nuclear weapons development, and it tends to be security threats that are generally pretty reasonable in that you have neighbors or enduring rivals that actually have nuclear weapons, and that you’ve been in disputes with and so you decide you want nuclear weapons because nuclear weapons essentially function as invasion insurance, and that having them makes you a lot less likely to be invaded.

And that's a lesson the United States, by the way, has taught the world over and over during the last few decades; you look at Iraq, Libya, et cetera. And so I think the perception of other countries' capabilities can be important for your actual launch posture. That's where I think issues like speed can come in, and where automation could maybe come in, in the launch process potentially. But I think that in general, it's sort of deeper issues, generally real security challenges or legitimately perceived security challenges, that tend to drive countries' weapons development programs.

Paul: This issue of perception of intention in a crisis is just absolutely critical, because there is so much uncertainty, and of course there's something that usually precipitates a crisis, so leaders don't want to back down; there's usually something at stake other than avoiding nuclear war that they're fighting over. You see many aspects of this coming up during the much-analyzed Cuban Missile Crisis, where you see Kennedy and his advisors trying to ascertain what the different actions that the Cubans or Soviets take mean for their intentions and their willingness to go to war, but then conversely, you see a lot of concern by Kennedy's advisors about actions that the US military takes that may not be directed by the president, that are accidents, slippages in the system, or friction in the system, and then worrying that the Soviets will over-interpret these as deliberate moves.

I think right there you see a couple of components where you could see automation and AI being potentially useful. One of which is reducing some of the uncertainty and information asymmetry: if you could find ways to use the technology to get a better handle on what your adversary was doing, their capabilities, the location and disposition of their forces, and their intentions, sort of peeling back some of the fog of war, but also increasing command and control within your own forces. If you could sort of tighten command and control, have forces that were more directly connected to the national leadership, and less opportunity for freelancing on the ground, there could be some advantages there, in that there'd be less opportunity for misunderstanding and miscommunication.

Ariel: Okay, so again, I have multiple questions that I want to follow up with and they’re all in completely different directions. I’m going to come back to perception because I have another question about that but first, I want to touch on the issue of accidents. Especially because during the Cuban Missile Crisis, we saw an increase in close calls and accidents that could have escalated. Fortunately, they didn’t, but a lot of them seemed like they could very reasonably have escalated.

I think it’s ideal to think that we can develop technology that can help us minimize these risks, but I kind of wonder how realistic that is. Something else that you mentioned earlier with tech being buggy, it does seem as though we have a bad habit of implementing technology while it is still buggy. Can we prevent that? How do you see AI being used or misused with regards to accidents and close calls and nuclear weapons?

Mike: Let me jump in here, I would take accidents and split it into two categories. The first are cases like the Cuban Missile Crisis where what you’re really talking about is miscalculation or escalation. Essentially, a conflict that people didn’t mean to have in the first place. That’s different I think than the notion of a technical accident, like a part in a physical sense, you know a part breaks and something happens.

Both of those are potentially important and both of those are potentially influenced by… AI interacts with both of those. If you think about challenges surrounding the robustness of algorithms, the risk of hacking, the lack of explainability, Paul’s written a lot about this, and that I think functions not exclusively, but in many ways on the technical accident side.

The miscalculation side, the piece of AI I actually worry about the most are not uses of AI in the nuclear context, it’s conventional deployments of AI, whether autonomous weapons or not, that speed up warfare and thus cause countries to fear that they’re going to lose faster because it’s that situation where you fear you’re going to lose faster that leads to more dangerous launch postures, more dangerous use of nuclear weapons, decision-making, pre-delegation, all of those things that we worried about in the Cold War and beyond.

I think the biggest risk from an escalation perspective, at least for my money, is actually the way that the conventional uses of AI could cause crisis instability, especially for countries that don’t feel very secure, that don’t think that their second strike capabilities are very secure.

Paul: I think that your question about accidents gets to really the heart of what do we mean by stability? I’m going to paraphrase from my colleague Elbridge Colby, who does a lot of work on nuclear issues and  nuclear stability. What you really want in a stable situation is a situation where war only occurs if one side truly seeks it. You don’t get an escalation to war or escalation of crises because of technical accidents or miscalculation or misunderstanding.

There could be multiple different kinds of causes that might lead you to war. And one of those might even be perverse incentives: a deployment posture, for example, that might lead you to say, "Well, I need to strike first because of a fear that they might strike me," and you want to avoid that kind of situation. I think that there's lots to be said for human involvement in all of these things, and I want to say right off the bat, humans bring to bear judgment and an understanding of context that AI systems today simply do not have. At least we don't see that in development based on the state of the technology today. Maybe it's five years away, 50 years away, I have no idea, but we don't see that today. I think that's really important to say up front. Having said that, when we're thinking about the way that these nuclear arsenals are designed in their entirety, the early warning systems, the way that data is conveyed throughout the system and presented to humans, the way the decisions are made, the way that those orders are then conveyed to launch delivery vehicles, it's worth looking at new technologies and processes and saying, could we make it safer?

We have had a terrifying number of near misses over the years. No actual nuclear use because of accidents or miscalculation, but it’s hard to say how close we’ve been and this is I think a really contested proposition. There are some people that can look at the history of near misses and say, “Wow, we are playing Russian roulette with nuclear weapons as a civilization and we need to find a way to make this safer or disarm or find a way to step back from the brink.” Others can look at the same data set and say, “Look, the system works. Every single time, we didn’t shoot these weapons.”

I will just observe that we don't have a lot of data points or a long history here, so I think there should be huge error bars on whatever we suggest about the future, and we have very little data at all about actual people's decision-making for false alarms in a crisis. We've had some instances where there have been false alarms like the Petrov incident. There have been a few others, but we don't really have a good understanding of how people would respond to that in the midst of a heated crisis like the Cuban Missile Crisis.

When you think about using automation, there are ways that we might try to make this entire socio-technical architecture of responding to nuclear crises and making a decision about reacting, safer and more stable. If we could use AI systems to better understand the enemy’s decision-making or the factual nature of their delivery platforms, that’s a great thing. If you could use it to better convey correct information to humans, that’s a good thing.

Mike: Paul, I would add, if you can use AI to buy decision-makers time, if essentially the speed of processing means that humans then feel like they have more time, which you know decreases their cognitive stress somehow, psychology would suggest, that could in theory be a relevant benefit.

Paul: That's a really good point, and Thomas Schelling again talks about the real key role that time plays here, which is a driver of potentially rash actions in a crisis. Because you know, if you have a false alert of your adversary launching a missile at you, which has happened a couple of times, at least two instances on each side, the American and the Soviet, during the Cold War and immediately afterwards.

If you have sort of this false alarm but you have time to get more information, to call them on a hotline, to make a decision, then that takes the pressure off of making a bad decision. In essence, you want to sort of find ways to change your processes or technology to buy down the rate of false alarms and ensure that in the instance of some kind of false alarm, that you get kind of the right decision.

But you also would conversely want to increase the likelihood that if policymakers did make a rational decision to use nuclear weapons, it's actually conveyed, because that is of course part of the essence of deterrence: knowing that if you were to use these weapons, the enemy would respond in kind, and that's what in theory deters use.

Mike: Right, what you want is no one to use nuclear weapons unless they genuinely mean to, but if they genuinely mean to, we want that to occur.

Paul: Right, because that's what's going to prevent the other side from doing it. There's this paradox, what Scott Sagan refers to in his book on nuclear accidents as the "always/never dilemma": that the weapons are always used when it's intentional but never used by accident or miscalculation.

Ariel: Well, I’ve got to say I’m hoping they’re never used intentionally either. I’m not a fan, personally. I want to touch on this a little bit more. You’re talking about all these ways that the technology could be developed so that it is useful and does hopefully help us make smarter decisions. Is that what you see playing out right now? Is that how you see this technology being used and developed in militaries or are there signs that it’s being developed faster and possibly used before it’s ready?

Mike: I think in the nuclear realm, countries are going to be very cautious about using algorithms, autonomous systems, whatever terminology you want to use, to make fundamental choices or decisions about use. To the extent that there’s risk in what you’re suggesting, I think that those risks are probably, for my money, higher outside the nuclear enterprise simply because that’s an area where militaries I think are inherently a little more cautious, which is why if you had an accident, I think it would probably be because you had automated perhaps some element of the warning process and your future Petrovs essentially have automation bias. They trust the algorithms too much. That’s a question, they don’t use judgment as Paul was suggesting, and that’s a question of training and doctrine.

For me, it goes back to what I suggested before about how technology doesn’t exist in a vacuum. The risks to me depend on training and doctrine in some ways as much about the technology itself but actually, the nuclear weapons enterprise is an area where militaries in general, will be a little more cautious than outside of the nuclear context simply because the stakes are so high. I could be wrong though.

Paul: I don’t really worry too much that you’re going to see countries set up a process that would automate entirely the decision to use nuclear weapons. That’s just very hard to imagine. This is the most conservative area where countries will think about using this kind of technology.

Having said that, I would agree that there are lots more risks outside of the nuclear launch decision, that could pertain to nuclear operations or could be in a conventional space, that could have spillover to nuclear issues. Some of them could involve the use of AI in early warning systems, and then there's the automation bias risk: that what the system detects is conveyed to people in a way that doesn't capture the nuance of what the system is actually detecting, and the potential for accidents if people over-trust the automation. There's plenty of examples of humans over-trusting automation in a variety of settings.

But some of these could be far afield, in things that are not military at all, right? Look at technology like AI-generated deep fakes and imagine a world where now, in a crisis, someone releases a video or an audio clip of a national political leader making some statement, and that further inflames the crisis and perhaps introduces uncertainty about what someone might do. That's actually really frightening; that could be a catalyst for instability, and it could be outside of the military domain entirely. Hats off to Phil Reiner, who works on these issues in California and who's sort of raised this one about deep fakes.

But I think that there’s a host of ways that you could see this technology raising concerns about instability that might be outside of nuclear operations.

Mike: I agree with that. I think the biggest risks here are from the way that a crisis, the use of AI outside the nuclear context, could create or escalate a crisis involving one or more nuclear weapons states. It’s less AI in the nuclear context, it’s more whether it’s the speed of war, whether it’s deep fakes, whether it’s an accident from some conventional autonomous system.

Ariel: That sort of comes back to a perception question that I didn't get a chance to ask earlier, and that is: something else I read is that there's a risk that if a country's consumer industry or tech industry is designing AI capabilities, other countries can perceive that as automatically being used in weaponry, or more specifically, nuclear weapons. Do you see that as being an issue?

Paul: If you're in general concerned about militaries importing commercially-driven technology like AI into the military space and using it, I think it's reasonable to think that militaries are going to try to look for technology to get advantages. The one thing that I would say might help calm some of those fears is that the best sort of friend for someone who's concerned about that is the slowness of the military acquisition processes, which move at like a glacial pace and are actually a huge hindrance to a lot of technology adoption.

I think it's valid to ask for any technology how its use would affect global peace and security, positively or negatively, and if something looks particularly dangerous, to sort of have a conversation about that. I think it's great that there are a number of researchers in different organizations thinking about this. I think it's great that FLI has raised this, and there are good people at RAND; Ed Geist and Andrew Lohn have written a report on AI and nuclear stability; Laura Saalman and Vincent Boulanin at SIPRI work on this, funded by the Carnegie Corporation; and Phil Reiner, who I mentioned a second ago, I blanked on his organization, it's Technology for Global Security, is thinking about a lot of these challenges. But I wouldn't leap to assume that just because something is out there, that means that militaries are always going to adopt it. Militaries have their own strategic and bureaucratic interests at stake that are going to influence what technologies they adopt and how.

Mike: I would add to that, if the concern is that countries see US consumer and commercial advances and then presume there’s more going on than there actually is, maybe, but I think it’s more likely that countries like Russia and China and others think about AI as an area where they can generate potential advantages. These are countries that have trailed the American military for decades and have been looking for ways to potentially leap ahead or even just catch up. There are also more autocratic countries that don’t trust their people in the first place and so I think to the extent you see incentives for development in places like Russia and China, I think those incentives are less about what’s going on in the US commercial space and more about their desire to leverage AI to compete with the United States.

Ariel: Okay, so I want to shift slightly but also still continuing with some of this stuff. We talked about the slowness of the military to take on new acquisitions and transform, I think, essentially. One of the things that to me, it seems like we still sort of see and I think this is changing, I hope it’s changing, is treating a lot of military issues as though we’re still in the Cold War. When I say I’ve been reading stuff, a lot of what I’ve been reading has been coming from the RAND report on AI and nuclear weapons. And they talk a lot about bipolarism versus multipolarism.

If I understand this correctly, bipolarism is a bit more like what we saw with the Cold War where you have the US and allies versus Russia and whoever. Basically, you have that sort of axis between those two powers. Whereas today, we’re seeing more multipolarism where you have Russia and the US and China and then there’s also things happening with India and Pakistan. North Korea has been putting itself on the map with nuclear weapons.

I was wondering if you can talk a bit about how you see that impacting how we continue to develop nuclear weapons, how that changes strategy and what role AI can play, and correct me if I’m wrong in my definitions of multipolarism and bipolarism.

Mike: Sure, I mean when you talk about a bipolar nuclear situation during the Cold War, essentially what that reflects is that the United States and the then-Soviet Union had the only two nuclear arsenals that mattered. Either the United States or the Soviet Union could essentially destroy any other country in the world, even after absorbing a hit from that country's nuclear arsenal. Whereas since the end of the Cold War, you've had several other countries, including China, as well as India, Pakistan, and to some extent now North Korea, who have not just developed nuclear arsenals but developed more sophisticated nuclear arsenals.

That's part of the ongoing debate in the United States, and whether it's even debatable is, I think, a question: whether the United States is now vulnerable to China's nuclear arsenal, meaning the United States could no longer launch a first strike against China. In general, you've ended up in a more multipolar nuclear world, in part because I think the United States and Russia, for their own reasons, spent a few decades not really investing in their underlying nuclear weapons complex, and I think the fear of a developing multipolar nuclear structure is one reason why the United States, under the Obama administration and then continuing in the Trump administration, has ramped up its efforts at nuclear modernization.

I think AI could play in here in some of the ways that we’ve talked about, but I think AI in some ways is not the star of the show. The star of the show remains the desire by countries to have secure retaliatory capabilities and on the part of the United States, to have the biggest advantage possible when it comes to the sophistication of its nuclear arsenal. I don’t know what do you think, Paul?

Paul: I think to me the way that the international system and the polarity, if you will, impacts this issue mostly is that cooperation gets much harder when the number of actors whose cooperation is needed increases, when the "n" goes from 2 to 6 or 10 or more. AI is a relatively diffuse technology; while there's only a handful of actors internationally that are at the leading edge, this technology proliferates fairly rapidly, and so it will be widely available to many different actors to use.

To the extent that there are maybe some types of applications of AI that might be seen as problematic in the nuclear context, either in nuclear operations or related or incidental to them, it's much harder to try to control that when you have to get more people on board and in agreement. For example, I'll make this up hypothetically, let's say that there are only two global actors who could make high resolution deep fake videos. You might say, "Listen, let's agree not to do this in a crisis, or let's agree not to do this for manipulative purposes to try to stoke a crisis." When anybody could do it on a laptop, then like forget about it, right? That's a world we've got to live with.

You certainly see this historically when you look at different arms control regimes. There was a flurry of arms control actually during the Cold War both bipolar between the US and USSR, but then also multi-lateral ones that those two countries led because you have a bipolar system. You saw attempts earlier in the 20th century to do arms control that collapsed because of some of these dynamics.

During the '20s, the naval treaties governing the number and the tonnage of battleships that countries built collapsed because there was one defector, initially Japan, who thought they'd gotten sort of a raw deal in the treaty, defecting and then others following suit. We've seen this since the end of the Cold War with the end of the Anti-Ballistic Missile Treaty, and now sort of the degradation of the INF Treaty, with Russia cheating on it and INF being under threat. The concern is that you have both the United States and Russia reacting to what other countries were doing: in the case of the Anti-Ballistic Missile Treaty, the US being concerned about ballistic missile threats from North Korea and Iran and deploying limited missile defense systems, and then Russia being concerned that that either was actually secretly aimed at them or might have the effect of reducing their posture, and the US withdrawing entirely from the ABM Treaty to be able to do that. That's sort of been one unraveling.

In the case of the INF Treaty, Russia is looking at what China, which is not a signatory to INF, is building, and is now building missiles that violate the INF Treaty. That's a much harder dynamic when you have multiple different countries at play, and countries having to respond to security threats from different actors that may be diverse and asymmetric.

Ariel: You’ve touched on this a bit already but especially with what you were just talking about and getting various countries involved and how that makes things a bit more challenging what specifically do you worry about if you’re thinking about destabilization? What does that look like?

Mike: I would say destabilization for whom is the operative question, in that there's been a lot of empirical research now suggesting that the United States never really fully bought into mutually assured destruction. The United States sort of gave lip service to the idea while still pursuing avenues for nuclear superiority even during the Cold War, and in some ways, a United States that somehow felt like its nuclear deterrent was inadequate would be a United States that probably invested a lot more in capabilities that one might view as destabilizing, if the United States perceived challenges from multiple different actors.

But I would tend to think about this in the context of individual pairs of states or small groups at states and that the notion that essentially you know, China worries about America’s nuclear arsenal, and India worries about China’s nuclear arsenal, and Pakistan worries about India’s nuclear arsenal and all of them would be terribly offended that I just said that. These relationships are complicated and in some ways, what generates instability is I think a combination of deterioration of political relations and a decreased feeling of security if the technological sophistication of the arsenals of potential adversaries grows.

Paul: I think I'm less concerned about countries improving their arsenals or military forces over time to try to gain an edge on adversaries. I think that's sort of a normal process that militaries and countries do. I don't think it's particularly problematic, to be honest with you, unless you get to a place where the amount of expenditure is so outrageous that it creates a strain on the economy, or you see them pursuing some race for a technology where, once they get there, there's sort of a winner-take-all mentality, right, of, "Oh, and then I need to use it": whoever gets to nuclear weapons first then uses nuclear weapons and gains the upper hand.

That creates incentives for launching a preventive war once you achieve the technology, which I think is going to be very problematic. Otherwise, upgrading our arsenal, improving it, I think is a normal kind of behavior. I'm more concerned about how you either use technology beneficially or avoid certain kinds of applications of technology that might create risks of accidents and miscalculations in a crisis.

For example, as we’re seeing countries acquire more drones and deploy them in military settings, I would love to see an international norm against putting nuclear weapons on a drone, on an uninhabited vehicle. I think that it is more problematic from a technical risk standpoint, and a technical accident standpoint, than certainly using them on an aircraft that has a human on board or on a missile, which doesn’t have a person on board but is a one-way vehicle. It wouldn’t be sent on patrol.

While I think it’s highly unlikely that, say, the United States would do this, in fact, they’re not even making their next generation B-21 Bomber uninhabited-

Mike: Right, the US has actively moved to not do this, basically.

Paul: Right, US Air Force generals have spoken out repeatedly saying they want no part of such a thing. We haven't seen the US voice this concern really publicly in any formal way, which I actually think could be beneficial to say more concretely in, for example, a speech by the Secretary of Defense. That might signal to other countries, "Hey, we actually think this is a dangerous thing." I could imagine other countries maybe having a different calculus, or seeing more advantages capability-wise to using drones in this fashion, but I think that could be dangerous and harmful. That's just one example.

I'm also really deeply concerned about automation bias. As we use AI tools to gather information, and as the way these tools function becomes more complicated and more opaque to the humans using them, you could run into a situation where people get a false alarm but have come to over-trust the automation. I think that's actually a huge risk, in part because you might not see it coming, because people would say, "Oh, humans are in the loop. Humans are in charge, it's no problem." But in fact, we're conveying information to people in a way that leads them to surrender judgment to the machines, even if the automation is only being used in information collection and has nothing to do with nuclear decision-making.

Mike: I think that those are both right, though I think I may be skeptical in some ways about our ability to generate norms around not putting nuclear weapons on drones.

Paul: I knew you were going to say that.

Mike: Not because I think it's a good idea; it's clearly a bad idea. But the country it's the worst idea for is the United States.

Paul: Right.

Mike: If a North Korea, or an India, or a China thinks they need that to generate stability, and having that option makes them feel more secure, I think it will be hard to talk them out of it if their alternative would be, say, land-based silos that they think would be more vulnerable to a first strike.

Paul: Well, I think it depends on the country, right? I mean, countries are sensitive at different levels to some of these perceptions of global norms of responsible behavior. Certainly North Korea is not going to care. You might see a country like India being more concerned about what is seen as appropriate, responsible behavior for a great power. I don't know. It would depend on how this was conveyed.

Mike: That’s totally fair.

Ariel: Man, I have to say, all of this is not making it clear to me why nuclear weapons are that beneficial in the first place. We don't have a ton of time, so I don't know that we need to get into that, but a lot of these threats seem obviously avoidable if we don't have the nukes to begin with.

Paul: Let me just respond to that briefly. I think there are two schools of thought here in terms of why nukes are valuable. One is that nuclear weapons reduce the risk of conventional war, so you're going to get less state-on-state warfare. If you had a world with no nuclear weapons at all, obviously the risk of nuclear armageddon would go to zero, which would be great, because that's not a good risk for us to be running.

Mike: Now the world is safe for major conventional war.

Paul: Right, but then you'd have more conventional war like we saw in World War I and World War II, and that led to tremendous devastation. So that's one school of thought. There's another that basically says the only thing nuclear weapons are good for is deterring others from using nuclear weapons. That's what former Secretary of Defense Robert McNamara has said, and he's certainly by no means a radical leftist. There's a strong school of thought among former defense and security professionals that getting to global zero would be good. But how you get there in a safe way is not at all clear, even if everyone agreed that's definitely where we want to go and that it's worth a trade-off of greater conventional war to take away the threat of armageddon.

Mike: The challenge is what happens when you go down to lower numbers. We talked before about how the United States and Russia have had the most significant nuclear arsenals, both in terms of numbers and sophistication. The lower the numbers go, the more small numbers matter, and so the more the arsenal of every nuclear power becomes important. And because countries don't trust each other, that could increase the risk that somebody essentially guns to be number one as you get closer to zero.

Paul: Right.

Ariel: I guess one of the things that isn't obvious to me is this: even if we're not aiming for zero, let's say we're aiming to decrease the number of nuclear weapons globally to the hundreds, rather than the 15,000-ish we're at right now. I worry that a lot of the advancing technology we're seeing with AI and automation, though maybe this would be happening anyway, seems to be driving the need for modernization, so we're seeing modernization happening rather than a decrease in weapons.

Mike: I think you're right to point out the drive for modernization as a trend. Part of it is simply the age of the arsenals and their components for some of these countries, including the United States. You have components designed to have a lifespan of, say, 30 years that have been used for 60 years, and the people who built some of those components in the first place have now mostly passed away. It's even hard to build some of them again.

I think it's totally fair to say that emerging technologies, including AI, could play a role in shaping modernization programs. Part of the incentive, I think, simply has to do with a desire for countries, including but not limited to the United States, to feel like their arsenals are reliable, which gets back to the perception issue you raised before, though that's self-perception in some ways more than anything else.

Paul: I think Mike's right that reliability is what's primarily motivating modernization. It's a concern that these things are aging and might not work. If you're in a situation where it's unclear whether they would work, that could actually reduce deterrence and create incentives for others to attack you, so you want your nuclear arsenal to be reliable.

There's probably a component of that too: as people modernize, they're trying to seek advantage over others. But I think it's worth taking a step back and looking at where we are today, with this legacy of the Cold War and the nuclear arsenals that are in place, and asking how confident we are that mutual deterrence won't lead to nuclear war in the future. I'm not super confident. I'm sort of in the camp that the history of near-miss accidents is pretty terrifying, and that there's probably been a lot of luck at play.

From my perspective, as we think about going forward, there's certainly an argument to be made for "let it all go to rust," and if you could get countries to do that collectively, all of them, maybe there'd be big advantages there. If that's not possible, and countries are modernizing their arsenals for the sake of reliability, then it's worth taking a step back and thinking about how you redesign these systems to be more stable, to increase deterrence, and to reduce the risk of false alarms and accidents overall, sort of "soup to nuts" when you're looking at the architecture.

I do worry that that's not a major feature when countries are looking at modernization. They're thinking about increasing the reliability of their systems working, the "always" component of the "always/never" dilemma, and about getting an advantage on others, but there may not be enough thought going into the "never" component: how do we ensure that we continue to buy down the risk of accidents or miscalculation?

Ariel: I guess the other thing that isn't obvious to me is, if we're modernizing our arsenals so that they're better, why doesn't that also mean smaller? Because we don't need 15,000 nuclear weapons.

Mike: I think there are actually people out there that view effective modernization as something that could enable reductions. Some of that depends on politics and depends on other international relations kinds of issues, but I certainly think it’s plausible that the end result of modernization could make countries feel more confident in nuclear reductions, all other things equal.

Paul: I mean, certainly the US and Russia have been working slowly to reduce their arsenals through a number of treaties. There was a big push in the Obama Administration to look for ways to continue doing so, but countries are going to want these to be mutual reductions, right? Not unilateral.

At a certain level of reductions in the US and Russian arsenals, you're going to get tied into what China's doing and the size of its arsenal becoming relevant, and you're also going to get tied into other strategic concerns for some of these countries when it comes to other technologies, like space-based weapons, anti-space weapons, or hypersonic weapons. The negotiations become more complicated.

That doesn't mean they're not valuable or worth doing, because while stability should be the goal, having fewer weapons overall is helpful in the sense that if there is, God forbid, some kind of nuclear exchange, there's just less destructive capability overall.

Ariel: Okay, and I’m going to end it on that note because we are going a little bit long here. There are quite a few more questions that I wanted to ask. I don’t even think we got into actually defining what AI on nuclear weapons looks like, so I really appreciate you guys joining me today and answering the questions that we were able to get to.

Paul: Thank you.

Mike: Thanks a lot. Happy to do it and happy to come back anytime.

Paul: Yeah, thanks for having us. We really appreciate it.

[end of recorded material]

Podcast: Artificial Intelligence – Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins

Experts predict that artificial intelligence could become the most transformative innovation in history, eclipsing both the development of agriculture and the industrial revolution. And the technology is developing far faster than the average bureaucracy can keep up with. How can local, national, and international governments prepare for such dramatic changes and help steer AI research and use in a more beneficial direction?

On this month’s podcast, Ariel spoke with Allan Dafoe and Jessica Cussins about how different countries are addressing the risks and benefits of AI, and why AI is such a unique and challenging technology to effectively govern. Allan is the Director of the Governance of AI Program at the Future of Humanity Institute, and his research focuses on the international politics of transformative artificial intelligence. Jessica is an AI Policy Specialist with the Future of Life Institute, and she’s also a Research Fellow with the UC Berkeley Center for Long-term Cybersecurity, where she conducts research on the security and strategy implications of AI and digital governance.

Topics discussed in this episode include:

  • Three lenses through which to view AI’s transformative power
  • Emerging international and national AI governance strategies
  • The risks and benefits of regulating artificial intelligence
  • The importance of public trust in AI systems
  • The dangers of an AI race
  • How AI will change the nature of wealth and power

Papers and books discussed in this episode include:

You can listen to the podcast above and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, GooglePlay, and Stitcher.

 

Ariel: Hi there, I'm Ariel Conn with the Future of Life Institute. As we record and publish this podcast, diplomats from around the world are meeting in Geneva to consider whether to negotiate a ban on lethal autonomous weapons. Since these are weapons designed to kill people, it's no surprise that countries would consider regulating or banning them, but what about all other aspects of AI? While most, if not all, AI researchers are designing the technology to improve health, ease strenuous or tedious labor, and generally improve our well-being, most researchers also acknowledge that AI will be transformative, and if we don't plan ahead, those transformations could be more harmful than helpful.

We're already seeing instances in which bias and discrimination have been amplified by AI programs; social media algorithms are being blamed for impacting elections; it's unclear how society will deal with the mass unemployment that many fear will result from AI developments; and that's just the tip of the iceberg. These are the problems that we already anticipate and that will likely arrive with the relatively narrow AI we have today. But what happens as AI becomes even more advanced? How can people, municipalities, states, and countries prepare for the changes ahead?

Joining us to discuss these questions are Allan Dafoe and Jessica Cussins. Allan is the Director of the Governance of AI program at the Future of Humanity Institute, and his research focuses on the international politics of transformative artificial intelligence. His research seeks to understand the causes of world peace, particularly in the age of advanced artificial intelligence.

Jessica is an AI Policy Specialist with the Future of Life Institute, where she explores AI policy considerations for near and far term. She’s also a Research Fellow with the UC Berkeley Center for Long-term Cybersecurity, where she conducts research on the security and strategy implications of AI and digital governance. Jessica and Allan, thank you so much for joining us today.

Allan: Pleasure.

Jessica: Thank you, Ariel.

Ariel: I want to start with a quote, Allan, that's on your website and also in a paper that you're working on that we'll get to later, where it says, "AI will transform the nature of wealth and power." And I think that's sort of at the core of a lot of the issues that we're concerned about in terms of what the future will look like, how we need to think about what impact AI will have on us, and how we deal with that, and more specifically, how governments and corporations need to deal with it. So, I was hoping you could talk a little bit about the quote first and how it's influencing your own research.

Allan: I would be happy to. So, we can think of this as a proposition that may or may not be true, and I think we could easily spend the entire time talking about the reasons why we might think it’s true and the character of it. One way to motivate it, as I think has been the case for people, is to consider that it’s plausible that artificial intelligence would at some point be human-level in a general sense, and to recognize that that would have profound implications. So, you can start there, as, for example, if you were to read Superintelligence by Nick Bostrom, you sort of start at some point in the future and reflect on how profound this technology would be. But I think you can also motivate this with much more near-term perspective and thinking of AI more in a narrow sense.

So, I will offer three lenses for thinking about AI, and then I'm happy to discuss it more. The first lens is that of general purpose technology. Economists and others have looked at AI and seen that it seems to fit the category of general purpose technology: classes of technologies that provide a crucial input to many important economic, political, military, and social processes, and that are likely to generate complementary innovations in other areas. General purpose technologies are also often used as a concept to explain economic growth, so you have things like the railroad, steam power, electricity, the motor vehicle, the airplane, or the computer, which seem to change these processes that are important for the economy, for society, or for politics in really profound ways. And I think it's very plausible that artificial intelligence is not only a general purpose technology, but perhaps the quintessential general purpose technology.

And in a way that sounds like a mundane statement: general purpose, it will sort of infuse throughout the economy and political systems. But it's also quite profound, because when you think about it, it's like saying it's this core innovation that generates a technological revolution. We could say a lot about that, and maybe I should just give a bit more color: I think Kevin Kelly has a nice quote where he says, "Everything that we formerly electrified, we will now cognitize. There's almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ." We could say a lot more about general purpose technologies and why they're so transformative to wealth and power, but I'll move on to the other two lenses.

The second lens is to think about AI as an information and communication technology. You might think this is a subset of general purpose technologies. So, other technologies in that reference class would include the printing press, the internet, and the telegraph. And these are important because they change, again, sort of all of society and the economy. They make possible new forms of military, new forms of political order, new forms of business enterprise, and so forth. So we could say more about that, and those have important properties related to inequality and some other characteristics that we care about.

But I'll just move on to the third lens, which is that of intelligence. Unlike every other general purpose technology, which applied to energy, production, communication, or transportation, AI is a new kind of general purpose technology: it changes the nature of our cognitive processes, enhances them, makes them more autonomous, and generates new cognitive capabilities. And I think it's that lens that makes it seem especially transformative, in part because the key role that humans play in the economy is increasingly as cognitive agents, so we are now building powerful complements to ourselves, but also substitutes. That gives rise to the concerns about labor displacement and so forth. It's also hard to forecast how innovations in intelligence will work and what their implications will be for everything else, and that makes it especially hard to see through the mist of the future and what it will bring.

I think there’s a lot of interesting insights that come from those three lenses, but that gives you a sense of why AI could be so transformative.

Ariel: That’s a really nice introduction to what we want to talk about, which is, I guess, okay so then what? If we have this transformative technology that’s already in progress, how does society prepare for that? I’ve brought you both on because you deal with looking at the prospect of AI governance and AI policy, and so first, let’s just look at some definitions, and that is, what is the difference between AI governance and AI policy?

Jessica: So, I think that there are no firm boundaries between these terms. There’s certainly a lot of overlap. AI policy tends to be a little bit more operational, a little bit more finite. We can think of direct government intervention more for the sake of public service. I think governance tends to be a slightly broader term, can relate to industry norms and principles, for example, as well as government-led initiatives or regulations. So, it could be really useful as a kind of multi-stakeholder lens in bringing different groups to the table, but I don’t think there’s firm boundaries between these. I think there’s a lot of interesting work happening under the framework of both, and depending on what the audience is and the goals of the conversation, it’s useful to think about both issues together.

Allan: Yeah, and to that I might just add that governance has a slightly broader meaning. Whereas policy often connotes policies that companies or governments develop intentionally and deploy, governance refers to those, but also to unintended policies, institutions, norms, and latent processes that shape how the phenomenon develops: how AI develops and how it's deployed, everything from public opinion to the norms we set up around artificial intelligence to emergent policies or regulatory environments. All of that you can group within governance.

Ariel: One more term that I want to throw in here is the word regulation, because a lot of times, as soon as you start talking a