FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert

We could all be more altruistic and effective in our service of others, but what exactly is it that’s stopping us? What are the biases and cognitive failures that prevent us from properly acting in service of existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at University of Oxford’s Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psychology and philosophy. This conversation explores the steps that researchers like Stefan are taking to better understand psychology in service of doing the most good we can. 

Topics discussed include:

  • The psychology of existential risk, longtermism, effective altruism, and speciesism
  • Stefan’s study “The Psychology of Existential Risks: Moral Judgements about Human Extinction”
  • Various works and studies Stefan Schubert has co-authored in these spaces
  • How this enables us to be more altruistic

Timestamps:

0:00 Intro

2:31 Stefan’s academic and intellectual journey

5:20 How large is this field?

7:49 Why study the psychology of X-risk and EA?

16:54 What does a better understanding of psychology here enable?

21:10 What are the cognitive limitations psychology helps to elucidate?

23:12 Stefan’s study “The Psychology of Existential Risks: Moral Judgements about Human Extinction”

34:45 Messaging on existential risk

37:30 Further areas of study

43:29 Speciesism

49:18 Further studies and work by Stefan

Works Cited 

Understanding cause-neutrality

Considering Considerateness: Why communities of do-gooders should be exceptionally considerate

On Caring by Nate Soares

Against Empathy: The Case for Rational Compassion

Eliezer Yudkowsky’s Sequences

Whether and Where to Give

A Person-Centered Approach to Moral Judgment

Moral Aspirations and Psychological Limitations

Robin Hanson on Near and Far Mode 

Construal-Level Theory of Psychological Distance

The Puzzle of Ineffective Giving (Under Review) 

Impediments to Effective Altruism

The Many Obstacles to Effective Giving (Under Review) 


 

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

Lucas Perry: Hello everyone and welcome to the Future of Life Institute Podcast. I’m Lucas Perry.  Today, we’re speaking with Stefan Schubert about the psychology of existential risk, longtermism, and effective altruism more broadly. This episode focuses on Stefan’s reasons for exploring psychology in this space, how large this space of study currently is, the usefulness of studying psychology as it pertains to these areas, the central questions which motivate his research, a recent publication that he co-authored which motivated this interview called The Psychology of Existential Risks: Moral Judgements about Human Extinction, as well as other related work of his. 

This podcast often ranks in the top 100 of technology podcasts on Apple Music. This is a big help for increasing our audience and informing the public about existential and technological risks, as well as what we can do about them. So, if this podcast is valuable to you, consider sharing it with friends and leaving us a good review. It really helps. 

Stefan Schubert is a researcher at the Social Behaviour and Ethics Lab at the University of Oxford, working at the intersection of moral psychology and philosophy. He focuses on psychological questions of relevance to effective altruism, such as why our altruistic actions are often ineffective, and why we don’t invest more in safeguarding our common future. He was previously a researcher at the Centre for Effective Altruism and a postdoc in philosophy at the London School of Economics. 

We can all be more altruistic and effective in our service of others. Expanding our moral circles of compassion farther into space and deeper into time, as well as across species, and possibly even eventually to machines, while mitigating our own tendencies towards selfishness and myopia is no easy task and requires deep self-knowledge and far more advanced psychology than I believe we have today. 

This conversation explores the first steps that researchers like Stefan are taking to better understand this space in service of doing the most good we can. 

So, here is my conversation with Stefan Schubert.

Lucas Perry: Can you take us through your intellectual and academic journey in the space of EA and longtermism and in general, and how that brought you to what you’re working on now?

Stefan Schubert: I studied a range of different subjects. I guess I had a little bit of a hard time deciding what I wanted to do. So I got a master’s in political science. But then in the end, I ended up doing a PhD in philosophy at Lund University in Sweden, specifically in epistemology, the theory of knowledge. And then I went to the London School of Economics to do a postdoc. And during that time, I discovered effective altruism and I got more and more involved with that.

So then I applied to the Centre for Effective Altruism, here in Oxford, to work as a researcher. And I worked there as a researcher for two years. At first, I did policy work, including reports on catastrophic risk and x-risk for a foundation and for a government. But then I also did some work which was of a more general and foundational or theoretical nature, including work on the notion of cause neutrality and how we should understand that. And also on how EAs should think about everyday norms like norms of friendliness and honesty.

And I guess that even though, at the time, I didn’t do psychological empirical research, that sort of relates to my current work on psychology, because for the last two years, I’ve worked on the psychology of effective altruism at the Social Behaviour and Ethics Lab here at Oxford. This lab is headed by Nadira Faber, and I also work closely with Lucius Caviola, who did his PhD here at Oxford and recently moved to Harvard to do a postdoc.

So we have three strands of research. The first one is the psychology of effective altruism in general. So why is it that people aren’t effectively altruistic? This is a bit of a puzzle because generally people are at least somewhat effective when they’re working in their own interest. To be sure, they are not maximally effective, but when they try to buy a home or save for retirement, they do some research and sort of try to find good value for money.

But they don’t seem to do the same when they donate to charity. They aren’t as concerned with effectiveness. So this is a bit of a puzzle. And then there are two strands of research, which have to do with specific EA causes. So one is the psychology of longtermism and existential risk, and the other is the psychology of speciesism, human-animal relations. So out of these three strands of research, I focused the most on the psychology of effective altruism in general and the psychology of longtermism and existential risk.

Lucas Perry: How large is the body of work regarding the psychology of existential risk and effective altruism in general? How many people are working on this? If you give us more insight into the state of the field and the amount of interest there.

Stefan Schubert: It’s somewhat difficult to answer because it sort of depends on how you define these domains. There’s research which is of some relevance to effective altruism, but it’s not exactly on that. But I would say that there may be around 10 researchers or so who are EAs and work on these topics for EA reasons. So you definitely want to count them. And then when we’re thinking about non-EA researchers, like other academics, there hasn’t been that much research, I would say, on the psychology of X-risk and longtermism.

There’s research on the psychology of climate change; that’s a fairly large topic. But more specifically on X-risk and longtermism, there’s less. Effective altruism in general, that’s a fairly large topic. There’s lots of research on biases like the identifiable victim effect: people’s tendency to donate to identifiable victims over larger numbers of unidentifiable, statistical victims. Maybe on the order of a few hundred papers.

And then the last topic, speciesism, human-animal relations: that’s fairly large. I know less of that literature, but my impression is that it’s fairly large.

Lucas Perry: Going back into the 20th century, much of what philosophers like Peter Singer have done is construct thought experiments which isolate the morally relevant aspects of a situation, and which are intended in the end to circumvent psychological issues and biases in people.

So I guess I’m just reflecting here on how philosophical thought experiments are sort of the beginnings of elucidating a project of the psychology of EA or existential risk or whatever else.

Stefan Schubert: The vast majority of these papers are not directly inspired by philosophical thought experiments. It’s more like psychologists who run some experiments because there’s some theory that some other psychologist has devised. Most don’t look that much at philosophy I would say. But I think effective altruism and the fact that people are ineffectively altruistic, that’s fairly theoretically interesting for psychologists, and also for economists.

Lucas Perry: So why study psychological questions as they relate to effective altruism, and as they pertain to longtermism and longterm future considerations?

Stefan Schubert: It’s maybe easiest to answer that question in the context of effective altruism in general. I should also mention that when we studied this topic of sort of effectively altruistic actions in general, what we concretely study is effective and ineffective giving. And that is because firstly, that’s what other people have studied, so it’s easier to put our research into context.

The other thing is that it’s quite easy to study in a lab setting, right? So you might ask people, would you donate to the effective or the ineffective charity? You might think that career choice is actually more important than giving, or some people would argue that, but that seems more difficult to study in a lab setting. So with regards to what motivates our research on effective altruism in general and effective giving, what ultimately motivates our research is that we want to make people improve their decisions. We want to make them donate more effectively, be more effectively altruistic in general.

So how can you then do that? Well, I want to make one distinction here, which I think might be important to think about. And that is the distinction between what I call a behavioral strategy and an intellectual strategy. And the behavioral strategy is that you come up with certain framings or setups to decision problems, such that people behave in a more desirable way. So there’s literature on nudging for instance, where you sort of want to nudge people into desirable options.

So for instance, in a cafeteria where you have healthier foods at eye level and the unhealthy food is harder to reach, people will eat healthier than if it’s the other way round. You could come up with interventions that similarly make people donate more effectively. So for instance, the default option could be an effective charity. We know that in general, people often tend to go with the default option because of some kind of cognitive inertia. So that might lead to more effective donations.

I think it has some limitations, though. For instance, nudging might be interesting for the government because the government has a lot of power, right? It might frame the decision on whether you want to donate your organs after you’re dead. The other thing is that just creating and implementing these kinds of behavioral interventions can often be very time consuming and costly.

So one might think that this sort of intellectual strategy should be emphasized and shouldn’t be forgotten. With respect to the intellectual strategy, you’re not solely trying to change people’s behavior; you are trying to do that as well, but you’re also trying to change their underlying way of thinking. So in a sense it has a lot in common with philosophical argumentation. But the difference is that you start with descriptions of people’s default way of thinking.

You describe how your default way of thinking leads you to prioritize an identifiable victim over larger numbers of statistical victims. And then you provide an argument that that’s wrong: statistical victims are just as real individuals as the identifiable victims. So you get people to accept that their own default way of thinking about identifiable versus statistical victims is wrong, and that they shouldn’t trust the default way of thinking but should instead think in a different way.

I think that this strategy is actually often used, but we don’t often think about it as a strategy. So for instance, Nate Soares has this blog post “On Caring” where he argues that we shouldn’t trust our internal care-o-meter. And this is because how much we feel can’t scale with the number of people that die, or with the badness of those increasing numbers. So it’s an intellectual argument that takes psychological insight as a starting point, and other people have done this as well.

So the psychologist Paul Bloom has this book Against Empathy where he argues for similar conclusions. And I think Eliezer Yudkowsky uses this strategy a lot in his Sequences. I think it’s often an effective strategy that should be used more.

Lucas Perry: So there’s the extent to which we can know about underlying, problematic cognition in persons and then change the world in certain ways. As you said, this is framed as nudging, where you sort of manipulate the environment, without explicitly changing people’s cognition, in order to produce desired behaviors. Now, my initial reaction to this is, how are you going to deal with the problem when they find out that you’re doing this to them?

Now the second one here is the extent to which we can use our insights from psychological analysis and studies to change implicit and explicit models and cognition in order to effectively be better decision makers. If a million deaths is a statistic and a dozen deaths is a tragedy, then there is some kind of failure of empathy and compassion in the human mind. We’re not evolved or set up to deal with these kinds of moral calculations.

So maybe you could do nudging by setting up the world in such a way that people are more likely to donate to charities that are likely to help out statistically large, difficult to empathize with numbers of people, or you can teach them how to think better and better act on statistically large numbers of people.

Stefan Schubert: That’s a good analysis actually. On the second approach: what I call the intellectual strategy, you are sort of teaching them to think differently. Whereas on this behavioral or nudging approach, you’re changing the world. I also think that this comment about “they might not like the way you nudged them” is a good comment. Yes, that has been discussed. I guess in some cases of nudging, it might be sort of cases of weakness of will. People might not actually want the chocolate but they fall prey to their impulses. And the same might be true with saving for retirement.

So whereas with ineffective giving, yeah, there it’s much less clear. Is it really the case that people really want to donate effectively, and therefore are happy to be nudged in this way? That doesn’t seem too clear at all. So that’s absolutely a reason against that approach.

And then with respect to arguing for certain conclusions, in the sense that it is argument or argumentation, it’s more akin to philosophical argumentation. But it’s different from standard analytic philosophical argumentation in that it discusses human psychology. You discuss how our psychological dispositions mislead us at length and that’s not how analytic philosophers normally do it. And of course you can argue for instance, effective giving in the standard philosophical vein.

And some people have done that, like the EA philosopher Theron Pummer; he has an interesting paper called Whether and Where to Give on this question of whether it is an obligation to donate effectively. So I think that’s interesting, but one worries that there might not be that much to say about these issues, because, everything else equal, it’s maybe sort of trivial that the more effectiveness the better. Of course everything isn’t always equal. But in general, there might not be too much interesting stuff you can say about that from a normative or philosophical point of view.

But there are tons of interesting psychological things you can say because there are tons of ways in which people aren’t effective. The other related issue is that this form of psychology might have a substantial readership. So it seems to me based on the success of Kahneman and Haidt and others, that people love to read about how their own and others’ thoughts by default go wrong. Whereas in contrast, standard analytic philosophy, it’s not as widely read, even among the educated public.

So for those reasons, I think that the sort of more psychology-based argumentation may in some respects be more promising than purely abstract philosophical arguments for why we should be effectively altruistic.

Lucas Perry: My view or insight here is that the analytic philosopher is more so trying on the many different perspectives in his or her own head, whereas the psychologist is empirically studying what is happening in the heads of many different people. So clarifying what a perfected science of psychology in this field would look like is useful for illustrating the end goals and what we’re attempting to do here. This isn’t to say that this will necessarily happen in our lifetimes or anything like that, but what does a full understanding of psychology as it relates to existential risk and longtermism and effective altruism enable for human beings?

Stefan Schubert: One thing I might want to say is that psychological insights might help us to formulate a vision of how we ought to behave, what mindset we ought to have, and what we ought to be like as people, which is not only normatively valid, which is what philosophers talk about, but also sort of persuasive. So one idea there that Lucius and I have discussed quite extensively recently is that some moral psychologists suggest that when we think about morality, we think to a large degree not in terms of whether a particular act was good or bad, but rather about whether the person who performed that act is good or bad, or whether they are virtuous or vicious.

So this is called the person-centered approach to moral judgment. Based on that idea, we’ve been thinking about what lists of virtues people would need in order to make the world better, more effectively. And ideally these should be virtues that both are appealing to common sense, or which can at least be made appealing to common sense, and which also make the world better when applied.

So we’ve been thinking about which such virtues one would want to have on such a list. We’re not sure exactly what we’ll include, but some examples might be prioritization: that you need to make sure that you prioritize the best ways of helping. And then we have another which we call science: that you do proper research on how to help effectively, or that you rely on others who do. And then collaboration: that you’re willing to collaborate on moral issues, potentially even with your moral opponents.

So the details of these virtues aren’t too important, but the idea is that it hopefully should seem like a moral ideal to some people to be a person who lives these virtues. I think that to many people, philosophical arguments about the importance of being more effective and putting more emphasis on consequences, if you read them in a book of analytic philosophy, might seem pretty uninspiring. So people don’t read that and think, “that’s what I would want to be like.”

But hopefully, they could read about these kinds of virtues and think, “that’s what I would want to be like.” So to return to your question, ideally we could use psychology to sort of create such visions of some kind of moral ideal that would not just be normatively correct, but also sort of appealing and persuasive.

Lucas Perry: It’s like a science which is attempting to contribute to the project of human and personal growth and evolution and enlightenment, insofar as that is possible.

Stefan Schubert: We see this as part of the larger EA project of using evidence and reason and research to make the world a better place. EA has this prioritization research where you try to find the best ways of doing good. I gave this talk at EAGx Nordics earlier this year on “Moral Aspirations and Psychological Limitations.” And in that talk I said, well, what EAs normally do when they prioritize ways of doing good is, as it were, they look into the world and they think: what ways of doing good are there? What different causes are there? What sort of levers can we pull to make the world better?

So should we reduce existential risk from specific sources like advanced AI or bio risk, or is rather global poverty or animal welfare the best thing to work on? But then the other approach is to rather sort of look inside yourself and think, well I am not perfectly effectively altruistic, and that is because of my psychological limitations. So then we want to find out which of those psychological limitations are most impactful to work on because, for instance, they are more tractable or because it makes a bigger difference if we remove them. That’s one way of thinking about this research, that we sort of take this prioritization research and turn it inwards.

Lucas Perry: Can you clarify the kinds of things that psychology is really pointing out about the human mind? Part of this is clearly about biases and poor aspects of human thinking, but what does it mean for human beings to have these bugs in human cognition? What are the kinds of things that we’re discovering about the person and how he or she thinks that fail to be in alignment with the truth?

Stefan Schubert: I mean, there are many different sources of error, one might say. One thing that some people have discussed is that people are not that interested in being effectively altruistic. Why is that? Some people say that’s just because they get more warm glow out of giving to someone whose suffering is more salient, and then the question arises, why do they get more warm glow out of that? Maybe that’s because they just want to signal their empathy. That’s sort of one perspective, which is maybe a bit cynical, then: that the ultimate source of lots of ineffectiveness is just this preference for signaling and maybe a lack of genuine altruism.

Another approach would be to just say, the world is very complex and it’s very difficult to understand it and we’re just computationally constrained, so we’re not good enough at understanding it. Another approach would be to say that because the world is so complex, we evolved various broad-brushed heuristics, which generally work not too badly, but then, when we are put in some evolutionarily novel context and so on, they don’t guide us too well. That might be another source of error. In general, what I would want to emphasize is that there are likely many different sources of human errors.

Lucas Perry: You’ve discussed here how you focus and work on these problems. You mentioned that you are primarily interested in the psychology of effective altruism in so far as we can become better effective givers and understand why people are not effective givers. And then, there is the psychology of longtermism. Can you enumerate some central questions that are motivating you and your research?

Stefan Schubert: To some extent, we need more research just in order to figure out what further research we and others should do so I would say that we’re in a pre-paradigmatic stage with respect to that. There are numerous questions one can discuss with respect to psychology of longtermism and existential risks. One is just people’s empirical beliefs on how good the future will be if we don’t go extinct, what the risk of extinction is and so on. This could potentially be useful when presenting arguments for the importance of work on existential risks. Maybe it turns out that people underestimate the risk of extinction and the potential quality of the future and so on. Another issue which is interesting is moral judgments, people’s moral judgements about how bad extinction would be, and the value of a good future, and so on.

Moral judgements about human extinction, that’s exactly what we studied in a recent paper that we published, which is called “The Psychology of Existential Risks: Moral Judgements about Human Extinction.” In that paper, we test this thought experiment by the philosopher Derek Parfit. He has this thought experiment where he discusses three different outcomes: first, peace; second, a nuclear war that kills 99% of the world’s existing population; and third, a nuclear war that kills everyone. Parfit says, then, that a war that kills everyone, that’s the worst outcome; near-extinction is the next worst, and peace is the best. Maybe no surprises there, but the more interesting part of the discussion concerns the relative differences between these outcomes in terms of badness. Parfit effectively made an empirical prediction, saying that most people would find the difference in terms of badness between peace and near-extinction to be the greater one, but he himself thought that the difference between near-extinction and extinction is the greater difference. That’s because only extinction would lead to the future forever being lost, and Parfit thought that if humanity didn’t go extinct, the future could be very long and good, and therefore it would be a unique disaster if the future was lost.

On this view, extinction is uniquely bad, as we put it. It’s not just bad because it would mean that many people would die, but also because it would mean that we would lose a potentially long and grand future. We tested this hypothesis in the paper, then. First, we had a preliminary study, which didn’t actually pertain directly to Parfit’s hypothesis. We just studied whether people would find extinction a very bad event in the first place, and we found that, yes, they do, and they think that the government should invest substantially to prevent it.

Then, we moved on to the main topic, which was Parfit’s hypothesis. We made some slight changes. In the middle outcome, Parfit had 99% dying; we reduced that number to 80%. We also talked about catastrophes in general rather than nuclear wars, and we didn’t want to talk about peace because we thought that people might have an emotional association with the word “peace,” so we just talked about no catastrophe instead. Using this paradigm, we found that Parfit was right. First, most people, just like him, thought that extinction was the worst outcome, near-extinction the next, and no catastrophe the best. But second, we found that most people find the difference in terms of badness between no one dying and 80% dying to be greater than the difference between 80% dying and 100% dying.

Our interpretation, then, is that this is presumably because they focus most on the immediate harm that the catastrophes cause and in terms of the immediate harm, the difference between no one dying and 80% dying, it’s obviously greater than that between 80% dying and 100% dying. That was a control condition in some of our experiments, but we also had other conditions where we would slightly tweak the question. We had one condition which we call the salience condition, where we made the longterm consequences of the three outcomes salient. We told participants to remember the longterm consequences of the outcomes. Here, we didn’t actually add any information that they don’t have access to, but we just made some information more salient and that made significantly more participants find the difference between 80% dying and 100% dying the greater one.

Then, we had yet another condition which we call the utopia condition, where we told participants that if humanity doesn’t go extinct, then the future will be extremely long and extremely good and it was said that if 80% die, then, obviously, at first, things are not so good, but after a recovery period, we would go on to this rosy future. We included this condition partly because such scenarios have been discussed to some extent by futurists, but partly also because we wanted to know, if we ramp up this goodness of the future to the maximum and maximize the opportunity costs of extinction, how many people would then find the difference between near extinction and extinction the greater one. Indeed, we found, then, that given such a scenario, a large majority found the difference between 80% dying and 100% dying the larger one so then, they did find extinction uniquely bad given this enormous opportunity cost of a utopian future.

Lucas Perry: What’s going on in my head right now is that we were discussing earlier the role or not of these philosophical thought experiments in psychological analysis. You’ve done a great study here that helps to empirically concretize the biases, and the remedies for the issues, that Derek Parfit had exposed and pointed to in his initial thought experiment. That was popularized by Nick Bostrom, and it’s one of the key thought experiments for much of the existential risk community and people committed to longtermism, because it helps to elucidate this deep and rich amount of value in the deep future and how we don’t normally consider that. Your discussion here just seems to be opening up for me tons of possibilities in terms of how far and deep this can go in general. The point of Peter Singer’s child drowning in a shallow pond was to isolate the bias of proximity, and Derek Parfit’s thought experiment isolates the bias of familiarity and temporal bias. Continuing into the future, it’s making me think we also have biases about identity.

Derek Parfit also has thought experiments about identity, like his teleportation machine, where, say, you step into a teleportation machine that scans all of your information, then annihilates all of your atoms, and then reassembles you on the other side of the room, or, if you change the thought experiment, on the other side of the universe. Is that really you? What does it mean to die? Those are the kinds of questions that are elicited. Listening to what you’ve developed and learned, and reflecting on the possibilities here, it seems like you’re at the beginning of a potentially extremely important and meaningful field that helps to inform decision-making on these morally crucial and philosophically interesting questions and points of view. How do you feel about that, or what I’m saying?

Stefan Schubert: Okay, thank you very much and thank you also for putting this Parfit thought experiment a bit in context. What you’re saying is absolutely right, that this has been used a lot, including by Nick Bostrom and others in the longtermist community and that was indeed one reason why we wanted to test it. I also agree that there are tons of interesting philosophical thought experiments there and they should be tested more. There’s also this other field of experimental philosophy where philosophers test philosophical thought experiments themselves, but in general, I think there’s absolutely more room for empirical testing of them.

With respect to temporal bias, I guess it depends a bit on what one means by that, because we actually did get an effect from just mentioning that they should consider the longterm consequences, so I might think that to some extent it’s not only that people are biased in favor of the present, but it’s also that they don’t really consider the longterm future. They sort of neglect it and it’s not something that’s generally discussed among most people. I think this is also something that Parfit’s thought experiment highlights. You have to think about the really longterm consequences here and if you do think about them, then your intuitions about this thought experiment should reverse.

Lucas Perry: People’s cognitive time horizons are really short.

Stefan Schubert: Yes.

Lucas Perry: People probably have the opposite discounting of future persons that I do. Just because I think that the kinds of experiences that Earth-originating intelligent life forms will be having in the next 100 to 200 years will be much more deep and profound than what humans are capable of, I would value them more than I value persons today. Most people don’t think about that. They probably just think there’ll be more humans and, aside from their bias towards present day humans, they don’t even consider a time horizon long enough to really have the bias kick in, is what you’re saying?

Stefan Schubert: Yeah, exactly. Thanks for that, also, for mentioning that. First of all, my view is that people don’t even think so much about the longterm future unless prompted to do so. Second, in this first study I mentioned, which was sort of a pre-study, we asked, “How good do you think that the future’s going to be?” On average, I think they said, “It’s going to be slightly better than the present” and that would be very different from your view, then, that the future’s going to be much better. You could argue that this view that the future is going to be about as good as the present is somewhat unlikely. I think it’s going to be much better or maybe it’s going to be much worse. There’s several different biases or errors that are present here.

Merely making the longterm consequences of the three outcomes salient already makes people more inclined to find the difference between 80% dying and 100% dying the greater one, so then you don’t add any information. Also, specifying that the longterm outcomes are going to be extremely good makes a further difference: that makes most people find the difference between 80% dying and 100% dying the greater one.

Lucas Perry: I’m sure you and I, and listeners as well, have the hilarious problem of trying to explain this stuff to friends or family members or people that you meet that are curious about it and the difficulty of communicating it and imparting the moral saliency. I’m just curious to know if you have explicit messaging recommendations that you have extracted or learned from the study that you’ve done.

Stefan Schubert: You want to make the future more salient if you want people to care more about existential risk. With respect to explicit messaging more generally, like I said, there haven’t been that many studies on this topic, so I can’t refer to any specific study that says that this is how you should work with the messaging on this topic but just thinking more generally, one thing I’ve been thinking about is that maybe, with many of these issues, it’s just that it takes a while for people to get habituated with them. At first, if someone hears a very surprising statement that has very far reaching conclusions, they might be intuitively a bit skeptical about it, independently of how reasonable that argument would be for someone who would be completely unbiased. Their prior is that, probably, this is not right and to some extent, this might even be reasonable. Maybe people should be a bit skeptical of people who say such things.

But then, what happens is that most such people who make such claims that seem to people very weird and very far-reaching, they get discarded after some time because people poke holes in their arguments and so on. But then, a small subset of all such people, they actually stick around and they get more and more recognition and you could argue that that’s what’s now happening with people who work on longtermism and X-risk. And then, people slowly get habituated to this and they say, “Well, maybe there is something to it.” It’s not a fully rational process. I think this doesn’t just relate to longtermism and X-risk but maybe also specifically to AI risk, where it takes time for people to accept that message.

I’m sure there are some things that you can do to speed up that process and some of them would be fairly obvious like have smart, prestigious, reasonable people talk about this stuff and not people who don’t seem as credible.

Lucas Perry: What are further areas of the psychology of longtermism or existential risk that you think would be valuable to study? And let’s also touched upon other interesting areas for effective altruism as well.

Stefan Schubert: I mentioned previously people’s empirical beliefs, that could be valuable. One thing I should mention there is that I think that people’s empirical beliefs about the distant future are massively affected by framing effects, so depending on how you ask these questions, you are going to get very different answers, so it’s important to remember that it’s not like people have these stable beliefs and they will always say that. The other thing I mentioned is moral judgments, and I said we studied moral judgements about human extinction, but there’s a lot of other stuff to do, like people’s views on population ethics could obviously be useful. Views on whether creating happy people is morally valuable. Whether it’s more valuable to bring a large number of people whose lives are barely worth living into existence than to bring a small number of very happy people into existence and so on.

Those questions obviously have relevance for the moral value of the future. One thing I would want to say is that if you’re rational, then, obviously, your view on what and how much we should do to affect the distant future, that should arguably be a function of your moral views, including on population ethics, on the one hand, and also your empirical views of how the future’s likely to pan out. But then, I also think that people obviously aren’t completely rational and I think, in practice, their views on the longterm future will also be influenced by other factors. I think that their view on whether helping the longterm future seems like an inspiring project, that might depend massively on how the issue is framed. I think these aspects could be worth studying because if we find these kinds of aspects, then we might want to emphasize the positive aspects and we might want to adjust our behavior to avoid the negative. The goal should be to formulate a vision of longtermism that feels inspiring to people, including to people who haven’t put a lot of thought into, for instance, population ethics and related matters.

There are also some other specific issues which I think could be useful to study. One is the psychology of predictions about the distant future and the implications of the so-called construal level theory for the psychology of the longterm future. Many effective altruists would know construal level theory under another name: near mode and far mode. This is Robin Hanson’s terminology. Construal level theory is a theory about psychological distance and how it relates to how abstractly we construe things. It says that we conceive of different forms of distance – spatial, temporal, social – similarly. The second claim is that we conceive of items and events at greater psychological distance more abstractly: we focus more on big picture features and less on details. So, Robin Hanson, he’s discussed this theory very extensively, including with respect to the long term future. And he argues that the great psychological distance to the distant future causes us to reason in overly abstract ways, to be overconfident, and to have poor epistemics in general about the distant future.

I find this very interesting, and these kinds of ideas are mentioned a lot in EA and the X-risk community. But, to my knowledge there hasn’t been that much research which applies construal level theory specifically to the psychology of the distant future.

It’s more like people look at these general studies of construal level theory, and then they noticed that, well, the temporal distance to the distant future is obviously extremely great. Hence, these general findings should apply to a very great extent. But, to my knowledge, this hasn’t been studied so much. And given how much people discuss near or far mode in this case, it seems that there should be some empirical research.

I should also mention that I find construal level theory a very interesting and rich psychological theory in general. I could see that it could illuminate the psychology of the distant future in numerous ways. Maybe it could be some kind of a theoretical framework that I could use for many studies about the distant future. So, I recommend that key paper from 2010 by Trope and Liberman on construal level theory.

Lucas Perry: I think that just hearing you say this right now, it’s sort of opening my mind up to the wide spectrum of possible applications of psychology in this area.

You mentioned population ethics. That makes me just think of in the context of EA and longtermism and life in general, the extent to which psychological study and analysis can find ethical biases and root them out and correct for them, either by nudging or by changing the explicit methods by which humans cognize about such ethics. There’s the extent to which psychology can better inform our epistemics, so this is the extent to which we can be more rational.

And I’m reflecting now how quantum physics subverts many of our Newtonian mechanics and classical mechanics, intuitions about the world. And there’s the extent to which psychology can also inform the way in which our social and experiential lives also condition the way that we think about the world and the extent to which that sets us astray in trying to understand the fundamental nature of reality or thinking about the longterm future or thinking about ethics or anything else. It seems like you’re at the beginning stages of debugging humans on some of the most important problems that exist.

Stefan Schubert: Okay. That’s a nice way of putting it. I certainly think that there is room for way more research on the psychology of longtermism and X-risk.

Lucas Perry: Can you speak a little bit now here about speciesism? This is both an epistemic thing and an ethical thing in the sense that we’ve invented these categories of species to describe the way that evolutionary histories of beings bifurcate. And then, there’s the psychological side of the ethics of it where we unnecessarily devalue the life of other species given that they fit that other category.

Stefan Schubert: So, we have one paper under review, which is called “Why People Prioritize Humans Over Animals: A Framework for Moral Anthropocentrism.”

To give you a bit of context, there’s been a lot of research on speciesism and on humans prioritizing humans over animals. So, in this paper we sort of try to take a bit more systematic approach and pit these different hypotheses for why humans prioritize humans over animals against each other, and look at their relative strengths as well.

And what we find is that there is truth to several of these hypotheses of why humans prioritize humans over animals. One contributing factor is just that they value individuals with greater mental capacities, and most humans have greater mental capacities than most animals.

However, that explains only part of the effect we find. We also find that people think that humans should be prioritized over animals even if they have the same mental capacity. And here, we find that this is for two different reasons.

First, according to our findings, people are what we call species relativists. And by that, we mean that they think that members of the species, including different non-human species, should prioritize other members of that species.

So, for instance, humans should prioritize other humans, and an elephant should prioritize other elephants. And that means that because humans are the ones calling the shots in the world, we have a right then, according to this species relativist view, to prioritize our own species. But other species would, if they were in power. At least that’s the implication of what the participants say, if you take them at face value. That’s species relativism.

But then, there is also the fact that they exhibit an absolute preference for humans over animals, meaning that even if we control for the mental capacities of humans and animals, and even if we control for the species relativist factor by controlling for who the individual that could help them is, there remains a difference which can’t be explained by those other factors.

So, there’s an absolute speciesist preference for humans which can’t be explained by any further factor. So, that’s an absolute speciesist preference as opposed to this species relativist view.

In total, there’s a bunch of factors that together explain why humans prioritize humans over animals, and these factors may also influence each other. So, we present some evidence that if people have a speciesist preference for humans over animals, that might, in turn, lead them to believe that animals have less advanced mental capacities than they actually have. And because they have this view that individuals with lower mental capacities are less morally valuable, that leads them to further deprioritize animals.

So, these three different factors, they sort of interact with each other in intricate ways. Our paper gives this overview over these different factors which contribute to humans prioritizing humans over animals.

Lucas Perry: This helps to make clear to me that a successful psychological study with regards to at least ethical biases will isolate the salient variables which are knobs that are tweaking the moral saliency of one thing over another.

Now, you said mental capacities there. You guys aren’t bringing consciousness or sentience into this?

Stefan Schubert: We discuss different formulations at length, and we went for the somewhat generic formulation.

Lucas Perry: I think people have beliefs about the ability to rationalize and understand the world, and then how that may or may not be correlated with consciousness that most people don’t make explicit. It seems like there are some variables to unpack underneath cognitive capacity.

Stefan Schubert: I agree. This is still fairly broad-brushed. The other thing to say is that sometimes we say that this human has as advanced mental capacities as these animals. Then, the participants have no reason to believe that the human has a more sophisticated sentience or is more conscious or something like that.

Lucas Perry: Our species membership tells me that we probably have more consciousness. My bedrock thing is I care about how much the thing can suffer or not, not how well it can model the world. Though those things are maybe probably highly correlated with one another. I think I wouldn’t be a speciesist if I thought human beings were currently the most important thing on the planet.

Stefan Schubert: You’re a speciesist if you prioritize humans over animals purely because of species membership. But, if you prioritize one species over another for some other reasons which are morally relevant, then you would not be seen as a speciesist.

Lucas Perry: Yeah, I’m excited to see what comes of that. I think that working on overcoming racism and misogyny and other things, and I think that overcoming speciesism and temporal biases and physical space, proximity biases are some of the next stages in human moral evolution that have to come. So, I think it’s honestly terrific that you’re working on these issues.

Is there anything you would like to say or that you feel that we haven’t covered?

Stefan Schubert: We have one paper which is called “The Puzzle of Ineffective Giving,” where we study this misconception that people have, which is that they think the difference in effectiveness between charities is much smaller than it actually is. So, experts think that the most effective charities are vastly more effective than the average charity, and people don’t know that.

That seems to suggest that beliefs play a role in ineffective giving. But, there was one interesting paper called “Impediments to Effective Altruism” where they show that even if you tell people that a cancer charity is less effective than an arthritis charity, they still donate to it.

So, then we have this other paper called “The Many Obstacles to Effective Giving.” It’s a bit similar to this speciesist paper, I guess, that we sort of pit different competing hypotheses that people have studied against each other. We give people different tasks, for instance, tasks which involve identifiable victims and tasks which involve ineffective but low overhead charities.

And then, we sort of asked, well, what if we tell them how to be effective? Does that change how they behave? What’s the role of that pure belief factor? What’s the role of preferences? The result is a bit of a mix. Both beliefs and preferences contribute to ineffective giving.

In the real world, it’s likely that several beliefs and preferences that obstruct effective giving are present simultaneously. For instance, people might fail to donate to the most effective charity because first, it’s not a disaster charity, and they might have a preference for a disaster charity. And it might have a high overhead, and they might falsely believe then that high overhead entails low effectiveness. And it might not highlight identifiable victims, and they have a preference for donating to identifiable victims.

Several of these obstacles are present at the same time, and in that sense, ineffective giving is overdetermined. So, fixing one specific obstacle may not make as much of a difference as one would have wanted. That might support the view that what we need is not primarily behavioral interventions that address individual obstacles, but rather a more broad mindset change that can motivate people to proactively seek out the most effective ways of doing good.

Lucas Perry: One other thing that’s coming to my mind is the proximity of a cause to someone’s attention and the degree to which it allows them to be celebrated in their community for the good that they have done.

Are you suggesting that the way for remedying this is to help instill a curiosity and something resembling the EA mindset that would allow people to do the cognitive exploration and work necessary to transcend these limitations that bind them to their ineffective giving or is that unrealistic?

Stefan Schubert: First of all, let me just say that with respect to this proximity issue, that was actually another task that we had. I didn’t mention all the tasks. So, we told people that you can either help a local charity or a charity, I think it was in India. And then, we told them that the Indian charity is more effective and asked “where would you want to donate?”

So, you’re absolutely right. That’s another obstacle to effective giving, that people sometimes have preferences or beliefs that local charities are more effective even when that’s not the case. Some donor I talked to, he said, “Learning how to donate effectively, it’s actually fairly complicated, and there are lots of different things to think about.”

So, just fixing the overhead myth or something like that, that may not take you very far, especially if you think that the very best charities are vastly more effective than the average charity. So, what’s important is not going from an average charity to a somewhat more effective charity, but to actually find the very best charities.

And to do that, we may need to address many psychological obstacles because the most effective charities, they might be very weird and sort of concerned with the longterm future or what-not. So, I do think that a mindset where people seek out effective charities, or defer to others who do, that might be necessary. It’s not super easy to make people adopt that mindset, definitely not.

Lucas Perry: We have charity evaluators, right? These institutions which are intended to be reputable enough that they can tell you which are the most effective charities to donate to. It wouldn’t even be enough to just market those really hard. They’d be like, “Okay, that’s cool. But, I’m still going to donate my money to seeing eye dogs because blindness is something that runs in my family and is experientially and morally salient for me.”

Is the way that we fix the world really about just getting people to give more, and what is the extent to which the institutions which exist, which require people to give, need to be corrected and fixed? There’s that tension there between just the mission of getting people to give more, and then the question of, well, why do we need to get everyone to give so much in the first place?

Stefan Schubert: This insight that ineffective giving is overdetermined and there are lots of things that stand in a way of effective giving, one thing I like about it is that it seems to sort of go well with this observation that it is actually, in the real world, very difficult to make people donate effectively.

I might relate there a bit to what you mentioned about the importance of giving more, and so we could sort of distinguish between the different kinds of psychological limitations. First, the limitations that relate to how much we give. We’re selfish, so therefore we don’t necessarily give as much of our monetary or other resources as we should. There are sort of limits to altruism.

But then, there are also limits to effectiveness. We are ineffective for various reasons that we’ve discussed. And then, there’s also the fact that we can have the wrong moral goals. Maybe we work towards short term goals, but then we would realize on careful reflection that we should work towards long term goals.

And then, I was thinking like, “Well, which of these obstacles should you then prioritize if you turn this sort of prioritization framework inwards?” And then, you might think that, well, at least with respect to giving, it might be difficult for you to increase the amount that you give by more than 10 times. Americans, for instance, they already donate several percent of their income. We know from historical experience that it might be hard for people to sustain very high levels of altruism, so maybe it’s difficult for them to sort of ramp up this altruist factor to an extreme amount.

But then, with effectiveness, if this story about heavy-tailed distributions of effectiveness is right, then you could increase the effectiveness of your donations a lot. And arguably, the sort of psychological price for that is lower. It’s very demanding to give up a huge proportion of your income for others, but I would say that it’s less demanding to redirect your donations to a more effective cause, even if you feel more strongly for the ineffective cause.

I think it’s difficult to really internalize how enormously important it is to go for the most effective option. And also, of course, then the third factor to sort of change your moral goals if necessary. If people would reduce their donations by 99%, they would reduce the impact by 99%. Many people would feel guilty about it.

But then, if they reduce their impact 99% via reducing their effectiveness 99% through choosing an ineffective charity, then people don’t feel similarly guilty, so similar to Nate Soares’ idea of a care-o-meter: our feelings aren’t adjusted for these things, so we don’t feel as much about the ineffectiveness as we do about altruistic sacrifice. And that might lead us to not focus enough on effectiveness, and we should really think carefully about going that extra mile for the sake of effectiveness.

Lucas Perry: Wonderful. I feel like you’ve given me a lot of concepts and tools that are just very helpful for reinvigorating an introspective mindfulness about altruism in my own life and how that can be nurtured and developed.

So, thank you so much. I’ve really enjoyed this conversation for the reasons I just said. I think this is a very important new research stream in this space, and it seems small now, but I really hope that it grows. And thank you for your and your colleagues’ work here on seeding and doing the initial work in this field.

Stefan Schubert: Thank you very much. Thank you for having me. It was a pleasure.

FLI Podcast: Cosmological Koans: A Journey to the Heart of Physical Reality with Anthony Aguirre

There exist many facts about the nature of reality which stand at odds with our commonly held intuitions and experiences of the world. Ultimately, there is a relativity of the simultaneity of events and there is no universal “now.” Are these facts baked into our experience of the world? Or are our experiences and intuitions at odds with these facts? When we consider this, the origins of our mental models, and what modern physics and cosmology tell us about the nature of reality, we are beckoned to identify our commonly held experiences and intuitions, to analyze them in the light of modern science and philosophy, and to come to new implicit, explicit, and experiential understandings of reality. In his book Cosmological Koans: A Journey to the Heart of Physical Reality, FLI co-founder Anthony Aguirre explores the nature of space, time, motion, quantum physics, cosmology, the observer, identity, and existence itself through Zen koans fueled by science and designed to elicit questions, experiences, and conceptual shifts in the reader. The universe can be deeply counter-intuitive at many levels and this conversation, rooted in Anthony’s book, is an attempt at exploring this problem and articulating the contemporary frontiers of science and philosophy.

Topics discussed include:

  • What is skillful of a synergy of Zen and scientific reasoning
  • The history and philosophy of science
  • The role of the observer in science and knowledge
  • The nature of information
  • What counts as real
  • The world in and of itself and the world we experience as populated by our concepts and models of it
  • Identity in human beings and future AI systems
  • Questions of how identity should evolve
  • Responsibilities and open questions associated with architecting life 3.0

 

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

Lucas Perry: Welcome to the Future of Life Institute podcast. I’m Lucas Perry. Today, we’re speaking with Anthony Aguirre. He is a cosmologist, a co-founder of the Future of Life Institute, and a co-founder of the Foundational Questions Institute. He also has a cool prediction market called Metaculus that I suggest you check out. We’re discussing his book, Cosmological Koans: A Journey to the Heart of Physical Reality. This is a book about physics from a deeply philosophical perspective in the format of Zen koans. This discussion is different from the usual topics of the podcast, though there are certainly many parts that directly apply. I feel this will be of interest to people who like big questions about the nature of reality. Some questions that we explore are, what is skillful of a synergy of Zen and scientific reasoning, the history and philosophy of science, the nature of information, we ask what is real, and explore that question. We discuss the world in and of itself and the world we experience as populated by our concepts and stories about the universe. We discuss identity in people and future AI systems. We wonder about how identity should evolve in persons and AI systems. And we also get into the problem we face of architecting new forms of intelligence with their own lived experiences, and identities, and understandings of the world. 

As a bit of side news, Ariel is transitioning out of her role at FLI. So, I’ll be taking over the main FLI podcast from here on out. This podcast will continue to deal with broad issues in the space of existential risk and areas that pertain broadly to the Future of Life Institute. Like, AI risk and AI alignment, as well as bio-risk and climate change, and the stewardship of technology with wisdom and benevolence in mind. And the AI Alignment Podcast will continue to explore the technical, social, political, ethical, psychological, and broadly interdisciplinary facets of the AI alignment problem. So, I deeply appreciated this conversation with Anthony and I feel that conversations like these help me to live what I feel is an examined life. And if these topics and questions that I’ve mentioned are of interest to you or resonate with you then I think you’ll find this conversation valuable as well. 

So let’s get in to our conversation with Anthony Aguirre. 

We’re here today to discuss your work, Cosmological Koans: A Journey to the Heart of Physical Reality. As a little bit of background, tell me a little bit about your experience as a cosmologist and someone interested in Zen whose pursuits have culminated into his book.

Anthony Aguirre: I’ve been a cosmologist professionally for 20 years or so since grad school I suppose, but I’ve also for my whole life had just the drive to understand what reality is, what’s reality all about. One approach to that certainly to understanding physical reality is physics and cosmology and fundamental physics and so on. I would say that the understanding of mental reality, what is going on in the interior sense is also reality and is also crucially important. That’s what we actually experience. I’ve long had an interest in both sides of that question. What is this interior reality? Why do we have experience the way we do? How is our mind working? As well as what is the exterior reality of physics and the fundamental physical laws and the large scale picture of the universe and so on?

While professionally I’ve been very focused on the external side and the cosmological side in particular, I’ve nourished that interest in the inner side as well and how that interior side and the exterior side connect in various ways. I think that longstanding interest has built the foundation of what then turned into this book that I’ve put together over a number of years that I don’t care to admit.

Lucas Perry: There’s this aspect of when we’re looking outward, we’re getting a story of the universe and then that story of the universe eventually leads up into us. For example as Carl Sagan classically pointed out, the atoms which make up your body had to be fused in supernovas, at least the things which aren’t hydrogen and helium. So we’re all basically complex aggregates of collapsed interstellar gas clouds. And this shows that looking outward into the cosmos is also a process of uncovering the story of the person and of the self as well.

Anthony Aguirre: Very much in that I think to understand how our mind works and how our body works, we have to situate that within a chain of wider and wider context. We have to think of ourselves as biological creatures, and that puts us in the biological context and evolution and evolution over the history of the earth, but that in turn is in the context of where the earth sits in cosmic evolution in the universe as a whole, and also where biology and its functioning sits within the context of physics and other sciences, information theory, computational science. I think to understand ourselves, we certainly have to understand those other layers of reality.

I think what’s often assumed, though, is that to understand those other layers of reality, we don’t have to understand how our mind works. I think that’s tricky, because on the one hand, we’re asking for descriptions of objective reality, and we’re asking for laws of physics. We don’t want opinions that we’re going to disagree about. We want something that transcends our own minds and our ability to understand or describe those things. We’re looking for something objective in that sense.

I think it’s also true that many of the things that we talk about as fairly objective contain, unavoidably, a fairly subjective component. Once we have the idea of an objective reality out there that is independent of who’s observing it, we ascribe a lot of objectivity to things that are in fact much more of a mix, that have a lot more ingredients that we have brought to them than we like to admit, and that are not wholly out there to be observed by us as impartial observers but are very much a tangled interaction between the observer and the observed.

Lucas Perry: There are many different facets and perspectives here about why taking the cosmological perspective of understanding the history of the universe, as well as the person, is deeply informative. In terms of the perspective of the Future of Life Institute, understanding cosmology tells us what is ultimately possible for life in terms of how long the universe will last and how far you can spread, and fundamental facts about information and entropy, which are interesting and also ultimately determine the fate of intelligence and consciousness in the world. There’s also this anthropic aspect that you’re touching on, about how observers only observe the kinds of things that observers are able to observe. We can also consider the limits of the concepts that are born of being a primate conditioned by evolution and culture, and the extent to which our concepts are lived experiences within our world model. And then there’s this distinction between the map and the territory, or our world model and the world itself. And so perhaps part of fusing Zen with cosmology is experientially being mindful of not confusing the map for the territory in our moment to moment experience of things.

There’s also this scientific method for understanding what is ultimately true about the nature of reality, and then what Zen offers is an introspective technique for trying to understand the nature of the mind, the nature of consciousness, the causes and conditions which lead to suffering, and the concepts which inhabit and make up conscious experience. I think all of this thinking culminates into an authentically lived life as a scientist and as a person who wants to know the nature of things, to understand the heart of reality, to attempt to not be confused, and to live an examined life – both of the external world and the experiential world as a sentient being. 

Anthony Aguirre: Something like that, except I nurture no hope to ever not be confused. I think confusion is a perfectly admirable state in the sense that reality is confusing. You can try to think clearly, but I think there are always going to be questions of interest that you simply don’t understand. If you go into anything deeply enough, you will fairly quickly run into, wow, I don’t really get that. There are very few things that, if you push into them carefully and skeptically and open-mindedly enough, won’t bring you to that point. I think I would actually be let down if I ever got to the point where I wasn’t confused about something. All the fun would be gone. But otherwise, I think I agree with you. Where shall we start?

Lucas Perry: This helps to contextualize some of the motivations here. We can start by explaining why cosmology and Zen in particular? What are the skillful means born of a fusion of these two things? Why fuse these two things? I think some number of our audience will be intrinsically skeptical of all religion or spiritual pursuits. So why do this?

Anthony Aguirre: There are two aspects to it. I think one is a methodological one: Cosmological Koans is made up of these koans, and they’re not quite the same koans that you would get from a Zen teacher, but they’re sort of riddles or confrontations that are meant to take the recipient and cause them to be a little bit baffled, a little bit surprised, a little bit maybe shocked at some aspect of reality. The idea is both to confront someone with something that is weird or unusual, or that contradicts what they might have comfortably and familiarly believed beforehand, and make it uncomfortable and unfamiliar, and also to make the thing that is being discussed about the person rather than an abstract intellectual pursuit. Something that I like about Zen is that it’s about immediate experience. It’s about you, here and now, having this experience.

Part of the hope, methodologically, of Cosmological Koans is to try to put the reader personally in the experience, rather than have it be stuff out there that physicists over there are thinking about and researching, or that we can speculate about from a purely third person point of view; to emphasize that if we’re talking about the universe and the laws of physics and reality, we’re part of the universe. We’re obeying those laws of physics. We’re part of reality. We’re all mixed up in it. There can be cases where it’s useful to get some distance from that, but there are also cases where it’s really important to understand what all of that has to do with you. What does this say about me and my life, my experience, my individual, subjective, first person view of the world? What does that have to do with these very third person, objective things that physics studies?

Part of the point is an interesting and fun way to jolt someone into seeing the world in a new way. The other part is to make it about the reader in this case or about the person asking the questions and not just the universe out there. That’s one part of why I chose this particular format.

I think the other is a little bit more on the content side to say I think it’s dangerous to take things that were written 2,500 years ago and say, oh look, they anticipated what modern physics is finding now. They didn’t quite. Obviously, they didn’t know calculus, let alone anything else that modern physics knows. On the other hand, I think the history of thinking about reality from the inside out, from the interior perspective using a set of introspective tools that were incredibly sophisticated through thousands of years does have a lot to say about reality when the reality is both the internal reality and the external one.

In particular, when you’re talking about a person perceiving something in the exterior physical world, that process has both a physical side to it and an internal, subjective, mental side to it, and you can observe how much of the interior gets brought to the perception. In that sense, I think the Eastern traditions are way ahead of where the West was. The West has had this idea that there’s the external world out there that sends information in, that we receive it, and that we have a pretty much accurate view of what the world is. The idea that what we are actually experiencing is instead very much a joint effort of the experiencer and that external world, building up this thing in the middle, bringing the individual along with a whole backdrop of social and biological and physical history to every perception: I think that is something that is (a) true, and (b) has seen a lot more investigation on the Eastern side, and some in Western philosophy too of course, but on the philosophical side rather than just the physical side.

I think the book is also about exploring that connection. What are the connections between our personal first person, self-centered view and the external physical world? In doing that investigation, I’m happy to jump to whatever historical intellectual foundations there are, whether it’s Zen or Western philosophy or Indian philosophy or modern physics or whatever. My effort is to touch on all of those at some level in investigating that set of questions.

Lucas Perry: Human beings are the only general epistemic agents in the universe that we’re currently aware of. From the point of view of the person, all the progress we’ve made in philosophy and science, all that there has ever been historically from a first person perspective, is consciousness and its contents, and our ability to engage with those contents. It is by virtue of engaging with the contents of consciousness that we believe we gain access to the outside world. You point out here that in Western traditions, it’s been felt that we just have all of this data come in and we’re basically seeing and interacting with the world as it really is. But as we’ve increasingly uncovered, the process of science and of interrogating the external world is more like constructing an internal virtual world-model simulation: a representation of the world that you use to engage with it and navigate it.

From this first person experiential bedrock, Western philosophers like Descartes have tried to assume certain things about the nature of being, like “I think, therefore I am.” And from assumptions about being, the project and methodologies of science are born of that reasoning and follow from it. It seems like it took Western science a long time, perhaps up until quantum physics, to really come back to the observer, right?

Anthony Aguirre: Yeah. I would say that a significant part of the methodology of physics was at some level to explicitly get the observer out and to talk about only objectively mathematically definable things. The mathematical part is still with physics. The objective is still there, except that I think there’s a realization that one always has to, if one is being careful, talk about what actually gets observed. You could do all of classical physics at some level, physics up to the beginning of the 20th century without ever talking about the observer. You could say there is this object. It is doing this. These are the forces acting on it and so on. You don’t have to be very careful about who is measuring those properties or talking about them or in what terms.

Lucas Perry: Unless they would start to go fast and get big.

Anthony Aguirre: Before the 20th century, you didn’t care if things were going fast. In the beginning of the 20th century, though, there was relativity, and there was quantum mechanics, and both of those suddenly had the agent doing the observations at their centers. In relativity, you suddenly have to worry about what reference frame you’re measuring things in, and things that you thought were objective facts, like how long the time interval between two events is, suddenly were revealed to be not objective facts but dependent on the observer: in particular, on their reference frame, their state of motion, and so on.

Everything else, as it turned out, is really more like a property that the world can either have or not have when someone checks. The structure of quantum mechanics is at some level that things have a state, which encodes something about the objects, and what it encodes is a set of questions that I could ask the object and get answers to. There’s a particular set of questions that I might ask and get definite answers to. If I ask other questions that aren’t in that list, then I still get answers, but they’re indefinite, and so I have to use probabilities to describe them.

This is a very different structure to say the object is a list of potential answers to questions that I might pose. It’s very different from saying there’s a chunk of stuff that has a position and a momentum and a force is acting on it and so on. It feels very different. While mathematically you can make the connections between those, it is a very different way of thinking about reality. That is a big change obviously and one that I think still isn’t complete in the sense that as soon as you start to talk that way and say an electron or a glass of water or whatever is a set of potential answers to questions, that’s a little bit hard to swallow, but you immediately have to ask, well, who’s asking the questions and who’s getting the answers? That’s the observer.

The structure of quantum mechanics from the beginning has been silent about that. It says: make an observation and you’ll get these probabilities. That’s just pushing the observer into the thing that by definition makes observations, but without a specification of what it means to make an observation. What’s allowed to do it and what isn’t? Can an electron observe another electron, or does it have to be a big group of electrons? What exactly counts as making an observation, and so on? There are all these questions about what this actually means that have just been sitting around since quantum mechanics was created and really haven’t been answered in any agreed-upon or, I would say, satisfactory way.

Lucas Perry: There’s a ton there. In terms of your book, there’s this fusion between what is skillful and true about Zen and what is skillful and true about science. You discussed here, historically, this transition to an emphasis on the observer and information, and how those change both epistemology and ontology. The project of Buddhism or the project of Zen is ultimately also different from the project and intentions of Western science, historically, in terms of the normative and the ethics driving it, and whether it’s even trying to make claims about those kinds of things. Maybe you could also explain a little bit about where the projects diverge and what they’re ultimately trying to say, either about the nature of reality or the observer.

Anthony Aguirre: Certainly in physics, and in much of the philosophy of physics I suppose, it’s purely about a superior understanding of what physical reality is and how it functions, and how to explain the world around us using mathematical theories, but with little or no translation of that into anything normative or ethical or prescriptive in some way. It’s purely about what is. Not only is there no ought connected with it, as maybe there shouldn’t be, but there’s no necessary connection drawn between any statement of what ought to be and what is: no translation of, because reality is like this, if we want this, we should do this.

Physics has got to be part of that. What we need to do in order to achieve our goals has to do with how the world works, and physics describes that so it has to be part of it and yet, it’s been somewhat disconnected from that in a way that it certainly isn’t in spiritual traditions like Buddhism where our goal in Buddhism is to reduce or eliminate suffering. This is how the mind works and therefore, this is what we need to do given the way the mind and reality works to reduce or eliminate suffering. That’s the fundamental goal, which is quite distinct from the fundamental goal of just I want to understand how reality works.

I do think there’s more to do, and obviously there are sciences that fill that role, like psychology and social science and so on, that are more about: let’s understand how the mind works, let’s understand how society works, so that given some set of goals, like greater harmony in society or greater individual happiness, we have some sense of what we should do in order to achieve them. I would say there’s a pretty big gap nowadays between those fields on the one hand and fundamental physics on the other. You can spend a lot of time doing social science or psychology without knowing any physics and vice versa, but at the same time, it’s not clear that they really should be so separate. Physics is talking about the basic nature of reality. Psychology is also talking about the basic nature of reality, but they’re two different sides of it, the interior side and the exterior side.

Those two are very much connected, and so it should not be possible to fully understand one without at least some of the other. That I think is also part of my motivation, because I don’t think you can have a comprehensive worldview of the type that you want to have, in order to understand what we should do, without having some of both aspects in it.

Lucas Perry: The observer has been part of the equation the whole time. It’s just that in classical mechanics it never really mattered that much, but now it matters more, given astronomy and communications technologies. When determining what is, the fact that an observer is trying to determine what is, and that the observer has a particular nature, impacts the process of trying to discover what is. But not only are there supposed “is statements” that we’re trying to discover or understand; we’re also, from one perspective, conscious beings who have experiences, who suffer and feel joy, and who are trying to determine what we ought to do. I think what you’re pointing towards is basically a unification of the problem of determining what is with the often overlooked fact that we are contextualized as creatures in the world we’re attempting to understand, making decisions about what to do next.

Anthony Aguirre: I think you can think of that in very big terms like that: in this cosmic context, what is subjectivity? What is consciousness? What does it mean to have feelings of moral value and so on? Let’s talk about that. I think it’s also worth being more concrete. Insofar as I think the world is out there objectively and I’m just perceiving it more or less directly, I tend to make very real in my mind a lot of things that aren’t necessarily real. Things that are very much half created by me, I tend to turn into objective things out there and then react to them. This is something that we all do on a personal basis all the time in our daily lives. We make up stories and then we think that those stories are real. This is just a very concrete thing that we do every day.

Sometimes that works out well and sometimes it doesn’t, because if the story that we have is different from the story that someone else has, or the story that society has, or from some, in some ways, more objective story, then we have a mismatch, and we can make a lot of poor choices and cause poor outcomes by doing that. The very clear psychological fact, which we can discover with a little bit of self-analysis, that the stories we make up aren’t as true as we usually think they are: that’s just one end of the spectrum of this process by which we as sentient beings are very much co-creating the reality we’re inhabiting.

I think with this co-creation process, we’re comfortable with the fact that it happens when we make up stories about what happened yesterday when I was talking to so and so. We don’t think of it so much when we’re talking about a table. We think the table is there. It’s real. If anything is, it is. When we go deeper, we can realize that all of the things like color and solidity and endurance over time aren’t in the wave function of the atoms and the laws of physics evolving them. Those are properties that we’ve brought as useful ways to describe the world, ways that have developed over millions of years of evolution and thousands of years of social evolution and so on. None of those things are built into the laws of nature. Those are all things that we’ve brought. That’s not to say that the table is made up. Obviously, it’s not. The table is very objective in a sense, but there’s no table built into the structure of the universe.

I think we tend to brush under the rug how much we bring to our description of reality. We say that it’s out there. We can realize that on small levels, but realizing the depth of how much we bring to our perceptions, and where that stuff comes from, which is a long, historical, complicated information-generating process, takes a lot more diving in and thinking about.

Lucas Perry: Right. If one were god or if one were omniscient, then to know the universe at the ultimate level would be to know the cosmic wave function, and within the cosmic wave function, things like marriage and identity and the fact that I have a title and conceptual history about my life are not bedrock ontological things. Rather they’re concepts and stories that sentient beings make up due to, as you said, evolution and social conditioning and culture.

Anthony Aguirre: Right, but when you’re saying that, I think there’s a suggestion that the cosmic wave function’s description would be better in some way. I’d take issue with that, because I think if you were some super duper mega intelligence that knew the position of every atom, or knew the cosmic wave function exactly, that doesn’t mean that you would know that the table in front of me is brown. That description of reality has all the particles in it and their positions and, at some level, all the information that you could have of the fundamental physics, but it’s completely missing a whole bunch of other stuff: the ways that we categorize that information into meaningful things like solidity and color and tableness.

Lucas Perry: It seems to me that that must be contained within that ultimate description of reality because in the end, we’re just arrangements of particles and if god or the omniscient thing could take the perspective of us then they would see the table or the chair and have that same story. Our stories about the world are information built into us. Right?

Anthony Aguirre: How would it do that? What I’m saying is there’s information. Say the wave function of the universe. That’s some big chunk of information describing all kinds of different observations you could make of locations of atoms and things, but nowhere in that description is it going to tell you the things that you would need to know in order to talk about whether there’s a glass on the table in front of me because glass and table and things are not part of that wave function. Those are concepts that have to be added to it. It’s more specification that has been added that exists because of our view of the world. It only exists from the interior perspective of where we are as creatures that have evolved and are looking out.

Lucas Perry: My perspective here is that given the full capacity of the universal wave function for the creation of all possible things, there is the total set of arbitrary concepts and stories and narratives and experiences that sentient beings might dream up, which arise within the context of that particular cosmic wave function. There could be tables and chairs, or sniffelwoops and worbblogs, but if we were god and we had the wave function, we could run it such that we created the kinds of creatures who dreamt a life of sniffelwoops and worbblogs or whatever else. To me, it seems like it’s all contained within the original thing.

Anthony Aguirre: This is where I think it’s useful to talk about information, because I think I just disagree with that idea. If you think of an eight-bit string, there are 256 possibilities for how the ones and zeros can be arranged. If you consider all 256 of those possibilities together, then there’s no information there. Whereas when I say actually only 128 of these are allowed, because the first bit is a one, you cut down the list of possibilities, and by cutting it down, now there’s information. This is exactly the way that information is defined physically or mathematically. If all the possibilities are on an equal footing, you might say equally probable, then there’s no information there. Whereas if some of them are more probable, or even known, like this bit is definitely a zero or a one, then the whole thing has information in it.
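Aguirre’s bit-string arithmetic can be checked directly. The sketch below is not from the episode, just an illustration: it computes the information, in bits, gained by narrowing a uniform set of equally likely possibilities down to an allowed subset.

```python
import math

def information_bits(num_allowed, num_total):
    """Information (in bits) gained by narrowing num_total equally
    likely possibilities down to num_allowed of them."""
    return math.log2(num_total / num_allowed)

total = 2 ** 8  # all 256 eight-bit strings

print(information_bits(total, total))  # no constraint: 0.0 bits
print(information_bits(128, total))    # "first bit is a one": 1.0 bit
print(information_bits(1, total))      # string fully known: 8.0 bits
```

Halving the set of possibilities yields exactly one bit, and pinning down a single eight-bit string yields all eight, matching the 256-versus-128 example in the conversation.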

I think very much the same way with reality. If you think of all the possibilities and they’re all on the table with equal validity, then there’s nothing there. There’s nothing interesting. There’s no information there. It’s when you cut down the possibilities that the information appears. You can look at this in many different contexts. If you think about it in quantum mechanics, if you start some system out, it evolves into many possibilities. When you make an observation of it, you’re saying, oh, this possibility was actually realized and in that sense, you’ve created information there.

Now suppose you subscribe to the many worlds view of quantum mechanics. You would say that the world evolves into two copies, one in which thing A happened and one in which thing B happened. In that combination, A and B, there’s less information than in either A or B. If you’re observer A or if you’re observer B, you have more information than if you’re observer C looking at the combination of things. In that sense, I think we as residents, not with omniscient view, but as limited agents that have a particular point of view actually have more information about the world in a particular sense than someone who has the full view. The person with the full view can say, well, if I were this person, I would see this, or if I were this person, I would see that. They have in some sense a greater analytical power, but there’s a missing aspect of that, which is to make a choice as to which one you’re actually looking at, which one you’re actually residing in.

Lucas Perry: It’s like the world model which you’re identified with or the world model which you’re ultimately running is the point. The eight-bit string that you mentioned: that contains all possible information that can be contained within that string. Your point is that when we begin to limit it is when we begin to encode more information.

Anthony Aguirre: That’s right. There’s a famous story called The Library of Babel by Borges. It’s a library with every possible sequence of characters, just book after book after book. You have to ask yourself how much information is in that library. On the one hand, it seems like a ton, because each volume you pick out has a big string of characters in it, but on the other hand, there’s nothing there. You would search practically forever, far longer than the age of the universe, before you found even a sentence that made any sense.

Lucas Perry: The books also contain the entire multi-verse, right?

Anthony Aguirre: If they go on infinitely long, if they’re not finite length books. This is a very paradoxical thing about information, I think, which is that if you combine many things with information in them, you get something without information in it. That’s very, very strange. That’s what the Library of Babel is. I think it’s many things with lots of information, but combined, they give you nothing. I think that’s in some level how the universe is that it might be a very low information thing in and of itself, but incredibly high information from the standpoint of the beings that are in it like us.

Anthony Aguirre: When you think of it that way, we become vastly, vastly more important than you might think because all of that information that the universe then contains is defined in terms of us, in terms of the point of view that we’re looking out from, without which there’s sort of nothing there. That’s a very provocative and strange view of the world, but that’s more and more the way I think maybe it is.

Lucas Perry: I’m honestly confused. Can you expand upon your example? 

Anthony Aguirre: Suppose you’ve got the Library of Babel. It’s there, it’s all written out. But suppose that once there’s a sentence like, “I am here observing the world,” you can attribute to that sentence a point of view. So once you have that sequence of words, “I am here observing the world,” it has a subjective experience. Then almost no book in this whole library has that, but a very, very, very select few do. And then you focus on those books. You would say there’s a lot of information associated with that sub-selection of books, because making something more special means that it has more information. Once you specify something, there’s a bunch of information associated with it.

Anthony Aguirre: By picking out those particular books, now you’ve created information. What I’m saying is there’s a very particular subset of the universe, or subset of the ways the universe could be, that adds a perspective that has a subjective sense of looking out at the world. And once you focus in from all the different states of the universe to those associated with having that perspective, that creates a whole bunch of information. That’s the way that I look at our role as subjective observers in the universe: by being in a first person perspective, you’re sub-selecting a very, very, very special set of matter and thus creating a whole ton of information relative to all possible ways that the matter could be arranged.
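The “sub-selection creates information” point can be put in numbers. The figures below are purely hypothetical, chosen only to illustrate the self-information (surprisal) of picking out a rare subset of equally likely states:

```python
import math

# Hypothetical numbers for illustration only: suppose 1 state in a
# trillion corresponds to a configuration with a first-person view.
total_states = 10 ** 12
special_states = 1

# Self-information (surprisal) of the sub-selection, in bits:
info = -math.log2(special_states / total_states)
print(round(info, 2))  # 39.86
```

The rarer the subset, the more bits the sub-selection carries; this is the quantitative sense in which focusing on the few observer-containing books of the Library of Babel “creates” information.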

Lucas Perry: So for example, say the kitchen is dirty. If you leave the kitchen alone, entropy will just continue to make the kitchen dirtier, because there are more possible states in which the kitchen is dirty than clean, and likewise there are more possible states of the universe in which sentient human beings do not arise. But here we are, encoded on a planet with the rest of organic life, and in total, evolution and the history of life on this planet require a large and unequal amount of information and specification.

Anthony Aguirre: Yes, I would say … We haven’t talked about entropy, and I don’t know if we should. Genericness is the opposite of information. So when something’s very specific, there’s information content, and when it’s very generic, there’s less information content. This is at some level saying, “Our first person perspective as conscious beings is very, very specific.” I think there is something very special and mysterious at least, about the fact that there’s this very particular set of stuff in the universe that seems to have a first person perspective associated with it. That’s where we are, sort of almost by definition.

That’s where I think the question of agency and observation and consciousness has something to do with how the universe is constituted, not in that it changes the universe in some way, but that connected with this particular perspective is all this information, and if the physical world is at some level made of information, that’s a very radical thing because that’s saying that through our conscious existence and our particular point of view, we’re creating information, and information is reality, and therefore we’re creating reality.

There are all these ways that we apply physics to reality. They’re very information theoretic. There’s this sort of claim that a more useful way to think about the constituents of reality are as informational entities. And then the second claim is that by specifying, we create information. And then the third is that by being conscious observers who come into being in the universe and then have our perspective that we look out toward the universe from, that we are making a selection, we’re specifying, “This is what I see.” So we’re then creating a bunch of information and thus creating a reality.

In that sense, I’m claiming that we create a reality, not in some “I think in my mind and therefore reality appears, like magical powers” sense, but in the sense that if we really talk about what’s real, it isn’t just little bits of stuff, I think, but everything else that makes up reality, and the information that makes up reality is something that we very much are part of the creation of.

There are different definitions of information, but the way that the word is most commonly used is for Shannon information. And what that is, is an amount that is associated with a set of probabilities. So if I say I’m going to roll some dice, what am I going to roll? So you’d say, “I don’t know.” And I’d say, “Okay, so what probabilities would you ascribe to what I’m going to roll?” And you’d say, “Well probably a sixth for each side of the die.” And I would say that there’s zero information in that description. And I say that because that’s the most uncertain you could be about the rolls of the dice. There’s no information there in your description of the die.

Now I roll it, and we see that it’s a three. So now the probability of three is 100% or at least very close to it. And the probability of all the other ones is zero. And now there is information in our description. Something specific has happened, and we’ve created information. That’s not a magical thing; it’s just the information is associated with probabilities over things, and when we change the probabilities, we change how much information there is.
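The die example maps directly onto Shannon entropy, which measures uncertainty in bits. A minimal sketch in Python, assuming the standard definition H = −Σ p·log₂(p): the uniform die has maximum entropy (maximum uncertainty, so no information about the outcome in the sense used here), and observing the roll drops the entropy to zero.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero terms."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

before = [1 / 6] * 6          # before the roll: uniform over six faces
after = [0, 0, 1, 0, 0, 0]    # after observing a three: certainty

h_before = entropy(before)    # log2(6) ≈ 2.585 bits of uncertainty
h_after = entropy(after)      # 0.0 bits: nothing left to learn

# The "information created" by the observation is the drop in uncertainty.
print(h_before - h_after)     # ≈ 2.585 bits
```

On this reading, “zero information” in the uniform description and “information created” by the roll are two sides of the same quantity: the entropy before minus the entropy after.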

Usually when we observe things, we narrow the probabilities; that’s kind of the point of making observations, to find out more about something. In that sense, we can say that we’re creating information, or gathering it, by doing the measurement. Any time we look at anything, we’re creating information, right?

If I just think about what is behind me, well, there’s probably a pillar. It might be over here, it might be over there. Now let me turn around and look: now I’ve gathered, or created, information in my description of the pillar’s location. When we’re talking about a wave function and somebody measuring the wave function, and we want to keep track of all of the information and so on, it gets rather tricky, because there are questions about whose probabilities we are talking about, and whose observations, and what they are observing. So we have to get really careful and technical about what sort of probabilities are being defined, whose they are, and how they are evolving.

When you read something like, “Information is preserved in the universe,” what that actually means is that if I take some description of the universe now and then I close my eyes and I evolve that description using the laws of physics, the information that my description had will be preserved. So the laws of physics themselves will not change the amount of information in that description.

But as soon as I open my eyes and look, it changes, because I’ll observe something: while my eyes were closed, the universe could have evolved into two different things, and now I open them and see which one it actually evolved into. Now I’ve increased the information; I’ve reduced the uncertainty. So it’s very, very subtle, the way in which the universe preserves information. The dynamics of the universe, the laws of physics, preserve the information that is associated with a description that you have of the world. There’s an incredible amount of richness there, because that’s what’s actually happening. If you want to think about what reality is, that’s what reality is, and it’s the observers who are creating that description, observing the world, and changing the description to match what they saw. Reality is a combination of those two things: the evolution of the world by the laws of physics, and the interaction of that with the person, or whatever it is, that is asking the questions and making the observations.
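The closed-eyes point can be illustrated with a toy model (a hedged sketch, not anything from the book: treat the deterministic, reversible dynamics as a permutation of a finite state space). Evolving a probability distribution under a permutation only reorders the probabilities, so the entropy of the description is exactly preserved; an observation, which conditions the distribution on what was seen, is what changes it.

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# My description of a toy four-state universe: probabilities over its states.
description = [0.5, 0.25, 0.125, 0.125]

def evolve(probs):
    """Reversible 'laws of physics': a permutation (cyclic shift) of states."""
    return probs[-1:] + probs[:-1]

# Eyes closed: evolve the description. Permutations preserve entropy exactly.
evolved = evolve(evolve(description))          # [0.125, 0.125, 0.5, 0.25]
assert entropy(evolved) == entropy(description)

# Eyes open: observe "the system is in state 0 or state 1" and condition on it.
total = evolved[0] + evolved[1]
conditioned = [evolved[0] / total, evolved[1] / total, 0.0, 0.0]

print(entropy(description), entropy(evolved), entropy(conditioned))  # 1.75 1.75 1.0
```

The dynamics alone never changed the information in the description; only the act of looking and updating did.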

What’s very tricky is that unlike matter, information is not something where you can say, “I’ve got four bits of information here and five bits of information here, so I’m going to combine them and get nine bits.” Sometimes that’s true, but other times it’s very much not. That’s what’s very tricky, I think. If I say I’ve got a die and I rolled a one with 100% certainty, that’s information. If I say I rolled a two, or I rolled a three, all of those have information associated with them. But if I combine them, in the sense that I say I have a die and I rolled a one and a two and a three and a four and a five and a six, then there’s no information associated with that.

All of the things happened, and so that’s what’s so tricky about it. It’s the same with the library of Babel. If I take every possibility on an equal footing, then none of them is special and there’s no information associated with that. If I take a whole bunch of special things and put them in a big pot, I just have a big mess and then there’s nothing special any more.
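This non-additivity shows up directly in the same Shannon framing (again a hedged sketch, not the speakers’ formalism): each “I rolled face k” description has zero entropy, i.e. full information about the outcome, but an equal mixture of all six is just the uniform distribution, which has maximum entropy and so no information at all. That is the library-of-Babel point in miniature.

```python
import math

def entropy(probs):
    return sum(-p * math.log2(p) for p in probs if p > 0)

def delta(face, sides=6):
    """'I rolled <face>': all probability on a single outcome."""
    return [1.0 if i == face else 0.0 for i in range(sides)]

def mixture(dists):
    """Equal-weight pooling: 'one of these happened, but I don't know which'."""
    n = len(dists)
    return [sum(d[i] for d in dists) / n for i in range(len(dists[0]))]

# Each special description on its own is maximally informative: zero entropy.
assert all(entropy(delta(k)) == 0.0 for k in range(6))

# Pool all six special descriptions and you get back the uniform distribution:
# maximum entropy, no information about the roll. Combining destroyed it.
pooled = mixture([delta(k) for k in range(6)])
print(entropy(pooled))   # ≈ 2.585 bits of uncertainty: none of the individual
                         # descriptions' information survives the pooling
```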

When I say something like, “The world is made out of information,” that means it has different sorts of properties than if it were made out of stuff. With stuff, you take some away and there’s less stuff; you divide it in two and each half has half as much. Information is not necessarily that way. If you have a bunch of information, a description of something, and you take a subset of it, you’ve actually made more information, even though there’s less that you’re talking about.

It’s different from how we usually think about the makeup of reality, and it has properties that are somewhat counterintuitive when we’re used to thinking of the world as being made up of stuff.

Lucas Perry: I’m happy that we have spent this much time discussing information, because I think it offers an important conceptual shift for seeing the world, and a good challenge to some commonly held intuitions, at least ones that I have. The question for me now is, what are the relevant and interesting implications here for agents? The one thing that has been coming to my mind, to inject more Zen here, is a koan that goes something like: “First there were mountains, and then there were no mountains, and then there were mountains.” This seems to have parallels to the view that you’re articulating, because first you’re just stupefied and bought into the reality of your conceptualizations and stories, where you say, “I’m actually ultimately a human being, and I have a story about my life where I got married, and I had a thing called a job, and there were tables, which were solid and brown and had other properties…” But as you were saying, there’s no tableness or table in the wave function; these are all stories and abstractions which we use because they are functional or useful for us. And then when we see that, we go, “Okay, so there aren’t really mountains in the way that I thought; mountains are just stories we tell ourselves about the wave function.”

But then I think it seems like you’re pointing out here again that there’s sort of this ethical or normative imperative, where it’s like, “Okay, so mountains are mountains again, because I need my concept and lived experience of a mountain to exist in the world, and to exist amongst human institutions and concepts and language. And even though I may return to this, it all may be viewed in a new light.” Is this pointing in the right direction, in your opinion?

Anthony Aguirre: I think in a sense it is: we think we’re so important and the things around us are real, and then we realize as we study physics that actually we’re tiny little blips in this potentially infinite, or at least extremely large, somewhat uncaring-seeming universe; that the things we thought were real are kind of fictitious, partly made up by our own history and perceptions; that the table isn’t really real but is made up of atoms or wave function or what have you.

But then I would say: why do you attribute more realness to the wave function than to the table? The wave function is a very impoverished description of the world that doesn’t contain tables and such things. So I think there’s this pathology of saying that because something is described by fundamental physical and mathematical laws, it’s more real than something like a table, which is described by people talking about tables to other people.

There’s something very different about those things, but is one of them more real and what does that even mean? If the table is not contained in the wave function and the wave function isn’t really contained in the table, they’re just different things. They’re both, in my view, made out of information, but rather different types and accessible to rather different things.

To me, the “then I realized it was a mountain again” moment is that yes, the table is kind of an illusion in a sense. It’s made out of atoms, and we bring all this stuff to it; we make up solidity and brownness and so on. So it’s not a fundamental part of the universe; it’s not objectively real. But then I think at some level nothing is so purely objectively real. It’s a sliding scale, one with a place for things like the wave function of the universe and the fundamental laws of physics at the more objective end, brownness and solidity at the more subjective end, and my feelings about tables and my thirst for water at the very subjective end. I see it as a continuous spectrum, and all of those things are real, just in somewhat different ways. In that sense, I think I’ve come back to those illusory things being real again, but from a rather different perspective, if we’re going to be Zen about it.

Lucas Perry: Yeah, it seems to be an open question in physics and cosmology. There is still argument going on about what it means for something to be real. I guess I would argue that something is real if it has causality, or if causality would supervene upon it… I’m not even sure. I don’t think I’m even going to start here; I think I would probably be wrong. So…

Anthony Aguirre: Well, I think the problem is in trying to make a binary distinction between whether things are real or not or objective or not. I just think that’s the wrong way to think about it. I think there are things that are much more objective than other things, and things that are much less objective than other things, and to the extent that you want to connect real with being objective, there are then things that are more and less real.

In one of the koans in the book, I make this argument that we think of a mathematical statement like the Pythagorean theorem, say, or some other beautiful thing like Euler’s theorem relating exponentials to cosines and sines, that these are objective special things built into the universe, because we feel like once we understand these things, we see that they must have been true and existed before any people were around. Like it couldn’t be that the Pythagorean theorem just came into being when Pythagoras or someone else discovered it, or Euler’s theorem. They were true all the way back until before the first stars and whatnot.

And that’s clearly the case. There is no time at which those things became true. At the same time, suppose I just take some axioms of mathematics that we employ now, and some sort of rules for generating new true statements from them. And then I just take a computer and start churning out statements. So I churn out all possible consequences of those axioms. Now, if I let that computer churn long enough, somewhere in that string of true statements will be something that can be translated into the Pythagorean theorem or Euler’s theorem. It’s in there somewhere. But am I doing mathematics? I would say I’m not, in the sense that all I’m doing is generating an infinite number of true statements if I let this thing go on forever.

But almost all of them are super uninteresting. They’re just strings of gobbledygook that are true given the axioms and the rules for generating new true statements, but they don’t mean anything. Whereas Euler’s theorem is a very, very special statement that means something. So when we’re doing mathematics, we feel like what we’re doing is proving things to be true. And we are at some level, but I think what we’re really doing, from this perspective, is picking out a very, very special, interesting subset from this information-free catalog of true statements. And in making that selection, we’re once again creating information. And creating that information is really what we’re doing, I think, when we’re doing mathematics.
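The theorem-churning computer can be mimicked in miniature (a toy illustration under obvious simplifications: arithmetic facts instead of a real proof system). The mechanical enumeration emits only truths, but the catalog as a whole is undifferentiated; the “mathematics,” on this view, is the selection of the rare special subset, here the integer instances of the Pythagorean theorem.

```python
import math

def churn_truths(limit):
    """Mechanically enumerate true statements: every (a, b, a^2 + b^2) triple
    it emits encodes a correct equation, but almost all are uninteresting."""
    for a in range(1, limit):
        for b in range(1, limit):
            yield a, b, a * a + b * b

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

catalog = list(churn_truths(21))   # 400 true statements, no selection yet

# The selection step: pick out the rare truths where the sum is itself a
# perfect square, i.e. Pythagorean triples with integer sides.
special = [(a, b, math.isqrt(n)) for a, b, n in catalog if a <= b and is_square(n)]

print(len(catalog))   # 400
print(special)        # [(3, 4, 5), (5, 12, 13), (6, 8, 10), ...]
```

The catalog costs nothing to generate; it is singling out the (3, 4, 5)-like entries, out of the undifferentiated mass, that creates the information.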

The information contained in the statement that the Pythagorean theorem is an interesting theorem that applies to stuff in the real world and that we should teach our kids in school, that only came into being when humans did. So although the statement has always been true, the information I think was created along with humans. So I think you kind of get to have it both ways. It is built into the universe, but at the same time, it’s created, so you discover it and you create it.

I think there are a lot of things that are that way. And although the Pythagorean theorem feels super objective, in the sense that we all agree on it once we understand what it is, at the same time it’s got this subjective aspect: out of all the theorems, we selected this particular one of interest. We also selected the axioms, by the way, out of all the different sets of axioms we could have chosen. So there’s this combination of the objectivity and the subjectivity that we, as humans who like to do geometry and think about the world and prove theorems, have brought to it. And that combination is what’s created the information that is associated with the Pythagorean theorem.

Lucas Perry: Yeah. You threw the word “subjectivity” in there, but this process is bringing us to the truth, right? I mean, the question is, again, what is true or real?

Anthony Aguirre: There are different senses of subjectivity. There’s one sense of having an interior world view, having consciousness or awareness or something like that, being a subject. And there’s another sense of saying that it’s perspectival, that it’s relative or something, that different agents might not agree on it or might see it a bit differently. So I’d want to distinguish between those two.

Lucas Perry: In which sense did you mean?

Anthony Aguirre: What I mean is that the Pythagorean theorem is quite objective, in the sense that once lots of agents agree on the premises and the ground rules, we’re all going to agree on the Pythagorean theorem, whereas we might not agree on whether ice cream is good. But even so, it’s still a little bit less than fully objective.

Lucas Perry: It’s like a small part of all possible mathematically true statements which arise out of those axioms.

Anthony Aguirre: Yes. And that some community of agents in a historical process had to select that out. It can’t be divorced from the process and the agents that brought it into being, and so it’s not entirely objective in that sense.

Lucas Perry: Okay. Yeah, yeah, that makes sense. I see. So this is a question I was intending to ask you an hour ago, before we went down this wormhole. First, I’m interested in the structure of your book. How do you structure your book in terms of the ideas, and what leads to what?

Anthony Aguirre: Just a brief outline of the book: there are a few different layers of structure. One is the koans themselves, which are sort of parables or little tales that encode some idea, maybe a metaphor or just the idea itself. And the koans take place as part of a narrative that starts around 1610 or 1630, on a trip from Italy to, in the end, Kyoto. So there’s this across-the-world journey that takes place through these koans. And they don’t come in chronological order, so you have to piece together the storyline as the book goes on. But it comes together in the end, so there’s a sequence of things happening through the koans, and there’s a storyline that you get to see assemble itself, and it involves a genie, and it involves a sword fight, and it involves all kinds of fun stuff.

That’s one layer of the structure: the koans forming the narrative. Then after each koan is a commentary that delves into the ideas, providing some background, filling in some physics, talking about what that koan was getting at. In some cases it’s a kind of resolution, like here’s the paradox and here’s the resolution to that paradox. But more often it’s: here’s the question, here’s how to understand what that question is really asking, and here’s a deeper question that we don’t know the answer to, which maybe we’ll come back to later in the book, or maybe we won’t. So there’s this development of a whole bunch of physics ideas going on in those commentaries.

In terms of the physics ideas, there’s a sequence. The first part is classical physics, including relativity. The second part is quantum mechanics, essentially. The third part is statistical mechanics and information theory. The fourth part is cosmology. The fifth part is the connections to the interior sense: subjectivity and the subject, and thinking about interior experience and consciousness and the “I.” And then the last part is a more philosophical section, bringing things together in the way that we’ve been discussing: how much of reality is out there, and how much of it is constructed by us, us writ large as a society, and thinking beings, and biological evolution, and so on. So that’s the structure of the book.

Lucas Perry: Can you read for us two of your favorite koans in the book?

Anthony Aguirre: This one alludes to a classic philosophical thought experiment of the ship of Theseus. This one’s called What Is It You Sail In? It takes place in Shanghai, China in 1620. “After such vast overland distances, you’re relieved that the next piece of your journey will be at sea, where you’ve always felt comfortable. Then you see the ship. You’ve never beheld a sorrier pile of junk. The hull seems to be made mostly of patches, and the patches appear to be made of other patches. The nails look nailed together. The sails are clearly mostly a quilt of canvas sacks and old clothing. ‘Does it float?’ you ask the first mate, packing in as much skepticism as you can fit. ‘Yes. Many repairs, true. But she is still my good companion, [Atixia 00:25:46], still the same ship she ever was.’

Is she?, you wonder. Then you look down at your fingernails, your skin, the fading scar on your arm and wonder, am I? Then you look at the river, the sea, the port and all around. Is anything?”

So what this one’s getting at is the classic tale where, if you replace one board of a ship, you’d still say it’s the same ship; you’ve just replaced one little piece of it. But as you replace more and more pieces, at some point every piece of the ship might be a piece that wasn’t there before. So is it the same ship or not? Every single piece has been replaced. And our body is pretty much like this; on a multi-year timescale, we replace pretty much everything.

The idea of this is to get at the fact that when we think of a thing like an identity that something has, it’s much more about the form and I would say the information content in a sense, than about the matter that it’s made up of. The matter’s very interchangeable. That’s sort of the way of kicking off a discussion of what does it mean for something to exist? What is it made of? What does it mean for something to be different than another thing? What are the different forms of existence? What is the form versus the matter?

And the conclusion is that at some level, the very idea of matter is a bit of an illusion. There’s really only form, in the sense that when you break those little bits of stuff down further, you see that there are protons and electrons and neutrons and whatnot, but those things are not little bits of stuff. They’re sort of amounts or properties of something. We think of energy or mass as a thing, but it’s better to think of it as a property that something might have if you look.

The fact that you have an electron really means that you’ve got something with a little bit of the energy property or a little bit of the mass property, a little bit of the spin property, a little bit of the electron lepton number property, and that’s it. And maybe you talk about its position or its speed or something. So it’s more like a little bundle of properties than a little bundle of stuff. And then when you think of agglomerations of atoms, it’s the same way. Like the way that they’re arranged is a sort of informational thing, and questions you can ask and get answers to.

Going back to our earlier conversation, this is just a slightly more concrete version of the claim that when we say what something’s made of, there are lots of different answers to that question that are useful in different ways. But the answer that it’s made of stuff is maybe not so useful as we usually think it is.

Lucas Perry: So just to clarify for listeners: koans in Zen are traditionally supposed to be not explicitly philosophical or analytical, but experiential things meant to subvert commonly held intuitions, which may take you from seeing mountains as mountains, to no mountains, to mountains again. So here there’s this perspective that there are both the atoms which supposedly make up me and you, and the way in which those atoms are arranged. And this koan, as you say, elicits the thought that you can remove any bit of information from me, one bit at a time, and there’s no one bit of information that I would say is essential to what I call Lucas. Nor any one atom. So then what am I? How many atoms or bits of information do you have to take away from me until I stop being Lucas? And so one may arrive at a place of deeply questioning the category of Lucas altogether.

Anthony Aguirre: Yeah. The things in this book are not Zen koans in the sense that a lot of them are pretty philosophical and intellectual and analytical, which Zen koans are sort of not. But at the same time, when you delve into them and try to experience them, when you think not of the abstract idea of the ship in this koan and lepton numbers and energy and things like that, but when you apply it to yourself and think, okay, what am I if I’m not this body?, then it becomes a bit more like a genuine Zen koan. You’re sort of like, ah, I don’t know what I am. And that’s a weird place to be. I don’t know what I am.

Lucas Perry: Yeah. Sure. And the wisdom to be found is the subversion of a ton of commonly held intuitions, which are evolutionarily, culturally, and socially conditioned. So yeah, this has to do with the sense of permanent things and objects, and then what identity ultimately is, or what our preferences about identity are, or whether there are normative or ethical imperatives about the sense of identity that we ought to take. Are there any other major intuitions that you’re attempting to subvert in your book?

Anthony Aguirre: Well yeah, there’s … I guess it depends which ones you have, but I’ve subverted as many as I can. I mean, a big one, I think, is the idea of a sort of singular individual self, and that’s one that is really interesting to experiment with. The way we go through our lives, pretty much all the time, is with this one-to-one correspondence between the feeling of being an individual self looking out at the world, the “I,” and the body. We feel like there’s this little nugget of me-ness that’s experiencing the world and owns mental faculties, and then owns and steers around this body that’s made out of physical stuff.

That’s the intuition that we go through life with, but then there are all kinds of thought experiments you can do that put tension on that. And one of them that I go through a lot in the book is what happens when the body gets split or duplicated, or there are multiple copies of it and things like that. And some of those things are physically impossible or so extraordinarily difficult that they’re not worth thinking about, but some of them are very much things that might automatically happen as part of physics, if we really could instantaneously copy a person and create a duplicate of them across the room or something like that.

What does that mean? How do we think about it when we’ve broken that one-to-one correspondence between the thing that we like to think of as ourself, our little nugget of I-ness, and the physical body, which we know is very closely related to that thing? When one of them bifurcates into two, it throws the whole thing up in the air: now what do we think? And it gets very unsettling to be confronted with that. There are several koans investigating this at different levels that don’t really draw any conclusions, I would say. They’re more experiments that I’m inviting other people to subject themselves to, just as I have in thinking about them.

It’s very confusing how to think about them. Like, should I care if I get copied to another copy across the room and then get instantaneously destroyed? Should that bother me? Should I fear that process? What if it’s not across the room, but across the universe? And what if it’s not instantaneously that I appear across the room, but I get destroyed now, and I exist on the other side of the universe a billion years from now, the same configuration of atoms? Do I care that that happens? There are no easy answers to this, I think, and they’re not questions that you can easily dismiss.

Lucas Perry: I think that this has enormous ethical implications and represents, if transcended, an important point in human evolution. There is this koan, which goes something like, “If you see the Buddha on the road, kill him,” which means that if you think you’ve reached something like enlightenment, it’s not that, because enlightenment is another one of these stories. But insofar as human beings are capable of transcending illusions and reaching anything called enlightenment, I think that an introspective journey into trying to understand the self and the world is one of the most interesting pursuits a human being can undertake. And just to contextualize this and, I think, paint the picture better: evolution has produced these information-processing systems with a virtual sense of self that exists in our world model, and in the model we have of ourselves and our body, because this is good for self-preservation.

So you can say, “Where do you feel you’re located?” Well, I sort of feel I’m behind my face, and I feel I have a body, and I have this large narrative of self-concept and identity, which is like, “I’m Lucas. I’m from here. I have this concept of self which I’ve created, which is basically this extremely elaborate connotative web of all the things which I think make up my identity.” And under scrutiny, this is basically all conditioned. It’s all outside of myself, all prior to myself. I’m not self-made at all, yet I think that I’m some sort of separate self-entity. And then at some point in the story of humanity along come the Abrahamic religions, which are going to have tremendous cultural and social implications for the way that evolution has already bred ego-primates like ourselves. We’re primates with egos, and now we have Abrahamic religions, which are contributing to this problem by conditioning the language and philosophy and thought of the West, which say that ultimately you’re a soul, you’re not just a physical thing.

You’re actually a soul who has a body, and you’re basically just visiting here for a while, and then the thing that is essentially you will go to the next level of existence. This leads, I think, to reifying this rational conceptualization of self and this experience itself, where you feel like you have a body, you feel that your heart beats itself, you feel that you think your thoughts, and you say things like, “I have a brain.” Who is it that stands in relation to the brain? Or we might say something like, “I have a body.” Who is it that has a body? So it seems like our language is clearly conditioned and structured around our sense and understanding of self. And there’s also this sense in which you’ve been trying to subvert some of these ideas, like sameness or otherness, what counts as the same ship or not. And from an ultimate physics perspective, the thing that is fusing the stars is the same thing that is thinking my thoughts. The fundamental ontology of the world is running everything, and I’m not separate from that, yet it feels like I am, and this seems to have tremendous ethical implications.

For example, people believe that people are deserving of retribution for crimes or for acting immorally, as if they had chosen in some ultimate and concrete sense what to do. The ultimate spiritual experience, or at least the ultimate insight, is to see this whole thing for what it is: to realize that basically everyone is spellbound by these narratives of self and these different intuitions we have about the world, and that we’re bought into a story which I think Abrahamic religions have conditioned more deeply in us. It seems to me that atheists also experience themselves this way. We think that when we die there’ll be nothing, there will just be an annihilation of the self, but part of this realization process is that there’s no self to be annihilated to begin with. There’s just consciousness and its contents, and ultimately by this process you may come to see that consciousness is something empty of self and empty of identity. It’s just another thing that is happening.

Anthony Aguirre: I think there are a lot of these cases where the mountain becomes less and then more of a mountain, and then more and less of a mountain. You touched upon consciousness and free will and many other things that are also in this; there’s a lot of discussion of free will in the book, and we can get into that too. I think with consciousness or the self, I find myself in this strange sort of war, in the sense that, on the one hand, I feel like this self that we construct, the ego and the things that we attach to, is kind of an illusory thing. But at the same time, A, it sure feels real, and the feeling of being Anthony, I think, is a kind of unique thing.

I don’t subscribe to the notion that there’s this little nugget of soul-stuff that exists at the core of a person. It’s easy to sort of make fun of this, but at the same time I think the idea that there’s something intrinsically, equally valuable in each person is really, really important. I mean, it underlies a lot of our way of thinking about society and morality, in ways that I find very valuable. And so while I doubt the metaphysics of the individual soul in that sense, I worry what happens to the way we’ve constructed our scheme of values if we grade people on a sliding scale: you’re more valuable than this other person. I think that sense of equal intrinsic human worth is incredibly crucial and has led to a lot of moral progress. So I have this really ambivalent feeling, in that I doubt that there’s some metaphysical basis for it, but at the same time I really, really value that way of looking at the self, in terms of society and morality and so on, that we’ve constructed on top of it.

Lucas Perry: Yeah, so there’s the concept in Zen Buddhism of skillful means. So one could say that the concept of each human being having some kind of equal and intrinsic worth, related to their uniqueness and fundamental being as a human being, is skillful.

Anthony Aguirre: It’s not something that, in some ways, makes rational sense. Whatever you name, some people have more of it than others: money, capability, intelligence, sensitivity.

Lucas Perry: Even consciousness.

Anthony Aguirre: Consciousness maybe. Maybe some people are just a lot more conscious than others. If we can measure it, maybe some people would be like a 10 on the dial and others would be 2. Who knows?

Lucas Perry: I think that’s probably true, because some people are brain-dead. Medically there’s a sliding scale of brain activity, so yeah, it seems clear today that some people are more conscious than others.

Anthony Aguirre: Yes, that’s certainly true. I mean when we go to sleep, we’re less conscious. But nonetheless, anything that you can measure about people and their experience of the world varies, and if you could quantify it on a scale, some people would have more and some less. Nonetheless, we find it useful to maintain this idea that there is some intrinsic equality among people, and I worry what would happen if we let go of that. What kind of world would we build without that assumption? So I find it valuable to keep that assumption, but I’m conflicted about that honestly, because on what basis do we make that assumption? I really feel good about it, but I’m not sure I can point to why. Maybe that’s just what we do. We say this is an axiom that we choose to believe, that there’s an intrinsic moral value to people, and I respect that, because I think you have to have axioms. But it’s an interesting place that we’ve come to, I think, in terms of the relation between our beliefs about reality and our beliefs about morality.

Lucas Perry: Yeah. I mean there’s the question, as we approach AI and superintelligence, of what authentic experiential and ethical enlightenment and idealization means. From my perspective, the development of this idea, which is correlated with the Enlightenment and humanism, right? Is a very recent thing, the 1700s and the 1800s, right? So it seems clear from a cosmological context that this norm or ethical view is obviously based on a bunch of things that are just not true, but at the same time it’s been ethically very skillful and meaningful for fixing many of the immoral, unethical things that humans do. But obviously it seems like it will give way to something else, and the question is, what does it give way to?

So if we create Life 3.0 and we create AIs that do not care about getting turned off for two minutes and then waking up again, because they don’t feel the delusion of a self: that to me seems to be a step in moral evolution, and it’s why I think that ultimately it would be super useful for AI design if the AI designers would consider the role that identity plays in forming strong AI systems that are there to help us. We have the opportunity here to have selfless AI systems; they’re not going to be confused like we are. They’re not going to think they have souls, or feel like they have souls, or have strong senses of self. So it seems like there are opportunities here, and questions around what it means to transcend many of the aspects of human experience, and how best it would be to instantiate that in advanced AI systems.

Anthony Aguirre: Yeah, I think there’s a lot of valuable stuff to talk about there. In humans, there are a whole bunch of things that go together that don’t necessarily have to be packaged together. Intelligence and consciousness are packaged together, it’s not clear to what degree those have to be. It’s not clear how much consciousness and selfness have to be packaged together. It’s not clear how much consciousness or selfness and a valence to consciousness, a positive or negative experience have to be packaged together. Could we conceive of something that is intelligent, but not conscious? I think we certainly could, depending on how intelligent it has to be. I think we have those things and depending on what we mean by consciousness, I guess. Can we imagine something that is conscious and intelligent, but without a self, maybe? Or conscious, but it doesn’t matter to it how something goes. So it’s something that’s conscious, but can’t really have a moral weight in the sense that it doesn’t either suffer or experience positive feelings, but it does experience.

I think there’s often a notion that if something is said to have consciousness, then we have to care about it. It’s not totally clear that that’s the case, and at what level do we have to care about something’s preferences? The rain prefers to fall down, but I don’t really care, and if I frustrate the rain by putting up an umbrella, I don’t feel bad about that. So at what level do preferences matter, and how do we define those? So there are all these really, really interesting questions, and what’s both sort of exciting and terrifying is that we have a situation in which those questions are going to play out. In that we’re going to be creating things that are intelligent, and we’re doing that now, depending again on how intelligent they have to be. That may or may not be conscious, that may or may not have preferences, may or may not matter. They may or may not experience something positive or negative when those preferences are satisfied or not.

And I think we have the possibility of both moral catastrophe if we do things wrong at some level, but an enormous opportunity as well, in the sense that you’ve pointed out that we may be able to create agents that are purely selfless insofar as other beings have a moral value. These beings can be absolute altruists, like Stuart has been pointing out in his book. Absolute altruism is a pretty tough one for humans to attain, but might be really easy for beings that we construct that aren’t tied to an evolutionary history and all those sorts of things that we came out of.

It may still be that the sort of moral value of the universe centers around the beings that do have meaningful preferences, like humans. Where meaning sort of ultimately sits, what is important and what’s not and what’s valuable and what’s not. If that isn’t grounded in the preferences of experiencing conscious beings, then I don’t know where it’s grounded, so there’s a lot of questions that come up with that. Does it just disappear if those beings disappear and so on? All incredibly important questions I think, because we’re now at the point in the next however many years, 50, 100, maybe less, maybe more. Where our decisions are going to affect what sorts of beings the universe gets inhabited by in the far future and we really need to avoid catastrophic blunders in how that plays out.

Lucas Perry: Yeah. There’s this whole aspect of AI alignment that you’re touching on that is not just AI alignment, but AI generation and creation. The problem has been focused on how we can get AI systems, insofar as we create them, to serve the needs of human beings, to understand our preference hierarchies, to understand our metapreferences. But in the creation of Life 3.0, there’s this perspective that you’re creating something that, by virtue of how it is created, is potentially more morally relevant than you; it may be capable of much more experience, much more profound levels of experience. Which also means that there’s this aspect of AI alignment which is about qualia architecting, or experience architecting, or reflecting on the fact that we’re building Life 3.0. These aren’t just systems that can process information for us; there are important questions about what it is like to be that system in terms of experience and ethics and moral relevance. If you create something with the kind of experience that you have, and it has the escape velocity to become superintelligent and populate the cosmic endowment with whatever it determines to be the good, or what we determine to be the good, what is the result of that?

One last thing that I’m nervous about is the way that the illusion of self will affect a fair and valuable AI alignment. This consideration is in relation to us not being able to see what is ultimately good. We could ultimately be tied up in the preservation of our own arbitrary identities, like the Lucas identity or the Anthony identity. You could be creating something like blissful, purely altruistic, benevolent Bodhisattva gods, but we never did because we had this fear and this illusion of self-annihilation. And that’s not to deny that our information can be destroyed, and maybe we care a lot about the way that the Lucas identity information is arranged, but when we question these types of intuitions that we have, it makes me question and wonder if my conditioned identity is actually as important as I think it is, or as I experience it to be.

Anthony Aguirre: Yeah, I think this is a very horrifyingly thorny question that we have to face and my hope is that we have a long time to face it. I’m very much an advocate of creating intelligent systems that can be incredibly helpful and economically beneficial and then reaping those benefits for a good long time while we sort ourselves out. But with a fairly strict upper limit on how intelligent and powerful we make those things. Because I think if huge gains in the capability of machine systems happens in a period of years or even decades, the chance of us getting these big questions right, seems to me like almost zero. There’s a lot of argumentation about how difficult is it to build a machine system that has the same sort of general intelligence that we do. And I think part of what makes that question hard, is thinking about the huge amount of effort that went in evolutionarily and otherwise to creating the sort of robust intelligence that humans have.

I mean we’ve built up over millions of years in this incredibly difficult adversarial environment, where robustness is incredibly important. Cleverness is pretty important, but being able to cope with a wide variety of circumstances is kind of what life and mind has done. And I think the degree to which AGI will be difficult, is at some level the degree to which it has to attain a similar level of generality and robustness, that we’ve spent just an ungodly amount of computation over the evolution of life on earth to attain. If we have to do anything like that level of computation, it’s going to take just an extraordinarily long time. But I think we don’t know to what degree all of that is necessary and to what degree we can really skip over a lot of it, in the same way that we skip over a lot of evolution of flying when we build an airplane.

But I think there’s another question, which is that of experience and feeling, where we’re even more clueless as to where we would possibly start. If we wanted to create an appreciation for music, we have no clue where to even begin with that question, right? What does it even mean to appreciate, or listen to, or in some sense have preferences? You can maybe make a machine that will sort different kinds of music into different categories, but do you really feel like there’s going to be any music appreciation in there, or any other human feeling? These are things that have a very, very long, complicated evolutionary history, and it’s really unclear to me that we’re going to get them in machine form without something like that. But at least as our moral system is currently construed, those are the things that actually matter.

Whether conscious beings are having a good time is pretty much the foundation of what we consider to be important, morally speaking at least. Unless we have ideas like we have to do it in a way to please some deity or something like that. So I just don’t know. When you’re talking about future AI beings that have a much richer and deeper interior sense, that’s like the AGI problem squared. We can at least imagine what it’s like to make a general intelligence, an idea of what it would take to do that. But when you talk about creating a feeling being, with deeper, more profound feelings than we have, we have just no clue what that means in terms of actually engineering it.

Lucas Perry: So putting on the table all of the moral anti-realism considerations and thought that many people in the AI alignment community may have… Their view is that there’s the set of the historically conditioned preferences that we have and that’s it. We can imagine if horseshoe crabs had been able to create a being more intelligent than them, a being that was aligned to horseshoe crab preferences and preference hierarchy. And we can imagine that the horseshoe crabs were very interested and committed to just being horseshoe crabs, because that’s what horseshoe crabs want to do. So now you have this being that was able to maintain the existential condition of the horseshoe crab for a very long time. That just seems like an obvious moral catastrophe. It seems like a waste of what could have been.

Anthony Aguirre: That’s true. But imagine instead that the horseshoe crabs created elaborate structures out of sand, and decided that these were their betters, that their legacy was to create these intricate sand structures, because the universe deserved to be inhabited by these much greater beings than them. Then that’s also a moral catastrophe, right? Because the sand structures have no value whatsoever.

Lucas Perry: Yeah. I don’t want humans to do either of these things. I don’t want human beings to go around building monuments, and I don’t want us to lock in to the human condition either. Both of these cases obviously seem like a horrible waste, and now you’re helping to articulate the issue that human beings are at a certain place in evolution.

And so if we’re to create Life 3.0, then it’s also unclear epistemically how we are to evaluate what kinds of exotic qualia states are the kinds that are morally good, and I don’t even know how to begin to answer that question.

So we may be unaware of experiences that are literally astronomically better than the kinds of experiences that we have access to, and it’s unclear to me how you would navigate effectively towards that, other than amplifying what we already have.

Anthony Aguirre: Yeah. I guess my instinct on that is to look more on the biology side than the machine side, and to say that as biological systems, we’re going to continue to evolve in various ways. Some of those might be natural, some of them might be engineered and so on. Maybe some of them are symbiotic, but I think it’s hard for me to imagine how we’re going to have confidence that the things that are being created have an experience that we would recognize or find valuable, if they don’t have some level of continuity with what we are, that we can directly experience. The reason I feel confidence that my dog is actually feeling some level of joy or frustration or whatever is really by analogy, right? There’s no way that I can get inside the dog’s mind — maybe someday there will be, but there’s no way at the moment. I assume that because we have this common evolutionary heritage, the outward manifestations of those feelings correspond to some inward feelings in much the same way that they do in humans, and much the same way that they do in me. And I feel quite confident about that really, although for a long period of history, people have believed otherwise at times.

So I think realistically all we’re going to be able to do, is reason by analogy and that’s not going to work very well I think with machine systems, because it’s quite clear that we’ll be able to create machine systems that can wag their tails and smile and things, even though there’s manifestly nothing behind that. So at what point we would start to believe the sort of behavioral cues and say that there’s some interior sense behind that, is very, very unclear when we’re talking about a machine system. And I think we’re very likely to make all kinds of moral errors in either ascribing too much or too little interior experience to machines, because we have no real way of knowing to make any meaningful connection between those things. I suspect that we’ll tend to make the error in both directions. We’ll create things that seem kind of lifelike and attribute all kinds of interior life to them that we shouldn’t and if we go on long enough, we may well create things that have some interior sense that we don’t attribute to them and make all kinds of errors that way too.

So I think it’s quite fraught actually in that sense and I don’t know what we’re going to do about that. I mean we can always hope that the intractably hard problems that we can’t solve now will just be solved by something much smarter than us. But I do worry a little bit about attributing sort of godlike powers to something by saying, “Oh, it’s super intelligent, so it will be able to do that.” I’m not terribly optimistic. It may well be that the time at which something is so intelligent that it can solve the problem of consciousness and qualia and all these things would be so far beyond the time at which it was smart enough to completely change reality in the world and all kinds of other things, that it’s almost past the horizon of what we can think about now; it’s sort of past the singularity in that sense. We can speculate, hopefully or not hopefully, but it’s not clear on what basis we would be speculating.

Lucas Perry: Yeah. At least the questions that it will need to face, and then we can leave it open as to whether or not and how long it will need to address those questions. So we discussed who I am, I don’t know. You touched on identity and free will. I think that free will in the libertarian sense, as in I could have done otherwise, is basically one of these common sense intuitions that is functionally useful, but ultimately illusory.

Anthony Aguirre: Yeah, I disagree. I will just say briefly: I think in general it’s useful to decompose the question of free will into a set of claims that may or may not be true. And I think when you do that, you find that most of the claims are true, but there may be some big fuzzy metaphysical thing that you’re equating to that set of claims and then claiming it’s not true. So that’s my feeling, that when you actually try to operationalize what you mean by free will, you’ll find that a lot of the things that you mean actually are properties of reality. But if you invent a thing that you call free will that by its nature can’t be part of a physical world, then yes, that doesn’t exist. In a nutshell that’s my point of view, but we could go into a lot more depth some other time.

Lucas Perry: I think I understand that from that short summary. So for this last part then, can you just touch on, because I think this is an interesting point, as we come to the end of the conversation. Form is emptiness, emptiness is form. What does that mean?

Anthony Aguirre: So form is emptiness, is coming back to the discussion of earlier. That when we talk about something like a table, that thing that we call real and existing and objective in some sense, is actually composed of all kinds of ingredients that are not that thing. Our evolutionary history and our concept of solidity and shape, all of these things come together from many different sources and as the Buddhist would say, “There’s no intrinsic self existence of a table.” It very much exists relative to a whole bunch of other things, that we and many other people and processes and so on, bring into being. So that’s the form is emptiness. The emptiness is the emptiness of an intrinsic self existence, so that’s the way that I view the form is emptiness.

But turning that around, that emptiness is form, is: yes, even though the table is empty of inherent existence, you can still knock on it. It’s still there, it’s still real, and it’s in many ways as real as anything else. If you look for something that is more intrinsically existing than a table, you’re not really going to find it, and so we might as well call all of those things real, in which case the emptiness is form again; it’s something. That’s the way I sort of view it, and that’s the way that I’ve explored it in that section of the book.

So to talk about the ship: there’s this form of the ship that is kind of what we call the ship. That’s the arrangement of atoms and so on; it’s kind of made out of information and whatnot. That form is empty in the sense that there are all these ingredients that come from all these different places and come together to make that thing, but that doesn’t mean it’s non-existent or meaningless or something like that. There very much is meaning in the fact that something is a ship rather than something else; that is reality. So that’s the case that I’m putting together in that last section of the book. It’s not simply our straightforward sense of a table as a real existing thing, nor is it that everything is an illusion, like a dream, like a phantasm, where nothing is real. Neither of those is the right way to look at it.

Lucas Perry: Yeah, I think that your articulation here brings me again back, for better or for worse, to mountains, no mountains, and mountains again. I came into this conversation with my conventional view of things, and then there’s “form is emptiness.” Oh so okay, so no mountains. But then “emptiness is form.” Okay, mountains again. And given this conceptual back and forth, you can decide what to do from there.

Anthony Aguirre: So have we come back to the mountain in this conversation, at this point?

Lucas Perry: Yeah. I think we’re back to mountains. So I tremendously valued this conversation and feel that it’s given me a lot to consider. And I will re-enter the realm of feeling like a self and inhabiting a world of chairs, tables, objects and people. And will have to engage with some more thinking about information theory. And with that, thank you so much.

 

FLI Podcast: Feeding Everyone in a Global Catastrophe with Dave Denkenberger & Joshua Pearce

Most of us working on catastrophic and existential threats focus on trying to prevent them — not on figuring out how to survive the aftermath. But what if, despite everyone’s best efforts, humanity does undergo such a catastrophe? This month’s podcast is all about what we can do in the present to ensure humanity’s survival in a future worst-case scenario. Ariel is joined by Dave Denkenberger and Joshua Pearce, co-authors of the book Feeding Everyone No Matter What, who explain what would constitute a catastrophic event, what it would take to feed the global population, and how their research could help address world hunger today. They also discuss infrastructural preparations, appropriate technology, and why it’s worth investing in these efforts.

Topics discussed include:

  • Causes of global catastrophe
  • Planning for catastrophic events
  • Getting governments onboard
  • Application to current crises
  • Alternative food sources
  • Historical precedence for societal collapse
  • Appropriate technology
  • Hardwired optimism
  • Surprising things that could save lives
  • Climate change and adaptation
  • Moral hazards
  • Why it’s in the best interest of the global wealthy to make food more available

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel Conn: In a world of people who worry about catastrophic threats to humanity, most efforts are geared toward preventing catastrophic threats. But what happens if something does go catastrophically wrong? How can we ensure that things don’t spiral out of control, but instead, humanity is set up to save as many lives as possible, and return to a stable, thriving state, as soon as possible? I’m Ariel Conn, and on this month’s episode of the FLI podcast, I’m speaking with Dave Denkenberger and Joshua Pearce.

Dave and Joshua want to make sure that if a catastrophic event occurs, then at the very least, all of the survivors around the planet will be able to continue eating. Dave got his Master’s from Princeton in mechanical and aerospace engineering, and his PhD from the University of Colorado at Boulder in building engineering. His dissertation was on his patented heat exchanger. He is an assistant professor at University of Alaska Fairbanks in mechanical engineering. He co-founded and directs the Alliance to Feed the Earth in Disasters, also known as ALLFED, and he donates half his income to that. He received the National Science Foundation Graduate Research Fellowship. He is a Penn State distinguished alumnus and he is a registered professional engineer. He has authored 56 publications with over 1600 citations and over 50,000 downloads — including the book Feeding Everyone No Matter What, which he co-authored with Joshua — and his work has been featured in over 20 countries, over 200 articles, including Science.

Joshua received his PhD in materials engineering from the Pennsylvania State University. He then developed the first sustainability program in the Pennsylvania State system of higher education and helped develop the Applied Sustainability Graduate Engineering Program while at Queen’s University in Canada. He is currently the Richard Witte Professor of Materials Science and Engineering, cross-appointed in the Department of Materials Science and Engineering and the Department of Electrical and Computer Engineering at the Michigan Technological University, where he runs the Open Sustainability Technology research group. He was a Fulbright-Aalto University Distinguished Chair last year and remains a visiting professor of photovoltaics and nano-engineering at Aalto University. He’s also a visiting professor at the University of Lorraine in France. His research concentrates on the use of open source appropriate technology to find collaborative solutions to problems in sustainability and poverty reduction. He has authored over 250 publications, which have earned more than 11,000 citations. You can find his work on appropedia.org, and his research is regularly covered by the international and national press and continually ranks in the top 0.1% on academia.edu. He helped found the field of alternative food for global catastrophes with Dave, and again he was co-author on the book Feeding Everyone No Matter What.

So Dave and Joshua, thank you so much for joining us this month.

Dave Denkenberger: Thank you.

Joshua Pearce: Thank you for having us.

Ariel Conn: My first question for the two of you is a two-part question. First, why did you decide to consider how to survive a disaster, rather than focusing on prevention, as so many other people do? And second, how did you two start working together on this topic?

Joshua Pearce: So, I’ll take a first crack at this. Both of us have worked in the area of prevention, particularly in regards to alternative energy sources in order to be able to mitigate climate destabilization from fossil fuel burning. But what we both came to realize is that many of the disasters that we look at that could actually wipe out humanity aren’t things that we can necessarily do anything to avoid. The ones that we can do something about — climate change and nuclear winter — we’ve even worked together on it.

So for example, we did a study where we looked at how many nuclear weapons a state should have if they would continue to be rational. And by rational I mean even if everything were to go your way, if you shot all of your nuclear weapons, they all hit their targets, the people you were aiming at weren’t firing back at you, at what point would just the effects of firing that many weapons hurt your own society, possibly kill many of your own people, or destroy your own nation?

The answer to that turned out to be a really remarkably low number. The answer was 100. And many of the nuclear weapons states currently have more weapons than that. And so it’s clear at least from our current political system that we’re not behaving rationally and that there’s a real need to have a backup plan for humanity in case something does go wrong — whether it’s our fault, or whether it’s just something that happens in nature that we can’t control like a super volcano or an asteroid impact.

Dave Denkenberger: Even though there is more focus on preventing a catastrophe than there is on resilience to the catastrophe, overall the field is highly neglected. As someone pointed out, there are still more publications on dung beetles than there are on preventing or dealing with global catastrophic risks. But I would say that the particular sub-field of resilience to the catastrophes is even more neglected. That’s why I think it’s a high priority to investigate.

Joshua Pearce: We actually met way back as undergraduate students at Penn State. I was a chemistry and physics double major and one of my friends a year above said, “You have to take an engineering science class before you leave.” It changed his life. I signed up for this class taught by the man that eventually became my advisor, Christopher Wronski, and it was a brutal class — very difficult conceptually and mathematically. And I remember when one of my first tests came back, there was this bimodal distribution where there were two students who scored A’s and everybody else failed. Turned out that the two students were Dave and I, so we started working together then just on homework assignments, and then continued collaborating through all different areas of technical experiments and theory for years and years. And then Dave had this very interesting idea about what do we do in the event of a global catastrophe? How can we feed everybody? And to attack it as an engineering problem, rather than a social problem. We started working on it very aggressively.

Dave Denkenberger: So it’s been, I guess, 18 years now that we’ve been working together: a very fruitful collaboration.

Ariel Conn: Before I get any farther into the interview, let’s quickly define what a catastrophic event is and the types of catastrophic events that you both look at most.

Dave Denkenberger: The original focus was on the catastrophes that could collapse global agriculture. These would include nuclear winter from a full-scale nuclear war like US-Russia, causing burning of cities and blocking of the sun with smoke, but it could also mean a super volcanic eruption like the one that happened about 74,000 years ago that many think nearly wiped out the human species. And then there could also be a large asteroid impact similar to the one that wiped out the dinosaurs about 66 million years ago.

And in those cases, it’s very clear we need to have some other alternative source of food, but we also look at what I call the 10% global shortfalls. These are things like the volcano that caused the year without a summer in 1816, which might have reduced the food supply by about 10% and caused widespread famine, including in Europe and almost in the US. Then it could be a slightly smaller sized asteroid, or a regional nuclear war, and actually many other catastrophes such as a super weed, a plant that could out-compete crops. If this happened naturally, it probably would be slow enough that we could respond, but if it were part of a coordinated terrorist attack, that could be catastrophic. Even though technically we waste more than 10% of our food and we feed more than 10% of our food to animals, I think realistically, if we had a 10% food shortfall, the price of food would go so high that hundreds of millions of people could starve.

Joshua Pearce: Something that’s really important to understand about the way that we analyze these risks is that currently, even with the agricultural system completely working fine, we’ve got somewhere on the order of 800 million people without enough food to eat, because of waste and inefficiencies. And so anything that starts to cut into our ability for our agricultural system to continue, especially if all of plant life no longer works for a number of years because of the sun being blocked, we have to have some method to provide alternative foods to feed the bulk of the human population.

Ariel Conn: I think that ties in to the next question then, and that is what does it mean to feed everyone no matter what, as you say in the title of your book?

Dave Denkenberger: As Joshua pointed out, we are still not feeding everyone adequately right now. The idea of feeding everyone no matter what is an aspirational goal, and it’s showing that if we cooperated, we could actually feed everyone, even if the sun is blocked. Of course, it might not work out exactly like that, but we think that we can do much better than if we were not prepared for one of these catastrophes.

Joshua Pearce: Right. Today, roughly one in nine people go to bed hungry every night, and somewhere on the order of 25,000 people starve to death or die from hunger-related disease [per day]. And so one of the inspiring things from our initial analysis drawn up in the book is that even in the worst-case scenarios where something major happens, like a comet strike that would wipe out the dinosaurs, humans don’t need to be wiped out: We could provide for ourselves. And the embarrassing thing is that today, even with the agricultural system working fine, we’re not able to do that. And so what I’m at least hoping is that some of our work on these alternative foods provides another mechanism to provide low-cost calories for the people that need it, even today when there is no catastrophe.

Dave Denkenberger: One of the technologies that we think could be useful even now comes from a company called Comet Bio, which is turning agricultural residues like leaves and stalks into edible sugar, and they think that’s actually going to be able to compete with sugar cane. It has the advantage of not taking up lots of land that we might otherwise cut rainforest down for, so it has environmental benefits as well as humanitarian benefits. Another area I think would be relevant is smaller disasters, such as an earthquake or a hurricane. Generally the cheapest solution is just shipping in grain from outside, but if transportation is disrupted, it might make sense to produce some food locally: if a hurricane blows all the crops down and you’re not going to get any normal harvest from them, you can actually grind up those leaves, like wheat leaves, squeeze out the liquid, boil the liquid, and you get a protein concentrate that people can eat.

Ariel Conn: So that’s definitely a question that I had, and that is to what extent can we start implementing some of the plans today during a disaster? This is a pre-recorded podcast; Dorian has just struck the Bahamas. Can the stuff that you are working on now help people who are still stuck on an island after it’s been ravaged by a hurricane?

Dave Denkenberger: I think there is potential for that, the getting food from leaves. There’s actually a non-profit organization called Leaf for Life that has been doing this in less developed countries for decades now. Some other possibilities would be some mushrooms can mature in just a few weeks, and they can grow on waste, basically.

Joshua Pearce: The ones that would be good for an immediate catastrophe are the in-between foods that we’re working on: foods for the period between the time that you run out of stored food and the time that you can ramp up the full-scale alternative foods.

Ariel Conn: Can you elaborate on that a little bit more and explain what that process would look like? What does happen between when the disaster strikes? And what does it look like to start ramping up food development in a couple weeks or a couple months or however long that takes?

Joshua Pearce: In the book we develop 10 primary pathways to alternative food sources that could feed the entire global population. But the big challenge is that it’s not just whether there are enough calories; you have to have enough calories at the right time.

If, say, a comet strikes tomorrow and throws up a huge amount of earth and ash and covers the sun, we’d have roughly six months of stored food in grocery stores and pantries that we could eat. But for most of the major sources of alternative food, it would take around a year to ramp them up: to take these processes that might not even exist now and get them to industrial scale to feed billions of people. So the most challenging period is that six-month-to-one-year gap, and for that we would be using the alternative foods that Dave talked about: the mushrooms that can grow really fast, and leaves. And for the leaf one, part of those leaves can come from agricultural residues, things that we already know are safe.

The much larger biomass that we might be able to use is just the leaves of ordinary killed trees. The only problem is that there hasn’t really been any research into whether that’s safe. We don’t know, for example, if you can eat maple or oak leaf concentrate; the studies haven’t been done yet. And that’s one of the areas we’re really focusing on now: taking some of these ideas that are promising and proving that they’re actually technically feasible and safe for people to use, whether in a serious catastrophe, a minor one, or just for feeding people who for whatever reason don’t have enough food today.

Dave Denkenberger: I would add that even though we might have six months of stored food, that would be a best-case scenario, right after the harvest in the northern hemisphere; we might have only two or three months of stored food. But in many of these catastrophes, even a pretty severe nuclear winter, there’s likely to be some sunlight still coming down to the earth, and so a recent project we’ve been working on is growing seaweed. This has a lot of advantages: seaweed can tolerate low light levels, the ocean would not cool as fast as the land, and it grows very quickly. So we’ve actually been applying seaweed growth models to the conditions of nuclear winter.

Ariel Conn: You talk about the food that we have stored being able to last for two to six months. How much transportation is involved in that? And how much transportation would we have, given different scenarios? I’ve heard that the town I’m in now, if it gets blocked off by a big snow storm, we have about two weeks of food. So I’m curious: How does that apply elsewhere? And are we worried about transportation being cut off, or do we think that transportation will still be possible?

Dave Denkenberger: Certainly there will be destruction of infrastructure regionally, whether it’s nuclear war or a supervolcano or an asteroid impact. So in those affected countries, transportation of food is going to be very challenging, but most people would not be in those countries, which is why we think a lot of infrastructure would still be functioning. There are still going to be chemical factories that we can retrofit to turn leaves into sugar, or, another one of the technologies, to turn natural gas into single-cell protein.

Ariel Conn: There’s the issue of producing food if the sun is blocked, which is one of the things that you guys are working on, and that can happen with nuclear war leading to nuclear winter; it can happen with a supervolcano, with an asteroid. Let’s go a little more in depth into these catastrophic events that block the sun. What happens with them? Why are they so devastating?

Joshua Pearce: All the past literature on what would happen if, say, we lost agriculture for a number of years is pretty grim. The base assumption is that everyone would simply starve to death, and there might be some fighting before that happens. If you only consider generating food in the traditional ways, those were the right answers. And so the catastrophic events we’re looking at include not only the most extreme ones, the sun-blocking scenarios, but also ones a little less tragic yet still very detrimental to the agricultural system: something like a coordinated series of terrorist attacks to wipe out the major breadbaskets of the world. The same idea applies: you’re reducing the number of calories available to the entire population, and our work is trying to ensure that we can still feed everyone.

Dave Denkenberger: We wrote a paper on a scenario where chaos did not break out, and there was still trade between countries, sharing of information, and a global price of food. In that case, with stored food, around 10% of people might survive. It could be much worse, though: as Joshua pointed out, if the food were distributed equally, then everyone would starve. People have also pointed out that in civilization we have food storage, so some people could survive. But if the catastrophe causes a loss of civilization and we have to go back to being hunter-gatherers, first, the hunter-gatherers we still have now generally don’t have food storage, so they would not survive; and then there’s a recent book called The Secret of Our Success that argues it might not be as easy as we think to go back to being hunter-gatherers.

So that is another failure mode where it could actually cause human extinction. But even if we don’t have extinction, if we have a collapse of civilization, there are many reasons why we might not be able to recover. We’ve had a stable climate for the last 10,000 years; that might not continue. We’ve already used up the easily accessible fossil fuels that we would need to rebuild industrial civilization. And thinking about the original definition of civilization, being able to cooperate with people who are not related to you, outside your tribe: maybe the trauma of the catastrophe could make the remaining humans less open to trusting people, and maybe we would not recover that civilization. And then even if we don’t lose civilization, the trauma of the catastrophe could make other catastrophes more likely.

One thing people are concerned about is global totalitarianism. We’ve had totalitarian states in the past, but they’ve generally been out-competed by other, freer societies. If there were a global totalitarianism, though, there would be no competition, and that might be a stable state that we could be stuck in. And even if we don’t go that route, the trauma from the catastrophe could cause worse values to end up in the artificial intelligence that could define our future. I would say that even for the catastrophes that are slightly less extreme, the 10% food shortfalls, we don’t know what would happen after that. Tensions would be high; this could end up in full-scale nuclear war, and then some of these really extreme scenarios occurring.

Ariel Conn: What’s the historical precedent that we’ve got to work with in terms of trying to figure out how humanity would respond?

Dave Denkenberger: There have been localized collapses of society, and Jared Diamond has cataloged a lot of these in his book Collapse, but you can argue that there have even been more global collapse scenarios. Jeffrey Ladish has been looking at some collapses historically. Some catastrophes, like the Black Death, had very high mortality but did not result in a collapse of economic production in Europe; other collapses actually have occurred. There’s enough uncertainty to say that collapse is possible and that we might not recover from it.

Ariel Conn: A lot of this is about food production, but I think you guys have also done work on instances in which maybe it’s easier to produce food but other resources have been destroyed. So for example, a solar flare, a solar storm knocks out our electric grid. How do we address that?

Joshua Pearce: In the event that a solar flare wipes out the electricity grid and most non-shielded electrical devices, that would be another scenario where we might legitimately lose civilization. There’s been a lot of work in the electrical engineering community on how we might shield things and harden them, but one of the things that we can absolutely do, at least on the electricity side, is start to go from our centralized grid infrastructure into a more decentralized method of producing and consuming electricity. The idea here would be that the grid would break down into a federation of micro-grids, and the micro-grids could be as small as even your own house, where you, say, have solar panels on your roof producing electricity that would charge a small battery, and then when those two sources of power don’t provide enough, you have a backup generator, a co-generation system.

And a lot of the work my group has done has shown that in the United States, those types of systems are already economical. Pretty much everywhere in the US now, if you have exposure to sunshine, you can produce electricity less expensively than you can buy it from the grid. If you add in the backup generator, the backup co-gen (in many places, particularly in the northern part of the US, that’s necessary in order to provide yourself with power), that again makes you more secure. In the event of some of the catastrophes that we’re looking at, the ones that block the sun, the solar won’t be particularly useful, but what solar does do is preserve our fossil fuels for use in the event of a catastrophe. And if you are truly insular, in that you’re able to produce all of your own power, then you have a backup generator of some kind and fuel storage onsite.

In the context of providing some resiliency for the overall civilization, many of the technical paths that we’re on now, at least electrically, are moving us in that direction anyway. Solar and wind power are both the fastest growing sources of electricity generation both in the US and globally, and their costs now are so competitive that we’re seeing that accelerate much faster than anyone predicted.

Dave Denkenberger: It is true that a solar flare would generally only affect the large grid systems. In 1859 there was the Carrington event, which basically destroyed our telegraph systems, which was all we had at the time. We also had a near miss with a solar flare in 2012, so the world almost did end in 2012. And there’s evidence that in the first millennium AD there were even larger solar storms, ones that could disrupt electricity globally. But there are other ways that electricity could be disrupted. One of those is the high-altitude detonation of a nuclear weapon, producing an electromagnetic pulse, or EMP. If this were done in multiple places around the world, that could disrupt electricity globally, and the problem is that it could affect even smaller systems. Then there’s also the coordinated cyber attack, which could be led by a narrow artificial intelligence computer virus; then anything connected to the internet would be vulnerable, basically.

In these scenarios, at least the sun would still be shining. But we wouldn’t have our tractors, because basically everything is dependent on electricity, like pulling fossil fuels out of the ground, and we also wouldn’t have our industrial fertilizers. And so the assumption is as well that most people would die, because the reason we can feed more than seven billion people is because of the industry we’ve developed. People have also talked about, well, let’s harden the grid to EMP, but that would cost something like $100 billion.

So what we’ve been looking at is: what are inexpensive ways of getting prepared for a loss of electricity? One is: can we quickly make farming implements that would work by hand or by animal power? Even though a very small percentage of our total land area is currently plowed by draft animals, we still have a lot of cows, kept for food rather than as draft animals, so it would actually be feasible to do that.

But if we lost electricity, we’d lose communications. We have a shortwave radio, or ham radio, expert on our team who’s been doing this for 58 years, and he’s estimated that for something like five million dollars we could actually have a backup communication system. We would also need a backup power system, which would likely be solar cells, but we would need to have this system not plugged into the grid, because if it’s plugged in, it would likely get destroyed by the EMP.

Joshua Pearce: And this gets into that area of appropriate technology and open source appropriate technology that we’ve done a lot of work on. And the idea basically is that the plans for something like a solar powered ham radio station that would be used as a backup communication system, those plans need to be developed now and shared globally so that everyone, no matter where they happen to be, can start to implement these basic safety precautions now. We’re trying to do that for all the tools that we’re implementing, sharing them on sites like Appropedia.org, which is an appropriate technology wiki that already is trying to help small-scale farmers in the developing world now lift themselves out of poverty by applying science and technologies that we already know about that are generally small-scale, low-cost, and not terribly sophisticated. And so there’s many things as an overall global society that we understand much better how to do now that if you just share a little bit of information in the right way, you can help people — both today but also in the event of a catastrophe.

Dave Denkenberger: And I think that’s critical: that if one of these catastrophes happened and people realized that most people were going to die, I’m very worried that there would be chaos, potentially within countries, and then also between countries. But if people realized that we could actually feed everyone if we cooperated, then I think we have a much better chance of cooperating, so you could think of this actually as a peace project.

Ariel Conn: One of the criticisms that I’ve heard, which honestly I think is a little strange, is the idea that we don’t need to worry about alternative foods now, because if a catastrophe strikes, then we’ll be motivated to develop these alternative food systems.

I was curious if you guys have estimates of how much of a time difference you think would exist between us having a plan for how we would feed people if these disasters do strike versus us realizing the disaster has struck and now we need to figure something out, and how long it would take us to figure something out? That second part of the question is both in situations where people are cooperating and also in situations where people are not cooperating.

Dave Denkenberger: I think that if you don’t have chaos, the big problem is that yes, people would be able to put lots of money into developing food sources, but there are some things that take a certain amount of calendar time, like testing out different diets for animals or building pilot factories for food production. You generally need to test these things out before you build the large factories. I don’t have a quantitative estimate, but I do think it would delay things by many months; and as we said, we only have a few months of food storage, so I do think that a delay would cost many lives and could result in a collapse of civilization that could have been prevented had we actually prepared ahead of time.

Joshua Pearce: I think the Boy Scouts are right on this: you should always be prepared. If you think about just the number of types of leaves that would need to be tested, if we get a head start on determining their toxicity as well as the nutrients that could come from them, we’ll be much, much better off in the event of a catastrophe, whether or not we’re working together. And in the cases where we’re not working together, having this knowledge built up within the population and spread out makes it much more likely that humanity overall will survive.

Ariel Conn: What, roughly, does it cost to plan ahead: to do this research and to get systems and organization in place so that we can feed people if a disaster strikes?

Dave Denkenberger: On the order of $100 million. We think that would fund a lot of research to figure out the most promising food sources, as well as interventions for handling the loss of electricity and industry; development of the most promising food sources at actual pilot scale; funding a backup communications system; and working with countries, corporations, and international organizations to actually have response plans for how we would respond quickly in a catastrophe. It’s really a very small amount of money compared to the benefit, in terms of how many lives we could save and preserving civilization.

Joshua Pearce: All this money doesn’t have to come at once, and some of the issues of alternative foods are being funded in other ways. There already are, for example, chemical engineering plants being looked at to be turned into food supply factories. That work is already ongoing. What Dave is talking about is combining all the efforts that are already existing and what ALLFED is trying to do, in order to be able to provide a very good, solid backup plan for society.

Ariel Conn: So Joshua, you mentioned ALLFED, and I think now is a good time to transition to that. Can you guys explain what ALLFED is?

Dave Denkenberger: The Alliance to Feed the Earth in Disasters, or ALLFED, is a non-profit organization that I helped to co-found, and our goal is to build an alliance with interested stakeholders to do this research on alternate food sources, develop the sources, and then also develop these response plans.

Ariel Conn: I’ll also add a quick disclosure that I also do work with ALLFED, so I don’t know if people will care, but there that is. So what are some of the challenges you’ve faced so far in trying to implement these solutions?

Dave Denkenberger: I would say a big challenge, and a surprise to me, is that when we’ve started talking to international organizations and countries, no one appears to have a plan for what would happen. Of course you hear about the continuity-of-government plans, and bunkers, but there doesn’t seem to be a plan for actually keeping most people alive. And this doesn’t apply just to the sun-blocking catastrophes; it also applies to the 10% shortfalls.

There was a UK government study estimating that extreme weather on multiple continents, like flooding and droughts, severe enough to reduce the food supply by 10%, has something like an 80% chance of happening this century. And yet no one has a plan for how they would react. It’s been a challenge to get people to actually take this seriously.

Joshua Pearce: I think that goes back to the devaluation of human life, where we’re not taking seriously the thousands of people that, say, starve to death today, and we’re not actively trying to solve that problem, when from a financial standpoint it’s trivial relative to the total economic output of the globe, and from a technical standpoint it’s ridiculously easy; we just don’t have the social infrastructure in place to be able to feed everyone now and meet the basic needs of humanity. What we’re proposing, preparing for a catastrophe in order to be able to feed everybody: that actually is pretty radical.

Initially, when we got started, overcoming the view that this was a radical departure from the types of research that would normally be funded was challenging. But existential risk as a field is growing and maturing, and because many of the technologies in the alternative food sector that we’ve looked at have direct applications today, it’s being seen as less and less radical. Although, in the popular media, for example, they’d be happier for us to talk about how we could turn rotting wood into beetles and then eat the beetles than to actually look at concrete plans to implement this and do the research that needs to be done to make sure that it’s the right path.

Ariel Conn: Do you think people also struggle with the idea that these disasters will even happen? That there’s that issue of people not being able to recognize the risks?

Joshua Pearce: It’s very hard to comprehend. You may have your family and your friends; it’s hard to imagine a really large catastrophe. But these have happened throughout history, at the global scale, and even just something like a world war has happened multiple times in the last century. We’re, I think, hardwired to be a little bit optimistic about these things, and no one wants to see any of this happen, but that doesn’t mean it’s a good idea to put our heads in the sand. And even though it’s a relatively low-probability event (say, in the case of an all-out nuclear war, something on the order of one percent), it still is there. And as we’ve seen in recent history, even some of the countries that we think of as stable aren’t really necessarily stable.

And so currently we have thousands of nuclear warheads, and it only takes a tiny fraction of them to push us into one of these global catastrophic scenarios. Whether it’s an accident, one crazy government actor, or a legitimate small-scale war, say between India and Pakistan, where the nuclear weapons come out, these are things that we should be preparing for.

In the beginning it was a little bit more difficult to have people consider them, but now it’s becoming more and more mainstream. Many of our publications and ALLFED publications and collaborators are pushing into the mainstream of the literature.

Dave Denkenberger: I would say even though the probability each year is relatively low, it certainly adds up over time, and we’re eventually going to have at least some natural disaster like a volcano. But people have said, “Well, it might not occur in my lifetime, so if I work on this or if I donate to it, my money might be wasted” — and I said, “Well, do you consider if you pay for insurance and don’t get anything out of it in a year, your money is wasted?” “No.” So basically I think of this as an insurance policy for civilization.
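[Editor’s note: the “adds up over time” point is just the complement rule for annual risks. A minimal sketch, assuming for illustration a constant 1% annual probability and independence between years; both are simplifying assumptions, not estimates from the speakers:]

```python
def cumulative_risk(annual_prob: float, years: int) -> float:
    """Probability of at least one occurrence over `years`, assuming
    independent years with a constant annual probability."""
    return 1.0 - (1.0 - annual_prob) ** years

# A 1%-per-year catastrophe is far from negligible over longer horizons:
print(round(cumulative_risk(0.01, 30), 2))   # ~0.26 over 30 years
print(round(cumulative_risk(0.01, 100), 2))  # ~0.63 over a century
```

This is the same logic as buying insurance: the per-year chance is small, but over a lifetime or a civilization’s planning horizon, the cumulative probability becomes substantial.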

Ariel Conn: In your research, personally for you, what are some of the interesting things that you found that you think could actually save a lot of lives that you hadn’t expected?

Dave Denkenberger: I think one particularly promising one is the turning of natural gas into single-cell protein, and fortunately, there are actually two companies that are doing this right now. They are focusing on stranded natural gas, which means too far away from a market, and they’re actually producing this as fish food and other animal feed.

Joshua Pearce: For me, living up here in the Upper Peninsula of Michigan where we’re surrounded by trees, I can’t help but look out my window at all the potential biomass that could actually be a food source. If it turns out that we can get even a small fraction of that into human-edible food, I think that could really shift the balance in providing food, both now and in the case of a disaster.

Dave Denkenberger: One interesting thing about coming to Alaska is that I’ve learned about the Aleutian Islands that stick out into the Pacific. They are very cloudy, it is so cool in the summer that they cannot even grow trees, and they don’t get very much rain. The conditions there are actually fairly similar to nuclear winter in the tropics; and yet, they can grow potatoes. So lately I’ve become more optimistic that we might be able to do some agriculture near the equator, where it would not freeze even in nuclear winter.

Ariel Conn: I want to switch gears a little bit. We’ve been talking about disasters that would be relatively immediate, but one of the threats that we’re trying to figure out how to deal with now is climate change. And I was wondering how efforts that you’re both putting into alternative foods could help as we try to figure out how to adapt to climate change.

Joshua Pearce: I think a lot of the work that we’re doing has a dual use. Because we are trying to squeeze every last calorie out of primarily fossil fuel sources, trees, and leaves, we can hopefully use those same techniques to feed more people through the ongoing disaster of climate change. That’s things like growing mushrooms on partially decomposed wood, eating the mushrooms, but then feeding the leftovers to, say, ruminants or chickens, and then eating those. There are a lot of industrial ecology practices we can apply to the agricultural food system so that we get every last calorie out of our primary inputs. So that I think is something we can focus on now and push forward regardless of the speed of the catastrophe.

Dave Denkenberger: I would also say that in addition to the extreme weather on multiple continents that is made more likely by climate change, there’s also abrupt climate change in the ice core record: we’ve had an 18-degree-Fahrenheit drop in just one decade over a continent. That could be another scenario producing a 10% global food shortfall. Another one people have talked about is what’s called extreme climate change that would still be slow. This is sometimes called tail risk: we have an expected or median climate change of a few degrees Celsius, but maybe there would be five or even 10 degrees Celsius, around 18 degrees Fahrenheit, over a century or two. We might not be able to have agriculture at all in the tropics, so it would be very valuable to have some food backup plan for that.

Ariel Conn: I wanted to get into concerns about moral hazards with this research. I’ve heard some criticism that if you present a solution to, say, surviving nuclear winter that maybe people will think nuclear war is more feasible. How do you address concerns like that — that if we give people a means of not starving, they’ll do something stupid?

Dave Denkenberger: I think you’ve actually summarized this succinctly: it would be like saying we shouldn’t have the jaws of life because that would cause people to drive recklessly. But the longer answer is that there is evidence that awareness of nuclear winter in the 80s was a reason that Gorbachev and Reagan worked towards reducing the nuclear stockpile. However, we still have enough nuclear weapons to potentially cause nuclear winter, and I doubt that a decision in the heat of the moment to go to nuclear war would actually take into account the non-target countries. I also think that there’s a significant cost of nuclear war directly, independent of nuclear winter. And this backup plan helps us with catastrophes that we don’t have control over, like a volcanic eruption. Overall, I think we’re much better off with a backup plan.

Joshua Pearce: I of course completely agree: it’s insane not to have a backup plan. The idea that the irrational behavior currently displayed in any country with more than 100 nuclear weapons would get worse because they now know that a larger fraction of their population won’t starve to death if they use them: I think that’s crazy.

Ariel Conn: As you’ve mentioned, quite a few governments, in fact, as far as I can tell, all governments, don’t really have a backup plan. How surprised have you been by this? And how optimistic are you that you can convince governments to start implementing some sort of plan to feed people if disaster happens?

Dave Denkenberger: As I said, I certainly have been surprised with the lack of plans. I think that as we develop the research further and are able to show examples of companies already doing very similar things, showing more detailed analysis of what current factories we have that could be retrofitted quickly to produce food — that’s actually an active area of research that we’re doing right now — then I am optimistic that governments will eventually come around to the value of planning for these catastrophes.

Joshua Pearce: I think it’s slightly depressing when you look around the globe at all the hundreds of countries and how poorly most of them care for their own citizens. It’s sort of a commentary on how evolved, how much of a civilization, we really are. So instead of comparing the number of Olympic medals or how much economic output your country produces, I think we should look at the poorest citizens in each country: if you can’t feed the people in your country, you should be embarrassed to be a world leader. And yet, for whatever reason, world leaders show their faces every day while their constituents, the citizens of their countries, are starving to death today, let alone in the event of a catastrophe.

If you look at the — I’ll call them the more civilized countries, and I’ve been spending some time in Europe, where rational, science-based approaches to governing are much more mature than what I’ve been used to. I think it gives me quite a bit of optimism as we take these ideas of sustainability and of long-term planning seriously, try to move civilization into a state where it’s not doing significant harm to the environment or to our own health or to the health and the environment in the future — that gives me a lot of cause for hope. Hopefully as all the different countries throughout the world mature and grow up as governments, they can start taking the health and welfare of their own populations much more seriously.

Dave Denkenberger: And I think that even though I’m personally very motivated about the long-term future of human civilization, I think that because what we’re proposing is so cost effective, even if an individual government doesn’t put very much weight on people outside its borders, or in future generations even within the country, it’s still cost effective. And we actually wrote a paper from the US perspective showing how cheaply they could get prepared and save so many lives just within their own borders.

Ariel Conn: What do you think is most important for people to understand about both ALLFED and the other research you’re doing? And is there anything, especially that you think we didn’t get into, that is important to mention?

Dave Denkenberger: I would say that thanks to recent grants from the Berkeley Existential Risk Initiative, the Effective Altruism Lottery, and the Center for Effective Altruism, we’ve been able to do a lot of new research this year, including, as I mentioned, retrofitting factories to produce food. We’re also looking at, can we construct factories quickly, like having construction crews work around the clock? We’re also investigating seaweed. But I would still say that there’s much more work to do. We have been building our alliance, and we have many researchers and volunteers that are ready to do more work with additional funding, so we estimate that in the next 12 months we could effectively use approximately $1.5 million.

Joshua Pearce: A lot of the areas of research that are needed to provide a strong backup plan for humanity are relatively greenfield; these aren’t areas that people have done a lot of research in before. And so for other academics, or maybe small companies that slightly overlap the alternative food ecosystem of intellectual pursuits, there’s a lot of opportunity to get involved, either in direct collaboration with ALLFED or just by bringing these types of ideas into your own subfield. And so we’re always looking out for collaborators, and we’re happy to talk to anybody that’s interested in this area and would like to move the ball forward.

Dave Denkenberger: We have a list of theses that undergraduates or graduates could do on the website called Effective Thesis. We’ve gotten a number of volunteers through that.

I would also say another surprising thing to me was that when we were looking at these scenarios where the world cooperated but only had stored food, the amount of money people would spend on that stored food was tremendous — something like $90 trillion. And despite that huge expenditure, only 10% of people survived. But if instead we could produce alternate foods — our goal is around a dollar per dry pound of food, and one pound of dry food can feed a person for a day — then more like 97% of people would be able to afford food with their current incomes. And yet, even though we’d feed so many more people, the total expenditure on food would be less. You could argue that even if you are among the global wealthy who could potentially survive one of these catastrophes if chaos didn’t break out, it would still be in your interest to get prepared for alternate foods, because you’d have to pay less money for your food.
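The cost comparison here can be checked with rough back-of-envelope arithmetic. This sketch uses only the figures mentioned in the conversation (a $1-per-dry-pound target, one dry pound per person per day, roughly 7 billion people, a one-year duration); it is an illustration, not ALLFED’s actual model:

```python
# Back-of-envelope comparison of the two scenarios described above.
# Assumed figures (from the conversation, not a detailed model):
people = 7e9                  # ~7 billion people to feed
cost_per_person_day = 1.0     # dollars, at the $1/dry-pound target
days = 365                    # one year of an extended catastrophe

alternate_food_cost = people * cost_per_person_day * days
stored_food_cost = 90e12      # ~$90 trillion bid on scarce stored food

print(f"Alternate foods, one year: ${alternate_food_cost / 1e12:.1f} trillion")
print(f"Stored food scenario:      ${stored_food_cost / 1e12:.0f} trillion")
```

Even a full year of alternate foods for everyone comes to a few trillion dollars, an order of magnitude below the stored-food figure, which is the point Denkenberger is making about total expenditure.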

Ariel Conn: And that’s all with a research funding request of 1.5 million? Is that correct?

Dave Denkenberger: The full plan is more like $100 million.

Joshua Pearce: It’s what we could use as the current team now, effectively.

Ariel Conn: Okay. Well, even the 100 million still seems reasonable.

Joshua Pearce: It’s still a bargain. One of the things we’ve been primarily assuming during all of our core scenarios is that there would be human cooperation, and that things would not break down into fighting — but as we know historically, that’s an extremely optimistic way to look at it. And so even if you’re one of the global wealthy, in the top 10% globally in terms of financial means and capital — even if you would be able to feed yourself in one of these relatively modest reductions in overall agricultural supply — it is not realistic to assume that the poor people are just going to lie down and starve to death. They’re going to be storming your mansion. And so if you can provide them with food with a relatively low upfront capital investment, it makes a lot of sense, again, for you personally, because you’re not fighting them off at your door.

Dave Denkenberger: One other thing that surprised me was we did a real worst case scenario where the sun is mostly blocked, say by nuclear winter, but then we also had a loss of electricity and industry globally, say there were multiple EMPs around the world. And going into it, I was not too optimistic that we’d be able to feed everyone. But we actually have a paper on it saying that it’s technically feasible, so I think it really comes down to getting prepared and getting that message to the decision makers at the right time, such that they realize it’s in their interest to cooperate.

Another issue that surprised me: when we were writing the book, I thought about seaweed, but then I looked at how much seaweed for sushi cost, and it was just tremendously expensive per calorie, so I didn’t pursue it. But then I found out later that we actually produce a lot of seaweed at a reasonable price. And so now I think that we might be able to scale up that food source from seaweed in just a few months.

Ariel Conn: How quickly does seaweed grow, and how abundantly?

Dave Denkenberger: It depends on the species, but we put one edible species into the scenario of nuclear winter. One thing to note is that as the upper layers of the ocean cool, they sink, and the lower layers of the ocean come to the surface, and that brings nutrients to the surface. We found that in pretty big areas of the ocean on Earth, the seaweed could actually grow more than 10% per day. With that exponential growth, you quickly scale up to feeding a lot of people. Now of course we need to scale up the infrastructure, the ropes that it grows on, but that’s what we’re working out.
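To get a feel for what 10% growth per day implies, here is a quick illustration of the compounding (my own arithmetic based on the growth rate quoted above, not a figure from the study):

```python
import math

daily_growth = 0.10  # seaweed biomass growing 10% per day, as quoted above

# How long until the biomass doubles?
doubling_days = math.log(2) / math.log(1 + daily_growth)
print(f"Doubling time: {doubling_days:.1f} days")  # roughly one week

# Compounded scale-up after three months of growth:
factor_90_days = (1 + daily_growth) ** 90
print(f"Growth over 90 days: {factor_90_days:,.0f}x")  # thousands-fold
```

At that rate the biomass doubles about every week, so starting stock multiplies thousands of times over in a few months — which is why the ropes and other infrastructure, not the seaweed itself, become the bottleneck.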

The other thing I would add is that in these catastrophes, if many people are starving, then I think not only will people not care about saving other species, but they may actively eat other species to extinction. And it turns out that feeding seven billion people is a lot more food than keeping, say, 500 individuals of many different species alive. And so I think we could actually use this to save a lot of species. And if it were a natural catastrophe, well some species would go extinct naturally — so maybe for the first time, humans could actually be increasing biodiversity.

Joshua Pearce: That’s a nice optimistic way to end this.

Ariel Conn: Yeah, that’s what I was just thinking. Anything else?

Dave Denkenberger: I think that’s it.

Joshua Pearce: We’re all good.

Ariel Conn: All right. This has been a really interesting conversation. Thank you so much for joining us.

Dave Denkenberger: Thank you.

Joshua Pearce: Thank you for having us.

 

FLI Podcast: Beyond the Arms Race Narrative: AI & China with Helen Toner & Elsa Kania

Discussions of Chinese artificial intelligence frequently center around the trope of a U.S.-China arms race. On this month’s FLI podcast, we’re moving beyond the arms race narrative and taking a closer look at the realities of AI in China and what they really mean for the United States. Experts Helen Toner and Elsa Kania, both of Georgetown University’s Center for Security and Emerging Technology, discuss China’s rise as a world AI power, the relationship between the Chinese tech industry and the military, and the use of AI in human rights abuses by the Chinese government. They also touch on Chinese-American technological collaboration, technological difficulties facing China, and what may determine international competitive advantage going forward. 

Topics discussed in this episode include:

  • The rise of AI in China
  • The escalation of tensions between U.S. and China in the AI realm 
  • Chinese AI Development plans and policy initiatives
  • The AI arms race narrative and the problems with it 
  • Civil-military fusion in China vs. U.S.
  • The regulation of Chinese-American technological collaboration
  • AI and authoritarianism
  • Openness in AI research and when it is (and isn’t) appropriate
  • The relationship between privacy and advancement in AI 

References discussed in this episode include:

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel Conn: Hi everyone and welcome to another episode of the FLI podcast! I’m your host Ariel Conn. Now, by sheer coincidence, Lucas and I both brought on guests to cover the same theme this month, and that is AI and China. Fortunately, AI and China is a huge topic with a lot to cover. For this episode, I’m pleased to have Helen Toner and Elsa Kania join the show. We will be discussing things like the Beijing AI Principles, why the AI arms race narrative is problematic, civil-military fusion in China versus in the US, the use of AI in human rights abuses, and much more.

Helen is Director of Strategy at Georgetown’s Center for Security and Emerging Technology. She previously worked as a Senior Research Analyst at the Open Philanthropy Project, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing for nine months, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI. Helen holds a Bachelor of Science and a Diploma in Languages from the University of Melbourne.

Elsa is a Research Fellow also at Georgetown’s CSET, and she is also a PhD student in Harvard University’s Department of Government. Her research focuses on Chinese military innovation and technological development.

Elsa and Helen, thank you so much for joining us.

Helen Toner: Great to be here.

Elsa Kania: Glad to be here.

Ariel Conn: So, I have a lot of questions for you about what’s happening in China with AI, and how that’s impacting U.S.-China relations. But before I dig into all of that, I want to actually start with some of the more recent news, which is the Beijing principles that came out recently. I was actually surprised because they seem to be some of the strongest principles about artificial intelligence that I’ve seen, and I was wondering if you both could comment on your own reactions to those principles.

Elsa Kania: I was encouraged to see these principles released, and I think it is heartening to see greater discussion of AI ethics in China. At the same time, I’m not convinced that these are necessarily strong, in the sense that it’s not clear what the mechanism for enforcement would be. And I think that this is not unique to China, but often the articulation of principles can be a means of burnishing the image, whether of a company or a country, with regard to its intentions in AI.

Although it’s encouraging to hear a commitment to use AI to do good, and for humanity, and control risks, these are very abstract statements, and some of them are rather starkly at odds with realities of how we know AI is being abused by the Chinese government today for purposes that reinforce the coercive capacity of the state: including censorship, surveillance; prominently in Xinjiang where facial recognition has been racially targeted against ethnic minorities, against the backdrop of the incarceration and imprisonment of upwards of a million — by some estimates — Uyghurs in Xinjiang.

So, I think it’s hard not to feel a degree of cognitive dissonance when reading these principles. And again I applaud those involved in the process for their efforts and for continuing to move this conversation forward in China. But again, I’m skeptical that this espoused commitment to certain ethics will necessarily constrain the Chinese government from using AI in ways that it appears to be deeply committed to, for reasons of concern about social stability and state security.

Ariel Conn: So one question that I have is, did the Chinese government actually sign on to these principles? Or is it other entities that are involved?

Elsa Kania: So the Beijing AI principles were launched in some association with the Ministry of Science and Technology of China. So, certainly the Chinese government — actually initially in its New Generation AI Development Plan, back in the summer of 2017 — had committed to trying to lead and engage with issues of legal, ethical, and regulatory frameworks for artificial intelligence. And I think it is telling that these have been released in English; to some degree, part of the audience for these principles is international, against the backdrop of a push for the Chinese government to promote international cooperation in AI.

And the launch of a number of world AI conferences and attempts to really engage with the international community, again, are encouraging in some respects — but also there can be a level of inconsistency. And I think a major asymmetry is the fact that these principles, and many initiatives in AI ethics in China, are shaped by the government’s involvement. And it’s hard to imagine the sort of open exchange among civil society and different stakeholders that we’ve seen in the United States, and globally, happen in China, given the role of the government. I think it’s telling at the same time that the preamble for the Beijing AI principles talks about the construction of a human community with a shared future, which is a staple in Xi Jinping’s propaganda, and a concept that really encapsulates Chinese ambitions to shape the future course of global governance.

So again, I think I’m heartened to see greater discussion of AI ethics in China. But I think the environment in which these conversations are happening — as well as of course the constraints from any meaningful enforcement, or alteration of the government’s current trajectory in AI — makes me skeptical in some respects. I hope that I am wrong, and I hope that we will see this call to use AI for humanity, and to be diverse and inclusive, start to shape the conversation. So, it will be interesting to see whether we see indicators of results, or impact from these principles going forward.

Helen Toner: Yeah. I think that’s exactly right. And in particular, the release of these principles I think made clear a limitation of this kind of document in general. This was one of a series of sets of principles like this that have been released by a number of different organizations. And the fact of seeing principles like this that look so good on paper, in contrast with some of the behavior that Elsa described from the Chinese government, I think really puts into stark relief the limitations of well-meaning, nice sounding ideas like this that really have no enforcement mechanism.

Ariel, you asked about whether the Chinese government had signed onto these, and as Elsa described, there was certainly government involvement here. But just because there is some amount of the government giving, or some part of the Chinese government giving its blessing to the principles, does not imply that there are any kind of enforcement mechanisms, or any kind of teeth to a document of this kind.

Elsa Kania: And certainly that’s not unique to China. And I think there have been questions of whether corporate AI principles, whether from American or Chinese companies, are essentially intended for public relations purposes, or will actually shape the company’s decision making. So, I think it’s really important to move these conversations forward on ethics. At the same time, it will be interesting to see how principles translate into practice, or perhaps in some cases don’t.

Ariel Conn: So I want to backtrack a little bit to where some of the discussion about China’s development of AI started, at least from more Western perspectives. My understanding is that seeing AlphaGo beat Lee Sedol led to something of a rallying cry — I don’t know if that’s quite the right phrase — but that that sort of helped trigger the Chinese government to say, “We need to be developing this a lot stronger and faster.” Is that the case? Or what’s been sort of the trajectory of AI development in China?

Elsa Kania: I think it depends on how far back you want to go historically.

Ariel Conn: That’s fair.

Elsa Kania: I think in recent history certainly AlphaGo was a unique moment — both as an indication of how rapidly AI was progressing, given that experts had not anticipated an AI could win the game of Go for another 10, perhaps 15 years — and also in the context of how the Chinese government, and even the Chinese military, saw this as an indication of the capabilities of American artificial intelligence, including the relevance of the capacities for tactics and strategizing, command decision making in a military context. 

At the same time of course I think another influence in 2016 appears to have been the U.S. government’s emphasis on AI at the time, including a plan for research and development that may have received more attention in Beijing than it did in Washington in some respects, because this does appear to have been one of the factors that inspired China’s New Generation AI Development Plan, launched the following year. 

But I think if we’re looking at the history of AI in China, we can trace it back much further: even some linkages to the early history of cybernetics and systems engineering. And there are honestly some quite interesting episodes early on, because during the Cold War, artificial intelligence could be a topic that had some ideological undertones and underpinnings — including how the Soviet Union saw AI in system science, and some of the critiques of this as revisionism.

And then there is even an interesting detour in the 80s or so: when Qian Xuesen, a prominent strategic scientist in China’s nuclear weapons program, saw AI as entangled with an interest in parapsychology — including exceptional human body functions such as the capacity to recognize characters with your ears. There was a craze for ESP in China in the 80s, which actually received some attention in the scientific literature as well: there was an interesting conflation of artificial intelligence and special functions that became the subject of some ideological debate, in which Qian Xuesen was essentially an advocate of ESP, in ways that undermined early AI development in China.

And other academic rivals in the Chinese Academy of Sciences argued in favor of AI as a discipline of emerging science relative to the pseudoscience that human special functions turned out to be, and this became a debate of some ideological importance as well against the backdrop of questions of arbitrating what science was, and how the Chinese Communist Party tried to sort of shape science. 

I think that does go to illustrate that although a lot of the headlines about China’s rise in AI are much more recent, not only state support for research but also the significant increase in publications far predates this attention, and really can be traced to some degree to the 90s, and especially from the mid 2000s onward.

Helen Toner: I’ll just add as well that if we’re thinking about what it is that caused this surge in Western interest in Chinese AI, I think a really important part of the backdrop is the shift in U.S. defense thinking to move away from thinking primarily about terrorism, and non-state actors as the primary threat to U.S. security, and shifting towards thinking about near-peer adversaries — so primarily China and Russia — which is a recent change in U.S. doctrine. And I think that is also an important factor in understanding why Chinese interest and success in AI has become such an important sort of conspicuous part of the discussion.

Elsa Kania: There’s also been really a recalibration of assessments of the state of technology and innovation in China, from often outright skepticism and dismissal that China could innovate, to sometimes now a course correction towards the opposite extreme: anxieties that China may be beating us in the “race for AI” or 5G — even quantum computing has provoked a lot of concern. So, I think on one hand it is long overdue that U.S. policymakers and the American national security community take seriously what are quite real and rapid advances in science and technology in China.

At the same time I think sometimes this reaction has resulted in more inflated assessments that have provoked concerns about the notion of an arms race, which I think is a really wrong and misleading framing when we’re talking about a general purpose technology that has such a range of applications, and for which the economic and societal impacts may be more significant than the military applications in the near term — which I say as an analyst who focuses on military issues.

Ariel Conn: I want to keep going with this idea of the fear that’s sort of been developing in the U.S. in response to China’s developments. And I guess I first started seeing it a lot more when China released their Next Generation Artificial Intelligence Plan — I believe that’s the one that said by 2030 they wanted to dominate in AI.

Helen Toner: That’s right.

Ariel Conn: So I’d like to hear both of your thoughts on that. But I’m also sort of interested in — to me it seemed like that plan came out in part as a response to what they were seeing from the US, and then the U.S. response to this is to — maybe panic is a little bit extreme, but possibly overreact to the Chinese plan — and maybe they didn’t overreact, that might be incorrect. But it seems like we’re definitely seeing an escalation occurring.

So let’s start by just talking about what that plan said, and then I want to dive into this idea of the escalation, and maybe how we can look at that problem, or address it, or consider it.

Elsa Kania: So, I’d certainly been looking at a lot of different plans and policy initiatives for the 13th Five-Year Plan period, which is 2016 to 2020, and I noticed when this New Generation AI Development Plan came out; initially it was only available in Chinese. A couple of us, after we’d come across it, organized to work on a translation of it, and to this day that’s still the only unofficial English translation of this plan available. So far as I can tell, the Chinese government itself never actually translated that plan. And in that regard, it does not appear to have been intended for an international audience in the way that, for instance, the Beijing AI Principles were.

So, I think that some of the rhetoric in the plan that rightly provoked concerns — calling for China to lead the world in AI and be a premier global innovation center for artificial intelligence — is striking, but is consistent with S&T plans that often call for China to seize the strategic commanding heights of innovation, and future advantage. So I think that a lot of the signaling about the strategic importance of AI to some degree was intended for an internal audience, and certainly we’ve seen a powerful response in terms of plans and policies launched across all elements of the Chinese government, and at all levels of government including a number of cities and provinces.

I do think it was highly significant in reflecting how the Chinese government saw AI as really a critical strategic technology to transform the Chinese economy, and society, and military — though that’s discussed in less detail in the plan.

But there is also an open acknowledgement in the plan that China still sees itself as well behind the U.S. in some respects. So, I think the ambitions and the resources and policy support across all levels of government that this plan has catalyzed are extremely significant, and I think do merit some concern, but I think some of the rhetoric about an AI race, or arms race — clearly there is competition in this domain. But I do think the plan should be placed in the context of an overall drive by the Chinese government to escape the middle income trap, and sustain economic growth at a time when it’s slowing and looking to AI as an important instrument to advance these national objectives.

Helen Toner: I also think there is something kind of amusing that happened where, as Elsa said earlier, it seems like one driver of the creation of this plan was that China saw the U.S. government under the Obama administration in 2016 run a series of events and then put together a white paper about AI, and a federal R&D plan. And China’s response to this was to think, “Oh, we should really put together our own strategy, since the U.S. has one.” And then somehow, with the change in administrations and the time that had elapsed, there suddenly became this narrative of, “Oh no, China has an AI strategy and the U.S. doesn’t have one; so now we have to have one because they have one.” And that was a little bit farcical, to be honest. And I think it has now died down after the release of the American AI Initiative, I believe it’s called, by President Trump. But that was amusing to watch while it was happening.

Elsa Kania: I hope that the concerns over the state of AI in China can provoke concerns that motivate productive responses. I agree that sometimes the debate has focused too much on the notion of what it would mean to have an AI strategy, or concerns about the plan as sort of one of the most tangible manifestations of these ambitions. But I do think there are reasons for concern that the U.S. has really not recognized the competitive challenge, and sometimes still seems to take for granted American leadership in emerging technologies for which the landscape does remain much more contested.

Helen Toner: For sure.

Ariel Conn: Do you feel like we’re starting to see de-escalation then — that people are starting to maybe change their rhetoric about making sure someone’s ahead, or who’s ahead, or all that type of lingo? Or do you think we are still seeing this escalation that is perhaps being reported in the press still?

Helen Toner: I think there is still a significant amount of concern. Perhaps one shift that we’ve seen a little bit — and Elsa I’d be curious if you agree — is that I think around the time that the Next Generation Plan was released, and attention was starting to turn to China, there began to be a bit of a narrative of, “Not only is China trying to catch up with the U.S. and making progress in catching up with the U.S. but perhaps has already surpassed the U.S. and is perhaps already clearly ahead in AI research globally.” That’s an extremely difficult thing to measure, but I think some of the arguments that were made to say that were not as well backed up as they could have been.

Maybe one thing that I’ve observed over the last six or 12 months is a little bit of a rebalancing in thinking. It’s certainly true that China is investing very heavily in this, and is trying really hard; And it’s certainly true that they are seeing some results from that, but it’s not at all clear that they have already caught up with the U.S. in any meaningful way, or are surpassing it. Of course, it depends how you slice up the space, and whether you’re looking more at fundamental research, or applied research, or so on. But that might be one shift we’ve seen a little bit.

Elsa Kania: I agree. I think there has continued to be a recalibration of assessments, and even a rethinking of the notion of what leading in AI even means. And I used to be asked the question all the time of who was winning the race, or even arms race, for AI. And often I would respond by breaking down the question, asking, “Well what do you mean by who?” Because the answer will differ depending on whether we’re talking about American and Chinese companies, relative to how do we think about aggregating China and the United States as a whole when it comes to AI research — particularly considering the level of integration and interdependence between American and Chinese innovation ecosystems. What do we mean by winning in this context? How do we think about the metrics, or even desired end states? Is this a race to develop something akin to artificial general intelligence? Or is this a rivalry to see which nation can best leverage AI for economic and societal development across the board?

And then again, why do we continue to talk about this as a race? I think that is a metaphor in framing that does readily come to mind and can be catchy. And as someone who looks at the military dimension of this quite frequently, I often find myself explaining why I don’t think “arms race” is an appropriate conceptualization either. Because this is a technology that will have a range of applications across different elements of the military enterprise — and that does have great promise for providing decisive advantage in the future of warfare, and yet we’re not talking about a single capability or weapon systems, but rather something that is much more general purpose, and that is fairly nascent in its development.

So, AI does factor into this overall U.S.-China military competition that is much more complex and amorphous than the notion of an arms race to develop killer robots would imply. Because certainly there is autonomous weapons development underway in the U.S. and China today; and I think that is quite concerning from the perspective of thinking about the future military balance, or how the U.S. and Chinese militaries might be increasing the risks of a crisis, and considerations of how to mitigate those concerns and reinforce strategic stability.

So hopefully there is starting to be greater questioning of some of these more simplistic framings, often in headlines, often in some of the more sensationalist statements out there. I don’t believe China is yet an AI superpower, but clearly China is an AI powerhouse.

Ariel Conn: Somewhat recently there was an op-ed by Peter Thiel in which he claims that China’s tech development is naturally a part of the military. There’s also this idea, which I think comes from China, of military-civil fusion. And I was wondering if you could go into the extent to which China’s AI development is naturally a part of their military, and the extent to which companies and research institutes are able to differentiate their work from military applications.

Elsa Kania: All right. So, the article in question did not provide a very nuanced discussion of these issues. And to start I would say that it is hardly surprising that the Chinese military is apparently enthusiastic about leveraging artificial intelligence. China’s new national defense white paper, titled “China’s National Defense in the New Era,” talked about advances in technologies like big data, cloud computing, artificial intelligence, and quantum information as significant at a time when the character of warfare is evolving — from what is known as today’s informatized warfare towards future intelligentized warfare, in which some of these emerging technologies, namely artificial intelligence, could be integrated into the system of systems for future conflict.

And the Chinese military is pursuing this notion of military intelligentization, which essentially involves looking to leverage AI for a range of military applications. At the same time, I see military-civil fusion, as a concept and strategy, as remaining quite aspirational in some respects.

There’s also a degree of irony, I’d argue, that much of what China is attempting to achieve through military-civil fusion is inspired by dynamics and processes that they have seen be successful in the American defense innovation ecosystem. I think sometimes there is this tendency to talk about military-civil fusion as this exotic or uniquely Chinese approach, when in fact there are certain aspects of it that are directly mimicking, or responding to, or learning from what the U.S. has had within our ecosystem for a much longer history. And China’s trying to create this more rapidly and more recently. 

So, the delta of increase, perhaps, and the level of integration between defense, academic, and commercial developments, may be greater. But I think the actual results so far are more limited. And again it is significant, and there are reasons for concern. We are seeing a greater and greater blurring of boundaries between defense and commercial research, but the fusion is again much more aspirational, as opposed to the current state of play.

Helen Toner: I’ll add as well, returning to that specific op-ed: where Thiel mentioned military-civil fusion, he actually linked to an article about military-civil fusion by a colleague of Elsa’s and mine, Lorand Laskai, and Lorand straight up said that Thiel had clearly not read the article, based on the way that he described military-civil fusion.

Ariel Conn: Well, that’s reassuring.

Elsa Kania: We are seeing militaries around the world, the U.S. and China among them, looking to build bridges to the private sector and deepening cooperation with commercial enterprises. And I think it’s worth thinking about the factors that could provide a potential advantage; for militaries looking to increase their capacity as organizations to leverage these technologies, this is an important dimension of that. And I think we are seeing some major progress in China in terms of new partnerships, including initiatives at the local level, new parks, and new joint laboratories. But I do think, as with the overall status of China’s AI plan, there’s a lot of activity and a lot of investment, but the results are harder to ascertain at this point.

And again, I think it also does speak to questions of ethics in the sense that we have in the U.S. seen very open debate about companies and concerns, particularly of their employees, about whether they should or should not be working with the military or government on different projects. And I remain skeptical that we could see comparable debates or conversations happening in China, or that a Chinese company would outright say no to the government. I think certainly some companies may resist on certain points, or at the margins, especially when they have commercial interests that differ from the priorities of the government. But I do think the political economy of this ecosystem as a whole is very distinct.

And again I’m skeptical that if the employees of a Chinese company had moral qualms about working with the Chinese military, they’d have the freedom to organize, and engage in activism to try to change that.

Ariel Conn: I’d like to go into that a little bit more, because there are definitely concerns raised about companies in the U.S. that are rejecting contracts with the U.S. government for fear that their work will be militarized, while at the same time — as you said — companies in China may not have that luxury. But there are also instances where you have, say, Google doing research in China; does that mean that Google is essentially working with the Chinese military and not the U.S. military? I think there’s a lot of misunderstanding about what the situation actually is there. I was wondering if you could both go into that a little bit.

Helen Toner: Yeah. I think this is a refrain that comes up a lot in DC: “Well, look at how Google withdrew from its contract to work on Project Maven,” which is a Department of Defense initiative looking at tagging overhead imagery, “so clearly U.S. companies aren’t willing to work with the U.S. government. But on the other hand they are still working in China. And as we all know, research in China is immediately used by the Chinese military, so therefore they’re aiding the Chinese military even though they’re not willing to aid the U.S. military.” And I do think this is a highly oversimplified description, and pretty incorrect.

So, a couple of elements here. One is that the Google Project Maven decision seems to have been pretty unique. We haven’t really seen it repeated by other companies. Google continues to work with the U.S. military and the U.S. government in other ways — for example on DARPA projects, among others; and other U.S. companies, including really world-leading companies, are also very willing to work with the U.S. government. A big example right now is Amazon and Microsoft bidding on the JEDI contract, which is to provide cloud computing services to the Pentagon. So, I think on the one hand, this claim that U.S. companies are unwilling to work with the U.S. military is a vast overgeneralization.

And then on the other hand, I think I would point back to what Elsa was saying about the state of military-civil fusion in China, and the extent to which it makes sense or doesn’t make sense to say that any research done in China is immediately going to be incorporated into Chinese military technologies. I definitely wouldn’t say there is nothing to be concerned about here. But I think that the simplified refrain is not very productive.

Elsa Kania: With regard to some of these controversies, I do continue to believe that having these open debates, and the freedom that American companies and researchers have, is a strength of our system. I don’t think we should envy the state of play in China, where we have seen the Chinese Communist Party become more and more intrusive with regard to its impositions upon the tech sector, and I think there may be costs in terms of the long-term trajectory of innovation in China.

And with regard to the particular activities of American companies in China, certainly there have been some cases where companies have engaged in projects, or with partners, that I think are quite problematic. And one of the most prominent examples of that recently has been Google’s involvement in Dragonfly — creating a censored search engine — which was thoroughly condemned, including because of its apparent inconsistency with their principles. So, I do think there are concerns not only of values but also of security when it comes to American companies and universities that are engaged in China, and it’s never quite a black and white issue or distinction.

So for instance in the case of Google, their research presence in China does remain fairly limited. There have been a couple of cases where papers published in collaboration between a Google researcher and a Chinese colleague involved topics that are quite sensitive and evidently not the best subjects for collaboration, in my opinion — such as target recognition. There have also been concerns over research on facial recognition, given the known abuse of that technology by the Chinese government.

I think that also when American companies or universities partner or coauthor with Chinese counterparts, especially those that are linked to or are outright elements of the Chinese military — such as the National University of Defense Technology, which has been quite active in overseas collaborations — I do think that there should be some red lines. I don’t think the answer is “no American companies or universities should do any work on AI in China.” I think that would actually be damaging to American innovation, and I think some of the criticisms of Google have been unfair in that regard, because I do think that a more nuanced conversation is really critical going forward to think about the risks and how to get policy right.

Ariel Conn: So I want to come back to this idea of openness in a minute, but first I want to stick with some pseudo-military concerns. Maybe this is more reflective of what I’m reading, but I seem to see a lot more concern being raised about military applications of AI in China, while concerns about China’s use of AI in human rights abuses are only starting to come to the surface. In light of recent events, especially what we’re seeing in Hong Kong and with the Uyghurs, should we be worrying more about how China is using AI for what we perceive as human rights abuses?

Elsa Kania: That is something that greatly concerns me, particularly when it comes to the gravity of the atrocities in Xinjiang. Certainly there are very low-tech coercive elements to how the Chinese government is essentially trying to re-engineer an entire population, in ways that experts have described as tantamount to cultural genocide, including the creation of concentration camps. And beyond that, there is the pervasiveness of biometrics and surveillance enabled by facial recognition, and the creation of new software programs to better aggregate big data about individuals. I think all of that paints a very dark picture of ways in which artificial intelligence can enable authoritarianism, and can reinforce the Chinese government’s capability to repress its own population in ways that in some cases can become pervasive in day-to-day life.

And I’d say that, having been to Beijing recently, surveillance is kind of like air pollution. It is pervasive, in terms of the cameras you see out on the streets. It is inescapable in a sense, and it is something the average person or citizen in China can do very little about. Of course this is not quite a perfect panopticon yet; elements of it remain a work in progress. But I do think the overall trajectory of these developments is deeply worrying in terms of human rights abuses, and yet this is not much of a feature of conversations about AI ethics in China. It does overshadow some of the more positive aspects of what the Chinese government is doing with AI, in areas like health care and education, but this darker side is also very much a reality.

And I think when it comes to the Chinese military’s interest in AI, it is quite a complex landscape of research, development, and experimentation. To my knowledge, the Chinese military does not yet appear to be deploying all that much in the way of AI. There are, again, very active efforts and long-term development of weapons systems: cruise missiles, hypersonics, a range of unmanned systems across all domains with growing degrees of autonomy, unmanned underwater vehicles and submarines, prominently demonstrated progress in swarming, scavenger robots in space as a covert counter-space capability, and human-machine integration and interaction.

But I think that the translation of some of these initial stages of military innovation into future capabilities will be challenging for the PLA in some respects. There could be ways in which the Chinese military has advantages relative to the U.S., given apparent enthusiasm and support from top-level leadership at the level of Xi Jinping himself, and several prominent generals, who have been advocating for and supporting investments in these future capabilities.

But I do think that we’re really just at the start of seeing what AI will mean for the future of military affairs, and future of warfare. But when it comes to developments underway in China, particularly in the Chinese defense industry, I think the willingness of Chinese companies to export drones, robotic systems — many of which again have growing levels of autonomy, or at least are advertised as such — is also concerning from the perspective of other militaries that will be acquiring these capabilities and could use them in ways that violate human rights. 

But I do think there are concerns about how the Chinese military would use its own capabilities, about the export of some of these weapons systems going forward, and about the potential use of made-in-China technologies by non-state actors and terrorist organizations, as we’ve already seen with ISIS, or Daesh, using drones made by DJI in Syria, including as improvised explosive devices. So there is no shortage of reasons for concern, but I’ll stop there for now.

Ariel Conn: Helen, did you have anything you wanted to add?

Helen Toner: I think Elsa said it well. I would just reiterate that I think the ways that we’re starting to see China incorporating AI into its larger surveillance state, and methods of domestic control, are extremely concerning.

Ariel Conn: There’s debate I think about how open AI companies and researchers should be about their technology. But we sort of have a culture of openness in AI. And so I’m sort of curious: how is that being treated in China? Does it seem like that can actually help mitigate some of the negative applications that we see of AI? Or does it help enable the Chinese or anyone else to develop AI in non-beneficial ways that we are concerned about? What’s the role of openness in this?

Elsa Kania: I think openness is vital to innovation, and I hope it can be sustained, even as we are seeing greater concerns about the misuse or transfer of these technologies. The level of openness and integration between the American and Chinese innovation ecosystems is useful in the sense that it provides a level of visibility, awareness, and a sort of shared understanding of the state of research. But at the same time, there are reasons to have some thought-through parameters on that openness: whether from the perspective of ethics or security, better guidelines or frameworks for how to engage will, I think, be important in order to sustain that openness and engagement.

I think that having better guardrails, and thinking through where openness is warranted, where there should be at the very least common sense, and hopefully some rigorous consideration of these concerns, will be important. Another dimension of openness is thinking about when to release, publish, or make available certain research, or even the tools underlying those advances, and when it’s better to keep more information proprietary. And I think the greater concern there, beyond the U.S.-China relationship, may be the potential for misuse or exploitation of these technologies by non-state actors, terrorist organizations, or even high-end criminal organizations. The openness of the AI field is really critical. But to sustain it, it will be important to think very carefully through some of these potential negative externalities across the board.

Helen Toner: One element that makes this extra complicated in terms of openness and collaboration between U.S. and Chinese researchers is that so much of the work going on is really quite basic research — work on computer vision, or on speech recognition, or things of that nature. And that kind of research can be used for so many things, including both harmful, oppressive applications and many much more acceptable ones. It’s really difficult to work out what openness should mean in that context.

So, one thing I would love to see is more information being made available to researchers. For example, I do think that any researcher who is working with a Chinese individual, or company, or organization should be aware of what is going on in Xinjiang, and should be aware of the governance practices that are common in China. And it would be great if there were more information available on specific institutions, and how they’re connected to various practices, and so on. That would be a good step towards helping non-Chinese researchers understand what kinds of situations they might be getting themselves involved in.

Ariel Conn: Do you get the sense that AI researchers are considering how some of their work can be applied in these situations where human rights abuses are taking place? I think we’re starting to see that more, but how much do you feel like you’re seeing that, versus how much more do you think AI researchers need to be making themselves aware?

Helen Toner: I think there’s a lot of interest and care among many AI researchers in how their work will be used, and in making the world a better place, and so on. And I think things like Google’s withdrawal from Project Maven, and the pressure that was put on Google when it was leaked that it was working on a censored search engine to be used in China, are both evidence of the level of caring that is there. But I do think there could be more awareness of the specific issues going on in China. The situation in Xinjiang is gradually becoming more widely known, but I wouldn’t be surprised if plenty of AI researchers hadn’t come across it yet. I think it’s a matter of pairing that interest in how their work might be used with information about what is going on, and what might happen in the future.

Ariel Conn: One of the things that I’ve also read, and I think both of you addressed this in works of yours that I was looking at: there’s this concern that China obviously has a lot more people, their privacy policies aren’t as strict, and so they have a lot more access to big data, and that that could be a huge advantage for them. Reading some of your work, it sounded like maybe that wasn’t quite the advantage that people worry about, at least yet. And I was hoping you could explain a little bit about technological difficulties that they might be facing even if they do have more data.

Helen Toner: For sure. I think there are quite a few different ways in which this argument is weaker than it might appear at first. There are many reasons to be concerned about the privacy implications of China’s data practices. Certainly, having spent time in China, it’s very clear that the instant messages you’re sending, for example, are not only being read by you; that’s certainly concerning from that perspective. But if we’re talking about whether data will give China an advantage in developing AI, I think there are a few different reasons to be a little bit skeptical.

One reason, which I think you alluded to, is simply whether they can make use of the data they’re collecting. There was some reporting last year, I believe, coming out of Tencent about ways in which data was very siloed inside the company, and making use of data is notoriously difficult. The joke among data scientists is that when you’re trying to solve some problem with data, you spend the first 90% of your time just cleaning and structuring the data, and only the last 10% actually solving the problem. So, that’s the sort of logistical or practical issue that you mentioned.

Other issues are things like: the U.S. doesn’t have as large a population as China, but U.S. companies have much greater international reach, so they often have as many users as Chinese companies, if not more. Even more important, I think, are two further issues. One is that for most AI applications, the kind of data that will be useful in training a given model needs to be relevant to the problem that model is solving. So, if you have lots of data about Chinese customers’ purchases on Taobao, which is essentially a Chinese Amazon, then you’re going to be really good at predicting what kinds of purchases Chinese consumers will make on Taobao. But that’s not going to help you with, for example, the kind of overhead imagery analysis that Project Maven was targeting.

So one really fundamental problem is this matter of data primarily being useful for training systems that solve problems closely related to the data you have. A second really fundamental issue is how important it is or isn’t to have pre-gathered data in order to train a given model. Something that is left out of a lot of conversations on this issue is the fact that many types of models — notably, reinforcement learning models — can often be trained on what is referred to as synthetic data, which basically means data that you generate during the experiment, as opposed to requiring a pre-gathered data set to train your model on.

So, an example of this would be AlphaGo, which we mentioned before. The original AlphaGo was first trained on human games and then fine-tuned from there. But AlphaGo Zero, which was released subsequently, did not actually need any pre-collected data; it instead just used computation to simulate games and play against itself, and thereby learned to play the game even better than AlphaGo, which was trained on human data. So, I think there are all manner of reasons to be a little bit skeptical of this story that China has some fundamental advantage in access to data.

Elsa Kania: Those are all great points, and I would just add that I think this is particularly true when we look at the apparent disparities in access to data between China’s commercial ecosystem and the Chinese military. As Helen mentioned, much of the data generated by China’s mobile ecosystem will have very little relevance if you are looking to build advanced weapons systems. The critical question going forward, or the much more relevant concern, will be the Chinese military’s capacity as an organization to improve its management and employment of its own data, while also gaining access to other relevant sources of data and looking to leverage simulations, even war gaming, as techniques to generate more data relevant to training AI systems for military purposes.

So, the notion that data is the new oil is at best a massive oversimplification, given that this is a much more complex landscape; access to, use of, and even labeling of data become very practical matters that militaries, among other bureaucracies, will have to grapple with as they think about how to develop AI trained for the missions they have in mind.

Ariel Conn: So, does it seem fair to say then that it’s perfectly reasonable for Western countries to maintain, and possibly even develop, stricter privacy laws and still remain competitive?

Helen Toner: I think absolutely. The idea that one would need to reduce privacy controls in order to collect some volume of data to be competitive in AI fundamentally misunderstands how AI research works. I think it also misunderstands how Western companies will stay competitive. It’s not an accident that WeChat, for example, the most popular messaging app in China, has really struggled to spread beyond China and the Chinese diaspora. I would posit that a significant part of that is the fact that it’s clear that messages on that app are going to the Chinese government. So, I think U.S. and other Western companies should be wary of sacrificing the kinds of features and functionality that are based in the values we hold dear.

Elsa Kania: I’d just add that I think there’s often this framing of a dichotomy between privacy and advancement in AI — and as Helen said, I think that there are ways to reconcile our priorities and our values in this context. And I think the U.S. government can also do much more when it comes to better leveraging data that it does have available, and making it more open for research purposes while focusing on privacy in the process. Exploitation of data should not come at the expense of privacy or be seen as at odds with advancement.

Helen Toner: I’ll also add that we’re seeing advances in various technologies that make it possible to utilize data without invading the privacy of the holder of that data. These include things like differential privacy, multi-party computation, and a number of other related techniques that make it possible to securely and privately make use of data without exposing the individual data of any particular user.

Ariel Conn: I feel like that in and of itself is another podcast topic.

Helen Toner: I agree.

Ariel Conn: The last question I have is: what do you think is most important for people to know and consider when looking at Chinese AI development and the Western concerns about it?

Elsa Kania: The U.S. in many respects does remain in a fairly advantageous position. However, I worry we may erode our own advantages if we don’t recognize what they are. And I think it does come down to the fact that the openness of the American innovation ecosystem, including our welcome to students and scholars from all over the world, has been critical to progress in science in the United States, and it’s really vital to sustain that. Between the United States and China today, I think the critical determinant of competitive advantage going forward will be talent. There are many ways in which China continues to struggle and is lagging behind in its access to human capital resources — though there are some major policy initiatives underway from the Chinese Ministry of Education, including significant expansions of the use of AI in and for education.

So, I think that as we think about relative trajectories in the long term, it will be important to think about talent, and how this is playing out in a very complex and often very integrated landscape between the U.S. and China. And I’ve said it before, and I’ll say it again: I think in the United States it is encouraging that the Department of Defense has a strategy for AI and is thinking very carefully about the ethics and opportunities it provides. I hope that the U.S. Department of Education, and that states and cities across the U.S., will also start to think more about what AI can do in terms of opportunities, in terms of more personalized and modernized approaches to education in the 21st century.

Because I think again, although I’m someone who as an analyst looks more at the military elements of this question, talent and education are foundational to everything. Some of what the Chinese government is doing to explore the potential of AI in education is something I wish the U.S. government would consider pursuing equally actively — though with greater concern for privacy and for the well-being of students. I don’t think we should necessarily envy or look to emulate many elements of China’s approach, but on talent and education I think it’s really critical for the U.S. to treat this as a main frontier of competition and to sustain openness to students and scientists from around the world. That requires grappling with some tricky issues of immigration, which have become politicized to an unfortunate degree, risking damage to our overall innovation ecosystem, not to mention the well-being and opportunities of those who can sometimes get caught in the crossfire of geopolitics and politics.

Helen Toner: I’d echo what Elsa said. I think in a nutshell what I would recommend for those interested in thinking about China’s prospects in AI is to be less concerned about how much data they have access to, or about the Chinese government and its plans being a well-oiled machine that works perfectly on the first try — and to pay more attention to, on the one hand, the willingness of the Chinese Communist Party to use extremely oppressive measures, and on the other hand, to pay more attention to the question of human capital and talent in AI development, and to focus more on how the U.S. can do better at attracting and retaining top talent — which has historically been something the U.S. has done really well, but for a variety of reasons has perhaps started to slide a little bit in recent years.

Ariel Conn: All right. Well, thank you both so much for joining this month. This was really interesting for me.

Elsa Kania: Thank you so much. Enjoyed the conversation, and certainly much more to discuss on these fronts.

Helen Toner: Thanks so much for having us.


The Climate Crisis as an Existential Threat with Simon Beard and Haydn Belfield

Does the climate crisis pose an existential threat? And is that even the best way to formulate the question, or should we be looking at the relationship between the climate crisis and existential threats differently? In this month’s FLI podcast, Ariel was joined by Simon Beard and Haydn Belfield of the University of Cambridge’s Centre for the Study of Existential Risk (CSER), who explained why, despite the many unknowns, it might indeed make sense to study climate change as an existential threat. Simon and Haydn broke down the different systems underlying human civilization and the ways climate change threatens these systems; they also discussed our species’ unique strengths and vulnerabilities — and the ways in which technology has heightened both — with respect to the changing climate.

This month’s podcast helps serve as the basis for a new podcast series we’re launching later this month about the climate crisis. We’ll be talking to climate scientists, meteorologists, AI researchers, policy experts, economists, social scientists, journalists, and more to go in depth about a vast array of climate topics. We’ll talk about the basic science behind climate change, like greenhouse gases, the carbon cycle, feedback loops, and tipping points. We’ll discuss various impacts of greenhouse gases, like increased extreme weather events, loss of biodiversity, ocean acidification, resource conflict, and the possible threat to our own continued existence. We’ll talk about the human causes of climate change and the many human solutions that need to be implemented. And so much more! If you don’t already subscribe to our podcasts on your preferred podcast platform, please consider doing so now to ensure you’ll be notified when the climate series launches.

We’d also like to make sure we’re covering the climate topics that are of most interest to you. If you have a couple minutes, please fill out a short survey at surveymonkey.com/r/climatepodcastsurvey, and let us know what you want to learn more about.

Topics discussed in this episode include:

  • What an existential risk is and how to classify different threats
  • Systems critical to human civilization
  • Destabilizing conditions and the global systems death spiral
  • How we’re vulnerable as a species
  • The “rungless ladder”
  • Why we can’t wait for technology to solve climate change
  • Uncertainty and how to deal with it
  • How to incentivize more creative science
  • What individuals can do

References discussed in this episode include:

Want to get involved? CSER is hiring! Find a list of openings here.

Ariel Conn: Hi everyone and welcome to another episode of the FLI podcast. I’m your host, Ariel Conn, and I am especially excited about this month’s episode. Not only because, as always, we have two amazing guests joining us, but also because this podcast helps lay the groundwork for an upcoming series we’re releasing on climate change.

There’s a lot of debate within the existential risk community about whether the climate crisis really does pose an existential threat, or if it will just be really, really bad for humanity. But this debate exists because we don’t know enough yet about how bad the climate crisis will get nor about how humanity will react to these changes. It’s very possible that today’s predicted scenarios for the future underestimate how bad climate change could be, while also underestimating how badly humanity will respond to these changes. Yet if we can get enough people to take this threat seriously and to take real, meaningful action, then we could prevent the worst of climate change, and maybe even improve some aspects of life. 

In late August, we’ll be launching a new podcast series dedicated to climate change. I’ll be talking to climate scientists, meteorologists, AI researchers, policy experts, economists, social scientists, journalists, and more to go in depth about a vast array of climate topics. We’ll talk about the basic science behind climate change, like greenhouse gases, the carbon cycle, feedback loops, and tipping points. We’ll discuss various impacts of greenhouse gases, like increased extreme weather events, loss of biodiversity, ocean acidification, resource conflict, and the possible threat to our own continued existence. We’ll talk about the human causes of climate change and the many human solutions that need to be implemented. And so much more. If you don’t already subscribe to our podcasts on your preferred podcast platform, please consider doing so now to ensure you’ll be notified as soon as the climate series launches.

But first, today, I’m joined by two guests who suggest we should reconsider studying climate change as an existential threat. Dr. Simon Beard and Haydn Belfield are researchers at University of Cambridge’s Center for the Study of Existential Risk, or CSER. CSER is an interdisciplinary research group dedicated to the study and mitigation of risks that could lead to human extinction or a civilizational collapse. They study existential risks, develop collaborative strategies to reduce them, and foster a global community of academics, technologists, and policy makers working to safeguard humanity. Their research focuses on four areas: biological risks, environmental risks, risks from artificial intelligence, and how to manage extreme technological risk in general.

Simon is a senior research associate and academic program manager; he’s a moral philosopher by training. Haydn is a research associate and academic project manager, as well as an associate fellow at the Leverhulme Center for the Future of Intelligence. His background is in politics and policy, including working for the UK Labour Party for several years. Simon and Haydn, thank you so much for joining us today.

Simon Beard: Thank you.

Haydn Belfield: Hello, thank you.

Ariel Conn: So I’ve brought you both on to talk about some work that you’re involved with, looking at studying climate change as an existential risk. But before we really get into that, I want to remind people about some of the terminology. So I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat and if there’s any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change.

Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome — which is something unexpectedly, unprecedentedly bad. And, actually, once you’ve got your head around that, different groups have slightly different understandings of what the differences between these three terms are. 

So, for some groups, it’s all about just the scale of badness. So, an extreme risk is one that does a sort of extreme level of harm; a catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: maybe some people survive, but their lives are terrible. Actually, at the Center for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we also have coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us.

Most of these systems — be they physiological systems, the world’s ecological system, or the social, economic, technological, and cultural systems that surround the institutions we build on — have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing, and human survival are built on: that we can get food from the biosphere, that our bodies will continue to operate in a way that’s consistent with and supporting our health and our continued survival, and that the institutions that we’ve developed will still work, will still deliver food to our tables, will still suppress interpersonal and international violence, and that, basically, we’ll be able to get on with our lives.

If you look at it that way, then an extreme risk, or an extreme threat, is one that pushes at least one of these systems outside of its normal boundaries of operation and creates an abnormal behavior that we then have to work really hard to respond to. A catastrophic risk is one where that happens, but then that also cascades. Particularly in global catastrophe, you have a whole system that encompasses everyone all around the world, or maybe a set of systems that encompass everyone all around the world, that are all operating in this abnormal state that’s really hard for us to respond to.

And then an existential catastrophe is one where the systems have been pushed into such an abnormal state that either you can’t get them back or it’s going to be really hard. And life as we know it cannot be resumed; we’re going to have to live in a very different and very inferior world, at least from our current way of thinking.

Haydn Belfield: I think that sort of captures it really well. One thing that you could kind of visualize might be something like: imagine a really bad pandemic. 100 years ago, we had the Spanish flu pandemic that killed 100 million people — that was really bad. But it could be even worse. So imagine one tomorrow that killed a billion people. That would be one of the worst things that’s ever happened to humanity; it would be sort of a global catastrophic risk. But it might not end our story, it might not be the end of our potential. But imagine if it killed everyone, or it killed almost everyone, and it was impossible to recover: that would be an existential risk.

Ariel Conn: So, there’s — at least I’ve seen some debate about whether we want to consider climate change as falling into either a global catastrophic or existential risk category. And I want to start first with an article that, Simon, you wrote back in 2017, to consider this question. The subheading of your article is a question that I think is actually really important. And it was: how much should we care about something that is probably not going to happen? I want to ask you about that — how much should we care about something that is probably not going to happen?

Simon Beard: I think this is really important when you think about existential risk. People’s minds, they want to think about predictions; they want someone who works in existential risk to be a prophet of doom. That is the idea that we have — that you know what the future is going to be like, and it’s going to be terrible, and what you’re saying is, this is what’s going to happen. That’s not how people who work in existential risk operate. We are dealing with risks, and risks are about knowing all the possible outcomes: whether any of those involve this severe long-term threat, an irrecoverable loss to our species.

And it doesn’t have to be the case that you think that something is the most likely or most probable potential outcome for you to get really worried about the thing that could bring that about. And even a 1% risk of one of these existential catastrophes is still completely unacceptable because of the scale of the threat and the harm we’re talking about. And because if this happens, there is no going back; it’s not something that we can do a safe experiment with.

So when you’re dealing with risk, you have to deal with probabilities. You don’t have to be convinced that climate change is going to have these effects to really place it on the same level as some of the other existential risks that people talk about — nuclear weapons, and artificial intelligence, and so on — you just need to see that this is possible. We can’t exclude it based on the knowledge that we have at the moment, and it seems like a credible threat with a real chance of materializing. And it’s something we can do something about, because ultimately the aim of all existential risk research is safety — trying to make the world a safer place and the future of humanity a more certain thing.

Ariel Conn: Before I get into the work that you’re doing now, I want to stick with one more question that I have about this article. I was amused when you sent me the link to it — you sort of prefaced it by saying that you think it’s rather emblematic of some of the problematic ways that we think about climate change, especially as an existential risk, and that your thinking has evolved in the last couple of years since writing this. I was hoping you could just talk a little bit about some of the problems you see with the way we’re thinking about climate change as an x-risk.

Simon Beard: I wrote this paper largely out of a realization that people wanted us to talk about climate change in the next century. And we wanted to talk about it. It’s always up there on the list of risks and threats that people bring up when you talk about existential risk. And so I thought, well, let’s get the ball rolling; let’s review what’s out there, and the kind of predictions that people who seem to know what they’re talking about have made about this — you know, economists, climate scientists, and so on — and make the case that this suggests there is a credible threat, and we need to take this seriously. And that seemed, at the time, like a really good place to start.

But the more I thought about it afterwards, the more flawed I saw the approach as being. And it’s hard to regret a paper like that, because I’m still convinced that the risk is very real, and people need to take it seriously. But for instance, one of the things that kept on coming up is that when people make predictions about climate change as an existential risk, they’re always very vague. Why is it a risk? What are the sorts of scenarios that we worry about? Where are the danger levels? And they always want to link it to a particular temperature threshold or a particular greenhouse gas trajectory. And that just didn’t strike me as credible, that we would cross a particular temperature threshold and then that would be the end of humanity.

Because of course, a huge amount of the risk that we face depends upon how humanity responds to the changing climate, not just upon climate change. I think people have this idea in their mind that it’ll get so hot, everyone will fry or everyone will die of heat exhaustion. And that’s just not a credible scenario. So there were these really credible scholars, like Marty Weitzman and Ram Ramanathan, who tried to work this out, and have tried to predict what was going to happen. But they seemed to me to be missing a lot, and to make very precise claims based on very vague scenarios. So we kind of said at that point, we’re going to stop doing this until we have worked out a better way of thinking about climate change as an existential threat. And we’ve been thinking a lot about this in the intervening 18 months, and that’s where the research we’re hoping to publish soon, and the desire to do this podcast, really come from. So it seems to us that there are kind of three ways that people have gone about thinking about climate change as an existential risk. It’s a really hard question. We don’t really know what’s going to happen. There’s a lot of speculation involved in this.

One of the ways that people have gone about trying to respond to this has just been to speculate, just been to come up with some plausible scenario or pick a temperature number out of the air and say, “Well, that seems about right; if that were to happen, that would lead to human extinction, or at least a major disruption of all of these systems that we rely upon. So what’s the risk of that happening? And then we’ll label that as the existential climate threat.” As far as we can tell, there isn’t the research to back up some of these numbers, and many of them conflict: in Ram Ramanathan’s paper he goes for five degrees; in Marty Weitzman’s paper he goes for six degrees; there’s another paper, produced by Breakthrough, where they go for four degrees. There’s quite a lot of disagreement about where the danger levels lie.

And some of it’s just really bad. So there’s this prominent paper by Jem Bendell — he never got it published, but it’s been read like 150,000 times, I think — on adapting to extreme climate change. And he just picks this random scenario where the sea levels rise, a whole bunch of coastal nuclear reactors get inundated with seawater, and they go critical, and this just causes human extinction. That’s not credible in many different ways, not least because it just wouldn’t cause that much damage. It just doesn’t seem credible that this slow sea level rise would have this disastrous meltdown effect — we could respond to that. This kind of speculation, passing for scientific study, didn’t seem good enough to us.

Then there were some papers which just kind of passed the whole thing by, which say, “Well, we can’t come up with a plausible scenario or a plausible threat level, but there just seem to be a lot of bad things going on around there.” But given that we know that the climate is changing, and that we are responding to this in a variety of ways, probably quite inadequately, that doesn’t help us to prioritize efforts, or really understand the level of risk we face, or know when some more extreme measures like geoengineering become more appropriate because of the level of risk that we face.

And then there’s a final set of studies — there have been an increasing number of these; one recently came out in Vox, Anders Sandberg has done one, and Toby Ord talks about one — where people say, “Well, let’s just go for the things that we know; let’s go for the best data and the best studies.” And these usually focus on a very limited number of climate effects, the more direct impacts of things like heat exhaustion, perhaps sometimes crop failure — but only really looking at the most direct climate impacts and only where there are existing studies. And then they try and extrapolate from that, sometimes using integrated assessment models, sometimes other kinds of analysis, but usually quite a straightforward linear economic analysis or epidemiological analysis.

And that also is useful. I don’t want to diss these papers; I think that they provide very useful information for us. But there is no way that that can constitute an adequate risk assessment, given the complexity of the impacts that climate change is having, and the ways in which we’re responding to that. And it’s very easy for people to read these numbers and these figures and conclude, as I think the Vox article did, that climate change isn’t an existential risk, it’s just going to kill a lot of people. Well, no: we know it will kill a lot of people, but that doesn’t answer the question about whether it is an existential threat. There are a lot of things that you’re not considering in this analysis. So given that there wasn’t really a good example that we could follow within the literature, we’ve kind of turned it on its head. And we’re now saying, maybe we need to work backwards.

Rather than trying to work forwards from the climate change we’re expecting and the effects that we think that is going to have and then whether these seem to constitute an existential threat, maybe we need to start from the other end and think about what are the conditions that could most plausibly destabilize the global civilization and the continued future of our species? And then work back from them to ask, are there plausible climate scenarios that could bring these about? And there’s already been some interesting work in this area for natural systems, and this kind of global Earth system thinking and the planetary boundaries framework, but there’s been very little work on this done at the social level.

And even less work done when you consider that we rely on both social and natural systems for our survival. So what we really need is some kind of approach that will integrate the two. That’s a huge research agenda. So this is how we think we’re going to proceed in trying to move beyond the limited research that we’ve got available. And now we need to go ahead and actually construct these analyses and do a lot more work in this field. And maybe we’re going to start to be able to produce a better answer.

Ariel Conn: Can you give some examples of the research that has started with this approach of working backwards?

Simon Beard: So there’s been some really interesting research coming out of the Stockholm Resilience Center dealing with natural Earth systems. So they first produced this paper on planetary boundaries, where they looked at a range of, I think it’s nine systems — the biosphere, biogeochemical systems, yes, climate system and so on — and said, are these systems operating in what we would consider their normal functioning boundaries? That’s how they’ve operated throughout the pliocene, throughout the last several thousand years, during which human civilization has developed. Or do they show signs of transitioning to a new state of abnormal operation? Or are they in a state that’s already posing high risk to the future of human civilization, but without really specifying what that risk is.

Then they produced another paper recently on Hothouse Earth, where they started to look for tipping points within the system, points where, in a sense, change becomes self-perpetuating. And rather than just a kind of gradual transition from what we’re used to, to maybe an abnormal condition, all of a sudden a whole bunch of changes start to accelerate, so it becomes much harder to adapt to them. Their analysis is quite limited, but they argue that quite a lot of these tipping points seem to start kicking in at about one and a half to two degrees of warming above pre-industrial levels.

We’re getting quite close to that now. But yeah, the real question for us at the Center for the Study of Existential Risk, looking at humanity, is: what are the effects of this going to be? And also, what are the risks that exist within those socio-technological systems, the institutions that we set up, the way that we survive as a civilization, the way we get our food, the way we get our information, and so on — because there are also significant fragilities and potential tipping points there as well.

That’s a very new sort of study, I mean, to the point where a lot of people just refer back to this one book written by Jared Diamond in 2005 as if it were the authoritative tome on collapse. And it’s a popular book, and he’s not an expert in this: he’s a very generalist scholar, but he provides a very narrative-based analysis of the collapse of certain historical civilizations and draws out a couple of key lessons from that. But it’s all very vague and really written for a general audience. And that still kind of stands out as: this is the weighty tome, this is where you go to get answers to your questions. It’s very early days, and we think that there’s a lot of room for better analysis of that question. And that’s something we’re looking at a lot.

Ariel Conn: Can you talk about the difference between treating climate change itself as an existential risk, like saying this is an x-risk, and studying it as if it poses such a threat? If that distinction makes sense?

Simon Beard: Yeah. When you label something as an existential risk, I think that is in many ways a very political move. And I think that that has been the predominant lens through which people have approached this question of how we should talk about climate change. People want to draw attention to it, they realize that there’s a lot of bad things that could come from it. And it seems like we could improve the quality of our future lives relatively easily by tackling climate change.

It’s not like AI safety, you know, the threats that we face from advanced artificial intelligence, where you really have to have advanced knowledge of machine learning and a lot of skills and do a lot of research to understand what’s going on here and what the real threats that we face might be. This is quite clear. So talking about it, labeling it as an existential risk, has predominantly been a political act. But we are an academic institution.

I think when you ask this question about studying it as an existential threat, one of the great challenges we face is that all the things that are perceived as existential threats are interconnected. Human extinction, or the collapse of our civilization, or these outcomes that we worry about: these are scenarios, and they will have complex causes — complex technological causes, complex natural causes. And in a sense, when you want to ask the question “should we study climate change as an existential risk?”, what you’re really asking is: if we look at everything that flows from climate change, will we learn something about the conditions that could precipitate the end of our civilization?

Now, ultimately, that might come about because of some heat exhaustion or vast crop failure because of the climate change directly. It may come about because, say, climate change triggers a nuclear war. And then there’s a question of, was that a climate-based extinction or a nuclear-based extinction? Or it might come about because we develop technologies to counter climate change, and then those technologies prove to be more dangerous than we thought and pose an existential threat. So when we carve this off as an academic question, what we really want to know is, do we understand more about the conditions that would lead to existential risk, and do we understand more about how we can prevent this bad thing from happening, if we look specifically at climate change? It’s a slightly different bar. But it’s all really just this question of, is talking about climate change, or thinking about climate change, a way to move to a safer world? We think it is but we think that there’s quite a lot of complex, difficult research that is needed to really make that so. And at the moment, what we have is a lot of speculation.

Haydn Belfield: I’ve got maybe an answer to that as well. Over the last few years, lots and lots of politicians have said climate change is an existential risk, and lots of activists as well. So you get lots and lots of speeches, or rallies, or articles saying this is an existential risk. But at the same time, over the last few years, we’ve had people who study existential risk for a living saying, “Well, we think it’s an existential risk in the same way that nuclear war is an existential risk. But it’s not maybe this single event that could kill lots and lots of people, or everyone, in kind of one fell swoop.”

So you get people saying, “Well, it’s not a direct risk on its own, because you can’t really kill absolutely everybody on earth with climate change. Maybe there’s bits of the world you can’t live in, but people move around. So it’s not an existential risk.” And I think the problem with both of these ways of viewing it is that word that I’ve been emphasizing, “an.” So I would kind of want to ban the word “an” existential risk, or “a” existential risk, and just say, does it contribute to existential risk in general?

So it’s pretty clear that climate change is going to make a bunch of the hazards that we face — like pandemics, or conflict, or environmental one-off disasters — more likely, but it will also make us more vulnerable to a whole range of hazards, and it will also increase the chances of all these types of things happening, and increase our exposure. So like with Simon, I would want to ask, is climate change going to increase the existential risk we face, and not get hung up on this question of is it “an” existential risk?

Simon Beard: The problem is, unfortunately, there is an existing terminology and an existing way of talking that to some extent we’re bound up with. And this is how the debate is. So we’ve really struggled with the extent to which we should impose the terminology that we most like on the field and the way that these things are discussed. And we know ultimately existential risk is just a thing; it’s a homogenous lump at the end of human civilization or the human species, and what we’re really looking at is the drivers of that and the things that push that up, and we want to push it down. That is not a concept that I think lots of people find easy to engage with. People do like to carve this up into particular hazards and vulnerabilities and so on.

Haydn Belfield: That’s how most of risk studies works. Most of the time, when you study natural disasters, or you study accidents in an industry setting, that’s what you’re looking at. You’re not looking at each risk as completely separate. You’re saying, “What hazards are we facing? What are our vulnerabilities? And what is our exposure?” and kind of combining all of those into some overall assessment of the risk you face. You don’t try and silo it up into: this is bio, this is nuclear, this is AI, this is environment.

Ariel Conn: So that connects to a question that I have for you both. And that is what do you see as society’s greatest vulnerabilities today?

Haydn Belfield: Do you want to give that a go, Simon?

Simon Beard: Sure. So I really hesitate to answer any question that’s posed quite in that way, just because I don’t know what our greatest vulnerability is.

Haydn Belfield: Because you’re a very good academic, Simon.

Simon Beard: But we know some of the things that contribute to our vulnerability overall. One that really sticks in my head came out of a study we did looking at what we can learn from previous mass extinction events. And one of the things that people have found looking at the species that tend to die out in mass extinctions, and the species that survive, is this idea that the specialists — the efficient specialists — who’ve really carved out a strong biological niche for themselves, and are often the ones that are doing very well as a result of that, tend to be the species that die out, and the species that survive are the species that are generalists. But that means that within any given niche or habitat or environment, they’re always much more marginal, biologically speaking.

And then you say, “Well, what is humanity? Are we a specialist that’s very vulnerable to collapse, or are we a generalist that’s very robust and resilient to this kind of collapse, that would fare very well?” And what you have to say is, as a species, when you consider humanity on its own, we seem to be the ultimate generalist, and indeed, we’re the only generalist who’s really moved beyond marginality. We thrive in every environment, every biome, and we survive in places where almost no other life form would survive. We survived on the surface of the moon — not for very long, but we did; we survived in Antarctica, on the pack ice, for long periods of time. And we can survive at the bottom of the Mariana Trench, and in just a ridiculously large range of habitats.

But of course, the way we’ve achieved that is that every individual is now an incredible specialist. There are very few people in the world who could really support themselves. And you can’t just sort of pick it up as you go along. You know, like this last weekend, I went to an agricultural museum with my kids, and they were showing how you plow fields and how you gather crops and look after them. And there are a lot of really important, quite artisanal skills involved in what you had to do to gather the food and protect it and prepare it and so on. And you can’t just pick this up from a book; you really have to spend a long time learning it and getting used to it and getting your body strong enough to do these things.

And so every one of us as an individual, I think, is very vulnerable, and relies upon these massive global systems that we’ve set up, these massive global institutions, to provide this support and to make us this wonderfully adaptable generalist species. So, so long as institutions and the technologies that they’ve created and the broad socio-technological systems that we’ve created — so long as they carry on thriving and operating as we want them to, then we are very, very generalist, very adaptable, very likely to make it through any kind of trouble that we might face in the next couple of centuries — with a few exceptions, a few really extreme events. 

But the flip side of that is anything that threatens those global socio-technological institutions also threatens to move us from this very resilient global population we have at the moment to an incredibly fragile one. If we fall back on individuals and our communities, all of a sudden, we are going to become the vulnerable specialist that each of us individually is. That is a potentially catastrophic outcome that people don’t think about enough.

Haydn Belfield: One of my colleagues, Luke Kemp, likes to describe this as a rungless ladder. So the idea is that there’s been lots and lots of collapses before in human history. But what normally happens is elites at the top of the society collapse, and it’s bad for them. But for everyone else, you kind of drop one rung down on the ladder, but it’s okay, you just go back to the farm, and you still know how to farm, your family’s still farming — things get a little worse, maybe, but it’s not really that bad. And you get people leaving the cities, things like that; But you only drop one rung down the ladder, you don’t fall off it. But as we’ve gone many, many more rungs up the ladder, we’ve knocked out every rung below us. And now we’re really high up the ladder. Very few of us know how to farm, how to hunt or gather, how to survive, and so on. So were we to fall off that rungless ladder, then we might come crashing down with a wallop.

Ariel Conn: I’m sort of curious. We’re talking about how humanity is generalist but we’re looking within the boundaries of the types of places we can live. And yet, we’re all very specifically, as you described, reliant on technology in order to live in these very different, diverse environments. And so I wonder if we actually are generalists? Or if we are still specialists at a societal level because of technology, if that makes sense?

Simon Beard: Absolutely. I mean, the point of this was, we kind of wanted to work out where we fell on the spectrum. And basically, it’s a spectrum that you can’t apply to humanity: we appear to be the most extreme species at both ends. And I think one of the reasons for that is that the scale, as it would be applied to most species, really only looks at the physical characteristics of the species and how they interact directly with their environment — whereas we’ve developed all these highly emergent systems that go way beyond how we interact with the environment, that determine how we interact with one another, and how we interact with the technologies that we’ve created.

And those basically allow us to interact with the world around us in the same ways that both generalists and specialists would. That’s great in many ways: it’s really served us well as a species; it’s been part of the hallmark of our success and our ability to get this far. But it is a real threat, because it adds a whole bunch of systems that have to be operating as we expect them to in order for us to continue. Maybe so long as these systems function it makes us more resilient to normal environmental shocks. But it makes us vulnerable to a whole bunch of other shocks.

And then you look at the way that we actually treat these emergent socio-technological systems. And we’re constantly driving for efficiency; We’re constantly driving for growth, as quick and easy growth as we can get. And the ways that you do that are often by making the systems themselves much less resilient. Resiliency requires redundancy, requires diversity, requires flexibility, requires all of the things that either an economic planner or a market functioning on short-term economic return really hate, because they get in the way of productivity.

Haydn Belfield: Do you want to explain what resilience is?

Simon Beard: No.

Ariel Conn: Haydn, do you want to explain it?

Haydn Belfield: I’ll give it a shot, yeah. So, just since people might not be familiar with it — so what I normally think of is someone balancing. How robust they are is how much you can push that person balancing before they fall over, and then resilience is how quickly they get up and can balance again. The next time they balance, they’re even stronger than before. So that’s what we’re talking about when we’re talking about resilience, how quickly and how well you’re able to respond to those kinds of external shocks.

Ariel Conn: I want to stick with this topic of the impact of technology, because one of the arguments that I often hear about why climate change isn’t as big of an existential threat or a contributor to existential risk as some people worry is because at some point in the near future, we will develop technologies that will help us address climate change, and so we don’t need to worry about it. You guys bring this up in the paper that you’re working on as potentially a dangerous approach; I was hoping you could talk about that.

Simon Beard: I think there are various problems with looking for the technological solutions. One of them is technologies tend to be developed for quite specific purposes. But some of the scenarios we are examining, in which climate change leads to civilizational collapse, involve quite widespread and wide-scale systemic change to society and to the environment around us. And engineers have a great challenge even capturing and responding to one kind of change. Engineering is an art of the small; It’s a reductionist art; You break things down, and you look at the components, and you solve each of the challenges one by one.

And there are definitely visionary engineers who look at systems and look at how the parts all fit together. But even there, you have to have a model, you have to have a basic set of assumptions of how all these parts fit together and how they’re going to interact. And this is why you get things like Murphy’s Law — you know, if it can go wrong, it will go wrong — because that’s not how the real world works. The real world is constantly throwing different challenges at you, problems that you didn’t foresee, or couldn’t have foreseen because they are inconsistent with the assumptions you made, all of these things.

So it is quite a stretch to put your faith in technology being able to solve this problem, when you don’t understand exactly what the problem that you’re facing is. And you don’t necessarily at this point understand where we may cross the tipping point, the point of no return, when you really have to step up this R&D funding, or the point when you know the problem that the engineers have to solve because it’s staring you in the face: By the time that happens, it may be too late. If you get positive feedback loops — you know, reinforcement where one bad thing leads to another bad thing, leads to another bad thing, which then contributes to the original bad thing — you need so much more energy to push the system back into a state of normality than for this cycle to just keep on pushing it further and further away from what you previously were at.

So that throws up significant barriers to a technological fix. The other issue, just going back to what we were saying earlier, is technology does also breed fragility. We have a set of paradigms about how technologies are developed, how they interface with the economy that we face, which is always pushing for more growth and more efficiency. It has not got a very good track record of investing in resilience, investing in redundancy, investing in fail-safes, and so on. You typically need to have strong, externally enforced incentives for that to happen.

And if you’re busy saying this isn’t really a threat, this isn’t something we need to worry about, there’s a real risk that you’re not going to achieve that. And yes, you may be able to develop new technologies that start to work. But are they actually just storing up more problems for the future? We can’t wait until the story’s ended and then know whether these technologies really did make us safer in the end or more vulnerable.

Haydn Belfield: So I think I would have an overall skepticism about technology from a kind of, “Oh, it’s going to increase our resilience.” My skepticism in this case is just more practical. So it could very well be that we do develop — so there’s these things called negative emissions technologies, which suck CO2 out of the air — we could maybe develop that. Or things that could lower the temperature of the earth: maybe we can find a way to do that without throwing the whole climate and weather into chaos. Maybe tomorrow’s the day that we get the breakthrough with nuclear fusion. I mean, it could be that all of these things happen — it’d be great if they could. But I just wouldn’t put all my bets on it. The idea that we don’t need to prioritize climate change above all else, and make it a real central effort for societies, for companies, for governments, because we can just hope for some techno-fix to come along and save us — I just think it’s too risky, and it’s unwise. Especially because if we’re listening to the scientists, we don’t have that much longer. We’ve only got a few decades left, maybe even one decade, to really make dramatic changes. And we just won’t have invented some silver bullet within a decade’s time. Maybe technology could save us from climate change; I’d love it if it could. But we just can’t be sure about that, so we need to make other changes.

Simon Beard: That’s really interesting, Haydn, because when you list negative emissions technologies, or nuclear fusion, that’s not the sort of technology I’m talking about. I was thinking about technology as something that would basically just be used to make us more robust. Obviously, one of the things that you do if you think that climate change is an existential threat is you say, “Well, we really need to prioritize more investment into these potential technology solutions.” The belief that climate change is an existential threat is not committing you to trying to make climate change worse, or something like that.

You want to make it as small as possible, you want to reduce this impact as much as possible. That’s how you respond to climate change as an existential threat. If you don’t believe climate change is an existential threat, you would invest less in those technologies. Also, I do wanna say — and I mean, I think there’s some legitimate debate about this, but I don’t like the 12 years terminology, I don’t think we know nearly enough to support those kinds of claims. The IPCC came up with this 12 years, but it’s not really clear what they meant by it. And it’s certainly not clear where they got it from. People have been saying, “Oh, we’ve got a year to fix the climate,” or something, for as long as I can remember discussions going on about climate change.

It’s one of those things where that makes a lot of sense politically, but those claims aren’t scientifically based. We don’t know. We need to make sure that that’s not true; We need to falsify these claims, either by really looking at it, and finding out that it genuinely is safer than we thought it was or by doing the technological development and greenhouse gas reduction efforts and other climate mitigation methods to make it safe. That’s just how it works.

Ariel Conn: Do you think that we’re seeing the kind of investment in technology, you know, trying to develop any of these solutions, that we would be seeing if people were sufficiently concerned about climate change as an existential threat?

Simon Beard: So one of the things that worries me is people always judge this by looking at one thing and saying, “Are we doing enough of that thing? Are we reducing our carbon dioxide emissions fast enough? Are people changing their behaviors fast enough? Are we developing technologies fast enough? Are we ready?” Because we know so little about the nature of the risk, we have to respond to this in a portfolio manner; We have to say, “What are all the different actions and the different things that we can take that will make us safer?” And we need to do all of those. And we need to do as much as we can of all of these.

And I think there is a definite negative answer to your question when you look at it like that, because people aren’t doing enough thinking and aren’t doing enough work about how we do all the things we need to do to make us safe from climate change. People tend to get an idea of what they think a safer world would look like, and then complain that we’re not doing enough of that thing, which is very legitimate and we should be doing more of all of these things. But if you look at it as an existential risk, and you look at it from an existential safety angle, there’s just so few people who are saying, “Let’s do everything we can to protect ourselves from this risk.”

Way too many people are saying, “I’ve had a great idea, let’s do this.” That doesn’t seem to me like safety-based thinking; That seems to me like putting all your eggs in one basket and basically generating the solution to climate change that’s most likely to be fragile, that’s most likely to miss something important and not solve the real problem and store up trouble for a future date and so on. We need to do more — but that’s not just more quantitatively, it’s also more qualitatively.

Haydn Belfield: I think just clearly we’re not doing enough. We’re not cutting emissions enough, we’re not moving to renewables fast enough, we’re not even beginning to explore possible solar geoengineering responses, we don’t have anything that really works to suck carbon dioxide or other greenhouse gases out of the air. Definitely, we’re not yet taking it seriously enough as something that could be a major contributor to the end of our civilization or the end of our entire species.

Ariel Conn: I think this connects nicely to another section of some of the work you’ve been doing. And that is looking at — I think there were seven critical systems that are listed as sort of necessary for humanity and civilization.

Simon Beard: Seven levels of critical systems.

Ariel Conn: Okay.

Simon Beard: We rely on all sorts of systems for our continued functioning and survival. And a sufficiently significant failure in any of these systems could be fatal to all of our species. We can kind of classify these systems at various levels. So at the bottom, there are the physical systems — that’s basically the laws of physics. How atoms operate, how subatomic particles operate, how they interact with each other: those are pretty safe. There are some advanced physics experiments that some people have postulated may be a threat to those systems. But they all seem pretty safe.

We then kind of move up: We’ve got basic chemical systems and biochemical systems, how we generate enzymes and all the molecules that we use — proteins, lipids, and so on. Then we move up to the level of the cell; Then we move up to the level of the anatomical systems — the digestive system, the respiratory system — we need all these things. Then you look at the organism as a whole and how it operates. Then you look at how organisms interact with each other: the biosphere system, the biological system, ecological system.

And then as human beings, we’ve added this kind of seventh, even more emergent, system, which is not just how humans interact with each other, but the kind of systems that we have made to govern our interaction, and to determine how we work together with each other: political institutions, technology, the way we distribute resources around the planet, and so on. So there are a really quite amazing number of potential vulnerabilities that our species has. 

It’s many more than seven, but categorizing them on these seven levels is helpful so as not to miss anything, because I think most people’s idea of an existential threat is something like a really big gun. We understand how guns kill people: imagine you just had a really huge gun, and just blew a hole in everyone’s head. But that misses things that are both a lot more basic than the ways people normally die and a lot more sophisticated and emergent. All of these are potentially quite threatening.

Ariel Conn: So can you explain in a little bit more detail how climate change affects these different levels?

Haydn Belfield: So I guess the way I’ll do this is I’ll first talk a bit about natural feedback stuff, and then talk about the social feedback loops. Everyone listening to this will be familiar with feedback loops, like methane getting released from permafrost in the Arctic, or methane coming out of clathrates in the ocean, or there’s other kinds of feedback loops. So there’s one that was discovered only recently; a very recent paper was about cloud formation. So if it gets to four degrees, these models show that it becomes much harder for clouds to form. And so you don’t get much sort of radiation bouncing off those clouds and you get very rapid additional heating, up to 12 degrees, is what it said.

So the first way that climate change could affect these kinds of systems that we’re talking about is it just makes it anatomically way too hot: You get all these feedbacks, and it just becomes far too hot for anyone to survive sort of anywhere on the surface. It might get much too hot in certain areas of the globe for really civilization to be able to continue there, much like it’s very hard in the center of the Sahara to have large cities or anything like that. But it seems quite unlikely that climate change would ever get that bad. The kind of stuff that we’re much more concerned about is the more general effects that climate change, climate chaos, climate breakdown might have on a bunch of other systems.

So in this paper, we’ve broken it down into three. We’ve looked at the effects of climate change on the food/water/energy system, the ecological system, and on our political system and conflict. And climate change is likely to have very negative effects on all three of those systems. It’s likely to negatively affect crop yields; It’s likely to increase freak weather events, and there’s some possibility that you might have these sort of very freak weather events — droughts, or hurricanes is also one — in areas where we produce lots of our calories, so bread baskets around the world. So climate change is going to have very negative effects most likely on our food and energy and water systems.

Then separately, there’s ecological systems. People will be very familiar with climate change driving lots of habitat loss, and therefore the loss of species; People will be very familiar with coral reefs dying and bleaching and going away. This could also have very negative effects on us, because we rely on these ecological systems to provide what we call ecological services. Ecological services are things like pollination, so if all the bees died what would we do? Ecological services also include the fish that we catch and eat, or fresh, clean drinking water. So climate change is likely to have very negative effects on that whole set of systems. And then it’s likely to have negative effects on our political system.

If there are large areas of the world that are nigh on uninhabitable, because you can’t grow food or you can’t go out at midday, or there’s no clean water available, then you’re likely to see maybe state breakdown, maybe huge numbers of people leaving — much more than we’ve ever encountered before, sort of tens or hundreds of millions of people dislocated and moving around the world. That’s likely to lead to conflict and war. So those are some ways in which climate change could have negative effects on three sets of systems that we crucially rely on as a civilization.

Ariel Conn: So in your work, you also talk about the global systems death spiral. Was that part of this?

Haydn Belfield: Yeah, that’s right. The global systems death spiral is a catchy term to describe the interaction between all these different systems. So not only would climate change have negative effects on our ecosystems, on our food and water and energy systems, the political system and conflict, but these different effects are likely to interact and make each other worse. So imagine our ecosystems are harmed by climate change: Well, that probably has an effect on food/water systems, because we rely on our ecosystems for these ecosystem services. 

So then, the bad effects on our food and water systems: Well, that probably leads to conflict. So some colleagues of ours at the Anglia Ruskin University have something called a global chaos map, which is a great name for a research project, where they try and link incidences of shocks to the food system and conflict — riots or civil wars. And they’ve identified lots and lots of examples of this. Most famously, the Arab Spring, which has now become lots of conflicts, has been linked to a big spike in food prices several years ago. So there’s that link there between food and water, insecurity and conflict. 

And then conflict leads back into ecosystem damage. Because if you have conflict, you’ve got weak governance, you’ve got weak governments trying to protect their ecosystems, and weak government has been identified as the strongest single predictor of ecosystem loss, biodiversity loss. They all interact with one another, and make one another worse. And you could also think about things going back the other way. So for example, if you’re in a war zone, if you’ve got conflict, you’ve got failing states — that has knock-on effects on the food systems, and the water systems that we rely on: We often get famines during wartime.

And then if they don’t have enough food to eat, they don’t have water to drink, maybe that has negative effects on our ecosystems, too, because people are desperate to eat anything. So what we’re trying to point out here is that the systems aren’t independent from one another — they’re not like three different knobs that are all getting turned up independently by climate change — but that they interact with one another in a way that could cause lots of chaos and lots of negative outcomes for world society.

Simon Beard: We did this kind of pilot study looking at the ecological system and the food system and the global political system and looking at the connections of those three, really just in one direction: looking at the impact of food insecurity on conflict, and conflict and political instability on the biosphere, and loss of biosphere on integrity of the food system. But that was largely determined by the fact that these were three connections that we either had looked at directly, or had close colleagues who had looked at, so we had quite good access to the resources.

As Haydn said, everything kind of also works in the other direction, most likely. And also, there are many, many more global systems that interact in different ways. Another trio that we’re very interested in looking at in the future is the connection between the biosphere and the political system, but this time, also, with some of the health systems, the emergence of new diseases, the ability to respond to public health emergencies, and especially when these things are looked at in kind of a one health perspective, where plant health and animal health and human health are all actually very closely interacting with one another.

And then you kind of see this pattern where, yes, we could survive six degrees plus, and we could survive famine, and we could survive x, y, and z. But once these things start interacting, it just drives you to a situation where really everything that we take for granted at the moment up to and including the survival of the species — they’re all on the table, they’re all up for grabs once you start to get this destructive cycle between changes in the environment and changes in how human society interacts with the environment. It’s a very dangerous, potentially very self-perpetuating feedback loop, and that’s why we refer to it as a global systems death spiral: because we really can’t predict at this point in time where it will end. But it looks very, very bleak, and very, very hard to see how once you enter into this situation, you could then kind of dial it back and return to a safe operating environment for humanity and the systems that we rely on.

There’s definitely a new stable state at the end of this spiral. So when you get feedback loops between systems, it’s not that they will just carry on amplifying change forever; They’re moving towards another kind of stable state, but you don’t know how long it’s going to take to get there, you don’t know what that steady state will be. So take the simulation with the death of clouds, this idea of a purely physical feedback between rising global temperatures, changes in the water cycle, and cloud cover: you end up with a world that’s much, much hotter and much more arid than the one we have at the moment, which could be a very dangerous state. For sort of perpetual human survival, we would need a completely different way of feeding ourselves and really interacting with the environment.
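The dynamic Simon is describing, a feedback that amplifies change until the system settles into a new stable state, can be sketched as a toy model. This is purely illustrative: the gain, cap, and step size are invented numbers, not drawn from any climate study.

```python
# Toy feedback model (illustrative only; all parameters are invented).
# A perturbation t feeds back on itself with some gain. Below a gain
# of 1 the system damps back toward its old equilibrium; above 1 a
# positive feedback carries it away, here bounded by a saturation
# term that plays the role of the spiral's new stable state.

def simulate(gain, steps=200, shock=1.0, cap=10.0):
    t = shock  # initial perturbation away from the old equilibrium
    for _ in range(steps):
        # amplifying if gain > 1, damping if gain < 1;
        # the (1 - t/cap) factor saturates growth at the new state
        t = t + 0.1 * (gain - 1.0) * t * (1.0 - t / cap)
    return t

damped = simulate(gain=0.5)   # perturbation decays back toward zero
runaway = simulate(gain=2.0)  # settles near the new stable state (cap)
```

With the gain below one the shock dies out; with it above one, the same equation drives the system all the way to a new, much hotter equilibrium, which is the sense in which the spiral has an end without that end being safe.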

You don’t know what sort of death traps or kill mechanisms lie along that path of change; You don’t know if, for instance, somewhere along it, it’s going to trigger a nuclear war, or it’s going to trigger attempts to geoengineer the climate in a sort of bid to gain safety, but actually these turn out to have catastrophic consequences, or any of the other unknown unknowns that we want to turn into known unknowns, and then into things that we can actually begin to understand and study. So in terms of not knowing where the bottom is, that’s potentially limitless as far as humanity is concerned. We know that it will have an end. Worst case scenario, that end is a very arid climate with a much less complex, much simpler atmosphere, which would basically need to be terraformed back into a livable environment in the way that we’re currently thinking maybe we could do that for Mars. But to get a global effort to do that, on an already sort of disintegrating Earth, I think would be an extremely tall order. There’s a huge range of different threats and different potential opportunities for an existential catastrophe to unravel within this kind of death spiral. And we think this really is a very credible threat.

Ariel Conn: How do we deal with all this uncertainty?

Haydn Belfield: More research needed, is the classic academic response to any time you ask that question. More research.

Simon Beard: That’s definitely the case, but there are also big questions about the kind of research. So mostly scientists want to study things that they already kind of understand: where you already have well established techniques, you have journals that people can publish their research in, you have an extensive peer review community, you can say, yes, you have done this study by the book, you get to publish it. That’s what all the incentives are aligned towards. 

And that sort of research is very important and very valuable, and I don’t want to say that we need less of that kind of research. But that kind of research is not going to deal with the sort of radical uncertainty that we’re talking about here. So we do need more creative science, we need science that is willing to engage in speculation, but to do so in an open and rigorous way. One of the things is you need scientists who are willing to stand up and say, “Look, here’s a hypothesis. I think it’s probably wrong, and I don’t yet know how to test it. But I want people to come out and help me find a way to test this hypothesis and falsify it.”

There aren’t any scientific incentive structures at the moment that encourage that. That is not a way to get tenure, and it’s not a way to get a professorship or chair, or to get your paper published. That is a really stupid strategy to take if you want to be a successful scientist. So what we need to do is we need to create a safe sandbox for people who are concerned about this — and we know from our engagement that there are a lot of people who would really like to study this and really like to understand it better — for them to do that. So one of the big things that we’re really looking at here in CSER is how do we make the tools to make the tools that will then allow us to study this. How do we provide the methodological insights or the new perspectives that are needed to move towards establishing a science of social collapse or environmental collapse that we can actually use to then answer some of these questions.

So there are several things that we’re working on at the moment. One important thing, which I think is a very crucial step for dealing with the sort of radical uncertainty we face, is this classification. We’ve already talked about classifying different levels of critical system. That’s one part of a larger classification scheme that CSER has been developing to just look at all the different components of risk and say, “Well, there’s this and this and this.” Once you start to sort of engage in that exercise, you look at what are all the systems that might be vulnerable? What are all the possible vulnerabilities that exist within those systems? What are all the ways in which humanity is exposed to those vulnerabilities if things go wrong? And you map that out; You haven’t got to the truth, but you’ve moved a lot of things from the unknown category into the, “Okay, I now know all the ways that things could go wrong, and I know that I haven’t a clue how any of these things could happen.” Then you need to say, “Well, what are the techniques that seem appropriate?”

So we think the planetary boundaries framework, although it doesn’t answer the question that we’re interested in, offers a really nice approach to looking at this question about where tipping points arise, where systems move out of their ordinary operation. We want to apply that in new environments, we want to find new ways of using that. And there are other tools as well that we can take, for instance, from disaster studies and risk management studies, looking at things like fault tree analysis where you say, “What are all the things that might go wrong with this? And what are the levers that we currently have or the interventions that we could make to stop this from happening?”
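The fault tree analysis Simon mentions has a simple computational core: a top-level failure is expressed as AND/OR combinations of basic events. A minimal sketch follows; the events and probabilities are hypothetical, invented for illustration, and the basic events are assumed independent, which is the textbook simplification.

```python
# Minimal fault-tree sketch. Events and probabilities are hypothetical,
# and basic events are assumed independent (the textbook simplification).

def p_or(*ps):
    # probability that at least one of several independent events occurs
    none_occur = 1.0
    for p in ps:
        none_occur *= (1.0 - p)
    return 1.0 - none_occur

def p_and(*ps):
    # probability that all independent events occur together
    all_occur = 1.0
    for p in ps:
        all_occur *= p
    return all_occur

# Top event: "crop failure" if drought OR (heatwave AND irrigation failure)
drought, heatwave, irrigation_fail = 0.05, 0.20, 0.10
crop_failure = p_or(drought, p_and(heatwave, irrigation_fail))
print(round(crop_failure, 3))  # 0.069
```

Real fault trees also handle common-cause failures and dependent events, which is precisely where the independence assumption breaks down for the interacting global systems discussed here; the value of the formalism is in enumerating failure paths, not in trusting the numbers.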

We also think that there’s a lot more room for people to share their knowledge and their thoughts and their fears and expectations through what we call structured expert elicitations, where you get people who have very different knowledge together, and you find a way that they can all talk to each other and they can all learn from each other. And often you get answers out of these sorts of exercises that are very different to what any individual might put in at the beginning, but they represent a much more sort of complete, much more creative structure. And you can get those published because it’s a recognized scientific method, so a structured expert elicitation on climate change got published in Nature last month. Which is great, because it’s a really under-researched topic. But I think one of the things that really helped there was that they were using an established method.

What I really hope that CSER’s work going forward is going to achieve is just to make this space that we can actually work with many more of the people who we need to work with to answer these questions and understand the nature of this risk and pull them all together and make the social structures so that the kind of research that we really badly need at this point can actually start to emerge.

Ariel Conn: A lot of what you’re talking about doesn’t sound like something that we can do in the short term, that it will take at least a decade, if not more, to get some of this research accomplished. So in the interest of speed — which is one of the uncertainties we have, we don’t seem to have a good grasp of how much time we have before the climate could get really bad — what do we do in the short term? What do we do for the next decade? What do non-academics do?

Haydn Belfield: The thing is, it’s kind of two separate questions, right? We certainly know all we need to know to take really drastic, serious action on climate change. What we’re asking is a slightly more specific question, which is how can climate change, climate breakdown, climate chaos contribute to existential risk. So we already know with very high certainty that climate change is going to be terrible for billions of people in the world, that it’s going to make people’s lives harder, it’s going to make them getting out of extreme poverty much harder.

And we also know that the people who have contributed the least to the problem are going to be the ones that are screwed the worst by climate change. And it’s just so unfair, and so wrong, that I think we know enough now to take serious action on climate change. And not only is it wrong, it’s not in the interest of rich countries to live in this world of chaos, of worse weather events, and so on. So I think we already know enough, we have enough certainty on those questions to act very seriously, to reduce our emissions very quickly, to invest in as much clean technology as we can, and to collaborate collectively around the world to make those changes. And what we’re saying though, is about the different, more unusual question of how it contributes to existential risk more specifically. So I think I would just make that distinction pretty clear. 

Simon Beard: So there’s a direct answer to your question and an indirect answer to your question. The direct answer to your question is all the things you know you should be doing. Fly less, preferably not at all; eat less meat, preferably not at all, and preferably not dairy, either. Every time there’s an election, vote, but also ask all the candidates — all the candidates, don’t just go for the ones who you think will give you the answer you like — “I’m thinking of voting for you. What are you going to do about climate change?”

There are a lot of people all over the political spectrum who care about climate change. Yeah, there are political differences in who cares more, and so on. But every political candidate has votes that they could pick up if they did more on climate change, irrespective of their political persuasion. And even if you have a political conviction, so that you’re always going to vote the same way, you can nudge candidates to get those votes and to do more on climate change by just asking that simple question: “I’m thinking of voting for you. What are you going to do about climate change?” That’s a really low-cost ask, and it’s effective at election time; If they get 100 letters, all saying that, and they’re all personal letters, and not just some mass campaign, it really does change the way that people think about the problems that they face. But I also want to challenge you a bit on this, “This is going to take decades,” because it depends — depends how we approach it.

Ariel Conn: So one example of research that can happen quickly and action that can occur quickly is this example that you give early on in the work that you’re doing, comparing the need to study climate change as a contributor to existential risk to the work that was done in the 80s, looking at how nuclear weapons can create a nuclear winter, and how that connects to existential risk. And so I was hoping you could also talk a little bit about that comparison.

 

Simon Beard: Yeah, so I think this is really important, and I know a lot of the things that we’re talking about here, about critical global systems and how they interact with each other and so on — it’s long-winded, and it’s technical, and it can sound a bit boring. But this was, for me, a really big inspiration for why we’re trying to look at it in this way. So when people started to explode nuclear weapons in the Manhattan Project in the early 1940s, right from the beginning, they were concerned about the kind of threats, or the kind of risks, that these posed, and first thought, well, maybe it would set light to the upper atmosphere. And there were big worries about the radiation. And then, for a time, there were worries just about the explosive capacity.

 

This was enough to raise a kind of general sense of alarm and threat. But none of these were really credible. They didn’t last; they didn’t withstand scientific scrutiny for very long. And then Carl Sagan and some colleagues did this research in the early 1980s on modeling the climate impacts of nuclear weapons, which is not a really intuitive thing to do, right? When you’ve got the most explosive weapon ever envisaged, and it has all this nuclear fallout and so on, and you think, what’s this going to do to the global climate? That doesn’t seem like that’s going to be where the problems lie.

 

But they discover, when they look at that, that no, it’s a big thing. If you have nuclear strikes on cities, it sends a lot of ash into the upper atmosphere. And it’s very similar to what happens if you have a very large asteroid, or a very large set of volcanoes going off; the kind of changes that you see in the upper atmosphere are very similar, and you get this dramatic global cooling. And this then threatens — as a lot of mass extinctions have — threatens the underlying food source. And that’s how humans starve. And this comes out in 1983; this is kind of 40 years after people started talking about nuclear risk. And it changes the game, because all of a sudden, in looking at this rather unusual topic, they find a really credible way in which nuclear winter leads to everyone dying.

 

The research is still much discussed: what kind of nuclear warhead, what kind of nuclear explosions, and how many, and would they need to hit cities, or would they need to hit areas with particularly large sulphur deposits — all of these things are still being discussed. But all of a sudden, the top leaders, the geopolitical leaders, start to take this threat seriously. And we know Reagan was very interested and explored this a lot, the Russians even more so. And it really does seem to have kick-started a lot of nuclear disarmament debate and discussion and real action.

 

And what we’re trying to do in reframing the way that people research climate change as an existential threat is to look for something like that: What’s a credible way in which this really does lead to an existential catastrophe for humanity? Because that hasn’t been done yet. We don’t have that. We feel like we have it because everyone knows the threat and the risk. But really, we’re just at this area of kind of vague speculation. There’s a lot of room for people to step up with this kind of research. And the historical evidence suggests that this can make a real difference.

 

Haydn Belfield: We tend to think of existential risks as one-off threats — some big explosion, or some big thing, like an individual asteroid that hits an individual species of dinosaurs and then kills it, right — we tend to think of existential risks as one singular event. But really, that’s not how most mass extinctions happen. That’s not how civilizational collapses have tended to happen over history. The way that all of these things have actually happened, when you go back to look at archeological evidence or you go back to look at the fossil evidence, is that there’s a whole range of different things — different hazards and different internal capabilities of these systems, whether they’re species or societies — and they get overcome by a range of different things. 

 

So, often in archeological history — in the Pueblo Southwest, for example — there’ll be one set of climatic conditions, and one external shock that faces the community, and they react fine to it. But then, in a few different years, the same community is faced by some similar threats, but reacts completely differently and collapses completely. It’s not that there’s these one singular, overwhelming events from outside, it’s that you have to look at all the different systems that this one particular society or whatever relies on. And you have to look at when all of those things overcome the overall resilience of a system. 

 

Or looking at species, like what happens when sometimes a species can recover from an external shock, and sometimes there’s just too many things, and the conditions aren’t right, and they get overcome, and they go extinct. That’s where looking at existential risk, and looking at the study of how we might collapse or how we might go extinct — that’s where the field needs to go: It needs to go into looking at what are all the different hazards we face, how do they interact with the vulnerabilities that we have, and the internal dynamics of our systems that we rely on, and the different resilience of those systems, and how are we exposed to those hazards in different ways, and having a much more sophisticated, complicated, messy look at how they all interact. I think that’s the way that existential risk research needs to go.

 

Simon Beard: I agree. I think that fits in with various things we said earlier.

 

Ariel Conn: So then my final question for both of you is — I mean, you’re not even just looking at climate change as an existential threat; I know you look at lots of things and how they contribute to existential threats — but looking at climate change, what gives you hope?

 

Simon Beard: At a psychological level, hope and fear aren’t actually big day-to-day parts of my life. Because working in existential risk, you have this amazing privilege that you’re doing something: you’re working to make that difference between human extinction and civilization collapse, and human survival and flourishing. It’s a waste to have that opportunity and to get too emotional about it. It’s a waste firstly because it is the most fascinating problem. It is intellectually stimulating; it is diverse; it allows you to engage with and talk to the best people, both in terms of intelligence and creativity, but also in terms of drive and passion, and activism and ability to get things done.

 

But also because it’s a necessary task: We have to get on with it, we have to do this. So I don’t know if I have hope. But that doesn’t mean that I’m scared or anxious, I just have a strong sense of what I have to do. I have to do what I can to contribute, to make a difference, to maximize my impact. That’s a series of problems and we have to solve those problems. If there’s one overriding emotion that I have in relation to my work, and what I do, and what gets me out of bed, it’s curiosity — which is, I think, at the end of the day, one of the most motivating emotions that exists. People often say to me, “What’s the thing I should be most worried about: nuclear war, or artificial intelligence or climate change? Like, tell me, what should I be most worried about?” You shouldn’t worry about any of those things. Because worry is a very disabling emotion.

 

People who worry stay in bed. I haven’t got time to do that. I had heart surgery about 18 months ago, a big heart bypass operation. And they warned me before that, after this surgery, you’re going to feel emotional; it happens to everyone. It’s basically a near-death experience. You have to be cooled down to a state from which you can’t recover on your own; they have to heat you up. Your body kind of remembers these things. And I do remember a couple of nights after getting home from that. And I just burst into floods of tears thinking about this kind of existential collapse, and, you know, what it would mean for my kids and how we’d survive it, and it was completely overwhelming. As overwhelming as you’d expect it to be for someone who has to think about that.

 

But this isn’t how we engage with it. These aren’t science fiction stories that we’re telling ourselves to feel scared or feel a rush. This is a real problem. And we’re here to solve that problem. I’ve been very moved the last month or so by all the stuff about the Apollo landing missions. And it’s reminded me that a big inspiration of my life, one of these bizarre inspirations of my life, was getting Microsoft Encarta 95, which was kind of my first all-purpose knowledge source. And when you loaded it up — because it was the first one on CD-ROM — they had these sound clips, and they included that bit of JFK’s speech about how we choose to go to the moon, not because it’s easy, but because it’s hard. And that has been a really inspiring quote for me. And I think I’ve often chosen to do things because they’re hard.

 

And it’s been kind of upsetting — this is the first time this kind of moon landing anniversary’s come up — and I realized, no, he was being completely literal. Like, the reason that they chose to go to the moon was that it was so hard that the Russians couldn’t do it. So they were confident that they were going to win the race. And that was all that mattered. But for me, I think in this case, we’re choosing to do this research and to do this work, not because it’s hard, but because it’s easy. Because understanding climate change, being curious about it, working out new ways to adapt, and to mitigate, and to manage the risk, is so much easier than living with the negative consequences of it. This is the best deal on the table at the moment. This is the way that we maximize the benefit while minimizing the cost.

 

This is not the great big structural change that completely messes up our entire society, and reduces us to some kind of Greek primitivism. That’s what happens if climate change kicks in. That’s when we start to see people reduced to subsistence-level agriculture, whatever it is. Understanding the risk and responding to it: this is the way that we keep all the good things that our civilization has given us. This is the way that we keep international travel, that we keep our technology, that we keep our food and keep getting nice things from all around the world.

 

And yes, it does require some sacrifices. But these are really small changes in the scale of things. And once we start to make them, we will find ways of working around them. We are very creative, we are very adaptable, we can adapt to the changes that we need to make to mitigate climate change. And we’ll be good at that. And I just wish that anyone listening to this podcast had that mindset, didn’t think about fear or about blame, or shame or anger — that they thought about curiosity, and they thought about what can I do, and how good this is going to be, how bright and open our future is, and how much we can achieve as a species.

 

If we can just get over these hurdles, these mistakes that we made years ago, for various reasons — often a small number of people in the land, you know, that’s what determined that we have petrol cars rather than battery cars — and we can undo them; It’s in our power, it’s in our gift. We are the species that can determine our own fate; We get to choose. And that’s why we’re doing this research. And I think if lots of people — especially if lots of people who are well educated, maybe scientists, maybe people who are thinking about a career in science — view this problem in that light, as what can I do? What’s the difference I can make? We’re powerful. It’s a much less difficult problem to solve and a much better ultimate payoff that we’ll get than if we try and solve this any other way, especially if we don’t do anything.

 

Ariel Conn: That was wonderful.

 

Simon Beard: Yeah, I’m ready to storm the barricade.

 

Ariel Conn: All right, Haydn, try to top that.

 

Haydn Belfield: No way. That’s great. I think Simon said all that needs to be said on that.

 

Ariel Conn: All right. Well, thank you both for joining us today.

 

Simon Beard: Thank you. It’s been a pleasure.

 

Haydn Belfield: Yeah, absolute pleasure.

 

 

 

 

FLI Podcast: Is Nuclear Weapons Testing Back on the Horizon? With Jeffrey Lewis and Alex Bell

Nuclear weapons testing is mostly a thing of the past: The last nuclear weapon test explosion on US soil was conducted over 25 years ago. But how much longer can nuclear weapons testing remain a taboo that almost no country will violate? 

In an official statement from the end of May, the Director of the U.S. Defense Intelligence Agency (DIA) expressed the belief that both Russia and China were preparing for explosive tests of low-yield nuclear weapons, if not already testing. Such accusations could potentially be used by the U.S. to justify a breach of the Comprehensive Nuclear-Test-Ban Treaty (CTBT).

The CTBT prohibits all signatories from testing nuclear weapons of any size (North Korea, India, and Pakistan are not signatories). But the CTBT never actually entered into force, in large part because the U.S. has still not ratified it, though Russia did.

The existence of the treaty, even without ratification, has been sufficient to establish the norms and taboos necessary to ensure an international moratorium on nuclear weapons tests for a couple decades. But will that last? Or will the U.S., Russia, or China start testing nuclear weapons again? 

This month, Ariel was joined by Jeffrey Lewis, Director of the East Asia Nonproliferation Program at the Center for Nonproliferation Studies and founder of armscontrolwonk.com, and Alex Bell, Senior Policy Director at the Center for Arms Control and Non-Proliferation. Lewis and Bell discuss the DIA’s allegations, the history of the CTBT, why it’s in the U.S. interest to ratify the treaty, and more.

Topics discussed in this episode: 

  • The validity of the U.S. allegations — is Russia really testing weapons?
  • The International Monitoring System — how effective is it if the treaty isn’t in effect?
  • The modernization of U.S./Russian/Chinese nuclear arsenals and what that means
  • Why there’s a push for nuclear testing
  • Why opposing nuclear testing can help ensure the U.S. maintains nuclear superiority

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel Conn: Welcome to another episode of the FLI Podcast. I’m your host Ariel Conn, and the big question I want to delve into this month is: will the U.S. or Russia or China start testing nuclear weapons again? Now, at the end of May, the Director of the U.S. Defense Intelligence Agency, the DIA, gave a statement about Russian and Chinese nuclear modernization trends. I want to start by reading a couple short sections of his speech.

About Russia, he said, “The United States believes that Russia probably is not adhering to its nuclear testing moratorium in a manner consistent with the zero-yield standard. Our understanding of nuclear weapon development leads us to believe Russia’s testing activities would help it to improve its nuclear weapons capabilities.”

And then later in the statement that he gave, he said, “U.S. government information indicates that China is possibly preparing to operate its test site year-round, a development that speaks directly to China’s growing goals for its nuclear forces. Further, China continues to use explosive containment chambers at its nuclear test site and Chinese leaders previously joined Russia in watering down language in a P5 statement that would have affirmed a uniform understanding of zero-yield testing. The combination of these facts and China’s lack of transparency on their nuclear testing activities raises questions as to whether China could achieve such progress without activities inconsistent with the Comprehensive Nuclear-Test-Ban Treaty.”

Now, we’ve already seen this year that the Intermediate-Range Nuclear Forces Treaty, the INF, has started to falter. The U.S. seems to be trying to pull itself out of the treaty and now we have reason possibly to be a little worried about the Comprehensive Test-Ban Treaty. So to discuss what the future may hold for this test ban treaty, I am delighted to be joined today by Jeffrey Lewis and Alex Bell.

Jeffrey is the Director of the East Asia Nonproliferation Program at the Center for Nonproliferation Studies at the Middlebury Institute. Before coming to CNS, he was the Director of the Nuclear Strategy and Nonproliferation Initiative at the New America Foundation, and prior to that, he worked with the Managing the Atom Project at the Belfer Center for Science and International Affairs, the Association of Professional Schools of International Affairs, the Center for Strategic and International Studies, and he was once a Desk Officer in the Office of the Under Secretary of Defense for Policy. But he’s probably a little bit more famous as the founder of armscontrolwonk.com, which is the leading blog and podcast on disarmament, arms control, and nonproliferation.

Alex Bell is the Senior Policy Director at the Center for Arms Control and Non-Proliferation. Previously, she served as a Senior Advisor in the Office of the Under Secretary of State for Arms Control and International Security. Before joining the Department of State in 2010, she worked on nuclear policy issues at the Ploughshares Fund and the Center for American Progress. Alex is on the board of the British American Security Information Council and she was also a Peace Corps volunteer. And she is fairly certain that she is Tuxedo, North Carolina’s only nuclear policy expert.

So, Alex and Jeffrey, thank you so much for joining me today.

Jeffrey Lewis: It’s great to be here.

Ariel Conn: Let’s dive right into questions. I was hoping one of you or maybe both of you could just sort of give a really quick overview or a super brief history of the Comprehensive Nuclear-Test-Ban Treaty –– especially who has signed and ratified, and who hasn’t signed and/or ratified with regard to the U.S., Russia, and China.

Jeffrey Lewis: So, there were a number of treaties during the Cold War that restricted nuclear explosions, so you had to do them underground. But in the 1990s, the Clinton administration helped negotiate a global ban on all nuclear explosions. So that’s what the Comprehensive Nuclear-Test-Ban Treaty is. The comprehensive part is, you can’t do any explosions of any yield.

And a curious feature of this agreement is that for the treaty to come into force, certain countries must sign and ratify the treaty. One of those countries was Russia, which has both signed and ratified it. Another country was the United States. We have signed it, but the Senate did not ratify it in 1999, and I think we’re still waiting. China has signed it and basically indicated that they’ll ratify it only when the United States does. India has not signed and not ratified, and North Korea and Iran –– not signed and not ratified.

So it’s been 23 years. There’s a Comprehensive Test-Ban Treaty Organization, which is responsible for getting things ready to go when the treaty is ready; I’m actually here in Vienna at a conference that they’re putting on. But 23 years later, the treaty is still not in force even though we haven’t had any nuclear explosions in the United States or Russia since the end of the Cold War.

Ariel Conn: Yeah. So my understanding is that even though we haven’t actually ratified this and it’s not in force, most countries, with maybe one or two exceptions, do actually abide by it. Is that true?

Alex Bell: Absolutely. There are 184 member states to the treaty, 168 total ratifications, and the only country to conduct explosive tests in the 21st century is North Korea. So while it is not yet in force, the moratorium against explosive testing is incredibly strong.

Ariel Conn: And do you remain hopeful that that’s going to stay the case, or do comments from people like Lieutenant General Ashley have you concerned?

Alex Bell: It’s a little concerning that the nature of these accusations that came from Lieutenant General Ashley didn’t seem to follow the pattern of how the U.S. government historically has talked about compliance issues that it has seen with various treaties and obligations. We have yet to hear a formal statement from the Department of State, which actually has the responsibility to manage compliance issues, nor have we heard from the main part of the Intelligence Community, the Office of the Director of National Intelligence. It’s a bit strange, and it has had people thinking: what was the purpose of this accusation, if not to sort of move us away from the test ban?

Jeffrey Lewis: I would add that during the debate inside the Trump administration, when they were writing what was called the Nuclear Posture Review, there was a push by some people for the United States to start conducting nuclear explosions again, something that it had not done since the early 1990s. So on the one hand, it’s easy to see this as a kind of straightforward intelligence matter: Are the Russians doing it or are they not?

But on the other hand, there has always been a group of people in the United States who are upset about the test moratorium, and don’t want to see the test ban ratified, and would like the United States to resume nuclear testing. And those people have, since the 1990s, always pointed at the Russians, claiming that they must be doing secret tests and so we should start our own.

And the kind of beautiful irony of this is that when you read articles from Russians who want to start testing –– because, you know, their labs are like ours, they want to do nuclear explosions –– they say, “The Americans are surely getting ready to cheat. So we should go ahead and get ready to go.” So you have these people pointing fingers at one another, but I think the reality is that there are too many people in the United States and Russia who’d be happy to go back to a world in which there was a lot of nuclear testing.

Ariel Conn: And so do we have reason to believe that the Russians might be testing low-yield nuclear weapons or does that still seem to be entirely speculative?

Alex Bell: I’ll let Jeffrey go into some of the historical concerns people have had about the Russian program, but I think it’s important to note that the Russians immediately denied these accusations with the Foreign Minister, Lavrov, actually describing them as delusional and the Deputy Foreign Minister, Sergei Ryabkov, affirmed that they’re in full and absolute compliance with the treaty and the unilateral moratorium on nuclear testing that is also in place until the treaty enters into force. He also penned an op-ed a number of years ago affirming that the Russians believed that any yield on any tests would violate the agreement.

Jeffrey Lewis: Yeah, you know, really from the day the test ban was signed, there have been a group of people in the United States who have argued that the U.S. and Russia have different definitions of zero –– which I don’t find very credible, but it’s a thing people say –– and that the Russians are using this to conduct very small nuclear explosions. This literally was a debate that tore the U.S. Intelligence Community apart during the Clinton administration and these fears led to a really embarrassing moment.

There was a seismic event, some ground motion, some shaking near the Russian nuclear test site in 1997, and the Intelligence Community decided, “Aha, this is it. This is a nuclear test. We’ve caught the Russians,” and Madeleine Albright démarched Moscow for conducting a clandestine nuclear test in violation of the CTBT, which it had just signed, and it turned out it was an earthquake out in the ocean.

So there have been a group of people who have been making this claim for more than 20 years. I have never seen any evidence that would persuade me that this is anything other than something they say because they just don’t trust the Russians. I suppose it is possible –– even a stopped watch is right twice a day. But I think before we take any actions, it would behoove us to figure out if there are any facts behind this. Because when you’ve heard the same story for 20 years with no evidence, it’s like the boy who cried wolf. It’s kind of hard to believe.

Alex Bell: And that gets back to the sort of strange way that this accusation was framed: not by the Department of State; it’s not clear that Congress has been briefed about it; it’s not clear our allies were briefed about it before Lieutenant General Ashley made these comments. Everything’s been done in a rather unorthodox way, and for something as serious as a potential low-yield nuclear test, this really needs to be done according to form.

Jeffrey Lewis: It’s not typical if you’re going to make an accusation that the country is cheating on an arms control treaty to drive a clown car up and then have 15 clowns come out and honk some horns. It makes it harder to accept whatever underlying evidence there may be if you choose to do it in this kind of ridiculous fashion.

Alex Bell: And that would be for any administration, but particularly, an administration that has made a habit of getting out of agreements sort of habitually now.

Jeffrey Lewis: What I loved about the statement that the Defense Intelligence Agency released –– so after the DIA director made this statement, and it’s really worth watching because he reads the statement, which is super inflammatory and there was a reporter in the audience who had been given his remarks in advance. So someone clearly leaked the testimony to make sure there was a reporter there and the reporter asks a question, and then Ashley kind of freaks out and walks back what he said.

So DIA then releases a statement where they double down and say, “No, no, no, he really meant it,” but it starts with the craziest sentence I’ve ever seen, which is “The United States government, including the Intelligence Community, assesses,” which, if you know anything about the way the U.S. government works, is insane, because only the Intelligence Community is supposed to assess. This implies that John Bolton had an assessment, and Mike Pompeo had an assessment, and just the comical manner in which it was handled makes it very hard to take seriously, or to see it as anything other than just a nakedly partisan assault on the test moratorium and the test ban.

Ariel Conn: So I want to follow up about what the implications are for the test ban, but I want to go back real quick just to some of the technical side of identifying a low-yield explosion. I actually have a background in seismology, so I know that it’s not that big of a challenge for people who study seismic waves to recognize the difference between an earthquake and a blast. And so I’m wondering how small a low yield test actually is. Is it harder to identify, or are there just not seismic stations that the U.S. has access to, or is there something else involved?

Jeffrey Lewis: Well, so these are called hydronuclear experiments. They are so incredibly small. In the U.S., they’re on the order of something like four pounds of explosive, so basically less explosion than the actual conventional explosives that are used to detonate the nuclear weapon. Some people think the Russians have a slightly bigger definition that might go up to 100 kilograms, but these are mouse farts. They are so small that unless you have a seismic station sitting right next to it, you would never know.

In a way, I think that’s a perfect example of why we’re so skeptical because when the test ban was negotiated, there was this giant international monitoring system put into place. It is not just seismic stations, but it is hydroacoustic stations to listen underwater, infrasound stations to listen for explosions in the air, radionuclide stations to detect any radioactive particles that happen to escape in the event of a test. It’s all of this stuff and it is incredibly sensitive and can detect incredibly small explosions down to about 1,000 tons of explosive and in many cases even less.

And so what’s happened is the allegations against the Russians, every time we have better monitoring and it’s clear that they’re not doing the bigger things, then the allegations are they’re doing ever smaller things. So, again, the way in which it was rolled out was kind of comical and caused us, at least me, to have some doubts about it. It is also the case that the nature of the allegation –– that it’s these tiny, tiny, tiny, tiny experiments, which U.S. scientists, by the way, have said they don’t have any interest in doing because they don’t think they are useful –– it’s almost like the perfect accusation and so that also to me is a little bit suspicious in terms of the motives of the people claiming this is happening.

Alex Bell: I think it’s also important to remember when dealing with verification of treaties, we’re looking for things that would be militarily significant. That’s how we try to build the verification system: that if anybody tried to do anything militarily significant, we’d be able to detect that in enough time to respond effectively and make sure the other side doesn’t gain anything from the violation.

So you could say that experiments like this that our own scientists don’t think are useful are not actually militarily significant, so why are we bringing it up? Do we think that this is a challenge to the treaty overall or do we not like the nature of Russia’s violations? And further, if we’re concerned about it, we should be talking to the Russians instead of about them.

Jeffrey Lewis: I think that is actually the most important point that Alex just made. If you actually think that the Russians have a different definition of zero, then go talk to them and get the same definition. If you think that the Russians are conducting these tests, then talk to the Russians and see if you can get access. If the United States were to ratify the test ban and the treaty were to come into force, there is a provision for the U.S. to ask for an inspection. It’s just a little bit rich to me that the people making this allegation are also the people who refuse to do anything about it diplomatically. If they were truly worried, they’d try to fix the problem.

Ariel Conn: Regarding the fact that the Test-Ban Treaty isn’t technically in force, are a lot of the verification processes still essentially in force anyway?

Alex Bell: The International Monitoring System, as Jeff pointed out, was just sort of in its infancy when the treaty was negotiated, and now it’s become this marvel of modern technology, capable of detecting tests at even very low yields. And so it is up and running and functioning. It was monitoring the various North Korean nuclear tests that have taken place in this century. It also was doing a lot of additional science, like tracking radioactive particulates that came from the Fukushima disaster back in 2011.

So it is functioning. It is giving readings to any party to the treaty, and it is particularly useful right now to have an independent international source of information of this kind. They specifically did put out a very brief statement following this accusation from the Defense Intelligence Agency saying that they had detected nothing that would indicate a test. So that’s about as far as I think they could get, as far as a diplomatic equivalent of, “What are you talking about?”

Jeffrey Lewis: I Googled it because I don’t remember it off the top of my head, but it’s 321 monitoring stations and 16 laboratories. So the entire monitoring system has been built out and it works far better than anybody thought it would. It’s just that once the treaty comes into force, there will be an additional provision, which is: in the event that the International Monitoring System, or a state party, has any reason to think that there is a violation, that country can request an inspection. And the CTBTO trains to send people to do onsite inspections in the event of something like this. So there is a mechanism to deal with this problem. It’s just that you have to ratify the treaty.

Ariel Conn: So what are the political implications, I guess, of the fact that the U.S. has not ratified this, but Russia has –– and that it’s been, I think you said 23 years? It sounds like the U.S. is frustrated with Russia, but is there a point at which Russia gets frustrated with the U.S.?

Jeffrey Lewis: I’m a little worried about that, yeah. The reality of the situation is I’m not sure that the United States can continue to reap the benefits of this monitoring system and the benefits of what I think Alex rightly described as a global norm against nuclear testing and sort of expect everybody else to restrain themselves while in the United States we refuse to ratify the treaty and talk about resuming nuclear testing.

And so I don’t think it’s a near term risk that the Russians are going to resume testing, but we have seen… We do a lot of work with satellite images at the Middlebury Institute and the U.S. has undertaken a pretty big campaign to keep its nuclear test site modern and ready to conduct a nuclear test on as little as six months’ notice. In the past few years, we’ve seen the Russians do the same thing.

For many years, they neglected their test site. It was in really poor shape and starting in about 2015, they started putting money into it in order to improve its readiness. So it’s very hard for us to say, “Do as we say, not as we do.”

Alex Bell: Yeah, I think it’s also important to realize that if the United States resumes testing, everyone will resume testing. The guardrails will be completely off, and that doesn’t make any sense because, having the most technologically advanced and capable nuclear weapons infrastructure like we do, we benefit from a global ban on explosive testing. It means we’re sort of locking in our own superiority.

Ariel Conn: So we’re putting that at risk. So I want to expand the conversation from just Russia and the U.S. to pull China in as well because the talk that Ashley gave was also about China’s modernization efforts. And he made some comments that sounded almost like maybe China is considering testing as well. I was sort of curious what your take on his China comments are.

Jeffrey Lewis: I’m going to jump in and be aggressive on this one because my doctoral dissertation was on the history of China’s nuclear weapons program. The class I teach at the Middlebury Institute is one in which we look at declassified U.S. intelligence assessments and then we look at Chinese historical materials in order to see how wrong the intelligence assessments were. This specifically covers U.S. assessments of China’s nuclear testing, and the U.S. just has an awful track record on this topic.

I actually interviewed the former head of China’s nuclear weapons program once, and I was talking to him about this because I was showing him some declassified assessments and I was sort of asking him about, you know, “Had you done this or had you done that?” He sort of kind of took it all in and he just kind of laughed, and he said, “I think many of your assessments were not very accurate.” There was sort of a twinkle in his eye as he said it because I think he was just sort of like, “We wrote a book about it, we told you what we did.”

Anything is possible, and the point of these allegations is that the events are so small that they are impossible to disprove. But to me, that’s looking at it backwards. If you’re going to cause a major international crisis, you need to come to the table with some evidence, and I just don’t see it.

Alex Bell: The GEM, the Group of Eminent Persons, which is an advisory group to the CTBTO, put it best when they said the most effective way to deal with this problem is to get the treaty into force, so that we could have intrusive, short-notice onsite inspections to detect and deter any possible violations.

Jeffrey Lewis: I actually got in trouble, I got shushed, because I was talking to a member while they were trying to work on this statement and they needed the member to come back in.

Ariel Conn: So I guess when you look at stuff like this –– so, basically, all three countries are currently modernizing their nuclear arsenals. Maybe we should just spend a couple minutes talking about that too. What does it mean for each country to be modernizing their arsenal? What does that sort of very briefly look like?

Alex Bell: Nuclear weapons and their delivery systems do age. You do have to maintain them, like you would any weapon system, but fortunately, from the U.S. perspective, we have exceedingly capable scientists who are able to extend the life of these systems without testing. Jeffrey, if you want to go into what other countries are doing.

Jeffrey Lewis: Yeah. I think the simplest thing to do is to talk about the nuclear warheads part. As Alex mentioned, all of the countries are building new submarines, and missiles, and bombers that can deliver these nuclear weapons. And that’s a giant enterprise; it costs many billions of dollars every year. But when you actually look at the warheads themselves, I can tell you what we do in the United States. In some cases, we build new versions of existing designs. In almost all cases, we replace components as they age.

So the warhead design might stay the same, but piece by piece things get replaced. And because we’ve been replacing those pieces over time, if they have to put a new fuse in a nuclear warhead, they don’t go back and build the ’70s-era fuse; they build a new fuse. So even though we say that we’re only replacing the existing components and we don’t try to add new capabilities, in fact, we add new capabilities all the time, because as all of these components get better, the weapons themselves get better, and we’re altering the characteristics of the warheads.

So the United States has a warhead on its submarine-launched ballistic missiles, and the Trump administration just undertook a program to give it a capability so that we can turn down the yield. So if we want to make it go off with a very small explosion, they can do that. That’s the flavor of the kinds of changes that are being made, and I think we’re seeing that in Russia and China too.

They are doing all of the same things to preserve the existing weapons they have. They rebuild designs that they have, and I think that they tinker with those designs. And that is constrained somewhat by the fact that there is no explosive testing –– that makes it harder to do those things, which is precisely why we wanted this ban in the first place –– but everybody is playing with their nuclear weapons.

And just because there’s a testing moratorium, some of the scientists who do this, because they want to go back to nuclear testing or nuclear explosions, say, “If we could only test with explosions, that would be better.” So there’s even more they want to do, but let’s not act like they don’t get to touch the bombs, because they play with them all the time.

Alex Bell: Yeah. It’s interesting you brought up the low-yield option for our submarine-launched ballistic missiles, because the House of Representatives, in the defense appropriations and authorization process that it’s going through right now, actually blocked further funding and the deployment of this particular type of warhead because, in their opinion, the President already had plenty of low-yield nuclear options, thank you very much. He doesn’t need any more.

Jeffrey Lewis: Of course, I don’t think this president needs any nuclear options, but-

Alex Bell: But it just shows there’s definitely a political and oversight feature that comes into this modernization debate. Even if the forces Jeffrey talked about, who’ve always wanted to return to testing, could prevail upon a particular administration to go in that direction, it’s unlikely Congress would be as sanguine about it.

Nevada, where our former nuclear testing site is, now the Nevada National Security Site –– it’s not clear that Nevadans are going to be okay with a return to explosive nuclear testing, nor will the people of Utah who sit downwind from that particular site. So there’s actually a “not in my backyard” kind of feature to the debate about further testing.

Jeffrey Lewis: Yeah. Anytime the Department of Energy does a conventional explosion at the Nevada site now, they keep it a secret, because they were going to do a conventional explosion 10 or 15 years ago and people got wind of it and were outraged: they were terrified the conventional explosion would kick up a bunch of dust and that there might still be radioactive particulates.

I’m not sure that was an accurate worry, but I think it speaks to the lack of trust that people around the test site have, given some of the irresponsible things that the U.S. nuclear weapons complex has done over the years. That’s a whole other podcast, but you don’t want to live next to anything that NNSA oversees.

Alex Bell: There’s also a proximity issue. Las Vegas is incredibly close to that facility. Back in the day when they did underground testing there, it used to shake the buildings on the Strip. And Las Vegas has only expanded over the last 20 or 30 years, so you’re going to have a lot of people that would be very worried.

Ariel Conn: Yeah. So that’s actually a question that I had. I mean, we have a better idea today of what the impacts of nuclear testing are. Would Americans approve of nuclear weapons being tested on our ground?

Jeffrey Lewis: Probably if they didn’t have to live next to them.

Alex Bell: Yeah. I’ve been to some of the states other than Nevada where we conducted tests. Colorado, where we tried out this brilliant idea of fracking via nuclear explosion. You can see the problems inherent in that idea. Alaska. New Mexico, obviously, where the first nuclear test happened. We also tested weapons in Mississippi. So all of these states have been affected in various ways, and radio particulates from the sites in Nevada have drifted as far away as Maine; scientists have been able to trace cancer clusters half a continent away.

Jeffrey Lewis: Yeah, I would add that –– Alex mentioned testing in Alaska –– so there was a giant test in 1971 in Alaska called Cannikin. It was five megatons. So a megaton is 1,000 kilotons. Hiroshima was 20 kilotons and it really made some Canadians angry and the consequence of the angry Canadians was they founded Greenpeace. So the whole iconic Greenpeace on a boat was originally driven by a desire to stop U.S. nuclear testing in Alaska. So, you know, people get worked up.

Ariel Conn: Do you think someone in the U.S. is actively trying to bring testing back? Do you think that we’re going to see more of this or do you think this might just go away?

Jeffrey Lewis: Oh yeah. There was a huge debate at the beginning of the Trump administration. I actually wrote this article making fun of Rick Perry, the Secretary of Energy, who I have to admit has turned out to be a perfectly normal cabinet secretary in an administration that looks like the Star Wars Cantina.

Alex Bell: It’s a low bar.

Jeffrey Lewis: It’s a low bar, and maybe just barely, but Rick got over it. But I was sort of mocking him, and the article was headlined, “Even Rick Perry isn’t dumb enough to resume nuclear testing,” and I got notes from people saying, “This is not funny. This is a serious possibility.” So, yeah, I think there has long been a group of people who did not want to end testing. The U.S. labs refused to prepare for the end of testing; when the U.S. stopped, it was Congress just telling them to stop. They have always wanted to go back to testing, and these are the same people, I think, who are accusing the Russians of doing things, as much so that they can get out of the test ban as anything else.

Alex Bell: Yeah, I would agree with that assessment. Those people have always been here. It’s strange to me, because most scientists have affirmed that we know more about our nuclear weapons now, without blowing them up, than we did before, thanks to the advanced computer modeling and technological advances of the Stockpile Stewardship Program, which is the program that extends the life of these warheads. They get to do a lot of great science, and they’ve learned a lot of things about our nuclear forces that we didn’t know before.

So it’s hard to make a case that it is absolutely necessary, or would ever be absolutely necessary, to return to testing. You would have to totally throw out the obligations we have under things like the Nuclear Non-Proliferation Treaty, which is to pursue the cessation of the arms race in good faith. And a return to testing, I think, would not be very good faith.

Ariel Conn: Maybe we’ve sort of touched on this, but I guess it’s still not clear to me. Why would we want to return to testing? Especially if, like you said, the models are so good?

Jeffrey Lewis: I think you have to approach that question like an anthropologist, because some countries are quite happy living under a test ban for exactly the reason that you pointed out: they are getting all kinds of money to do all kinds of interesting science. The Chinese seem pretty happy about it; the UK, actually –– I’ve met some UK scientists who are totally satisfied with it.

But the culture in the U.S. laboratories really had nothing to do with the reliability of the weapons and everything to do with the culture of the lab: the day that a young designer became a man or a woman was the day that person’s design went out into the desert and they had to stand there and be terrified it wasn’t going to work, and then feel the big rumble. So there are different ways of doing science, and I think the labs in the United States were and are sentimentally attached to solving these problems with explosions.

Alex Bell: There’s also sort of a strange desire to see them. On my first trip out to the test site, I was the only woman on the trip, and we were looking at the Sedan Crater, which is just this enormous crater from an underground explosion that was much bigger than expected. It’s, I think, seven football fields across, and to me it was just sort of horrifying; I looked at it with dread. A lot of the people on the trip reacted entirely differently, with “I thought it would be bigger,” and, “Wouldn’t it be awesome to see one of these go off, just once?” They had a much different take on what these tests were for and what they indicated.

Ariel Conn: So we can actually test nuclear weapons without exploding them. Can you talk about what the difference is between testing and explosions, and what that means?

Jeffrey Lewis: The way a nuclear weapon works is you have a sphere of fissile material –– so that’s plutonium or highly enriched uranium –– and that’s surrounded by conventional explosives. And around that, there are detonators and electronics to make sure that the explosives all detonate at the exact same moment so that they spherically compress or implode the plutonium or highly enriched uranium. So when it gets squeezed down, it makes a big bang, and then if it’s a thermonuclear weapon, then there’s something called a secondary, which complicates it.

But you can do that –– you can test all of those components, just as long as you don’t have enough plutonium or highly enriched uranium in the middle to cause a nuclear explosion. So you can fill it with just regular uranium, which won’t go critical, and you can test the whole setup that way. And for the things in a nuclear weapon that would make it a thermonuclear weapon, there’s a variety of different fusion research techniques you can use to test those kinds of reactions.

So you can really simulate everything, and you can do as many computer simulations as you want, it’s just that you can’t put it all together and get the big bang. And so the U.S. has built this giant facility at Livermore called NIF, the National Ignition Facility, which is a many billion-dollar piece of equipment, in order to sort of simulate some of the fusion aspects of a nuclear weapon. It’s an incredible piece of equipment that has taught U.S. scientists far more than they ever knew about these processes when they were actually exploding things. It’s far better for them, and they can do that. It’s completely legal.

Alex Bell: Yeah, the most powerful computer in the world belongs to Los Alamos. Its job is to help simulate these nuclear explosions and process data related to the nuclear stockpile.

Jeffrey Lewis: Yeah, I got a kick –– I always check in on that list, and it’s almost invariably one of the U.S. nuclear laboratories that has the top computer. And then one time I noticed that the Chinese had jumped up there for a minute and it was their laboratory.

Alex Bell: Yup, it trades back and forth.

Jeffrey Lewis: Good times.

Alex Bell: A lot of the data that goes into this is observational information and technical readings that we got from when we did explosive testing. And our testing record is far more extensive than any other country, which is one of the reasons why we have sort of this advantage that would be locked in, in the event of a CTBT entering into force.

Ariel Conn: Yeah, I thought that was actually a really interesting point. I don’t know if there’s more to elaborate on it, but the idea that the U.S. could actually sacrifice some of its nuclear superiority by ––

Alex Bell: Returning to testing?

Ariel Conn: Yeah.

Alex Bell: Yeah, because if we go, everyone goes.

Ariel Conn: There were countries that still weren’t thrilled even with the testing that is allowed. Can you elaborate on that a little bit?

Alex Bell: Yes. A lot of countries, particularly those that back the Treaty on the Prohibition of Nuclear Weapons –– a new treaty that does not have any nuclear weapon states as parties and is a total ban on the possession and use of nuclear weapons –– are particularly frustrated with what they see as the slow pace of disarmament by the nuclear weapon states.

The Nonproliferation Treaty, which is sort of the glue that holds all this together, was indefinitely extended back in 1995. The price for that from the non-nuclear weapon states was a commitment by the nuclear weapon states to sign and ratify a comprehensive test ban. Almost 25 years later, they’re still waiting.

Ariel Conn: I will add that, as of this week, I believe three U.S. states –– California, New Jersey, and Oregon –– have passed resolutions supporting the U.S. joining that recent treaty that actually bans nuclear weapons.

Alex Bell: Yeah. It’s been interesting. Jeffrey might have some thoughts on this too, but to me, principles aside, the verification measures in the Treaty on the Prohibition of Nuclear Weapons make it sort of an unviable treaty. But from a messaging perspective, you’re seeing, for kind of the first time since the Cold War, citizenry around the world saying, “You have to get rid of these weapons. They’re no longer acceptable. They’ve become liabilities, not assets.”

So while I don’t think the treaty itself is a workable treaty for the United States, I think that the sentiment behind it is useful in persuading leaders that we do need to do more on disarmament.

Jeffrey Lewis: I would just say that I think just like we saw earlier, there’s a lot of the U.S. wanting to have its cake and eat it too. And so the Nonproliferation Treaty, which is the big treaty that says, “Countries should not be able to acquire nuclear weapons,” it also commits the United States and the other nuclear powers to work toward disarmament. That’s not something they take seriously.

Just like with nuclear testing where you see this, “Oh, well, maybe we could edge back and do it,” you see the same thing just on disarmament issues generally. So having people out there who are insisting on holding the most powerful countries to account to make sure that they do their share, I also think is really important.

Ariel Conn: All right. So I actually think that’s sort of a nice note to end on. Is there anything else that you think is important that we didn’t get into or that just generally is important for people to know?

Alex Bell: I would just reiterate the point that if the U.S. government is truly concerned that Russia is conducting tests at even very low yields, we need to be engaged in a conversation with them. A global ban on nuclear explosive testing is good for every country in this world, and we shouldn’t be doing things to derail the pursuit of such a treaty.

Ariel Conn: Agreed. All right, well, thank you both so much for joining today.

As always, if you’ve been enjoying the podcast, please take a moment to like it, share it, and maybe even leave a good review and I will be back again next month with another episode of the FLI Podcast.

FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi

As we grapple with questions about AI safety and ethics, we’re implicitly asking something else: what type of a future do we want, and how can AI help us get there?

In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let’s build our technology around our visions for the future.

Topics discussed in this episode include:

  • Hopes for the future of AI
  • AI-human collaboration
  • AI’s influence on art and creativity
  • The UN AI for Good Summit
  • Gaps in AI safety
  • Preparing AI for uncertainty
  • Holding AI accountable

Publications and resources discussed in this episode include:

Ariel: Hello and welcome to another episode of the FLI podcast. I’m your host Ariel Conn, and today we’ll be looking at how to address safety and ethical issues surrounding artificial intelligence, and how we can implement safe and ethical AIs both now and into the future. Joining us this month are Ashley Llorens and Francesca Rossi who will talk about what they’re seeing in academia, industry, and the military in terms of how AI safety is already being applied and where the gaps are that still need to be addressed.

Ashley is the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, where he directs research and development in machine learning, robotics, autonomous systems, and neuroscience, all towards addressing national and global challenges. He has served on the Defense Science Board, the Naval Studies Board of the National Academy of Sciences, and the Center for a New American Security’s AI task force. He is also a voting member of the Recording Academy, which is the organization that hosts the Grammy Awards, and I will definitely be asking him about that later in the show.

Francesca is the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab. She is an advisory board member for FLI, a founding board member for the Partnership on AI, a deputy academic director of the Leverhulme Centre for the Future of Intelligence, a fellow with AAAI and EurAI (that’s e-u-r-a-i), and she will be the general chair of AAAI in 2020. She was previously Professor of Computer Science at the University of Padova in Italy, and she’s been president of IJCAI and the editor-in-chief of the Journal of AI Research. She is currently joining us from the United Nations AI For Good Summit, which I will also ask about later in the show.

So Ashley and Francesca, thank you so much for joining us today.

Francesca: Thank you.

Ashley: Glad to be here.

Ariel: Alright. The first question that I have for both of you –– and Ashley, maybe I’ll direct this towards you first: as artificial intelligence comes to play more of a role in our everyday lives, before we look at how everything could go wrong, what are we striving for? What do you hope will happen with artificial intelligence and humanity?

Ashley: My perspective on AI is informed a lot by my research and experiences at the Johns Hopkins Applied Physics Lab, which I’ve been at for a number of years. My earliest explorations had to do with applications of artificial intelligence to robotics systems, in particular underwater robotics systems, systems where signal processing and machine learning are needed to give the system situational awareness. And of course, light doesn’t travel very well underwater, so it’s an interesting task to make a machine see with sound for all of its awareness and all of its perception.

And in that journey, I realized how hard it is to have AI-enabled systems capable of functioning in the real world. That’s really been a personal research journey that’s turned into an institution-wide research journey for Johns Hopkins APL writ large. And we’re a large not-for-profit R & D organization that does national security, space exploration, and health. We’re about 7,000 folks or so across many different disciplines, but many scientists and engineers working on those kinds of problems — we say critical contributions to critical challenges.

So as I look forward, I’m really looking at AI-enabled systems, whether they’re algorithmic in cyberspace or they’re real-world systems that are really able to act with greater autonomy in the context of these important national and global challenges. So for national security: to have robotic systems that can be where people don’t want to be, in terms of being under the sea or even having a robot go into a situation that could be dangerous so a person doesn’t have to. And to have that system be able to deal with all the uncertainty associated with that.

You look at future space exploration missions where — in terms of AI for scientific discovery, we talk a lot about that — imagine a system that can perform science with greater degrees of autonomy and figure out novel ways of using its instruments to form and interrogate hypotheses when billions of miles away. Or in health applications where we can have systems more ubiquitously interpreting data and helping us to make decisions about our health to increase our lifespan, or health span as they say.

I’ve been accused of being a techno-optimist, I guess. I don’t think technology is the solution to everything, but it is my personal fascination. And in general, just having this AI capable of adding value for humanity in a real world that’s messy and sloppy and uncertain.

Ariel: Alright. Francesca, you and I have talked a bit in the past, and so I know you do a lot of work with AI safety and ethics. But I know you’re also incredibly hopeful about where we can go with AI. So if you could start by talking about some of the things that you’re most looking forward to.

Francesca: Sure. Ashley partially focused on the need to develop autonomous AI systems that can act where humans cannot go, for example, and that’s definitely very, very important. I would like to focus more on the need for AI systems that can actually work together with humans, augmenting our own capabilities to make decisions or to function in our work environment or in our private environment. That’s the purpose of AI that I see and that I work on, and I focus on the challenges in making these systems really work well with humans.

This means, of course, that while it may seem easier in some sense to develop an AI system that works together with humans, because there is complementarity –– some things are done by humans, some by the machine –– there are actually several additional challenges, because you want these two entities, the human and the machine, to become a real team and collaborate to achieve a certain goal. You want these machines to communicate and interact in a very natural way with human beings, and you want them to be not just reactive to commands, but also proactive in trying to understand what the human being needs in that moment and context, in order to provide all the information and knowledge needed from the data that surrounds whatever task is being addressed.

That’s also the focus of IBM’s business model, because of course IBM releases AI to be used by other companies, so that their professionals can use it to do their jobs better. And it has many, many interesting research directions. The one that I’m mostly focused on is value alignment: how do you make sure that these systems know and are aware of the values and ethical principles that they should follow, while trying to help human beings do whatever they need to do? And there are many ways to do that, and many ways to model and reason with these ethical principles, and so on.

Being here in Geneva at AI For Good, I think the emphasis here, for example, is, and rightly so, on the sustainable development goals of the UN: these 17 goals that define a vision of the future, the future that we want. And we’re trying to understand how we can leverage technologies such as AI to achieve that vision. The vision can be slightly nuanced and different, but to me, the development of advanced AI is not the end goal, but only a way to get to the vision of the future that I have. And so, to me, this AI For Good Summit and the 17 sustainable development goals define a vision of the future that is important to have in mind when thinking about how to improve technology.

Ariel: For listeners who aren’t as familiar with the sustainable development goals, we can include links to what all of those are in the podcast description.

Francesca: I was impressed at this AI For Good Summit. The Summit started three years ago with around 400 people. Then last year it was around 500 people, and this year there are 3,200 registered participants. That really gives you an idea of how more and more everybody’s interested in these subjects.

Ariel: Have you also been equally impressed by the topics that are covered?

Francesca: Well, I mean, it started today, so I’ve just seen the morning. There are five different parallel sessions that will run throughout the following two days. One is AI, education, and learning. One is health and wellbeing. One is AI, human dignity, and inclusive society. One is scaling AI for good. And one is AI for space. These five themes will run throughout the two days, together with many other smaller ones. But from what I’ve seen this morning, the level of the discussion is really very high. It’s going to be very impactful. Each event is unique, has its own specificity, but this event is unique because it’s focused on a vision of the future, which in this case is the sustainable development goals.

Ariel: Well, I’m really glad that you’re there. We’re excited to have you there. So, you’re talking about moving towards futures where we have AIs that can do things that humans either can’t do, don’t want to do, or can’t do safely –– visions where we can achieve more because we’re working with AI systems, as opposed to humans trying to do things alone. But we still have to get to the point where this is being implemented safely and ethically.

I’ll come back to the question of what we’re doing right so far, but first, what do you see as the biggest gaps in AI safety and ethics? And this is a super broad question, but looking at it with respect to, say, the military or industry or academia. What are some of the biggest problems you see in terms of us safely applying AI to solve problems?

Ashley: It’s a really important question. My answer is going to center around uncertainty and dealing with that in the context of the operation of the system, and let’s say the implementation or the execution of the ethics of the system as well. But first, backing up to Francesca’s comment, I just want to emphasize this notion of teaming and really embrace this narrative in my remarks here.

I’ve heard it said before that every machine is part of some human workflow. I think a colleague Matt Johnson at the Florida Institute for Human and Machine Cognition says that, which I really like. And so, just to make clear, whether we’re talking about the cognitive enhancements, an application of AI where maybe you’re doing information retrieval, or even a space exploration example, it’s always part of a human-machine team. In the space exploration example, the scientists and the engineers are on the earth, maybe many light hours away, but the machines are helping them do science. But at the end of the day, the scientific discovery is really happening on earth with the scientists. And so, whether it’s a machine operating remotely or by cognitive assistance, it’s always part of a human-machine team. That’s just something I wanted to amplify that Francesca said.

But coming back to the gaps, a lot of times I think what we’re missing in our conversations is getting some structure around the role of uncertainty in these agents that we’re trying to create that are going to help achieve that bright future that Francesca was referring to. To help us think about this at APL, we think about agents as needing to perceive, decide, act in teams. This is a framework that just helps us understand these general capabilities that we’ll need and to start thinking about the role of uncertainty, and then combinations of learning and reasoning that would help agents to deal with that. And so, if you think about an agent pursuing goals, the first thing it has to do is get an understanding of the world states. This is this task of perception.

We often talk about, well, if an agent sees this or that, or if an agent finds itself in this situation, we want it to behave this way. Obviously, the trolley problem is an example we revisit often. I won’t go into the details there, but the question is, I think, given some imperfect observation of the world, how does the structure of that uncertainty factor into the correct functioning of the agent in that situation? And then, how does that factor into the ethical, I’ll say, choices or data-driven responses that an agent might have to that situation?

Then we talk about decision making. An agent has goals. In order to act on its goals, it has to decide about how certain sequences of actions would affect future states of the world. And then again how, in the context of an uncertain world, is the agent going to go about accurately evaluating possible future actions when it’s outside of a gaming environment, for example. How does uncertainty play into that and its evaluation of possible actions? And then in the carrying out of those actions, there may be physical reasoning, geometric reasoning that has to happen. For example, if an agent is going to act in a physical space, or reasoning about a cyber-physical environment where there’s critical infrastructure that needs to be protected or something like that.

And then finally, to Francesca’s point, there are the interactions, or the teaming with other agents, which may be teammates or may actually be adversarial. How does the agent reason about what its teammates might be intending to do, what state its teammates might be in, in terms of cognitive load if it’s a human teammate, and what the intent of adversarial agents might be in confounding or interfering with the goals of the human-machine team?

And so, to recap a little bit, I think this notion of machines dealing with uncertainty in real-world situations is one of the key challenges that we need to deal with over the coming decades. Having more explicit conversations about how uncertainty manifests in these situations, how you deal with it in the context of the real-world operation of an AI-enabled system, and how we give structure to the uncertainty in a way that informs our ethical reasoning about the operation of these systems: I think that’s a very worthy area of focus for us over the coming decades.

Ariel: Could you walk us through a specific example of how an AI system might be applied and what sort of uncertainties it might come across?

Ashley: Yeah, sure. So think about the situation where there’s a dangerous environment, let’s say, in a policing action or in a terrorist situation. Hey, there might be hostiles in this building, and right now a human being might have to go into that building to investigate it. Instead, we’ll send a team of robots in there to investigate the building and see if it’s safe, and you can think about that situation as analogous to a number of different possible situations.

And now, let’s think about the state of computer vision technology, where straight pattern recognition is hopefully a fair characterization of the state of the art, where we know we can very accurately recognize objects from a given universe of objects in a computer vision feed, for example. Well, now what happens if these agents encounter objects from outside of that universe of training classes? How can we start to bound the performance of the computer vision algorithm with respect to objects from unknown classes? You can start to get a sense from that progression, just from the perception part of the problem: from “recognize which of these 200 possible object classes this comes from,” to having to do vision-type tasks in environments that present many new and novel objects that the agents may have to perceive and reason about.

You can think about that perception task as extending to agents that might be in that environment: trying to ascertain, from partial observations of what the agents might look like and of the things they might be doing, some assessment of whether this is a friendly agent or an unfriendly agent; and then reasoning about affordances of objects in the environment that might present our systems with ways of dealing with those agents that conform to ethical principles.

That was not a very, very concrete example, but hopefully starts to get one level deeper into the kinds of situations we want to put systems into and the kinds of uncertainty that might arise.

Francesca: To tie to what Ashley just said, we definitely need a lot more ways to have realistic simulations of what can happen in real life. So testbeds, sandboxes, that is definitely needed. But related to that, there is also this ongoing effort — which has already resulted in tools and mechanisms, but many people are still working on it — which is to understand better the error landscape that the machine learning approach may have. We know machine learning always has a small percentage of error in any given situation and that’s okay, but we need to understand what’s the robustness of the system in terms of that error, and also we need to understand the structure of that error space because this information can inform us on what are the most appropriate or less appropriate use cases for the system.

Of course, going from there, this understanding of the error landscape is just one aspect of the need for transparency on the capabilities and limitations of AI systems when they are deployed. It’s a challenge that spans from academia and research centers to, of course, the business units and the companies developing and delivering AI systems. So that’s why at IBM we are working a lot on this issue of collecting information during the development and design phases around the properties of the systems, because we think that understanding these properties is very important to really understand what should or should not be done with the system.

And then, of course, there is, as you know, a lot of work around understanding other properties of the system. Like, fairness is one of the values that we may want to inject, but of course it’s not as simple as it looks because there are many, many definitions of fairness and each one is more appropriate or less appropriate in certain scenarios and certain tasks. It is important to identify the right one at the beginning of the design and the development process, and then to inject mechanisms to detect and mitigate bias according to that notion of fairness that we have decided is the correct one for that product.

And so, this brings us also to the other big challenge, which is to help developers understand how to define these notions, these values like fairness that they need to use in developing the system — how to define them not just by themselves within the tech company, but also by communicating with the communities that are going to be impacted by these AI products, and that may have something to say on what is the right definition of fairness that they care about. That’s why, for example, besides developing research and products, we also invest a lot in educating developers, trying to help them understand in their everyday jobs how to think about these issues, whether it’s fairness, robustness, transparency, and so on.

And so, we built this very small booklet — we call it the Everyday AI Ethics Guide for Designers and Developers — that raises a lot of questions that should be in their minds in their everyday jobs. Because, as you know, if you don’t think about bias or fairness during these development phases and only check whether your product is fair when it’s ready to be deployed, then you may discover that you actually need to start from scratch again because it doesn’t have the right notion of fairness.

Another effort that we really care a lot about, in this effort to build teams of humans and machines, is explainability: making sure that it is possible to understand why these systems are recommending certain decisions. Explainability is very important, especially in this environment of teaming humans and machines, because without this capability of AI systems to explain why they are recommending certain decisions, the human beings that are part of the team will not trust the AI system in the long run, and so may not adopt it. And then we would also lose the positive and beneficial effects of the AI system.

The last thing that I want to say is that this education of developers actually extends well beyond developers to policy makers as well. That’s why it’s important to have a lot of interaction with policy makers, who really need to be educated about the state of the art, about the challenges, and about the limits of current AI, in order to understand how best to drive the current technology to be more and more advanced, but also beneficial. And what are the right mechanisms to drive the technology in the direction that we want? That still needs a lot more multi-stakeholder discussion to really achieve the best results, I think.

Ashley: Just picking up on a couple of those themes that Francesca raised: first, I just want to touch on simulations. At the applied physics laboratory, one of the core things we do is develop systems for the real world. And so, as the tools of artificial intelligence are evolving, the art and the science of systems engineering is starting to morph into this AI systems engineering regime. And we see simulation as key, more key than it’s ever been, into developing real world systems that are enabled by AI.

One of the things we’re really looking into now is what we call live virtual constructive simulations. These are simulations where you can do distributed learning for agents in a constructive mode, with highly parallelized learning, but where you actually have links and hooks for live interactions with humans to get the human-machine teaming. And then finally, bridging the gap between simulation and the real world, where some of the agents represented in the context of the human-machine teaming functionality can be virtual and some can actually be represented by real systems in the real world. And so, we think that these kinds of environments, these live virtual constructive environments, will be important for bridging the gap from simulation to real.

Now, in the context of that is this notion of sharing information. If you think about the complexity of the systems that we’re building, and the complexity and the uncertainty of the real-world conditions — whether that’s physical or cyber or what have you — it’s going to be more and more challenging for a single development team to analytically characterize the performance of the system in the context of the real-world environment. And so, I think as a community we’re really doing science: we’re performing science by fielding these complex systems in these real-world environments. And so, the more we can make that a collective scientific exploration where we’re setting hypotheses and performing these experiments — these experiments of deploying AI in real-world situations — the more quickly we’ll make progress.

And then, finally, I just wanted to talk about accountability, which I think builds on this notion of transparency and explainability. This is something we don’t talk about enough, I think: we need to change our notion of accountability when it comes to AI-enabled systems. I think our human nature is that we want individual accountability for individual decisions and individual actions. If an accident happens, our whole legal system, our whole accountability framework is, “Well, tell me exactly what happened that time,” and I want to get some accountability based on that and I want to see something improve based on that. Whether it’s a plane crash or a car crash, or let’s say there’s corruption in a Fortune 500 company — we want to see the CFO fired and we want to see a new person hired.

I think when you look at these algorithms, they’re driven by statistics, and the statistics that drive these models are really not well suited for individual accountability. It’s very hard to establish the validity of a particular answer or classification or something that comes out of the algorithm. Rather, we’re really starting to look at the performance of these algorithms over a period of time. It’s hard to say, “Okay, this AI-enabled system: tell me what happened on Wednesday,” or, “Let me hold you accountable for what happened on Wednesday.” It’s more, “Let me hold you accountable for everything that you did during the month of April that resulted in this performance.”

And so, I think our notion of accountability is going to have to embrace this notion of ensemble validity: validity over a collection of activities, actions, decisions. Because right now, I think if you look at the underlying mathematical frameworks for these algorithms, they’re not well suited to this notion of individual accountability for decisions.

Francesca: Accountability is very important. It needs a lot more discussion. This is also one of the topics that we have been discussing in this initiative by the European Commission to define the AI Ethics Guidelines for Europe, and accountability is one of the seven requirements. But it’s not easy to define what it means. What Ashley said is one possibility: change our idea of accountability from one specific instance to one over several instances. That’s one possibility, but I think it’s something that needs a lot more discussion with several stakeholders.

Ariel: You’ve both mentioned some things that sound like we’re starting to move in the right direction. Francesca, you talked about getting developers to think about some of the issues like fairness and bias before they start to develop things. You talked about trying to get policy makers more involved. Ashley, you mentioned the live virtual simulations. Looking at where we are today, what are some of the things that you think have been most successful in moving towards a world where we’re considering AI safety more regularly, or completely regularly?

Francesca: First of all, we’ve gone a really long way in a relatively short period of time, and the Future of Life Institute has been instrumental in building the community, and everybody understands that the only approach to address this issue is a multidisciplinary, multi-stakeholder approach. The Future of Life Institute, with the first Puerto Rico conference, showed very clearly that this is the approach to follow. So I think that in terms of building the community that discusses and identifies the issues, I think we have done a lot.

I think that at this point, what we need is greater coordination and also redundancy removal among all these different initiatives. I think we have to find, as a community, the main issues and the main principles and guidelines that we think are needed for the development of more advanced forms of AI, starting from the current state of the art. If you look at the values, at these guidelines or lists of principles around AI ethics from the various initiatives, they are of course different from each other, but they have a lot in common. So we really were able to identify these issues, and this identification of the main issues is important as we move forward to more advanced versions of AI.

And then, I think another thing that we are also doing in a rather successful though not complete way is trying to move from research to practice: from high-level principles to concretely developing and deploying products that embed these principles and guidelines not just into the scientific papers that are published, but also into the platforms, the services, and the toolkits that companies use with their clients. We needed an initial phase where there were high-level discussions about guidelines and principles, but now we are in the second phase where these percolate down to the business units and to how products are built and deployed.

Ashley: Yeah, just building on some of Francesca’s comments, I’ve been very inspired by the work of the Future of Life Institute and the burgeoning, I’ll say, emerging AI safety community. Similar to Francesca’s comment, I think that the real frontier here is now taking a lot of that energy, a lot of that academic exploration, research, and analysis and starting to find the intersections of a lot of those explorations with the real systems that we’re building.

You’re definitely seeing internal efforts to try to bridge the gap within IBM, as Francesca mentioned, within Microsoft, and within more applied R&D organizations like Johns Hopkins APL, where I am. And what I really want to try to work to catalyze in the coming years is a broader, more community-wide intersection between the academic research community looking out over the coming centuries and the applied research community that’s looking out over the coming decades, and find the intersection there. How do we start to pose a lot of these longer-term challenge problems in the context of real systems that we’re developing?

And maybe we can get to examples. Let’s say, for ethics: moving beyond the trolley problem and into posing problems that are more real-world, closer and better analogies to the kinds of systems we’re developing and the kinds of situations they will find themselves in, and starting to give structure to some of the underlying uncertainty. Having our debates informed by those things.

Ariel: I think that transitions really nicely to the next question I want to ask you both, and that is, over the next 5 to 10 years, what do you want to see out of the AI community that you think will be most useful in implementing safety and ethics?

Ashley: I’ll probably sound repetitive, but I really think it’s focusing in on characterizing — I like the way Francesca put it — the error landscape of a system, as a function of the complex internal states and workings of the system and the complex and uncertain real-world environments, whether cyber or physical, that the system will be operating in, and really getting deeper there. It’s probably clear to anyone that works in the space that we really need to fundamentally advance the science and the technology of — I’ll start to introduce the word now — trust, as it pertains to AI-enabled systems operating in these complex and uncertain environments. And again, starting to better ground some of our longer-term thinking about AI being beneficial for humanity, and grounding those conversations in the realities of the technologies as they stand today and as we hope to develop and advance them over the next few decades.

Francesca: Trust means building trust in the technology itself — and so the things that we already mentioned like making sure that it’s fair, value aligned, robust, explainable — but also building trust in those that produce the technology. But then, I mean, this is the current topic: How do we build trust? Because without trust we’re not going to adopt the full potential of the beneficial effect of the technology. It makes sense to also think in parallel, and more in the long-term, what’s the right governance? What’s the right coordination of initiatives around AI and AI ethics? And this is already a discussion that is taking place.

And then, after governance and coordination, it’s also important with more and more advanced versions of AI, to think about our identity, to think about the control issues, to think in general about this vision of the future, the wellbeing of the people, of the society, of the planet. And how to reverse engineer, in some sense, from a vision of the future to what it means in terms of a behavior of the technology, behavior of those that produce the technology, and behavior of those that regulate the technology, and so on.

We need a lot more of this reverse engineering approach. One approach is to start from the current state of the art of the technology and say, “Okay, these are the properties that I think I want in this technology: fairness, robustness, transparency, and so on, because otherwise I don’t want this technology to be deployed without these properties.” Then you see what happens in the next, more advanced version of the technology, and think about possibly new properties, and so on. That’s one approach. But the other approach is to say, “Okay, this is the vision of life, I don’t know, 50 years from now. How do I go from that to the kind of technology, to the direction that I want to push the technology towards, to achieve that vision?”

Ariel: We are getting a little bit short on time, and I did want to follow up with Ashley about his other job. Basically, Ashley, as far as I understand, you essentially have a side job as a hip hop artist. I think it would be fun to just talk a little bit, in the last couple of minutes that we have, about how both you and Francesca see artificial intelligence impacting these more creative fields. Is this something that you see as enhancing artists’ abilities to do more? Do you think there’s a reason for artists to be concerned that AI will soon be competition for them? What are your thoughts on the future of creativity and AI?

Ashley: Yeah. It’s interesting. As you point out, over the last decade or so, in addition to furthering my career as an engineer, I’ve also been a hip hop artist, and I’ve toured around the world and put out some albums. I think where we see the biggest impact of technology on music and creativity is, one, in the democratization of access to creation. Technology is a lot cheaper. Having a microphone and a recording setup or something like that, from the standpoint of somebody that does vocals like me, is much more accessible to many more people. And then, you see advances in access to the content — you know, when I started doing music I would print CDs and press vinyl. There was no iTunes. And iTunes has revolutionized how music is accessed by people, and more generally how creative products are accessed by people in streaming, etc. So I think, looking backward, we’ve seen most of the impact of technology on those two things: access to the creation and then access to the content.

Looking forward, will those continue to be the dominant factors in terms of how technology is influencing the creation of music, for example? Or will there be something more? Will AI start to become more of a creative partner? We’ll see that, and it will be evolutionary. I think we already see technology being a creative partner, more and more so over time. A lot of the things that I studied in school — digital signal processing, frequency-selective filtering — a lot of those things are baked into the tools already. And just as we see AI helping to interpret other kinds of signal processing products, like radiology scans, we’ll see more and more of that in the creation of music. For example, if I’m looking for samples from other music, an AI assistant could comb through a large library of music and find good samples for me. Just as we have AI suggesting good Instagram filters for pictures I take on my iPhone, you can see AI in music suggesting good audio filters or good mastering settings, given a song that I’m trying to produce or goals that I have for the feel and tone of the product.

And so, already I think as an evolutionary step, not even a revolutionary step, AI becoming more present in the creation of music. I think maybe, as in other application areas, we may see, again, AI being more of a teammate, not only in the creation of the music, but in the playing of the music. I heard an article or a cast on NPR about a piano player that developed an AI accompaniment for himself. And so, as he played in a live show, for example, there would be an AI accompaniment and you could dial back the settings on it in terms of how aggressive it was in rhythm and time, where it situated with respect to the lead performer. Maybe in hip hop we’ll see AI hype men or AI DJs. It’s expensive to travel overseas, and so somebody like me goes overseas to do a show, and instead of bringing a DJ with me, I have an AI program that can select my tracks and add cuts at the right places and things like that. So that was a long-winded answer, but there’s a lot there. Hopefully that was addressing your question.

Ariel: Yeah, absolutely. Francesca, did you have anything you wanted to add about what you think AI can do for creativity?

Francesca: Yeah. I mean, of course I’m less familiar with what AI is already doing right now, but I am aware of many systems from companies in the space of delivering content or music and so on: systems where the AI part is helping humans develop their own creativity even further. And as Ashley said, I hope that in the future AI can help us be more creative, even people that maybe are less able than Ashley to be creative themselves. And I hope that this will enhance the creativity of everybody, because this will enhance creativity, yes, in hip hop or in making songs or in other things, but I also think it will help solve some very fundamental problems, because a population which is more creative is, of course, more creative in everything.

So in general, I hope that AI will help us human beings be more creative in all aspects of our life, not just entertainment — which is of course very, very important for all of us, for our wellbeing and so on — but all the other aspects of our life as well. And this goes back to the beginning, where I said AI’s purpose should be that of enhancing our own capabilities. And of course, creativity is a very important capability that human beings have.

Ariel: Alright. Well, thank you both so much for joining us today. I really enjoyed the conversation.

Francesca: Thank you.

Ashley: Thanks for having me. I really enjoyed it.

Ariel: For all of our listeners, if you have been enjoying this podcast, please take a moment to like it or share it and maybe even give us a good review. And we will be back again next month.

FLI Podcast: The Unexpected Side Effects of Climate Change With Fran Moore and Nick Obradovich

It’s not just about the natural world. The side effects of climate change remain relatively unknown, but we can expect a warming world to impact every facet of our lives. In fact, as recent research shows, global warming is already affecting our mental and physical well-being, and this impact will only increase. Climate change could decrease the efficacy of our public safety institutions. It could damage our economies. It could even impact the way that we vote, potentially altering our democracies themselves. Yet even as these effects begin to appear, we’re already growing numb to the changing climate patterns behind them, and we’re failing to act.

In honor of Earth Day, this month’s podcast focuses on these side effects and what we can do about them. Ariel spoke with Dr. Nick Obradovich, a research scientist at the MIT Media Lab, and Dr. Fran Moore, an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. They study the social and economic impacts of climate change, and they shared some of their most remarkable findings.

Topics discussed in this episode include:

  • How getting used to climate change may make it harder for us to address the issue
  • The social cost of carbon
  • The effect of temperature on mood, exercise, and sleep
  • The effect of temperature on public safety and democratic processes
  • Why it’s hard to get people to act
  • What we can all do to make a difference
  • Why we should still be hopeful

Publications discussed in this episode include:

You can listen to the podcast above, or read the full transcript below.

Ariel: Hello, and a belated happy Earth Day to everyone. I’m Ariel Conn, your host of The Future of Life podcast. And in honor of Earth Day this month, I’m happy to have two climate-related scientists joining the show. We’ve all heard about the devastating extreme weather that climate change will trigger; we’ve heard about melting ice caps, rising ocean levels, warming oceans, flooding, wildfires, hurricanes, and so many other awful natural events.

And it’s not hard to imagine how people living in these regions will be negatively impacted. But climate change won’t just affect us directly. It will also impact the economy, agriculture, our mental health, our sleep patterns, how we exercise, food safety, the effectiveness of policing, and more.

So today, I have two scientists joining me to talk about some of those issues. Doctor Nick Obradovich is a research scientist at the MIT Media Lab. He studies the way that climate change is likely impacting humanity now and into the future. And Doctor Fran Moore is an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. Her work sits at the intersection of climate science and environmental economics and is focused on understanding how climate change will affect the social and natural systems that people value.

So Nick and Fran, thank you so much for joining us.

Nick: Thanks for having us.

Fran: Thank you.

Ariel: Now, before we get into some of the topics that I just listed, I want to first look at a paper you both published recently called “Rapidly Declining Remarkability of Temperature Anomalies May Obscure Public Perception of Climate Change.” And essentially, as you describe in the paper, we’re like frogs in boiling water. As long as the temperatures continue to increase, we forget that it used to be cooler and we recalibrate what we consider to be normal for weather. So what may have been considered extreme 15 years ago, we now think of as normal.

Among other things, this can make trying to address climate change more difficult. I want both of you now to talk more about what the study was and what it means for how we address climate change. But first, if you could just talk about what prompted this study.

Fran: So I’ve been interested for a long time in the question of: as the climate changes and people are gradually exposed, in their everyday life, to weather that used to be very unusual but, because of climate change, is becoming more and more typical, how do we think about defining things like extreme events under those kinds of conditions?

I think researchers have this intuition that there’s something about human perception and judgment that goes into that, some limit on how humans understand the weather that defines what we think of as normal and extreme, but no one had really been able to measure it. What I think is really cool about this study is that, working with Nick and our other coauthors, we were able to use data from Twitter to actually measure what people think of as remarkable, and then show that that changes quickly over time.

Ariel: I found this use of social media to be really interesting. Can you talk a little bit about how you used Twitter? And I was also curious: aside from being a new source of information, does it also present limitations in any way, or is it just exciting new information?

Nick: The crux of this insight was that we talk about the weather all the time. It’s sort of the way to pass time in casual conversation, to say hi to people, to awkwardly change the topic — if someone has said something a little awkward, start talking about the weather. And we realized that Twitter is a great source for what people are talking about, and I had been collecting billions of tweets over the last number of years. And Fran and I met, and we got talking about this idea and we were like, “Huh, you know, I bet you could use Twitter to measure how people are talking about the weather.” And then Fran had the excellent insight that you could also use it to get a metric of how remarkable people find the weather, by how much more than usual they’re talking about unusual weather. And so that was the crux of the insight there.

And then really what we did is we said, “Okay, what terms exist in the English language that might likely refer to weather when people are talking about the weather?” And we combed through the billions of tweets that I had in my store and found all of the tweets plausibly about the weather and used that for our analysis and then mapped that to the historical temperatures that people had experienced and also the rates of warming over time that the locations that people lived in had experienced.

Ariel: And what was the timeframe that you were looking at?

Fran: So it’s about three years: from March of 2014 to the end of 2016. But then we’re able to combine that with weather data that goes back to 1980. So we can match the tweeting behavior in this relatively recent time period to the patterns of temperature change across these counties, and look at how well those patterns explain it.

So what we found, firstly, is maybe exactly what you would expect, right, which is that the rate at which people tweet about particular temperatures depends on what is typical for that location, for that time of year. And so if you have very cold weather, but that very cold weather is basically what you should be expecting, you’re going to tweet about it less than if that very cold weather is atypical.

But then what we were able to show is that the sense of “usual” that defines this tweeting behavior changes really quickly, so that if you have these unusual temperatures multiple years in a row, the tweeting response quickly starts to decline. What that indicates is that people are adjusting their ideas of normal weather very quickly. And we’re actually able to use the tweets to directly estimate the rate at which this updating happens: to our best estimate, we think that people are using approximately the last two to eight years as a baseline for establishing normal temperatures for that location, for that time of year. When people look at the weather outside and evaluate whether it’s hot or cold, the reference point they’re using is set by the fairly recent past.
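The rolling-baseline idea Fran describes can be sketched in a few lines. This is an illustrative toy, not the paper’s actual estimation procedure; the window length, threshold, and temperature values are all invented here.

```python
# Toy sketch of a rolling baseline: a temperature is "remarkable" if it
# deviates a lot from what the last few years taught us to expect.
# Illustration only -- not the study's actual estimation method.

def is_remarkable(temp_today, same_date_history, window_years=5, threshold=2.0):
    """Is today's temperature unusual relative to the last `window_years`
    of temperatures observed on this same calendar date?"""
    recent = same_date_history[-window_years:]
    mean = sum(recent) / len(recent)
    # sample standard deviation of the recent baseline
    var = sum((t - mean) ** 2 for t in recent) / (len(recent) - 1)
    return abs(temp_today - mean) > threshold * var ** 0.5

# Late-April highs in a county, one reading per year, creeping upward:
history = [60, 61, 60, 62, 63, 64, 66, 68, 70, 72]

# Against the full ten-year record, 74 F stands out; against only the most
# recent five years, the baseline has already shifted and it no longer does.
print(is_remarkable(74, history, window_years=10))  # True
print(is_remarkable(74, history, window_years=5))   # False
```

The same warm day reads as remarkable or unremarkable depending only on how far back the reference window reaches, which is exactly the normalization effect the study measures.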

Ariel: What does this mean as we’re trying to figure out ways to address climate change?

Nick: When we saw this result, we were a bit troubled, because it was faster than we would perhaps hope. I’m a political scientist by training, and I saw this and I said, “This is not ideal,” because you have people getting used to a climate that is changing on geologically rapid scales but on human time scales somewhat slowly. One of the things that we know helps to drive political action, policy, and political attention is just awareness of a problem. And so if people’s expectations adapt pretty quickly to climate change, then all of a sudden a hundred-degree day in North Dakota, which would have been very unusual in 2000, is maybe fairly normal in 2030. And so as a result, people aren’t as aware of the signal that climate change is producing. And that could have some pretty troubling political implications.

Fran: My takeaway from this is that I think it certainly points to the risk that these conditions that are geologically or even historically very, very unusual — that they are not perceived as such. We’re really limited by our human perception, and that’s even within individuals, right — what we’re estimating is something that happens within an individual’s lifetime.

So what it means is that you can’t just assume that as climate change gets worse it’s going to automatically rise to the top of the political agenda in terms of urgency. Like a lot of other chronic, serious social problems we have, it takes a lot of work on the part of activists and norm entrepreneurs to do something about climate change. Just because it’s happening, and it’s becoming, at least statistically or scientifically, increasingly clear that it’s happening, that won’t necessarily translate into people wanting to do something about it.

Ariel: And so you guys were looking more at what we might consider sort of abnormalities in relatively normal weather: if it’s colder in May than we’d expect or it’s hotter in January than we’d expect. But that’s not the same as some of the extreme weather events that we’ve also seen. I don’t know if this is sort of a speculative question, but do you think the extreme weather events could help counter our normalization of just changing temperatures or do you think we would eventually normalize the extreme weather events as well?

Nick: That’s a great question. So one of the things we didn’t look at is, for example, giant hurricanes, big wildfires, and things like that that are all likely to increase in frequency and severity in the future. So it could certainly be the case that the increase in frequency and intensity of those events offsets the adaptation, as you suggest. We actually are trying to think about ways to measure how people might adapt to other climate-driven phenomena aside from just regular, day-to-day temperature.

I hope that’s the case, right? Because if we’re also adapting to sea level rise pretty rapidly as it goes along and we’re also adapting to increased frequency of wildfires and things like that, a few things might happen; one being that if we’re getting used to semi-regular flooding, for example, we don’t move as quickly as we need to — up to the point where basically cities start getting inundated, and that could be very problematic. So I hope that what you suggest actually turns out to be the case.

Fran: I think that this is a question we get a lot, like, “Oh, well temperature is one thing, but really the thing that’s really going to spur people is these hurricanes or floods or these wildfires.” And I think that’s a hypothesis, but I would say it’s as yet untested. And sure, a hurricane is an extreme event, but when they start happening frequently, is that going to be subject to the same kind of normalization phenomenon that we show here? I would say I don’t know, and it’s possible it would look really different.

But I think it’s also possible that it wouldn’t, and that when you start seeing these happen on a very regular basis, that they become normalized in a very similar way to what you see here. And it might be that they spur some kind of adaptation or response policy, but the idea that they would automatically spur a lot of mitigation policy I think is something that people seem to think might be true, but I would say that we need some more empirical evidence.

Nick: I like to think of humans as an incredibly adaptable species. I think we’re a great species for that reason. We’re arguably the most successful ever. But our adaptability in this instance may perhaps prove to be part of our undoing, just in normalizing worsening conditions as they deteriorate around us. I hope that the hypothesis that Fran lays out ends up being the case: that, as the climate gets weirder and weirder, there is enough signal that people become concerned enough to do something about it. But it is just an empirical hypothesis at this point.

Fran: What I thought was a really neat thing that we were able to do in this paper was ask: are people just not talking about these conditions because they’ve normalized them and they’re no longer interesting, or have people actually been able to take action to reduce the negative consequences of these conditions? And so to do that we used sentiment analysis. This is something that Nick and our other author Patrick Baylis have used before: just based on the words that are being used in the tweets, you can measure the overall mood being conveyed, the kind of emotional state of people sending those tweets. We know that very hot and very cold temperatures have negative effects on sentiment, and we find that those effects persist even if people stop talking about these unusual temperatures.

What that’s saying is that this is not a good news story of effective adaptation, that people are able to reduce the negative consequences of these temperatures. Actually, they’re still being very negatively affected by them — and they’re just not talking about them anymore. And that’s kind of the worst of both worlds.
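The kind of lexicon-based sentiment scoring Fran describes can be illustrated with a toy example. The word lists and tweets below are invented; the actual study used established sentiment-analysis tools, not this two-set lexicon.

```python
# Toy lexicon-based sentiment scoring of weather tweets, in the spirit of
# the analysis described above. Word lists and tweets are invented.

POSITIVE = {"beautiful", "gorgeous", "nice", "lovely", "great"}
NEGATIVE = {"miserable", "awful", "freezing", "sweltering", "terrible"}

def sentiment(tweet):
    """Net sentiment per word: (# positive words - # negative words) / # words."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / len(words)

hot_spell = ["this heat is awful i am sweltering",
             "another miserable scorcher today"]
mild_day = ["what a gorgeous day outside",
            "lovely weather for a walk"]

def average(scores):
    return sum(scores) / len(scores)

# Mood is lower during the hot spell -- the signal that can persist even
# after people stop remarking on the heat itself.
print(average([sentiment(t) for t in hot_spell]))  # negative
print(average([sentiment(t) for t in mild_day]))   # positive
```

The point of the paper’s finding is visible here in miniature: even if the volume of weather talk falls back to normal, the mood expressed in what people do write can stay depressed.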

Ariel: So I want to actually follow up with that, because I had a question about that paper that you just referenced. And if I was reading it correctly, it sort of seemed like you’re saying that we basically get crankier as the weather falls toward either extreme of our preferred comfort zone. Is that right? Are we just going to be crankier as climate gets worse?

Nick: So that was the paper that Patrick Baylis and I had with a number of other co-authors, and the key point about that paper is that we were looking at historical contemporaneous weather; we weren’t looking for adaptation over time with that analysis. What we found is that at certain levels of temperature, for example when it’s really hot outside, people’s sentiment goes down: their mood is worsened. When it’s really cold outside, we also found that people’s sentiment was worsened; and we found that, for example, lots of precipitation made people unhappy as well.

But what we didn’t do in that paper was examine the degree to which people got used to changes in the weather over time. That’s what we were able to do in this paper with Fran, and what we saw was, as Fran points out, troubling: people weren’t substantially adapting to these temperature shocks over time, or to longer-term changes in climate; they just weren’t talking about them as much.

So if you think though that there is no adaptation, then yeah, if the world becomes much hotter, on the hot end of things — so in the summer, in the northern hemisphere for example — people will probably be a bit grumpier. Importantly though, on the other side of things, in the wintertime, if you have warming, you might expect that people are in somewhat better moods because they’re able to enjoy nicer weather outside. So it is a little bit of a double-edged sword in that way, but again important that we don’t see that people are adapting, which is pretty critical.

Ariel: Okay. So we can potentially expect at least the possibility of decrease in life satisfaction just because of weather, without us even really appreciating that it’s the weather that’s doing it to us?

Nick: Yes, during hotter periods. The converse is that during the wintertime, in the northern hemisphere, we would have to say that warming temperatures, people would probably enjoy for the most part. If it was supposed to be 35 degrees Fahrenheit outside and it’s now 45 Fahrenheit, that’s a bit more pleasant. Now you can go with a lighter jacket.

So there will be those small positive benefits — although, as Fran is probably going to talk about here in a little bit, there are other big countervailing negatives that we need to consider too.

Fran: What I like about this paper that Nick and Patrick wrote previously on sentiment is that they have these comparisons to it being a Monday, or to a home team loss. Sometimes it’s hard to put these measures in perspective, and so: Mondays on average make people miserable, and it being very, very hot out also makes people miserable, in kind of similar ways to it being a Monday.

Nick: Yeah. We found that particularly cold temperatures, for example, had a similar magnitude of effect on positive sentiment: a reduction in positive sentiment of a magnitude equivalent to a small earthquake in your location, and things like that. So the magnitude of the weather’s effects is much larger than we necessarily thought it would be, which we thought was interesting. But there was also a whole big literature from psychology and economics and political science that had looked at weather and various outcomes and found that sometimes the effect sizes were very large and sometimes the effect sizes were effectively zero. So we tried to basically just provide the answer to that question in that paper: the weather matters.

Ariel: I want to go back to the idea of whether or not extreme events will be normalized, because I tend to be slightly cynical — and maybe this is hopeful for once — in thinking that the economic cost of the extreme events is not something we would normalize to, that we would not get used to having to spend billions of dollars a year, whatever it is, to rebuild cities.

And Fran, I think that touches on some of your work if I’m correct, in that you look at what some of these costs of climate change would be. So first, is that correct? Is that one of the things that you look at?

Fran: Yeah. A large component of my work has been on improving the representation of climate change damages, so kind of what we know from the physical sciences about how climate change affects the things that we care about and including the representation of that in the thing called the social cost of carbon, which is a measure that’s very relevant for the regulatory and policy analysis for climate change.

Ariel: Can you explain what the social cost of carbon is? What is being measured?

Fran: So if you think about when we emit a ton of CO2, right, that ton of CO2 goes up into the atmosphere and it’s going to affect the climate. That change in the climate is going to have consequences around the world, in many different sectors, and the CO2 is going to stay in the atmosphere for a long time. And so those effects are going to persist far out into the future.

What the social cost of carbon is, really, is just an accounting exercise that tries to quantify all those impacts, add them up together, put them in common units, and assign that as the cost of the ton of CO2 that you emitted. You can see from that description why this is an ambitious exercise: theoretically, we’re talking about all the climate change impacts around the world, for all time. And then there’s another step, too, which is that in order to aggregate these, to add them up, you need to put everything into common units. The units that we use are dollars, so there’s a critical economic valuation step: these impacts happen in agriculture, or along coastlines, or to mortality risk, and you have to take all of them, put them into some kind of common unit, and value them all.

And so depending on what type of impact you’re talking about, that’s more or less challenging. But it’s an important number because at least in the United States, we have a requirement that all regulations have to have passed a cost-benefit analysis. So in order to do a cost-benefit analysis of climate regulation, you need to understand what are the benefits of not emitting CO2? So pretty much any policy that’s affecting emissions needs to account for these damages in some way. That’s why this is very directly relevant to policy.
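The accounting exercise Fran describes — summing up a ton’s future damages, discounted back to the present — can be sketched as a few lines of arithmetic. The flat damage stream and the 3% discount rate below are invented for illustration; real integrated assessment models differ on both.

```python
# Sketch of social-cost-of-carbon accounting: take the stream of future
# marginal damages (in dollars) caused by one extra ton of CO2, discount
# each year back to the present, and sum. Damage stream and discount rate
# here are illustrative, not from any actual model.

def social_cost_of_carbon(marginal_damages, discount_rate=0.03):
    """Present value of a per-year damage stream from emitting one ton today.

    marginal_damages[t] = extra damages (dollars) in year t due to the ton.
    """
    return sum(d / (1 + discount_rate) ** t
               for t, d in enumerate(marginal_damages))

# A toy stream: one extra dollar of damages per year for a century.
scc = social_cost_of_carbon([1.0] * 100)
print(round(scc, 2))  # 32.55 -- far more than the first year's $1, because
                      # the ton keeps doing damage for a hundred years
```

This also shows why the discount rate matters so much to the headline number: most of a ton’s damages arrive decades out, where discounting bites hardest.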

Ariel: I want to keep looking at what this means. In one of your papers you have a sentence that reads, “impacts on agriculture increase from net benefits of $2.7 per ton of CO2 to net costs of $8.5 per ton of CO2.” That seemed like a really good example for you to explain what these costs actually mean.

Fran: Yeah. This was an exercise I did a couple of years ago with coauthors Tom Hertel and Uris Baldos and Delavane Diaz. The idea was that we know now a lot about how climate change affects crop yields. There’s been an awful lot of work on that in economics and agricultural sciences. But that was essentially not represented in the social cost of carbon, where our estimates of climate change damages really came from studies that were either in the late 80s or the early 90s, and really our understanding of how climate change will affect agriculture has really changed since then.

What those numbers represent: the benefit of $2.7 per ton is what is currently represented in the models that calculate the social cost of carbon. The fact that it’s a benefit indicates that these models were assuming agriculture on net is going to benefit from climate change. This is largely because of a combination of CO2 fertilization and a fair bit of assumption that in most of the world crops are going to benefit from higher temperatures. Now we know that’s more or less not the case.

When we look at how we think temperature and CO2 are going to affect the major crops around the world, we use these estimates from the IPCC, and then we introduce those into an economic model. That’s the valuation step. The economic model accounts for the fact that countries can shift what they grow, change their consumption patterns, change their trading partners: a lot of economic adjustments that we know can be made, and the modeling accounts for all of that. We find a fairly large negative effect of climate change on agriculture, which amounts to about $9 per ton of CO2, and those are discounted values. So you emit a ton of CO2 today, and that’s the dollar value today of all the future damages that ton of CO2 will cause via the agricultural sector.

Ariel: As a reminder, how many tons of CO2 were emitted, say, last year, or the year before? Something that we know?

Fran: We do know that. I’m not sure I can tell you it off the top of my head. I would caution you that you also don’t want to take this number and just multiply it by the total tons emitted, because this is a marginal value. This is merely about: do we emit this ton or not? It’s really not a value that can be used for saying, “Okay, well, the total damages from climate change are X.” There’s a distinction between total damages and marginal damages, and the social cost of carbon number is very much about marginal damages.

So it’s like at the margin, how much should we tax CO2? It’s really not going to tell you, should we be on a two-degree pathway, or should we be on a four-degree pathway, or should we be on a 1.5-degree pathway? That you need a really different analysis for.

Ariel: I want to ask one more follow-up question to this, and then I want to get onto some of the other papers of Nick’s. What are the cost estimates that we’re looking at right now? What are you comfortable saying that we’re, I don’t know, losing this much money, we’re going to pay this much money, we’re going to negatively be impacted by X number of dollars?

Fran: The Obama administration went through a fairly comprehensive exercise to take the existing models and standardize them in certain ways, to try to say, “What is the social cost of carbon value that we should use?” They arrived at a number that’s around $40 per ton of CO2. If you take that number as a benchmark, there’s obviously a lot of uncertainty around it, and I think it’s fair to say a lot of that uncertainty is on the high end rather than on the low end. So if you think about the probability distribution around that existing number, I would say there are a lot of reasons why it might be higher than $40 per ton, and there are a few, but not a ton, of reasons why it might be lower.

Ariel: Nick, was there anything you wanted to add to what Fran has just been talking about?

Nick: Yeah. The only thing I would say is I totally agree that the uncertainty is on the upper bound of the estimate of the social cost of carbon, and possibly on the extreme upper bound. There are unknowns that we can’t estimate from the historical data, in terms of being able to figure out what happens in the natural system and how that translates through to the social system and the social costs. Fran and I are basically just doing the best we can with the historical evidence that we can bring to bear on the question, but there are giant “unknown unknowns,” to quote Donald Rumsfeld.

Ariel: I want to sort of quantify this ever so slightly. I Googled it, and it looks like we are emitting in the tens of billions of tons of carbon each year? Does that sound right?

Fran: Check that it’s carbon and not CO2. I think it’s eight to nine gigatons of carbon.

Ariel: Okay.

Nick: CO2 equivalence.

Ariel: Anyway, it’s a lot.

Nick: It’s a lot, yeah.

Ariel: That’s the point.

Nick: It’s a lot; it’s increasing. I think 2018 saw an uptick in the rate of emissions. We need to be decreasing, and we’re still increasing. Not great.

Ariel: All right. We’ll take a quick break from the economic side of things and what this will financially cost us, and look at some of the human impacts that we may not necessarily be thinking about, but which Nick has been looking into. I’m just going to go through a list of very quick questions that I asked about a few papers that I looked at.

The first one I looked at is apparently — and this makes sense when I think about it — climate change is going to impact our physical activity, because it’s too hot in places, or things like that. I was wondering if you could talk a little bit about the research you did into that and what you think the health implications are.

Nick: Yeah, totally. So I like to think about the climate impacts that are not necessarily easily and readily and immediately translated into dollar value because I think really we live in a pretty complex system, and when you turn up the temperature on that complex system, it’s probably going to affect basically everything. The question is what’s going to be affected and how much are the important things going to be affected? And so a lot of my work has focused on identifying things that we hadn’t yet thought about as social scientists in doing the social impact estimates in the cost of carbon and just raising questions about those areas.

Physical activity was one. The idea to look at that actually came from back in 2015 — there was a big heat wave in San Diego when I was living there, and I was in a regular running regimen. I would go running at 4:00 or 5:00 PM, but there were a number of weeks, definitely strings of days, where it was 100 degrees or more in October in San Diego, which is very unusual. At 4:00 PM it would be 100 degrees and kind of humid, so I just didn’t run as much for a couple of weeks, and that threw off my whole exercise schedule. I was like, “Huh, that’s an interesting impact of heat that I hadn’t really heard about.”

So I was like, “Well, I know this big data set that collects people’s reported physical activity over time, and has a decade’s worth of data on, I think, about a million randomly sampled US citizens.” Over a million, actually. So I had those data, and I was like, “Well, if you look at the weather and the climate that these people are living in, does that influence their exercise patterns?” What we found was a little bit surprising to me, because I had thought about it on the hot end: “Oh, I stopped running because it was too hot.” But the reality is that temperature, and also rainfall, impact our physical activity patterns across the full distribution.

When it’s really cold outside, people don’t report being very physically active, and one of the main reasons for that is that one of the primary ways Americans get physical activity is by going outside for a run or a jog or a walk. When it’s very nasty outside, people report not being as physically active. We saw on the cold end of the distribution that as temperatures warmed up, people exercised more, up to a relatively high peak in that function. It was an inverted U shape, and the peak was relatively high in terms of temperature: somewhere around 84 degrees Fahrenheit.

What we realized actually is that at least in the US, at least in some of the northern latitudes in the US, people might exercise more as temperatures warm up to a point. They might exercise more in the wintertime, for example. That was this small little silver lining in what is otherwise, from my research and from Fran’s research and most research on this topic, a cascade of negative news that is likely to result from climate change. But the health impacts of being more physically active are positive. It’s one of the most important things we can do for our health. So a small, positive impact of warming temperatures offset by all the other things that we’ve found.
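The inverted-U shape Nick describes can be illustrated by fitting a quadratic to simulated data and recovering its peak. The data below are synthetic, constructed to peak near 84°F like the estimate he mentions; this is not the study’s actual model or data.

```python
# Sketch of an inverted-U temperature/activity relationship: simulate
# activity that peaks at a comfortable temperature, fit a quadratic,
# and recover the peak. Synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
temps = rng.uniform(10, 110, 500)          # daily high temperatures (F)
activity = -0.01 * (temps - 84) ** 2 + 50 + rng.normal(0, 1, 500)

# Fit activity = a*T^2 + b*T + c; the parabola's vertex sits at T = -b/(2a).
a, b, c = np.polyfit(temps, activity, 2)
peak_temp = -b / (2 * a)
print(peak_temp)  # close to 84, where the simulated activity peaks
```

The vertex formula is the useful part: once you’ve estimated a concave quadratic response, the temperature that maximizes activity falls straight out of the coefficients.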

Ariel: I know from personal experience I definitely don’t like to run in the winter. I don’t like ice, so that makes sense.

Nick: Ice, frostbite.

Ariel: Yeah.

Nick: All these things are … yeah. So just observationally, if I look out my window, and there’s a running path near me, I see dramatically more people on a sunny, mild day than I do during the middle of the winter. That’s how most people get their exercise. A lot of people, we know from the public health literature, if they’re not going out for a walk or a stroll, they’re not really getting any physical activity at all.

Ariel: Okay. So potential good news.

Nick: A little bit. Just a little bit.

Fran: Yeah. Nick moved from San Diego to Boston, so I think he’s got a better appreciation of the benefits of warmer wintertime temperatures.

Nick: I do! Although, and this is an important limitation in that study, is we didn’t really, again, look at adaptation over time. And what I found moving to Boston was that I got used to the cold winters much faster than I thought I would coming from San Diego, and now do go running in the wintertime here, though I thought I would barely be able to go outside. So perhaps that’s a positive thing in terms of our ability to adapt on the hotter end as well, and perhaps that undercuts a little bit the degree to which warming during the winter might increase physical activity.

This is a broader and more general point. A lot of these studies — it’s pretty hard to look at long-term adaptation over time because some of the data sets that we have just don’t give us enough span of time to really see people adapt their behaviors within person. So, many of the studies are kind of estimating the direct effect of temperature, for example, on physical activity, and not estimating how much long-term warming has changed people’s physical activity patterns. There are some studies that do that with respect to some outcomes — for example, agricultural yields. But it’s less common to do that with some of the public health-related outcomes and psychological-related outcomes.

Ariel: I want to ask about some of these other studies you’ve done as well, but do you think starting these studies now will help us get more research into this in the future?

Nick: Yeah. I think the more and better data that we have, the better we’re going to be able to answer some of these questions. For example, in the physical activity paper, and also in a sleep paper we did, the data we used are just self-report data. So we’re able to get access to what are called actigraph data, or data that come from monitors like Fitbit and actually track people’s sleep and physical activity. We’re working on those follow-up studies, and the more data that we have, and the longer that we have those data, the more we can identify potential adaptation over time.

Ariel: The sleep study was actually where I was going to go next. It seemed nicely connected to the physical activity one. Basically we’ve been told for years to get eight hours of sleep and to try to set the temperatures in our rooms to be cooler so that our quality of sleep is better. But it seems that increasing temperatures from climate change might affect that. So I was hoping you could weigh in on that too.

Nick: Yeah. I think you said it pretty well. The results in that paper basically indicate that higher nighttime temperatures outside, higher ambient temperatures outside, increase the frequency that people report a bad night of sleep. Basically what we say is absent adaptation, climate change might worsen human sleep in the future.

Now, one of the primary ways you adapt, as you just mentioned, is by turning the AC on, keeping it cooler in the room in the summertime, and trying to fight the fact that it’s — as it was in San Diego — it’s 90 degrees and humid at 12:00 AM. The problem with that is that a lot of our electricity grid is currently still on carbon. Until we decarbonize the grid, if we’re using more air conditioning to make it cooler and make it comfortable in our rooms in the summers, we are emitting more carbon.

That points to something else that Fran and I have talked about and others are starting to work on: the idea that it’s not a one-way street. In other words, if the climate system is changing, and it’s changing our behaviors in order to adapt to it, or just frankly changing our behaviors, we are potentially altering the amount of carbon that we put back into the system, creating a positive feedback loop that’s driven by humans this time, as opposed to permafrost and things like that. So it’s a big, complex equation. And that makes estimating the social cost of carbon all the harder, because it’s no longer just this one-way street. If emitting carbon, through the behavioral effects of that carbon, causes the emission of more carbon, then you have a harder-to-estimate function.
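A toy version of that feedback makes the arithmetic concrete: if each ton emitted induces behavior (say, extra air conditioning) that emits a further fraction f of a ton, total emissions form a geometric series 1 + f + f² + … per direct ton. The 5% feedback fraction below is invented; real behavioral feedbacks are much harder to pin down, which is exactly Nick’s point.

```python
# Toy behavioral emissions feedback: each ton emitted induces a further
# fraction f of a ton, so total emissions are a geometric series.
# The feedback fraction is an invented illustration.

def total_emissions(direct_tons, feedback_fraction):
    """Closed form of direct * (1 + f + f^2 + ...), valid for 0 <= f < 1."""
    return direct_tons / (1 - feedback_fraction)

# With a 5% behavioral feedback, 100 direct tons ultimately put about
# 105.3 tons into the atmosphere.
print(round(total_emissions(100, 0.05), 1))  # 105.3
```

A small feedback only inflates the total modestly, but because the amplification is nonlinear in f, uncertainty about the feedback fraction translates into extra uncertainty in any damage estimate built on top of it.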

Fran: Yeah, you’re right, and it is hard. I often get questions of like, “Oh, is this in the social cost of carbon? Is this?” And usually the answer is no.

Ariel: Yeah. I guess I’ve got another one sort of like that. I mean, I think studies indicate pretty well right now that if you don’t get enough sleep, you’re not as productive at work, and that’s going to cost the economy as well. Is stuff like that also being considered or taken into account?

Fran: In general, I think researchers’ idea a few decades ago was very much that there was a very limited set of pathways by which a developed economy could be affected by climate. We could enumerate those, and they were things like agriculture, or forestry, or coastlines affected by sea level rise. The newer work that’s being done now, like Nick’s papers that we just talked about and a lot of other work, is showing that actually we seem to be very sensitive to temperature on a number of fronts, and that has quite pervasive economic effects.

Fran: And so, yeah, the sleep question is a huge one, right? If you don’t get a good night’s sleep, that affects how much you can learn in school the next day, it affects your productivity at work the next day. So we do see evidence that temperature affects labor productivity in developed countries, even in sectors that you’d think should be relatively well insulated, let’s say because the work is being done inside. There’s evidence too that high temperatures affect how well students can learn in school and their test scores. That has potentially a very long term effect on their educational trajectory in life and their ability to accumulate human capital and their earning potential in the future.

Fran: And so, these newer findings I think are suggesting that even developed economies are sensitive in ways that we’re only beginning to learn to climate change, and pretty much none of that is currently represented in our current estimates of the social cost of carbon.

Nick: Yeah, that’s a great point. And to add an example to that, I did a study last year in which I looked at government productivity, so government workers’ productivity. Because we had seen a number of these studies, as Fran mentioned, that private sector productivity was declining, and I was wondering if government workers that are tasked with overseeing our safety, especially in times of heat stress and other forms of stress, if those workers themselves were affected by heat stress and other forms of environmental stress.

We indeed found that they were, so we found that police officers were less likely to stop people in traffic stops even though there was an increased risk of traffic fatalities and also crime increases with higher temperatures as well. We found that food safety inspectors were less likely to do inspections. The probability of an inspection declined as the temperature increased, though the risk of violation conditional on an inspection happening increased. So it’s more likely that there’s a food safety problem when it’s hot out, but food safety inspectors were less likely to go out and do inspections.

That’s another thing that fits into, “Okay, we’re affected in really complex ways.” Maybe it’s the case that the food safety inspectors were less likely to go do their job because they were really tired because they didn’t sleep well the night before, or perhaps because they were grumpy because it was really hot outside. We don’t know exactly, but these systems are indeed really complicated and probably a lot of things are in play all at once.

Ariel: Another one that you have looked at that I think is also important to consider in this whole complex system that’s being impacted by climate change is democratic processes.

Nick: Yeah, yeah. I’m a political scientist by training, and what we political scientists do is think a lot about politics, the democratic process, voting, turnout, and one of the things that we know best in political science is this thing called retrospective voting or perhaps economic voting — basically the idea that people vote largely based on either how well they individually are doing, or how well they perceive their society is doing under the current incumbent. So in the US for example, if the economy is doing well the incumbent faces better prospects than if the economy is doing poorly. If individuals perceive that they are doing well, the incumbent faces better prospects.

I basically just sat down and thought for a while, and was like, you know, climate change across all these dimensions is likely to worsen both economic well-being, and also just personal, psychological, and physiological well-being. I wonder if it’s the case that it might somewhat disrupt the way that democracies function, and the way that elections function in democracies. For example, if you’re exposed to hotter temperatures there are lots of reasons to suspect that you might perceive yourself as being less well-off — and whoever’s in office, you might just be a little bit less likely to vote for them in the next election.

So I put together a bunch of election results from a variety of countries around the world, a variety of democratic institutions around the world, and looked at the effect of hotter temperatures on the incumbent politicians’ prospects in the upcoming elections: So, what were the effects of the temperatures prior to the election on the electoral success of that incumbent? And what I found was that as you had unusual increases in temperature the year prior to an election, and as those got hotter on the distribution — so hotter places — you saw that the incumbent prospects declined in that election. Incumbent politicians were more likely to get thrown out of office when temperatures were unusually warm, especially in hotter places.

And that, as a political scientist, is a little bit troubling because it could be two things. It could be the case that politicians are being thrown out of office because they don’t respond well to the stressors associated with added temperature. So, for example, if there was a heatwave and it caused some crop losses, maybe those politicians didn’t do a good enough job helping the people who lost those crops. But it also might just be the case that people are grumpier, and they’re not feeling as good, and there’s really no way the politician can respond, or the politician has limited resources and can only respond so much.

And if that’s the driving function then what you see is this exogenous shock leading to an ouster of a democratically elected politician, perhaps not directly related to the performance of that politician. And that can lead to added electoral churn: if you see increased rates of electoral churn where politicians are losing office with increasing frequency, it can shorten the electoral time horizons that politicians have. If they think that every election they stand a real good chance of losing office they may be less likely to pursue policies that have benefits over two or three election cycles. That was the crux of that paper.

Ariel: Fran, did you have anything you wanted to add to that?

Fran: I think it’s a really, really fascinating question. This is one of my favorites of Nick’s papers. These are really fundamental institutions, and when we go to the ballot box and cast our votes, there are a lot of factors that go into that, right? Even the very fact that you can pick up any kind of temperature signal on that is surprising to me, and I think it’s a really important finding. And then trying to pin down these mechanisms I think is interesting for trying to play out the scenarios of how climate change proceeds in terms of changing the political environment in which we’re operating, and having, like Nick said, these potentially long term effects on the types of issues politicians are willing to work on. It’s really important, and I think it’s something that needs more work.

Nick: Fran makes an excellent point embedded in there, which is the understanding of what we call causal mediation. In other words, if you see that hot temperatures lead to a reduction in GDP growth, why is that? What exactly is causing that? GDP growth is this huge aggregate of all of these different things. Why might temperature be causing that? Or even, for example, if you see that temperature is affecting people’s sleep quality, why is that the case? Is it because it’s influencing the degree to which people are stressed out during the day because they’re grumpier, they’re having more negative interactions, and then they’re thinking about that before they fall asleep? Is it due to purely physiological reasons, circadian rhythm and sleep cascades?

The short of it is, we don’t actually have very good answers to most of these questions for most of the climate impacts that we’ve looked at, and it’s pretty critical to have better answers, largely because if you want to adapt to coming climate changes, you’d like to spend your policy money on the things that are most important in those equations for reducing GDP growth or causing mental health outcomes or worsening people’s mood. You’d like to really be able to tell people precisely what they can do to adapt, and also spend money precisely where it’s needed, and it’s just strictly difficult science to be able to do that well.

Ariel: I want to actually go back real quick to something that you had said earlier, too: the idea that if politicians know that they’re unlikely to get elected during the next cycle, they’re also unlikely to plan long term. And especially when we’re looking at a situation like climate change, where we need politicians who can plan long term, can this actually exacerbate our short-term thinking?

Nick: Yeah. That’s what I was concerned about, and still something that I am concerned about. As you get more and more extremes that are occurring more and more regularly and politicians are either responding well or not responding well to those extremes it may be somewhat like our weather and expectations paper — similar underlying psychological dynamics — which is just that people become more and more focused on their recent past, and their recent experience in history, and what’s going on now.

And if that’s the case then if you’re a politician, and you’ve had a bunch of hurricanes, or you’re dealing with the aftermath of hurricanes in your district, really should you be spending your policy efforts on carbon mitigation, or should you be trying to make sure that all of your constituents right now are housed and fed? That’s a little bit of a false dichotomy there, but it isn’t fully a false dichotomy because politicians only have so many resources, and they only have so much time. So as their risk of losing election goes up due to something that is more immediate, politicians will tend to focus on those risks as opposed to longer-term risks.

Ariel: I feel like in that example, too, in defense of the politicians, if you actually have to deal with people who are without homes and without food, that is sort of the higher priority.

Nick: Totally. I mean, I did a bunch of field work in Sub-Saharan Africa for my graduate studies and spent a lot of time in Malawi and South Africa, and talking to politicians there about how they felt about climate change, and specifically climate change mitigation policy. And half the time that I asked them they just looked at me as if I was crazy, and would explicitly say, like, “You must be crazy if you think that we have a time horizon that gives us 20 years to worry about how our people are doing 20 years from now when they can’t feed themselves, and don’t have running water, and don’t have electricity right now. We’re working on the day to day things, the long term perspective just gets thrown out the window.” I think to a lesser degree that operates in every democratic polity.

Fran: This gets back to that question that we were talking about earlier: Are extreme events kind of fundamentally different in motivating action to reduce emissions? And this is exactly the reason why I’m not convinced that it’s the case, in that when you have the repeated extreme events, yes, there’s a lot of focus on rebuilding or restoring or kind of recovering from those events — potentially to the detriment of longer-term, less immediate action that would affect the long-term probability of getting those events in the future, which is reducing emissions.

And so I think it’s a very complex, causal argument to make in the face of a hurricane or a catastrophe that you need to be reducing emissions to address that, right, and that’s why I’m not convinced that just getting more and more disasters is going to automatically lead to more action on climate change. I think it’s actually almost this kind of orthogonal process that generates the political will to do something about climate change.

Having these disasters and operating in this very resource-constrained world — that’s a world in which action on climate change might be less likely, right? Doing things that are quite costly involves a lot of political will and political leadership, and doing that in an environment where people are feeling vulnerable and exposed to natural disasters is, I think, actually going to be more difficult.

Nick: Yeah. So that’s an excellent point, Fran. I think you could see both things operating, which is I think you could see that people aren’t necessarily adapting their expectations to giant wildfires every single summer, that they realize that something is off and weird about that, but that they just simply can’t direct that attention to doing something about climate change because literally their house just burnt down. So they’re not going to be out in the streets lobbying their politicians as directly because they have more things to worry about. That is troubling to me, too.

Ariel: So that, I think, is a super, super important point, and now I have something new to worry about. It makes sense that the local communities that are being directly impacted by these horrific events have to deal with what’s just happened to them, but do we see an increase in external communities looking at what’s happening and saying, “Oh, we’ve got to stop this, and because we weren’t directly impacted we actually can do something?”

Nick: Anecdotally, somewhat yes. I mean, for example, if you look at the last couple of summers and the wildfire season, when there are big wildfire outbreaks the news media does a better than average job at linking that extreme weather to climate change, and starting to talk about climate change.

So if it is the case that people consume that news media and are now thinking about climate change more, that is good. And I think actually from some of the more recent surveys we’ve actually seen an uptick in awareness about climate change, worry about climate change, and willingness to list it as a top priority. So there are some positive trends on that front.

The bigger question is still an empirical one, though, which is what happens when you have 10 years of wildfires every summer. Maybe people are now not talking about it as much as they did in the very beginning.

Ariel: So I have two final questions for both of you. The first is: is there something that you think is really important for people to know or understand that we didn’t touch on?

Nick: I would say this, and this is maybe more extreme than Fran would say, but we are in really big trouble. We are in really, really big trouble. We are emitting more and faster than we were previously. We are probably dramatically underestimating the social cost of carbon because of all the reasons that we noted here and for many more, and the one thing that I kind of always tell people is don’t be lulled by the relatively banal feeling of your sleep getting disrupted, because if your sleep is disrupted it’s because everything is being disrupted, and it’s going to get worse.

We’ve not seen even a small fraction of the likely total cost of climate change, and so yeah, be worried, and ideally use that worry in a productive way to lobby your politicians to do something about it.

Fran: I would say we talked about the social cost of carbon and the way it’s used, and I think sometimes it does get criticized because we know there’s a lot of things that it doesn’t capture, like what Nick’s been talking about, but I also know that we’re very confident that it’s greater than zero at this point, and substantially greater than zero, right? So the question of, should it be 40 dollars a ton, or should it be 100 dollars a ton, or should it be higher than that, is frankly quite irrelevant when right now we’re really not putting any price on carbon, we’re not doing any kind of ambitious climate policy.

Sometimes I think people get bogged down in these arguments of, is it bad, or is it catastrophic, and frankly either way we should be doing something to reduce our emissions, and they shouldn’t be going up, they should be going down, and we should be doing more than we’re doing right now. And arguing about where we end that process, or when we end that process of reducing our emissions is really not a relevant discussion to be having right now because right now everyone can agree that we need to start the process.

And so I think not getting too hung up on should it be two degrees, should it be 1.5, but just really focused on let’s do more, and let’s do it now, and let’s start that, and see where that gets us, and once we start that process and can begin to learn from it, that’s going to take us a long way to being where we want to be. I think these questions of, “Why aren’t we doing more than we’re doing now?” are the most important and some of the most interesting around climate change right now.

Nick: Yeah. Let’s do everything we can to avoid four or five degrees Celsius, and we can quibble over 1.5 or two later. Totally agree.

Ariel: Okay. So I’m going to actually add a question. So we’ve got two more questions for real this time I think. What do we do? What do you suggest we do? What can a listener right now do to help?

Fran: Vote. Make climate change your priority when you’re thinking about candidates, when you’re engaged in the democratic process, and when you’re talking to your elected representative — reach out to them, and make sure they know that this is the priority for you. And I would also say talk to your friends and family, right? Like these scientists or economists talking about this, that’s not something that’s going to reach everyone, right, but reaching out to your network of people who value your opinion, or just talking about this, and making sure people realize this is a critical issue for our generation, and the decisions we take now are going to shape the future of the planet in very real ways, and collectively we do have agency to do something about it.

Nick: Yes. I second all of that. I think the key is that no one can convince your friends and family that climate change is a threat perhaps better than you, the listener, can. Certainly Fran and I are not going to be able to convince your friends, and that’s just the way that humans work. We trust those that we are close to. So if we want to get a collective movement to start doing something about carbon, it’s going to have to happen via the political process, and it’s also just going to have to happen in our social networks, by actually going out there and talking to people about it. So let’s do that.

Ariel: All right. So final question, now that we’ve gone through all these awful things that are going to happen: what gives you hope?

Fran: If we think about a world that solves this problem, that is a world that has come together to work on a truly global problem. The reason why we’ll solve this problem is because we recognize that we value the future, that we value people living in other countries, people around the world, and that we value nature and nonhuman life on the planet, and that we’ve taken steps to incorporate those values into how we organize our life.

When we think about that, that is a very big ask, right? We shouldn’t underestimate just how difficult this is to do, but we should also recognize that it’s going to be a really amazing world to live in. It’s going to provide a kind of foundation for all kinds of cooperation and collective action I think on other issues to build a better world.

Recognizing that that’s what we’re working towards, these are the values that we want to reflect in our society, and that is a really positive place to be, and a place that is worth working towards — that’s what’s giving me hope.

Nick: That’s a beautiful answer, Fran. I agree with that. It would be a great world to live in. The thing that I would say is giving me hope is actually if I had looked forward in 2010 and said, “Okay, where do I think that renewables are going to be? Where do I think that the electrification of vehicles is going to be?” I would have guessed that we would not be anywhere close to where we are right now on those fronts.

We are making much more progress on getting certain aspects of the economy and our lives decarbonized than I thought we would have been, even without any real carbon policy on those fronts. So that’s pretty hopeful for me. I think that as long as we can continue that trend we won’t have everything go poorly, but I also hesitate to hinge too much of our fate on the hope that technological advances from the past will continue at the same rate into the future. At the end of the day we probably really do need some policy, and we need to get together and engage in collective action to try and solve this problem. I hope that we can.

Ariel: I hope that we can, too. So Nick and Fran, thank you both so much for joining us today.

Nick: Thanks for having me.

Fran: Thanks so much for the interesting conversation.

Ariel: Yeah. I enjoyed this, thank you.

As always, if you’ve been enjoying the show, please take a moment to like it, share it, and follow us on your preferred podcast platform.

 

FLI Podcast: Why Ban Lethal Autonomous Weapons?

Why are we so concerned about lethal autonomous weapons? Ariel spoke to four experts –– one physician, one lawyer, and two human rights specialists –– all of whom offered their most powerful arguments on why the world needs to ensure that algorithms are never allowed to make the decision to take a life. The conversation was recorded at the United Nations Convention on Conventional Weapons, where a ban on lethal autonomous weapons was under discussion.

Dr. Emilia Javorsky is a physician, scientist, and Founder of Scientists Against Inhumane Weapons; Bonnie Docherty is Associate Director of Armed Conflict and Civilian Protection at Harvard Law School’s Human Rights Clinic and Senior Researcher at Human Rights Watch; Ray Acheson is Director of The Disarmament Program of the Women’s International League for Peace and Freedom; and Rasha Abdul Rahim is Deputy Director of Amnesty Tech at Amnesty International.

Topics discussed in this episode include:

  • The role of the medical community in banning other WMDs
  • The importance of banning LAWS before they’re developed
  • Potential human bias in LAWS
  • Potential police use of LAWS against civilians
  • International humanitarian law and the law of war
  • Meaningful human control

Once you’ve listened to the podcast, we want to know what you think: What is the most convincing reason in favor of a ban on lethal autonomous weapons? We’ve listed quite a few arguments in favor of a ban, in no particular order, for you to consider:

  • If the AI community can’t even agree that algorithms should not be allowed to make the decision to take a human life, then how can we find consensus on any of the other sticky ethical issues that AI raises?
  • If development of lethal AI weapons continues, then we will soon find ourselves in the midst of an AI arms race, which will lead to cheaper, deadlier, and more ubiquitous weapons. It’s much harder to ensure safety and legal standards in the middle of an arms race.
  • These weapons will be mass-produced, hacked, and fall onto the black market, where anyone will be able to access them.
  • These weapons will be easier to develop, access, and use, which could lead to a rise in destabilizing assassinations, ethnic cleansing, and greater global insecurity.
  • Taking humans further out of the loop will lower the barrier for entering into war.
  • Greater autonomy increases the likelihood that the weapons will be hacked, making it more difficult for military commanders to ensure control over their weapons.
  • Because of the low cost, these will be easy to mass-produce and stockpile, making AI weapons the newest form of Weapons of Mass Destruction.
  • Algorithms can target specific groups based on sensor data such as perceived age, gender, ethnicity, facial features, dress code, or even place of residence or worship.
  • Algorithms lack human morality and empathy, and therefore they cannot make humane context-based kill/don’t kill decisions.
  • By taking the human out of the loop, we fundamentally dehumanize warfare and obscure who is ultimately responsible and accountable for lethal force.
  • Many argue that these weapons are in violation of the Geneva Conventions, the Martens Clause, the International Covenant on Civil and Political Rights, etc. Given the disagreements about whether lethal autonomous weapons are covered by these pre-existing laws, a new ban would help clarify what are acceptable uses of AI with respect to lethal decisions — especially for the military — and what aren’t.
  • It’s unclear who, if anyone, could be held accountable and/or responsible if a lethal autonomous weapon causes unnecessary and/or unexpected harm.
  • Significant technical challenges exist which most researchers anticipate will take quite a while to solve, including: how to program reasoning and judgement with respect to international humanitarian law, how to distinguish between civilians and combatants, how to understand and respond to complex and unanticipated situations on the battlefield, how to verify and validate lethal autonomous weapons, how to understand external political context in chaotic battlefield situations.
  • Once the weapons are released, regaining control of them may be difficult if it’s discovered that a mistake has been made.
  • By their very nature, we can expect that lethal autonomous weapons will behave unpredictably, at least in some circumstances.
  • They will likely be more error-prone than conventional weapons.
  • They will likely exacerbate current human biases, putting innocent civilians at greater risk of being accidentally targeted.
  • Current psychological research suggests that keeping a “human in the loop” may not be as effective as many hope, given human tendencies to be over-reliant on machines, especially in emergency situations.
  • In addition to military uses, lethal autonomous weapons will likely be used for policing and border control, again putting innocent civilians at greater risk of being targeted.

So which of these arguments resonates most with you? Or do you have other reasons for feeling concern about lethal autonomous weapons? We want to know what you think! Please leave a response in the comments section below.

For more information, visit autonomousweapons.org.

FLI Podcast (Part 2): Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark

In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in arms control, working with the US government to renounce the development and possession of biological weapons and halt the use of Agent Orange and other herbicides in Vietnam. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.   

Part Two focuses on three major incidents in the history of biological weapons: the 1979 anthrax outbreak in Russia, the use of Agent Orange and other herbicides in Vietnam, and the Yellow Rain controversy in the early 80s. Dr. Meselson led the investigations into all three and solved some perplexing scientific mysteries along the way.

Topics discussed in this episode include:

  • The value of verification, regardless of the challenges
  • The 1979 Sverdlovsk anthrax outbreak
  • The use of “rainbow” herbicides during the Vietnam War, including Agent Orange
  • The Yellow Rain Controversy

Publications and resources discussed in this episode include:

  • “The Sverdlovsk anthrax outbreak of 1979,” Matthew Meselson, Jeanne Guillemin, Martin Hugh-Jones, Alexander Langmuir, Ilona Popova, Alexis Shelokov, and Olga Yampolskaya, Science, 18 November 1994, Vol. 266, pp. 1202-1208.
  • “Preliminary Report: Herbicide Assessment Commission of the American Association for the Advancement of Science,” Matthew Meselson, A. H. Westing, J. D. Constable, and Robert E. Cook, 30 December 1970, private circulation, 8 pp. Reprinted in Congressional Record, U.S. Senate, Vol. 118, part 6, 3 March 1972, pp. 6806-6807.
  • “Background Material Relevant to Presentations at the 1970 Annual Meeting of the AAAS,” Herbicide Assessment Commission of the AAAS, with A. H. Westing and J. D. Constable, December 1970, private circulation, 48 pp. Reprinted in Congressional Record, U.S. Senate, Vol. 118, part 6, 3 March 1972, pp. 6807-6813.
  • “The Yellow Rain Affair: Lessons from a Discredited Allegation,” with Julian Perry Robinson, in Terrorism, War, or Disease?, eds. A. L. Clunan, P. R. Lavoy, and S. B. Martin, Stanford University Press, Stanford, California, 2008, pp. 72-96.
  • “Yellow Rain,” Thomas D. Seeley, Joan W. Nowicke, Matthew Meselson, Jeanne Guillemin, and Pongthep Akratanakul, Scientific American, September 1985, Vol. 253, pp. 128-137.

Click here for Part 1: From DNA to Banning Biological Weapons with Matthew Meselson and Max Tegmark

Four-ship formation on a defoliation spray run. (U.S. Air Force photo)

Ariel: Hi everyone. Ariel Conn here with the Future of Life Institute. And I would like to welcome you to part two of our two-part FLI podcast with special guest Matthew Meselson and special guest/co-host Max Tegmark. You don’t need to have listened to the first episode to follow along with this one, but I do recommend listening to the other episode, as you’ll get to learn about Matthew’s experiment with Franklin Stahl that helped prove Watson and Crick’s theory of DNA and the work he did that directly led to US support for a biological weapons ban. In that episode, Matthew and Max also talk about the value of experiment and theory in science, as well as how to get some of the world’s worst weapons banned. But now, let’s get on with this episode and hear more about some of the verification work that Matthew did over the years to help determine if biological weapons were being used or developed illegally, and the work he did that led to the prohibition of Agent Orange.

Matthew, I’d like to ask about a couple of projects that you were involved in that I think are really closely connected to issues of verification, and those are the Yellow Rain Affair and the Russian Anthrax incident. Could you talk a little bit about what each of those was?

Matthew: Okay, well in 1979, there was a big epidemic of anthrax in the Soviet city of Sverdlovsk, just east of the Ural mountains, in the beginning of Siberia. We learned about this epidemic not immediately but eventually, through refugees and other sources, and the question was, “What caused it?” Anthrax can occur naturally. It’s commonly a disease of bovids, that is cows or sheep, and when they die of anthrax, the carcass is loaded with the anthrax bacteria, and when the bacteria see oxygen, they become tough spores, which can last in the earth for a long, long time. And then if another bovid comes along and manages to eat something that’s got those spores, he might get anthrax and die, and the meat from these animals who died of anthrax, if eaten, can cause gastrointestinal anthrax, and that can be lethal. So, that’s one form of anthrax. You get it by eating.

Now, another form of anthrax is inhalation anthrax. In this country, there were a few cases of men who worked in leather factories with leather that had come from anthrax-affected animals, usually imported, which had live anthrax spores on the leather that got into the air of the shops where people were working with the leather. Men would breathe this contaminated air and the infection in that case was through the lungs.

The question here was, what kind of anthrax was this: inhalational or gastrointestinal? And because I was by this time known as an expert on biological weapons, the man who was dealing with this issue at the CIA in Langley, Virginia — a wonderful man named Julian Hoptman, a microbiologist by training — asked me if I’d come down and work on this problem at the CIA. He had two daughters who were away at college, and so he had a spare bedroom, so I actually lived with Julian and his wife. And in this way, I was able to talk to Julian night and day, both at the breakfast and dinner table, but also in the office. Of course, we didn’t talk about classified things except in the office.

Now, we knew from the textbooks that the incubation period for inhalation anthrax was thought to be four, five, six, seven days; Between the time you inhale it, four, five days later, if you hadn’t yet come down with it, you probably wouldn’t. Well, we knew from classified sources that people were dying of this anthrax over a period of six weeks, April all the way into the middle of May 1979. So, if the incubation period was really that short, you couldn’t explain how that would be airborne because a cloud goes by right away. Once it’s gone, you can’t inhale it anymore. So that made the conclusion that it was airborne difficult to reach. You could still say, well maybe it got stirred up again by people cleaning up the site, maybe the incubation period is longer than we thought, but there was a problem there.

And so the conclusion of our working group was that it was probable that it was airborne. In the CIA, at that time at least, in a conclusion that goes forward to the president, you couldn’t just say, “Well maybe, sort of like, kind of like, maybe if …” Words like that just didn’t work, because the poor president couldn’t make heads nor tails. Every conclusion had to be called “possible,” “probable,” or “confirmed.” Three levels of confidence.

So, the conclusion here was that it was probable that it was inhalation, and not ingestion. The Soviets said that it was bad meat, but I wasn’t convinced, mainly because of this incubation period thing. So I decided that the best thing to do would be to go and look. Then you might find out what it really was. Maybe by examining the survivors or maybe by talking to people — just somehow, if you got over there, with some kind of good luck, you could figure out what it was. I had no very clear idea, but when I would meet any high level Soviet, I’d say, “Could I come over there and bring some colleagues and we would try to investigate?”

The first time that happened was with a very high-level Soviet who I met in Geneva, Switzerland. He was a member of what’s called the Military Industrial Commission in the Soviet Union. They decided on all technical issues involving the military, and that would have included their biological weapons establishments, and we knew that they had a big biological laboratory in the city of Sverdlovsk, there was no doubt about that. So, I told them, “I want to go in and inspect. I’ll bring some friends. We’d like to look.” And he said, “No problem. Write to me.”

So, I wrote to him, and I also went to the CIA and said, “Look, I got to have a map because maybe they’d let me go there and take me to the wrong place, and I wouldn’t know it’s the wrong place, and I wouldn’t learn anything. So, the CIA gave me a map — which turned out to be wrong, by the way — but then I got a letter back from this gentleman saying no, actually they couldn’t let us go because of the shooting down of the Korean jet #007, if any of you remember that. A Russian fighter plane shot down a Korean jet — a lot of passengers on it and they all got killed. Relations were tense. So, that didn’t happen.

Then the second time, an American and the Russian Minister of Health got a Nobel prize. The winner over there was the minister of health named Chazov, and the fellow over here was Bernie Lown in our medical school, who I knew. So, I asked Bernie to take a letter when he went next time to see his friend Chazov in Moscow, to ask him if he could please arrange that I could take a team to Sverdlovsk, to go investigate on site. And when Bernie came back from Moscow, I asked him and he said, “Yeah. Chazov says it’s okay, you can go.” So, I sent a telex — we didn’t have email — to Chazov saying, “Here’s the team. We want to go. When can we go?” So, we got back a telex saying, “Well, actually, I’ve sent my right-hand guy who’s in charge of international relations to Sverdlovsk, and he looked around, and there’s really no evidence left. You’d be wasting your time,” which means no, right? So, I telexed back and said, “Well, scientists always make friends and something good always comes from that. We’d like to go to Sverdlovsk anyway,” and I never heard back. And then, the Soviet Union collapses, and we have Yeltsin now, and it’s the Russian Republic.

It turns out that a group of — I guess at that time they were still Soviets — Soviet biologists came to visit our Fort Detrick, and they were the guests of our Academy of Sciences. So, there was a welcoming party, and I was on the welcoming party, and I was assigned to take care of one particular one, a man named Mr. Yablokov. So, we got to know each other a little bit, and at that time we went to eat crabs in a Baltimore restaurant, and I told him I was very interested in this epidemic in Sverdlovsk, and I guess he took note of that. He went back to Russia and that was that. Later, I read in a journal that the CIA produced, abstracts from the Russian literature press, that Yeltsin had ordered his minister, or his assistant for Environment and Health, to investigate the anthrax epidemic back in 1979, and the guy who he appointed to do this investigation for him was my Mr. Yablokov, who I knew.

So, I sent a telex to Mr. Yablokov saying, “I see that President Yeltsin has asked for you to look into this old epidemic and decide what really happened, and that’s great, I’m glad he did that, and I’d like to come and help you. Could I come and help you?” So, I got back a telex saying, “Well, it’s a long time ago. You can’t bring skeletons out of the closet, and anyway, you’d have to know somebody there.” Basically it was a letter that said no. But then my friend Alex Rich of Cambridge Massachusetts, a great molecular biologist and X-ray crystallographer at MIT, had a party for a visiting Russian. Who is the visiting Russian but a guy named Sverdlov, like Sverdlovsk, and he’s staying with Alex. And Alex’s wife came over to me and said, “Well, he’s a very nice guy. He’d been staying with us for several days. I make him breakfast and lunch. I make the bed. Maybe you could take him for a while.”

So we took him into our house for a while, and I told him that I had been given a turn down by Mr. Yablokov, and this guy whose name is Sverdlov, which is an immense coincidence, said, “Oh, I know Yablokov very well. He’s a pal. I’ll talk to him. I’ll get it fixed so you can go.” Now, I get a letter. In this letter, handwritten by Mr. Yablokov, he said, “Of course, you can go, but you’ve got to know somebody there to invite you.” Oh, who would I know there?

Well, there had been an American Physicist, a solid-state physicist named Ellis who was there on a United States National Academy of Sciences–Russian Academy of Sciences Exchange Agreement doing solid-state physics with a Russian solid-state physicist there in Sverdlovsk. So, I called Don Ellis and I asked him, “That guy who you cooperated with in Sverdlovsk — whose name was Gubanov — I need someone to invite me to go to Sverdlovsk, and you probably still maintain contact with him over there in Sverdlovsk, and you could ask him to invite me.” And Don said, “I don’t have to do that. He’s visiting me today. I’ll just hand him the telephone.”

So, Mr. Gubanov comes on the telephone and he says, “Of course I’ll invite you, my wife and I have always been interested in that epidemic.” So, a few days later, I get a telex from the rector of the university there in Sverdlovsk, who was a mathematical physicist. And he says, “The city is yours. Come on. We’ll give you every assistance you want.” So we went, and I formed a little team, which included a pathologist, thinking maybe we’ll get ahold of some information of autopsies that could decide whether it was inhalation or gastrointestinal. And we need someone who speaks Russian; I had a friend who was a virologist who spoke Russian. And we need a guy who knows a lot about anthrax, and veterinarians know a lot about anthrax, so I got a veterinarian. And we need an anthropologist who knows a lot about how to work with people and that happened to be my wife, Jeanne Guillemin.

So, we all go over there, we were assigned a solid-state physicist, a guy named Borisov, to take us everywhere. He knew how to fix everything. Cars that wouldn’t work, and also the KGB. He was a genius, and became a good friend. It turns out that he had a girlfriend, and she, by this time, had been elected to be a member of the Duma. In other words, she’s a congresswoman. She’s from Sverdlovsk. She had been a friend of Yeltsin. She had written Yeltsin a letter, which my friend Borisov knew about, and I have a photocopy of the letter. What it says is, “Dear Boris Nikolayevich,”that’s Yeltsin, “My constituents here at Sverdlovsk want to know if that anthrax epidemic was caused by a government activity or not. Because if it was, the families of those who died — they’re entitled to double pension money, just like soldiers killed in war.” So, Yeltsin writes back, “We will look into it.” And that’s why my friend Yablokov got asked to look into it. It was decided eventually that it was the result of government activity — by Yeltsin, he decided that — and so he had to have a list of the people who were going to get the extra pensions. Because otherwise everybody would say, “I’d like to have an extra pension.” So there had to be a list.

So she had this list with 68 names of the people who had died of anthrax during this time period in 1979. The list also had the address where they lived. So,now my wife, Jeanne Guillemin, Professor of Anthropology at Boston College, goes door-to-door — with two Russian women who were professors at the university and who knew English so they could communicate with Jeanne — knocks on the doors: “We would like to talk to you for a little while. We’re studying health, we’re studying the anthrax epidemic of 1979. We’re from the university.”

Everybody let them in except one lady who said she wasn’t dressed, so she couldn’t let anybody in. So in all the other cases, they did an interview and there were lots of questions. Did the person who died have TB? Was that person a smoker? One of the questions was where did that person work, and did they work in the day or the night? We asked that question because we wanted to make a map. If it had been inhalation anthrax, it had to be windborne, and depending on the wind, it might have been blown in a straight line if the wind was of a more or less unchanging direction.

If, on the other hand, it was gastrointestinal, people get bad meat from black market sellers all over the place, and the map of where they were wouldn’t show anything important, they’d just be all over the place. So, we were able to make a map when we got back home, we went back there a second time to get more interviews done, and Jeanne went back a third time to get even more interviews done. So, finally we had interviews with families of nearly all of those 68 people, and so we had 68 map locations: where they lived, and where they worked, and whether it was day or night. Nearly all of them were daytime workers.

When we plotted where they lived, they lived all over the southern part of the city of Sverdlovsk. When we plotted where they were likely would have been in the daytime, they all fell in to one narrow zone with one point at the military biological lab. The lab was inside the city. The other point was at the city limit: The last case was at the edge of the city limit, the southern part. We also had meteorological information, which I had brought with me from the United States. We knew the wind direction every three hours, and there was only one day when the wind was constantly blowing in the same direction, and that same direction was exactly the direction along which the people who died of anthrax lived.

Well, bad meat does not blow around in straight lines. Clouds of anthrax spores do. It was rigorous: We could conclude from this, with no doubt whatsoever, that it had been airborne, and we published this in Science magazine. It was really a classic of epidemiology, you couldn’t ask for anything better. Also, the autopsy records were inspected by the pathologist along with our trip, and he concluded from the autopsy specimens that it was inhalation. So, there was that evidence, too, and that was published in the PNAS. So, that really ended the mystery. The Soviet explanation was just wrong, and the CIA explanation, which was only probable: it was confirmed.

Max: Amazing detective story.

Matthew: I liked going out in the field, using whatever science I knew to try and deal with questions of importance to arms control, especially chemical and biological weapons arms control. And that happened to me on three occasions, one I just told you. There were two others.

Ariel: So, actually real quick before you get into that. I just want to mention that we will share or link to that paper and the map. Because I’ve seen the map that shows that straight line, and it is really amazing, thank you.

Matthew: Oh good.

Max: I think at the meta level this is also a wonderful example of what you mentioned earlier there, Matthew, about verification. It’s very hard to hide big programs because it’s so easy for some little thing to go wrong or not as planned and then something like this comes out.

Matthew: Exactly. By the way, that’s why having a verification provision in the treaty is worth it even if you never inspect. Let’s say that the guys who are deciding whether or not to do something which is against the treaty, they’re in a room and they’re deciding whether or not to do it. Okay? Now it is prohibited by a treaty that provides for verification. Now they’re trying to make this decision and one guy says, “Let’s do it. They’ll never see it. They’ll never know it.” Another guy says, “Well, there is a provision for verification. They may ask for a challenge inspection.” So, even the remote possibility that, “We might get caught,” might be enough to make that meeting decide, “Let’s not do it.” If it’s not something that’s really essential, then there is a potential big price.

If, on the other hand, there’s not even a treaty that allows the possibility of a challenge inspection, if the guy says, “Well, they might find it,” the other guy is going to say, “How are they going to find it? There’s no provision for them going there. We can just say, if they say, ‘I want to go there,’ we say, ‘We don’t have a treaty for that. Let’s make a treaty, then we can go to your place, too.’” It makes a difference: Even a provision that’s never used is worth having. I’m not saying it’s perfection, but it’s worth having. Anyway, let’s go on to one of these other things. Where do you want me to go?

Ariel: I’d really love to talk about the Agent Orange work that you did. So, I guess if you could start with the Agent Orange research and the other rainbow herbicides research that you were involved in. And then I think it would be nice to follow that up with, sort of another type of verification example, of the Yellow Rain Affair.

Matthew: Okay. The American Association for the Advancement of Science, the biggest organization of science in the United States, became, as the Vietnam War was going on, more and more concerned that the spraying of herbicides in Vietnam might cause ecological or health harm. And so at successive national meetings, there were resolutions to have it looked into. And as a result of one of those resolutions, the AAAS asked a fellow named Fred Tschirley to look into it. Fred was at the Department of Agriculture, but he was one of the people who developed the military use of herbicides. He did a study, and he concluded that there was no great harm. Possibly to the mangrove forest, but even then they would regenerate.

But at the next annual meeting, there was more appealing on the part of the membership, and now they wanted the AAAS to do its own investigation, and the compromise was they’d do their own study to design an investigation, and they had to have someone to lead that. So, they asked a fellow named John Cantlon, who was provost of Michigan State University, would he do it, and he said yes. And after a couple of weeks, John Cantlon said, “I can’t do this. I’m being pestered by the left and the right and the opponents on all sides and it’s just, I can’t do it. It’s too political.”

So, then they asked me if I would do it. Well, I decided I’d do it. The reason was that I wanted to see the war. Here I’d been very interested in chemical and biological weapons; very interested in war, because that’s the place where chemical and biological weapons come into play. If you don’t know anything about war, you don’t know what you’re talking about. I taught a course at Harvard for over two years on war, but that wasn’t like being there. So, I said I’d do it.

I formed a little group to do it. A guy named Arthur Westing, who had actually worked with herbicides and who was a forester himself and had been in the army in Korea, and I think had a battlefield promotion to captain. Just the right combination of talents. Then we had a chemistry graduate student, a wonderful guy named Bob Baughman. So, to design a study, I decided I couldn’t do it sitting here in Cambridge, Massachusetts. I’d have to go to Vietnam and do a pilot study in order to design a real study. So, we went to Vietnam — by the way, via Paris, because I wanted to meet the Vietcong people, I wanted them to give me a little card we could carry in our boots that would say, if we were captured, “We’re innocent scientists, don’t imprison us.” And we did get such little cards that said that. We were never captured by the Vietcong, but we did have some little cards.

Anyway, we went to Vietnam and we found, to my surprise, that the military assistance command, that is the United States Military in Vietnam, very much wanted to help our investigation. They gave us our own helicopter. That is, they assigned a helicopter and a pilot to me. And anywhere we wanted to go, I’d just call a certain number the night before and then go to Tan Son Nhut Air Base, and there would be a helicopter waiting with a pilot instructed FAD — fly as directed.

So, one of the things we did was to fly over a valley on which herbicides had been sprayed to kill the rice. John Constable, the medical member of our team, and I did two flights of that so we could take a lot of pictures. And the man who had designed this mission, a chemical corps captain named Captain Franz, had designed the mission and requested it and gotten permission through a series of review processes that it was really an enemy crop production area, not an area of indigenous Montagnard people growing food for their own eating, but rather enemy soldiers growing it for themselves.

So we took a lot of pictures and as we flew, Colonel Franz said, “See down there, there are no houses. There’s no civilian population. It’s just military down there. Also, the rice is being grown on terraces on the hillsides. The Montagnard people don’t do that. They just grow it down in the valley. They don’t practice terracing. And also, the extent of the rice fields down there — that’s all brand new. Fields a few years ago were much, much smaller in area. So, that’s how we know that it’s an enemy crop production area.” And he was a very nice man, and we believed him. And then we got home, and we had our films developed.

Well, we had very good cameras and although you couldn’t see from the aircraft, you could certainly see in the film: The valley was loaded with little grass shacks with yellow roofs — meaning that they were built recently, because you have to replace the roofs every once in a while with straw and if it gets too old, it turns black, but if there’s yellow, it means that somebody is living in those. And there were hundreds and hundreds of them.

We got from the Food and Agriculture Organization in Rome how much rice you need to stay alive for one year, and what area in hectares of dry rice — because this isn’t patty rice, it’s dry rice — you’d need to make that much rice, and we measured the area that was under cultivation from our photographs, and the area was just enough to support that entire population, if we assumed that there were five people who needed to be fed in every one of the houses that we counted.

Also, we could get from the French aerial photography that they had done in the late 1940s, and it turns out that the rice fields had not expanded. They were exactly the same. So it wasn’t that the military had moved in and made bigger rice fields: They were the same. So, everything that Colonel Franz said was just wrong. I’m sure he believed it, but it was wrong.

So, we made great big color enlargements of our photographs — we took photographs all up and down this valley, 15 kilometers long — and we made one set for Ambassador Bunker; one copy for General Abrams — Creighton Abrams was the head of our military assistance command; and one set for Secretary of State Rogers; along with a letter saying that this one case that we saw may not be typical, but in this one case, this crop destruction program was achieving the opposite of what it intended. It was denying food to the civilian population and not to the enemy. It was completely mistaken. So, as a result, I think, of that, but I have no proof, only the time connection, but right after that in early November — we’d sent the stuff in early November — Ambassador Bunker and General Abrams ordered a new review of the crop destruction program. Was it in response to our photographs and our letter? I don’t know, but I think it was.

The result of that review was a recommendation by Ambassador Bunker and General Abrams to stop the herbicide program immediately. They sent this recommendation back in a top secret telegram to Washington. Well, the top-secret telegram fell into the hands of the Washington Post, and they published it. Well, now here are the Ambassador and the General on the spot, saying to stop doing something in Vietnam. How on earth can anybody back in Washington gainsay them? Of course, President Nixon had to stop it right away. There’d be no grounds. How could he say, “Well, my guys here in Washington, in spite of what the people on the spot say, tell us we should continue this program.”

So that very day, he announced that the United States would stop all herbicide operations in Vietnam in a rapid and orderly manner. That very day happened to be the day that I, John Constable, and Art Westing were on the stage at the annual meeting in Chicago of the AAAS, reporting on our trip to Vietnam. And the president of AAAS ran up to me to tell me this news, because it just came in while I was talking, giving our report. So, that’s how it got stopped, and thanks to General Abrams.

By the way, the last day I was in Vietnam, General Abrams had just come back from Japan — he’d had an operation for gallbladder, and he was still convalescing. We spent all morning talking with each other. And he asked me at one point, “What about the military utility of the herbicides?” And of course, I said I had no idea what it was, or not. And he said, “Do you want to know what I think?” I said, “Yes, sir.” He said, “I think it’s shit.” I said, “Well, why are we doing it here?” He said, “You don’t understand anything about this war, young man. I do what I’m ordered to do from Washington. It’s Washington who tells me to use this stuff, and I have to use it because if I didn’t have those 55-gallon drums of herbicides offloaded on the decks at Da Nang and Saigon, then they’d make walls. I couldn’t offload the stuff I need over those walls. So, I do let the chemical corps use this stuff.” He said, “Also, my son, who is a captain up in I Corps, agrees with me about that.”

I wrote something about this recently, which I sent to you, Ariel. I want to be sure my memory was right about the conversation with General Abrams — who, by the way, was a magnificent man. He is the man who broke through at the Battle of the Bulge in World War II. He’s the man about whom General Patton, the great tank general, said, “There’s only one tank officer greater than me, and it’s Abrams.”

Max: Is he the one after whom the Abrams tank is named?

Matthew: Yes, it was named after him. Yes. He had four sons, they all became generals, and I think three of them became four-stars. One of them who did become a four-star is still alive in Washington. He has a consulting company. I called him up and I said, “Am I right, is this what your dad thought and what you thought back then?” He said, “Hell, yes. It’s worse than that.” Anyway, that’s what stopped the herbicides. They may have stopped anyway. It was dwindling down, no question. Now the question of whether dioxin and herbicides have caused too many health effects, I just don’t know. There’s an immense literature about this and it’s nothing I can say we ever studied. If I read all the literature, maybe I’d have an opinion.

I do know that dioxin is very poisonous, and there’s a prelude to this order from President Nixon to stop the use of all herbicides. That’s what caused the United States to stop the use of Agent Orange specifically. That happened first, before I went to Vietnam. That happened for a funny reason. A Harvard student, a Vietnamese boy, came to my office one day with a stack of newspapers from Saigon in Vietnamese. I couldn’t read them, of course, but they all had pictures of deformed babies, and this student claimed that this was because of Agent Orange, that the newspaper said it was because of Agent Orange.

Well, deformed babies are born all the time and I appreciated this coming from him, but there’s nothing I could do about it. But then I got from a graduate student here — Bill Haseltine, now become a very wealthy man — he had a girlfriend and she was working for Ralph Nader one summer, and she somehow got a purloined copy of a study that had been ordered by the NIH of the possible keratogenic, mutagenic, and carcinogenic effects of common herbicides, pesticides, and fungicides.

This company, called the Bionetics company, had this huge contract that tests all these different compounds, and they concluded from this that there was only one of these chemicals that did anything that might be dangerous for people. That was 2,4,5-T, trichlorophenoxyacetic acid. Well, that’s what Agent Orange is made out of. So, I had this report that had not yet been released to the public saying that this could cause birth defects in humans if it did the same thing as it did in guinea pigs and mice. I thought, the White House better know about this. That’s pretty explosive: claims in the newspapers in Saigon and scientific suggestions that this stuff might cause birth defects.

So, I decided to go down to Washington and see President Nixon’s science advisor. That was Lee DuBridge, physicist. Lee DuBridge had been the president of Caltech when I was a graduate student there and so he knew me, and I knew him. So, I went down to Washington with some friends, and I think one of the friends was Arthur Galston from Yale. He was a scientist who worked on herbicides, not on the phenoxyacetic herbicides but other herbicides. So we went down to see the President’s science advisor, and I showed them these newspapers and showed him the Bionetics report. He hadn’t seen it, it was at too low a level of government for him to see it and it had not yet been released to the public. Then he did something amazing, Lee DuBridge: He picked up the phone and he called David Packard, who was the number two at the Defense Department. Right then and there, without consulting anybody else, without asking the permission of the President, they canceled Agent Orange.

Max: Wow.

Matthew: That was the end Agent Orange. Now, not exactly the end. I got a phone call from Lee DuBridge a couple of days later when I was back at Harvard. He says, “Matt, the DuPont people have come to me. It’s not Agent Orange itself, it’s an impurity in Agent Orange called dioxin, and they know that dioxin is very toxic, and the Agent Orange that they make has very little dioxin in it because they know it’s bad and they make the stuff at low temperature, when dioxin is a by-product, that’s made in very small amount. These other companies like Diamond Shamrock and other companies, Monsanto, who make Agent Orange for the military, it must be their Agent Orange. It’s not our Agent Orange.

So, in other words the question was, we just use the Dow Agent Orange — maybe that’s safe. But the question is does the Dow Agent Orange cause defects in mice? So, a whole new series of experiments were done with Agent Orange containing much less dioxin in it. It still made birth defects. So, since it still made birth defects in one species of rodent, you could hardly say, “Well, it’s okay then for humans.” So, that really locked it, closed it down, and then even the Department of Agriculture prohibited the use in the United States, except on land that would have been unlikely to get into the human food chain. So, that ended the use of Agent Orange.

That had happened already before we went to Vietnam. They were then using only Agent White and Agent Blue, two other herbicides, but Agent Orange had been knocked out ahead of time. But that was the end of the whole herbicide program. It was two things: the dioxin concern, on the one hand, stopping Agent Orange, and the decision of President Nixon; and militarily Bunker and Abrams had said, “It’s no use, we want to get it stopped, it’s doing more harm than good. It’s getting the civilian population against us.”

Max: One reaction I have to these fascinating stories is how amazing it is that back in those days politicians really trusted scientists. You could go down to Washington, there would be a science advisor. You know, we even didn’t have a presidential science advisor for a while now during this administration. Do you feel that the climate has changed somehow in the way politicians view scientists?

Matthew: Well, I don’t have a big broad view of the whole thing. I just get the impression, like you do, that there are more politicians who don’t pay attention to science than there used to be. There are still some, but not as many, and not in the White House.

Max: I would say we shouldn’t particularly just point fingers at any particular administration, I think there has been a general downward trend for people’s respect for scientists overall. If you go back to when you were born, Matthew, and when I was born, I think generally people thought a lot more highly about scientists contributing very valuable things to society and they were very interested in them. I think right now there are much more people who can name — If you ask the average person how many famous movie stars can they name, or how many billionaires can they name, versus how many Nobel laureates can they name, the answer is going to be kind of different from the way it was a long time ago. It’s very interesting to think about what we can do to more help people appreciate the things that they do care about, like living longer and having technology and so on, are things that they, to a large extent, owe to science. It isn’t just the nerdy stuff that isn’t relevant to them.

Matthew: Well, I think movie stars were always at the top of the list. Way ahead of Nobel Prize winners and even of billionaires, but you’re certainly right.

Max: The second thing that really strikes me, which you did so wonderfully there, is that you never antagonized the politicians and the military, but rather went to them in a very constructive spirit and said, “Look, here are the options.” And based on the evidence, they came to your conclusion.

Matthew: That’s right. Except for the people who actually were doing these programs — that was different, you couldn’t very well tell them that. But for everybody else, yes, it was a help. You need to offer help, not hindrance.

The last thing was the Yellow Rain. That, too, involved the CIA. I was contacted by the CIA. They had become aware of reports from Southeast Asia, particularly from Thailand: Hmong tribespeople who had been living in Laos were coming out of Laos across the Mekong into Thailand, and telling stories of being poisoned by stuff dropped from airplanes. Stuff that they called kemi or yellow rain.

At first, I thought maybe there was something to this; there are some nasty chemicals that are yellow. Not that lethal, but who knows, maybe there was exaggeration in their stories. One of them is called adamsite; it’s yellow, it’s an arsenical. So we decided we’d have a conference, because there was a mystery: what is this yellow rain? We had a conference. We invited people from the intelligence community, from the State Department. We invited anthropologists. We invited a bunch of people to ask, what is this yellow rain?

By this time, we knew that the samples that had been turned in contained pollen. One reason we knew that was that the British had samples of this yellow rain and they had shown that it contained pollen. They had looked at the samples of the yellow rain brought in by the Hmong tribespeople — given to British officers, or maybe Americans, I don’t know, but it found its way into the hands of British intelligence — who brought these samples back to Porton, where they were examined in various ways, but also under the microscope. And the fellow who looked at them under the microscope happened to be a beekeeper. He knew just what pollen grains look like. And he knew that there was pollen, and then they sent this information to the United States, and we looked at the samples of yellow rain we had, and all these yellow samples contained pollen.

The question was, what is it? It’s got pollen in it. Maybe it’s very poisonous. The Montagnard people say it falls from the sky. It lands on leaves and on rocks. The spots were about two millimeters in diameter. It’s yellow or brown or red, different colors. What is it? So, we had this meeting in Cambridge, and one of the people there, Peter Ashton, is a great botanist, his specialty is the trees of Southeast Asia and in particular the great dipterocarp trees, which are like the oaks in our part of the world. And he was interested in the fertilization of these dipterocarps, and the fertilization is done by bees. They collect pollen, though, like other bees.

And so the hypothesis we came to at the end of this day-long meeting was that maybe this stuff is poisonous, and the bees get poisoned by it because it falls on everything, including flowers that have pollen, and the bees get sick, and these yellow spots, they’re the vomit of the bees. These bees are smaller individually than the yellow spots, but maybe several bees get together and vomit on the same spot. Really a crazy idea. Nevertheless, it was the best idea we could come up with that explained why something could be toxic but have pollen in it. It could be little drops, associated with bees, and so on.

A couple of days later, both Peter Ashton, the botanist, and I, noticed on the backs of our cars on the windshields, the rear windshields, yellow spots loaded with pollen. These were being dropped by bees,  these were the natural droppings of bees, and that gave us the idea that maybe there was nothing poisonous in this stuff. Maybe it was the natural droppings of bees that the people in the villages thought was poisonous, but that wasn’t. So, we decided we better go to Thailand and find out what’s happening.

So, a great bee biologist named Thomas Seeley, who’s now at Cornell — he was at Yale at that time — and I flew over to Thailand, and went up into the forest to see if bees defecate in showers. Now why did we do that? It’s because friends here said, “Matt, this can’t be the source of the yellow rain that the Hmong people complained about, because bees defecate one by one. They don’t go out in a great armada of bees and defecate all at once. Each bee goes out and defecates by itself. So, you can’t explain the showers — they’d only get tiny little driblets, and the Hmong people say they’re real showers, with lots of drops falling all at once.”

So, Tom Seeley and I went to Thailand, where they also have this kind of bee, and it turns out that there the bees defecate all at once, unlike the bees here. Now, they do defecate in showers here too, but they’re small showers. That’s because the number of bees in a nest here is rather small, but they do come out on the first warm days of spring, when there’s pollen and nectar to be harvested; those showers are kind of small. Besides that, the reason that there are showers at all even in New England is that the bees are synchronized by winter. Winter forces them to stay in their nest all winter long, during which they’re eating the stored-up pollen and getting very constipated. Then, when they fly out, they all fly out, they’re all constipated, and so you get a big shower. Not as big as the natives in Southeast Asia reported, but still a shower.

But in southeast Asia, there are no seasons. Too near the equator. So, there’s nothing that would synchronize the defecation of bees, and that’s why we had to go to Thailand to see if — even though there’s no winter to synchronize their defecation flights — if they nevertheless do go out in huge numbers and all at once.

So, we’re in Thailand and we go up into the Khao Yai National Park and find places where there are clearings in the forest where you could see up into the sky — where, if there were bees defecating, their feces would fall to the ground and not get caught up in the trees. And we put down big pieces of white paper, one meter square, anchored them with rocks, went walking around in the forest some more, and came back to look at our pieces of white paper every once in a while.

And then suddenly we saw a large number of spots on the paper, which meant that they had defecated all at once. They weren’t going around defecating one by one. There were great showers then. There’s still a question: why don’t they go out one by one? And there are some good ideas why; I won’t drag you into that. It’s the convoy principle: to avoid getting picked off one by one by birds. That’s why people think that they go out in great armadas of constipated bees.

So, this gave us a new hypothesis. The so-called yellow rain is all a mistake. It’s just bees defecating, which people confuse and think is poisonous. Now, that still doesn’t prove that there wasn’t a poison. What was the evidence for poison? The evidence was that the Defense Intelligence Agency was sending samples of this yellow rain, and also samples of human blood and other materials, to a laboratory in Minnesota that knew how to analyze for the particular toxins that the Defense establishment thought were the poison: a family of toxins called trichothecene mycotoxins. And this lab reported positive findings in the samples from Thailand but not in controls. So that seemed to be real proof that there was poison.

Well, this lab is a lab that also produced trichothecene mycotoxins, and the way they analyzed for them was by mass spectroscopy. Everybody knows that if you’re going to do mass spectroscopy, you’re going to be able to detect very, very, very tiny amounts of stuff, and so you shouldn’t both make large quantities and try to detect small quantities in the same room, because there’s the possibility of cross contamination. I have an internal report from the Defense Intelligence Agency saying that that laboratory did have numerous false positives, and that probably all of their results were bedeviled by contamination from the trichothecenes that were in the lab, and also because there may have been some false readings of the mass spec diagrams.

The long and short of it is that when other laboratories tried to find trichothecenes in their samples, no lab could confirm it: the US Army looked at at least 80 samples and found nothing. The British looked at at least 60 samples and found nothing. The Swedes looked at some number of samples, I don’t know the number, but found nothing. The French looked at a very few samples at their military analytical lab, and found nothing. There was one lab at Rutgers that thought it could confirm it, but I believe that they were suffering from contamination too, because they were a lab that also worked with trichothecenes.

So, the long and short of it is that the chemical evidence was no good, and finally the ambassador there, Ambassador Dean, decided that we should have another look, and that the military should send out a team that was properly equipped to check up on these stories, because up until then there was no dedicated team. There were teams that would come up briefly, listen to the refugees’ stories, collect samples, and go back. So Ambassador Dean requested a team that would stay there. So out comes a team from Washington, and it stays there longer than a year. Not just a week, but longer than a year. And they tried to locate again the Hmong people who had told these stories in the refugee camps.

They couldn’t find a single one who would tell the same story twice. Either because they weren’t telling the same story twice, or because the interpreter interpreted the same story differently; whatever it was. Then they did something else. They tried to find people who were in the same location at the same time as the claimed attacks, and those people never confirmed an attack. They could never find any confirmation by interrogation of people.

Then also, there was a CIA unit out there in that theater questioning captured prisoners of war and also people who had surrendered from the North Vietnamese Army: the people who were presumably behind the use of this toxic stuff. They interrogated hundreds of people, and one of these interrogators wrote an article in an intelligence agency journal, an open journal, saying that he doubted that there was anything to the yellow rain: they had interrogated so many people, including chemical corps people from the North Vietnamese Army, that he couldn’t believe that there really was anything going on.

So we did some more investigating of various kinds: not just going to Thailand, but doing some analysis of various things. We looked at the samples and found bee hairs in them. We found that the pollen in the samples of the alleged poison had no protein inside. You can stain pollen grains with something called Coomassie brilliant blue, and the pollen grains in the samples handed in by the refugees, given to us by the army, the Canadians, and the Australians, didn’t stain blue. Why not? Because when a pollen grain passes through the gut of a bee, the bee digests out all of the good protein that’s inside the pollen grain, as its nutrition.

So, you’d have to believe that the Soviets were collecting pollen, not from plants, which is hard enough, but pollen that had been regurgitated by bees. Well, that’s insane. You could never get enough to be a weapon by collecting bee vomit. So the whole story collapsed, and we’ve written a longer account of this. The United States government has never said we were right, but a few years ago it said that maybe it was wrong. So that’s at least something.

So in one case we were right, and the Soviets were wrong. In another case, the Soviets were right, and we were wrong. And in the third case, the herbicides, nobody was right or wrong. In my view, by the way, it was useless militarily. I’ll tell you why.

If you spray the deep forest, hoping to find a military installation that you can now see because there are no more leaves, it takes four or five weeks for the leaves to fall off. So, you might as well drop little courtesy cards that say, “Dear enemy. We have now sprayed where you are with herbicide. In four or five weeks we will see you. You may choose to stay there, in which case, we will shoot you. Or, you have four or five weeks to move somewhere else, in which case, we won’t be able to find you. You decide.” Well, come on, what kind of a brain came up with that?

The other use was along roadsides, for convoys to be safer from snipers who might be hidden in the woods. You knock the leaves off the trees and you can see deeper into the woods. That’s right, but you have to realize the fundamental law of physics, which is that if you can see from A to B, B can see back to A, right? If there’s a clear light path from one point to another, there’s a clear light path in the other direction.

Now think about it. You are a sniper in the woods, and the leaves have not been sprayed. They grow right up to the edge of the forest, and a convoy is coming down the road. You can stick your head out a little bit, but not for very long. They have long-range weapons; when they’re right opposite you, they have huge firepower. If you’re anywhere nearby, you could get killed.

Now, if we get rid of all the leaves, now I can stand way back into the forest, and still sight you between the trunks. Now, that’s a different matter. A very slight move on my part determines how far up the road and down the road I can see. By just a slight movement of my eye and my gun, I can start putting you under fire a couple kilometers up the road — you won’t even know where it’s coming from. And I can keep you under fire a few kilometers down the road, when you pass me by. And you don’t know where I am anymore. I’m not right up by the roadside, because the leaves would otherwise keep me from seeing anything. I’m back in there somewhere. You can pour all kinds of fire, but you might not hit me.

So, for all these reasons, the leaves are not the enemy. The leaves are the enemy of the enemy. Not of us. We’d like to get rid of the trunks — that’s different, we do that with bulldozers. But getting rid of the leaves leaves a kind of terrain which is advantageous to the enemy, not to us. So, on all these grounds, my hunch is that by embittering the civilian population — and after all, our whole strategy was to win hearts and minds — by wiping out their crops with drifting herbicide, the herbicides helped us lose the war, not win it. We didn’t win it. But it helped us lose it.

But anyway, the herbicides got stopped in two steps. First Agent Orange, because of dioxin and the report from the Bionetics Company, and second because Abrams and Bunker said, “Stop it.” We now have a treaty, by the way, the ENMOD treaty, that makes it illegal under international law to do any kind of large-scale environmental modification as a weapon of war. So, that’s about everything I know.

And I should add: you might say, how could they interpret something that’s common in that region as a poison? Well, in China, in 1970, I believe it was, the same sort of thing happened, but the situation was very different. People believed that yellow spots were falling from the sky, that they were fallout from nuclear weapons tests being conducted by the Soviet Union, and that they were poisonous.

Well, the Chinese government asked a geologist from a nearby university to go investigate, and he figured out — completely out of touch with us; he had never heard of us, we had never heard of him — that it was bee feces that were being misinterpreted by the villagers as fallout from nuclear weapons tests done by the Russians.

It was exactly the same situation, except that in this case there was no reason whatsoever to believe that there was anything toxic there. And why was it that people didn’t recognize bee droppings for what they were? After all, there’s lots of bees out there. There are lots of bees here, too. And if in April, or near that part of spring, you look at the rear windshield of your car, if you’ve been out in the countryside or even here in midtown, you will see lots of these spots, and that’s what those spots are.

When I was trying to find out what kinds of pollen were in the samples of the yellow rain — the so-called yellow rain — that we had, I went down to Washington. The greatest United States expert on pollen grains and where they come from was at the Smithsonian Institution, a woman named Joan Nowicke. I told her that bees make spots like this all the time and she said, “Nonsense. I never see it.” I said, “Where do you park your car?” Well, there’s a big parking lot by the Smithsonian; we went down there, and her rear windshield was covered with these things. We see them all the time. They’re part of what we see but take no account of.

Here at Harvard there’s a funny story about that. One of our best scientists here, Ed Wilson, studies ants — but also bees — but mostly ants. But he knows a lot about bees. Well, he has an office in the museum building, and lots of people come to visit the museum at Harvard, a great museum, and there’s a parking lot for them. Now, in those days there was a graduate student who had bee nests up on top of the museum building. He was doing some experiments with bees. But these bees defecate, of course. And some of the nice people who come to see the Harvard museum park their cars there, some of them very nice new cars, and they come back out from seeing the museum and there’s this stuff on their windshields. So, they go to find out who it is that they can blame for this, and maybe do something about it, or get someone to pay to have it fixed, or I don’t know what — anyway, to make a complaint. So, they come to Ed Wilson’s office.

Well, this graduate student is a graduate student of Ed Wilson, who of course knows that he’s got bee nests up there, and so Ed Wilson’s secretary knows what this stuff is. And the graduate student has the job of taking a rag with alcohol on it and going down and gently wiping the bee feces off of the windshields of these distressed drivers, so there’s never any harm done. But now, when I had some of this stuff that I’d collected in Thailand, I took two people to lunch at the faculty club here at Harvard, and brought some leaves with these spots on them under a plastic petri dish, just to see if they would know.

Now, one of these guys, Carroll Williams, knew all about insects, lots of things about insects; and Wilson, of course. We were having lunch and I brought out this petri dish with the leaves covered with yellow spots and asked them, two professors who are great experts on insects, what the stuff was, and they hadn’t the vaguest idea. They didn’t know. So, there can be things around us that we see every day, and even if we’re experts, we don’t know what they are. We don’t notice them. They’re just part of the environment. I’m sure that these Hmong people were getting shot at, they were getting napalmed, they were getting everything else, but they were not getting poisoned. At least not by bee feces. It was all a big mistake.

Max: Thank you so much, both for this fascinating conversation and for all the amazing things you’ve done to keep science a force for good in the world.

Ariel: Yes. This has been a really, really great and informative discussion, and I have loved learning about the work that you’ve done, Matthew. So, Matthew and Max, thank you so much for joining the podcast.

Max: Well, thank you.

Matthew: I enjoyed it. I’m sure I enjoyed it more than you did.

Ariel: No, this was great. It’s truly been an honor getting to talk with you.

If you’ve enjoyed this interview, let us know! Please like it, share it, or even leave a good review. I’ll be back again next month with more interviews with experts.  

 

FLI Podcast (Part 1): From DNA to Banning Biological Weapons With Matthew Meselson and Max Tegmark

In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in arms control, working with the US government to renounce the development and possession of biological weapons and halt the use of Agent Orange and other herbicides in Vietnam. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.  

In Part One, Dr. Meselson describes how he designed the experiment that helped prove Watson and Crick’s hypothesis, and he explains why this type of research is uniquely valuable to the scientific community. He also recounts his introduction to biological weapons, his reasons for opposing them, and the efforts he undertook to get them banned. Dr. Meselson was a key force behind the U.S. ratification of the Geneva Protocol, a 1925 treaty banning biological warfare, as well as the conception and implementation of the Biological Weapons Convention, the international treaty that bans biological and toxin weapons.

Topics discussed in this episode include:

  • Watson and Crick’s double helix hypothesis
  • The value of theoretical vs. experimental science
  • Biological weapons and the U.S. biological weapons program
  • The Biological Weapons Convention
  • The value of verification
  • Future considerations for biotechnology

Click here for Part 2: Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark

Ariel: Hi everyone and welcome to the FLI podcast. I’m your host, Ariel Conn with the Future of Life Institute, and I am super psyched to present a very special two-part podcast this month. Joining me as both a guest and something of a co-host is FLI president and MIT physicist Max Tegmark. And he’s joining me for these two episodes because we’re both very excited and honored to be speaking with Dr. Matthew Meselson. Matthew not only helped prove Watson and Crick’s hypothesis about the structure and replication of DNA in the 1950s, but he was also instrumental in getting the U.S. to ratify the Geneva Protocol, in getting the U.S. to halt its Agent Orange program, and in the creation of the Biological Weapons Convention. He is currently Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University where, among other things, he studies the role of sexual reproduction in evolution. Matthew and Max, thank you so much for joining us today.

Matthew: A pleasure.

Max: Pleasure.

Ariel: Matthew, you’ve done so much and I want to make sure we can cover everything, so let’s just dive right in. And maybe let’s start first with your work on DNA.

Matthew: Well, let’s start with my being a graduate student at Caltech.

Ariel: Okay.

Matthew: I had been a freshman at Caltech, but I didn’t like it. The teaching at that time was by rote, except for one course, which was Linus Pauling’s course, General Chemistry. I took that course and I did a little research project for Linus, but I decided to go to graduate school, much later, at the University of Chicago, because there was a program there called Mathematical Biophysics. In those days, before the structure of DNA was known, what could a young man do who liked chemistry and physics but wanted to find out how you could put together the atoms of the periodic chart and make something that’s alive?

There was a unit there called Mathematical Biophysics and the head of it was a man with a great big black beard, and that all seemed very attractive to a kid. So, I decided to go there but because of my freshman year at Caltech I got to know Linus’ daughter, Linda Pauling, and she invited me to a swimming pool party at their house in Sierra Madre. So, I’m in the water. It’s a beautiful sunny day in California, and the world’s greatest chemist comes out wearing a tie and a vest and looks down at me in the water like some kind of insect and says, “Well, Matt, what are you going to do next summer?”

I looked up and I said, “I’m going to the University of Chicago, to Nicolas Rashevsky” (that’s the man with the black beard). And Linus looked down at me and said, “But Matt, that’s a lot of baloney. Why don’t you come be my graduate student?” So, I looked up and said, “Okay.” That’s how I got into graduate school. I started out in X-ray crystallography, a project that Linus gave me to do. One day, Jacques Monod from the Institut Pasteur in Paris came to give a lecture at Caltech, and the question then was about the enzyme beta-galactosidase, a very important enzyme, because studies of the induction of that enzyme led to the hypothesis of messenger RNA and also to how genes are turned on and off. A very important protein, used for those purposes.

The question of Monod’s lecture was: is this protein already lurking inside of cells in some inactive form? And when you add the chemical that makes it be produced, which is lactose (or something like lactose), you just put a little finishing touch on the protein that’s lurking inside the cells and this gives you the impression that the addition of lactose (or something like lactose) induces the appearance of the enzyme itself. Or the alternative was maybe the addition to the growing medium of lactose (or something like lactose) causes de novo production, a synthesis of the new protein, the enzyme beta-galactosidase. So, he had to choose between these two hypotheses. And he proposed an experiment for doing it — I won’t go into detail — which was absolutely horrible and would certainly not have worked, even though Jacques was a very great biologist.

I had been taking Linus’ course on the nature of the chemical bond, and one of the key take-home problems was: calculate the ratio of the strength of the deuterium bond to the hydrogen bond. I found out that you could do that in one line, based on what’s called the quantum mechanical zero-point energy. That impressed me so much that I got interested in what else deuterium might have about it that would be interesting. Deuterium is heavy hydrogen, with a neutron in the nucleus. So, I thought: what would happen if you exchanged the water in something alive with deuterium? And I read that there was a man who had tried to do that with a mouse, but that didn’t work. The mouse died. Maybe because the water wasn’t pure, I don’t know.
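(For the curious reader, that one-line calculation can be sketched in a simple harmonic-oscillator picture; this reconstruction is the editor’s, not Meselson’s actual solution. The measured bond dissociation energy is the well depth minus the zero-point energy, and the vibrational frequency scales as the inverse square root of the reduced mass, which for an X–H versus X–D bond with heavy X roughly doubles from H to D:)

```latex
% Editor's sketch, not Meselson's solution.
% D_0 = D_e - (1/2) hbar*omega, with omega proportional to 1/sqrt(mu):
\[
  \omega_{\mathrm{D}} = \omega_{\mathrm{H}}\sqrt{\mu_{\mathrm{H}}/\mu_{\mathrm{D}}}
  \approx \omega_{\mathrm{H}}/\sqrt{2}
  \quad\Longrightarrow\quad
  \frac{D_0^{\mathrm{X\text{–}D}}}{D_0^{\mathrm{X\text{–}H}}}
  = \frac{D_e - \tfrac{1}{2}\hbar\omega_{\mathrm{H}}/\sqrt{2}}
         {D_e - \tfrac{1}{2}\hbar\omega_{\mathrm{H}}} > 1 .
\]
```

(The deuterium bond comes out slightly stronger because its lower vibrational frequency means less zero-point energy is subtracted from the well depth.)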

But I had found a paper showing that you could grow bacteria, Escherichia coli, in pure heavy water, with other nutrients added but no light water. So, I knew that you could probably make DNA, or also beta-galactosidase, a little heavier by having it be made out of heavy hydrogen rather than light. There are some intermediate details here, but at some point I decided to go see the famous biophysicist Max Delbrück. I was in the Chemistry Department and Max was in the Biology Department.

And there was, at that time, a certain — I would say not a barrier, but a three-foot fence between these two departments. Chemists looked down on the biologists because they worked just with squiggly, gooey things. Then the physicists naturally looked down on the chemists, and the mathematicians looked down on the physicists. At least that was the impression of us graduate students. So, I was somewhat fearful to go meet Max Delbrück, and he also had a fearsome reputation for not tolerating any kind of nonsense. But finally I went to see him — he was a lovely man, actually — and the first thing he said when I sat down was, “What do you think about these two new papers of Watson and Crick?” I said I’d never heard of them. Well, he jumped out of his chair and grabbed a heap of reprints that Jim Watson had sent to him, and threw them all at me, and yelled at me, and said, “Read these and don’t come back until you’ve read them.”

Well, I heard the words “come back.” So I read the papers and I went back, and he explained to me that there was a problem with the hypothesis that Jim and Francis had for DNA replication. Their idea was that the two strands come apart by unwinding the double helix. If that meant that you had to unwind the entire parent double helix along its whole length, the viscous drag would have been impossible to deal with. You couldn’t drive it with any kind of reasonable biological motor.

So Max thought that you don’t actually unwind the whole thing: you make breaks, and then with little pieces you can unwind those and then seal them up. This gives you a kind of dispersive replication in which each of the two daughter molecules has some pieces of the parent molecule but no complete strand from the parent molecule. Well, when he told me that, I almost immediately — I think it was almost immediately — realized that density separation would be a way to find out, because the Watson–Crick hypothesis predicted the finding of half-heavy DNA after one generation: that is, one old strand together with one new strand, forming one new duplex of DNA.

So I went to Linus Pauling and said, “I’d like to do that experiment,” and he gently said, “Finish your X-ray crystallography.” So, I didn’t do that experiment then. Instead I went to Woods Hole to be a teaching assistant in the Physiology course with Jim Watson. Jim had been living at Caltech that year in the faculty club, the Athenaeum, and so had I, so I had gotten to know Jim pretty well then. So there I was at Woods Hole, and I was not really a teaching assistant — I was actually doing an experiment that Jim wanted me to do — but I was meeting with the instructors.

One day we were on the second floor of the Lillie building and Jim looked out the window and pointed down across the street. Sitting on the grass was a fellow, and Jim said, “That guy thinks he’s pretty smart. His name is Frank Stahl. Let’s give him a really tough experiment to do all by himself: the Hershey–Chase experiment.” Well, I knew what that experiment was, and I didn’t think you could do it in one day, let alone single-handedly. So I went downstairs to tell this poor Frank Stahl that they were going to give him a tough assignment.

I told him about that, and I asked him what he was doing. He was doing something very interesting with bacteriophages. He asked me what I was doing, and I told him that I was thinking of finding out whether DNA replicates semi-conservatively, the way Watson and Crick said it should, by a method that would have something to do with density measurements in a centrifuge. I had no clear idea how to do that: just something about growing cells in heavy water, then switching them to light water, and seeing what kind of DNA molecules they made, in a density gradient in a centrifuge. Frank made some good suggestions, and we decided to do this together at Caltech, because he was coming to Caltech himself to be a postdoc that very next September.

Anyway, to make a long story short, we made the experiment work, and we published it in 1958. That experiment said that DNA is made up of two subunits, and when it replicates, the subunits come apart and each one becomes associated with a new subunit. Now, anybody in his right mind would have said, “By subunit you really mean a single polynucleotide chain. Isn’t that what you mean?” And we would have answered at that time, “Yes, of course, that’s what we mean, but we don’t want to say that, because our experiment doesn’t say that. Our experiment says that some kind of subunits do that — the subunits almost certainly are the single polynucleotide chains — but we want to confine our written paper to only what can be deduced from the experiment itself, and not go one inch beyond that.” It was later that a fellow named John Cairns proved that the subunits were really the single polynucleotide chains of DNA.
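(The logic of that experiment can be sketched as a toy model. This is an illustrative sketch only; the labels and function names are the editor’s, not from the 1958 paper.)

```python
# Toy model of the Meselson–Stahl logic (illustrative; names are the editor's).
# A duplex is a pair of strand labels: 'H' = heavy (grown on heavy medium), 'L' = light.

def replicate_semiconservative(pool):
    """Each duplex separates; each old strand pairs with a newly made light strand."""
    return [(old, 'L') for duplex in pool for old in duplex]

def density(duplex):
    """Band position in the density gradient implied by a duplex's composition."""
    kinds = {('H', 'H'): 'heavy', ('L', 'L'): 'light'}
    return kinds.get(tuple(sorted(duplex)), 'half-heavy')

pool = [('H', 'H')]                      # fully heavy parental DNA
gen1 = replicate_semiconservative(pool)  # after one generation
gen2 = replicate_semiconservative(gen1)  # after two generations

print([density(d) for d in gen1])        # ['half-heavy', 'half-heavy']
print(sorted(density(d) for d in gen2))  # ['half-heavy', 'half-heavy', 'light', 'light']
```

(After one generation every duplex bands at the intermediate, half-heavy density, which is what the 1958 experiment showed; a conservative scheme would instead give separate heavy and light bands at generation one.)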

Ariel: So just to clarify, those were the strands of DNA that Watson and Crick had predicted, is that correct?

Matthew: Yes, it’s the result that they would have predicted, exactly so. We did a bunch of other experiments at Caltech, some on mutagenesis and other things, but this experiment, I would say, had a big psychological value. Maybe its psychological value was more than anything else.

The year was 1954, the year after Watson and Crick had published the structure of DNA, and they presented their speculations as to its biological meaning at Woods Hole. Jim was there and Francis was there. I was there, as I mentioned. Rosalind Franklin was there. Sydney Brenner was there. It was very interesting, because a good number of people there didn’t believe their structure for DNA, or that it had anything to do with life and genes, on the grounds that it was too simple, and life had to be very complicated. And the other group of people thought it was too simple to be wrong.

So there were two views. Everyone agreed that the structure they had proposed was a simple one; some people thought simplicity meant truth, and others thought that in biology, truth had to be complicated. What I’m trying to get at here is that after the structure was published, it was just a hypothesis. It hadn’t been proven by any method such as crystallography; it wasn’t until much later that crystallography and a certain other kind of experiment actually proved that the Watson and Crick structure was right. At that time, it was a proposal based on model building.

So why was our experiment, the experiment showing semi-conservative replication, of psychological value? It was because this was the first time you could actually see something: namely, bands in an ultracentrifuge gradient. So I think the effect of our experiment in 1958 was to give the DNA structure proposal a certain reality. Jim, in his book The Double Helix, actually says that he was greatly relieved when that came along. I’m sure he believed the structure was right all the time, but this certainly was a big leap forward in convincing people.

Ariel: I’d like to pull Max into this just a little bit and then we’ll get back to your story. But I’m really interested in this idea of the psychological value of science. Sort of very, very broadly, do you think a lot of experiments actually come down to more psychological value, or was your experiment unique in that way? I thought that was just a really interesting idea. And I think it would be interesting to hear both of your thoughts on this.

Matthew: Max, where are you?

Max: Oh, I’m just fascinated by what you’ve been telling us about here. In the sciences we see again and again that experiment without theory, or theory without experiment, is nowhere near as powerful as the two together. Because when a really radical new idea is put forth, half the time people will dismiss it and say, “Oh, that’s obviously wrong,” or whatnot, and only when the experiment comes along do people start taking it seriously, and vice versa. Sometimes theoretical ideas are just widely held as truths, like Aristotle’s ideas about the laws of motion, until somebody much later decides to put them to the experimental test.

Matthew: That’s right. In fact, Sir Arthur Eddington is famous for two things. He was one of the first ones to find experimental proof of the accuracy of Einstein’s theory of general relativity, and the other thing for which Eddington was famous was having said, “No experiment should be believed until supported by theory.”

Max: Yeah. Theorists and experiments have had this love-hate relationship throughout the ages, which I think, in the end, has been a very fruitful relationship.

Matthew: Yeah. In cosmology the amazing thing to me is that the experiments now cost billions or at least hundreds of millions of dollars. And that this is one area, maybe the only one, in which politicians are willing to spend a lot of money for something that’s so beautiful and theoretical and far off and scientifically fundamental as cosmology.

Max: Yeah. Cosmology is also a reminder again of the importance of experiment, because the big questions there — such as where did everything come from, how big is our universe, and so on — those questions have been pondered by philosophers and deep thinkers for as long as people have walked the earth. But for most of those eons all you could do was speculate with your friends over some beer about this, and then you could go home, because there was no further progress to be made, right?

It was only more recently when experiments gave us humans better eyes: where with telescopes, et cetera, we could start to see things that our ancestors couldn’t see, and with this experimental knowledge actually start to answer a lot of these things. When I was a grad student, we argued about whether our universe was 10 billion years old or 20 billion years old. Now we argue about whether it’s 13.7 or 13.8 billion years old. You know why? Experiment.

Matthew: And now is a more exciting time than any previous time, I think, because we’re beginning to talk about things like multiple universes and entanglement, things that are just astonishing and really almost foreign to the way that we’re able to think: that there are other universes, or that there could be what’s called quantum mechanical entanglement, where things influence each other very far apart, so far apart that light could not travel between them in any reasonable time, but by a completely weird process, which Einstein called spooky action at a distance. Anyway, this is an incredibly exciting time, about which I know nothing except from podcasts and programs like this one.

Max: Thank you for bringing this up, because I think the examples you gave right now are really, really linked to these breakthroughs in biology that you were telling us about, because I think we’ve been on this intellectual journey all along where we humans kept underestimating our ability to understand stuff. For the longest time, we didn’t even really try our best because we assumed it was futile. People used to think that the difference between a living bug and a dead bug was that there was some sort of secret sauce, that the living bug has some sort of life essence that couldn’t be studied with the tools of science. Then, once people started to take seriously that maybe the difference between the living bug and the dead bug is just that the mechanism is broken in one of them, and that you can study the mechanism, you get to the kind of experimental questions you were talking about. I think in the same way, people had previously shied away from asking questions, not just about life, but about the origin of our universe, for example, as hopelessly beyond anything we would ever be able to answer, so people didn’t ask what experiments they could make. They just gave up without even trying.

And then gradually I think people were emboldened by breakthroughs in, for example, biology, to say, “Hey, what about — let’s look at some of these other things where people said we’re hopeless, too?” Maybe even our universe obeys some laws that we can actually set out to study. So hopefully we’ll continue being emboldened, and stop being lazy, and actually work hard on asking all questions, and not just give up because we think they’re hopeless.

Matthew: I think the key to making this process begin was to abandon supernatural explanations of natural phenomena. So long as you believe in supernatural explanations, you can’t get anywhere, but as soon as you give them up and look around for some other kind of explanation, then you can begin to make progress. The amazing thing is that we, with our minds that evolved under conditions of hunter-gathering and even earlier than that — that these minds of ours are capable of doing such things as imagining general relativity or all of the other things.

So is there any limit to it? Is there going to be a point beyond which we will have to say we can’t really think about that, it’s too complicated? Yes, that will happen. But we will by then have built computers capable of thinking beyond. So in a sense, I think once supernatural thinking was given up, the path was open to essentially an infinity of discovery, possibly with the aid of advanced artificial intelligence later on, but still guided by humans. Or at least by a few humans.

Max: I think you hit the nail on the head there. Saying, “All this is supernatural,” has been used as an excuse to be lazy over and over again, even if you go further back, you know, hundreds of years ago. Many people looked at the moon, and they didn’t ask themselves why the moon doesn’t fall down like a normal rock because they said, “Oh, there’s something supernatural about it, earth stuff obeys earth laws, heaven stuff obeys heaven laws, which are just different. Heaven stuff doesn’t fall down.”

And then Newton came along and said, “Wait a minute. What if we just forget about the supernatural and, for a moment, explore the hypothesis that the stuff up there in the sky obeys the same laws of physics as the stuff on earth? Then there’s got to be a different explanation for why the moon doesn’t fall down.” And that’s exactly how he was led to his law of gravitation, which revolutionized things, of course. Again and again, it was the rejection of supernatural explanations that led people to work harder on understanding what life really is, and now we see some people falling into the same intellectual trap and saying, “Oh yeah, sure. Maybe life is mechanistic, but intelligence is somehow magical, or consciousness is somehow magical, so we shouldn’t study it.”

Now, artificial intelligence progress is really, again, driven by people willing to let go of that and say, “Hey, maybe intelligence is not supernatural. Maybe it’s all about information processing, and maybe we can study what kind of information processing is intelligent and maybe even conscious, as in having experiences.” There’s a lot to learn at this meta level from what you’re saying there, Matthew: if we resist excuses not to do the work, like saying, “Oh, it’s supernatural,” there’s often real progress we can make.

Ariel: I really hate to do this because I think this is such a great discussion, but in the interest of time, we should probably get back to the stories at Harvard, and then you two can discuss some of these issues — or others — a little more shortly in this interview. So yeah, let’s go back to Harvard.

Matthew: Okay, Harvard. So I came to Harvard. I thought I’d stay only five years; I thought it was kind of a duty for an American who’d grown up in the West to find out a little bit about what the East was like. But I never left. I’ve been here for 60 years. When I had been here for about three years, my friend Paul Doty, a chemist, no longer living, asked me if I’d like to go work at the United States Arms Control and Disarmament Agency in Washington, DC. He was on the general advisory board of that agency. It was embedded in the State Department building on 21st Street in Washington, but it was quite independent; it could report directly to the White House. It was the first year of its existence, and it was trying to find out what it should be doing.

And one of the ways it tried to find out what it should be doing was to hire six academics to come just for the summer. One of them was me, one of them was Freeman Dyson, the physicist, and there were four others. When I got there, they said, “Okay, you’re going to work on theater nuclear weapons arms control,” something I knew less than zero about. But I tried, and I read things and so on, and very famous people came to brief me — like Llewellyn Thompson, our ambassador to Moscow, and Paul Nitze, the deputy secretary of defense.

I realized that I knew nothing about this, and although scientists often have the arrogance to think they can say something useful about nearly anything if they think about it, here was something that so many people had already thought about. So I went to my boss and said, “Look, you’re wasting your time and your money. I don’t know anything about this. I’m not gonna produce anything useful. I’m a chemist and a biologist. Why don’t you have me look into the arms control of that stuff?” He said, “Yeah, you could do whatever you want. We had a guy who did that, and he got very depressed and he killed himself. You could have his desk.”

So I decided to look into chemical and biological weapons. In those days, the arms control agency was almost like a college. We all had to have very high security clearances, because Congress was worried that there might be leakers among the people doing this suspicious work in arms control, and therefore we had to hold the highest level of security clearance. This had, in a way, the unexpected effect that you could talk to your neighbor about anything. Ordinarily, you might not have clearance for what your neighbor in a different office, a different room, or at a different desk was doing, but all of us had such high security clearances that we could talk to each other about what we were doing. So it was like a college in that respect. It was a wonderful atmosphere.

Anyway, I decided I would focus just on biological weapons, because the two together would be too much for a summer. I went to the CIA, and a young man there showed me everything we knew about what other countries were doing with biological weapons, and the answer was that we knew very little. Then I went to Fort Detrick to see what we were doing with biological weapons, and I was given a tour by a quite good immunologist who had been a faculty member at the Harvard Medical School, whose name was Leroy Fothergill. We came to a big building, seven stories high. From a distance you would think it had windows, but up close you could see they were phony windows. I asked Dr. Fothergill, “What do we do in there?” He said, “Well, we have a big fermentor in there and we make anthrax.” I said, “Well, why do we do that?” He said, “Well, biological weapons are a lot cheaper than nuclear weapons. It will save us money.”

I don’t think it took me very long, certainly by the time I got back to my office in the State Department Building, to realize that hey, we don’t want devastating weapons of mass destruction to be really cheap and save us money. We would like them to be so expensive that no one can afford them but us, or maybe no one at all. Because in the hands of other people, it would be like their having nuclear weapons. It’s ridiculous to want a weapon of mass destruction that’s ultra-cheap.

So that dawned on me. My office mate was Freeman Dyson, and I talked with him a little bit about it and he encouraged me greatly to pursue this. The more I thought about it, two things motivated me very strongly. Not just the illogic of it. The illogic of it motivated me only in the respect that it made me realize that any reasonable person could be convinced of this. In other words, it wouldn’t be a hard job to get this thing stopped, because anybody who’s thoughtful would see the argument against it. But there were two other aspects. One, it was my science: biology. It’s hard to explain, but that my science would be perverted in that way. But there’s another aspect, and that is the difference between war and peace.

We’ve had wars and we’ve had peace. Germany fights Britain; Germany is aligned with Britain. Britain fights France; Britain is aligned with France. There’s war. There’s peace. There are things that go on during war that might advance knowledge a little bit, but certainly it’s during times of peace that the arts, the humanities, and science, too, make great progress. What if you couldn’t tell the difference, and it’s both war and peace all the time? By that I mean that war up until now has been very special. There are rules to it. Basically, it starts with hitting a guy so hard that he’s knocked out or killed. Then you pick up a stone and hit him with that. Then you make a spear and spear him with that. Then you make a bow and arrow and shoot him with that. Then later on, you make a gun and you shoot a bullet at him. Even a nuclear weapon: it’s all like hitting with an arm, and furthermore, when it stops, it’s stopped, and you know when it’s going on. It makes sounds. It makes blood. It makes a bang.

Now biological weapons could be responsible for a kind of war that’s totally surreptitious. You don’t even know what’s happening, or you know it’s happening but it’s always happening. They’re trying to degrade your crops. They’re trying to degrade your genetics. They’re trying to introduce nasty insects to you. In other words, it doesn’t have a beginning and an end. There’s no armistice. Today there’s another kind of weapon that has some of those attributes: cyber warfare. It might over time erase the distinction between war and peace. Now that really would be a threat to the advance of civilization: a permanent, science fiction-like, locked-in, war-like situation, never ending. Biological weapons have that potentiality.

So for those two reasons — my science, and it could erase the distinction between war and peace, could even change what it means to be human. Maybe you could change what the other guy’s like: change his genes somehow. Change his brain by maybe some complex signaling, who knows? Anyway, I felt a strong philosophical desire to get this thing stopped. Fortunately, I was in Harvard University, and so was Jack Kennedy. And although by that time he had been assassinated, he had left behind lots of people in the key cabinet offices who were Kennedy appointees. In particular, people who came from Harvard. So I could knock on almost any door.

So I went to Lyndon Johnson’s national security adviser, who had been Jack Kennedy’s national security adviser, and who had been the dean at Harvard who hired me, McGeorge Bundy, and said all these things I’ve just said. And he said, “Don’t worry, Matt, I’ll keep it out of the war plans.” I’ve never seen a war plan, but I guess if he said that, it was true. But that didn’t mean it wouldn’t keep on being developed.

Now here I should make an aside. Does that mean that the Army or the Navy or the Air Force wanted these things? No. We develop weapons in a kind of commercial way within the military. In this case, the Army Materiel Command works out all kinds of things: better artillery pieces, communication devices, and biological weapons. It doesn’t belong to any service. Then, if the laboratories develop what they think is a good biological weapon, they still have to get one of the services, the Air Force, Army, Navy, or Marines, to say, “Okay, we’d like that. We’ll buy some of that.”

There was always a problem here. Nobody wanted these things. The Air Force didn’t want them because you couldn’t calculate how many planes you needed to kill a certain number of people. You couldn’t calculate the human dose response, and beyond that you couldn’t calculate the dose that would reach the humans. There were too many unknowns. The Army didn’t like it, not only because they, too, wanted predictability, but because their soldiers would be there, maybe getting infected by the same bugs. Maybe there are vaccines and all that, but it also seemed dishonorable. The Navy didn’t want it because the one thing that ships have to be is clean. So oddly enough, biological weapons were kind of a stepchild.

Nevertheless, there was a dedicated group of people who really liked the idea and pushed hard on it. These were the people who were developing the biological weapons, and they had their friends in Congress, so they kept getting it funded. So I made a kind of a plan, like a protocol for doing an experiment, to get us to stop all this. How do you do that? Well, first you ask yourself: who can stop it? There’s only one person who can stop it. That’s the President of the United States.

The next thing is: what kind of advice is he going to get, because he may want to do something, but if all the advice he gets is against it, it takes a strong personality to go against the advice you’re getting. Also, word might get out, if it turned out you made a mistake, that they told you all along it was a bad idea and you went ahead anyway. That makes you a super fool. So the answer there is: well, you go to talk to the Secretary of Defense, and the Secretary of State, and the head of the CIA, and all of the senior people, and their people who are just below them.

Then what about the people who were working on the biological weapons? You have to talk to them, but not so much privately, because they really are dedicated. There were some people who were caught up in this and really didn’t want to be doing it, but there were other people who were really pushing it, and it wasn’t possible, really, to tell them to quit their jobs and get out of it. But what you could do is talk with them in public, and by knowing more than they knew about their own subject, which meant studying up a lot, show that they were wrong.

So I literally crammed, trying to understand everything there was to know about aerobiology, diffusion of clouds, pathogenicity, history of biological weapons, the whole bit, so that I could sound more knowledgeable. I know that’s a sort of slightly underhanded way to win an argument, but it’s a way of convincing the public that the guys who are doing this aren’t so wise. And then you have to get public support.

I had a pal here who told me I had to go down to Washington and meet a guy named Howard Simons, who was the managing editor of the Washington Post. He had been a science journalist at The Post, and that’s why some scientists up here at Harvard knew him. So I went down there and told him, “I want to get newspaper articles all over the country about the problem of biological weapons.” He took out a big yellow pad and wrote down about 30 names. He said, “These are the science journalists at the San Francisco Chronicle, the Baltimore Sun, the New York Times, et cetera.” He put down the names of all the main science journalists. And he said to me, “These guys have to have something once a week to give their editor for the science columns, or the science pages. They’re always on the lookout for something, and biological weapons is a nice subject. They’d like to write about that, because it grabs people’s attention.”

So I arranged to either meet, or at least talk to all of these guys. And we got all kinds of articles in the press, and mainly reflecting the views that I had that this was unwise for the United States to pioneer this stuff. We should be in the position to go after anybody else who was doing it even in peacetime and get them to stop, which we couldn’t very well do if we were doing it ourselves. In other words, that meant a treaty. You have to have a treaty, which might be violated, but if it’s violated and you know, at least you can go after the violators, and the treaty will likely stop a lot of countries from doing it in the first place.

So what are the treaties? There’s an old treaty, the 1925 Geneva Protocol. The United States was not a party to it, but it does prohibit the first use of bacteriological or other biological weapons. So the problem was to convince the United States to get on board with that treaty.

The very first paper I wrote for the President was about the Geneva Protocol of 1925. I never met President Nixon, but I did know Henry Kissinger: he’d been my neighbor at Harvard, in the building next door to mine. There was a good lunch room on the third floor, and we both ate there. He had started an arms control seminar that met every month, and I went to all the meetings. We traveled a little bit in Europe together. So I knew him, and I wrote papers for Henry knowing that they would get to Nixon. The first paper that I wrote, as I said, was “The United States and the Geneva Protocol.” It made all the arguments that I’m telling you now about why the United States should not be in this business. Now, the Protocol also prohibits the first use of chemical weapons.

Now, I should say something about writing papers for Presidents. You don’t want to write a paper that says, “Here’s what you should do.” You have to put yourself in their position. There are all kinds of options for what they could do. So you have to write the paper from the point of view of a reader who has to choose among a lot of options and who hasn’t made a choice to start with. You’ve got to give every option a fair trial: do your best both to defend every option and to argue against every option. And you’ve got to do it in no more than a very few pages. That’s no easy job, but you can do it.

So eventually, as you know, the United States renounced biological weapons in November of 1969. There was an off-the-record press briefing that Henry Kissinger gave to journalists about this, and one of them, I think it was the New York Times guy, said, “What about toxin weapons?”

Now, toxins are poisonous things made by living things, like botulinum toxin made by bacteria, or snake venom, and those could in principle be used as weapons. You can read in this briefing that Henry Kissinger says, “What are toxins?” What this meant, in other words, was that a whole new review, a whole new decision process, had to be cranked up to deal with the question, “Do we renounce toxin weapons?” And there were two points of view. One was, “They are made by living things, and since we’re renouncing biological warfare, we should renounce toxins.”

The other point of view is, “Yeah, they’re made by living things, but they’re just chemicals, and so they can also be made by chemists in laboratories. So, maybe we should renounce them when they’re made by living things like bacteria or snakes, but reserve the right to make them and use them in warfare if we can synthesize them in chemical laboratories.” So I wrote a paper arguing that we should renounce them completely. Partly because it would be very confusing to argue that the basis for renouncing or not renouncing is who made them, not what they are. But also, I knew that my paper was read by Richard Nixon on a certain day on Key Biscayne in Florida, which was one of the places he’d go for rest and vacation.

Nixon was down there, and I had written a paper called “What Policy for Toxins.” The night that the President and Henry Kissinger were deciding this issue, I was at a friend’s house with my wife. They couldn’t find their copy of my paper, and Henry called to see if I could read it to them, but he couldn’t reach me because I was out at the dinner party. Then Henry called my friend Paul Doty, because he had a copy of the paper, but Paul looked for his copy and couldn’t find it either. Then late that night Kissinger called Doty again and said, “We found the paper, and the President has made up his mind. He’s going to renounce toxins no matter how they’re made, and it was because of Matt’s paper.”

I had tried to write a paper that steered clear of political arguments, using just scientific and military ones. However, there had been an editorial in the Washington Post by one of their editorial writers, Steve Rosenfeld, in which he wrote the line, “How can the President renounce typhoid only to embrace botulism?”

I thought it was so gripping, I incorporated it under the topic of the authority and credibility of the President of the United States. And what Henry told Paul on the telephone was: that’s what made up the President’s mind. And of course, it would. The President cares about his authority and credibility. He doesn’t care about little things like toxins, but his authority and credibility… And so right there and then, he scratched out the advice that he’d gotten in a position paper, which was to take the option, “Use them but only if made by chemists,” and instead chose the option to renounce them completely. And that’s how that decision got made.

Ariel: That all ended up in the Biological Weapons Convention, though, correct?

Matthew: Well, the idea for that came from the British. They had produced a draft paper to take to the arms control talks with the Russians and other countries in Geneva, suggesting a treaty that would prohibit biological weapons not just in war, the way the Geneva Protocol did, but would prohibit even their production and possession, not merely their use. In his renunciation for the United States, Richard Nixon did several things. He got the United States out of the biological weapons business and decreed that Fort Detrick and the other installations that had been doing that work would henceforward do only peaceful things; Detrick was partly converted to a cancer research institute. All the biological weapons that had been stockpiled were to be destroyed, and they were.

The other thing he did was renounce toxins. Another thing he decided to do was to resubmit the Geneva Protocol to the United States Senate for its advice and approval. And the last thing was to support the British initiative, and that became the Biological Weapons Convention. But you could only get it if the Russians agreed. Eventually, after a lot of negotiation, we got the Biological Weapons Convention, which is still in force. A little later we even got the Chemical Weapons Convention, but not right away, because in my view, and in the view of a lot of people, we did need chemical weapons until we could be pretty sure that the Soviet Union was going to get rid of its chemical weapons, too.

If there are chemical weapons on the battlefield, soldiers have to put on gas masks and protective clothing, and this really slows down the tempo of combat action, so if you can simply force the other side into that restrictive clothing, you have a major military accomplishment. Chemical weapons in the hands of only one side would give that side the option of slowing down the other side, reducing its mobility on the ground. So we waited until we got a treaty with inspection provisions, which the chemical treaty has and the biological treaty does not. The biological treaty has a kind of challenge inspection, but no one has ever invoked it, and it’s very hard to make it work. The chemical treaty’s inspection provisions were obligatory and have been extensive, with the Russians visiting our chemical production facilities, our guys visiting theirs, and all kinds of verification. So that’s how we got the Chemical Weapons Convention. That was quite a bit later.

Max: So, I’m curious, was there a Matthew Meselson clone on the British side, thanks to whom the British started pushing this?

Matthew: Yes. There were of course, numerous clones. And there were numerous clones on this side of the Atlantic, too. None of these things could ever be ever done by just one person. But my pal Julian Robinson, who was at the University of Sussex in Brighton, he was a real scholar of chemical and biological weapons, knows everything about them, and their whole history, and has written all of the very best papers on this subject. He’s just an unbelievably accurate and knowledgeable historian and scholar. People would go to Julian for advice. He was a Mycroft. He’s still in Sussex.

Ariel: You helped start the Harvard Sussex Program on chemical and biological weapons. Is he the person you helped start that with, or was that separate?

Matthew: We decided to do that together.

Ariel: Okay.

Matthew: It did several things, but one of the main things it did was to publish a quarterly journal, which had a dispatch from Geneva — progress towards getting the Chemical Weapons Convention — because when we started the bulletin, the Chemical Convention had not yet been achieved. There were all kinds of news items in the bulletin; we had guest articles. And it finally ended, I think, only a few years ago. But I think it had a big impact, not only because of what was in it, but also because it united people of all countries interested in this subject. They all read the bulletin, they all got a chance to write in the bulletin as well, and they occasionally met each other, so it had the effect of bringing together a community of people interested in safely getting rid of chemical weapons and biological weapons.

Max: This Biological Weapons Convention was a great inspiration for subsequent treaties, first the ban on chemical weapons, and then various other kinds of weapons, and today, we have a very vibrant debate about whether there should also be a ban on lethal autonomous weapons, and inhumane uses of A.I. So, I’m curious to what extent you got lots of push-back back in those days from people who said, “Oh this is a stupid idea,” or, “This is never going to work,” and what the lessons are that could be learned from that.

Matthew: I think that with biological weapons, and also, to a lesser extent, with chemical weapons, the first point was we didn’t need them. We had been involved in the use of chemical weapons in World War I, after that use had been started, but it was never something we really accepted, and it was never something that the military liked. They didn’t want to fight a war by encumbrance. Biological weapons, for sure not: once we realized that these were cheap weapons that could get into the hands of people who couldn’t afford nuclear weapons, pursuing them was idiotic. And even chemical weapons are relatively cheap, have the possibility of covering fairly large areas at a low price, and could also get into the hands of terrorists. Now, terrorism wasn’t much on anybody’s radar until more recently, but once that became a serious issue, that was another argument against both biological and chemical weapons. So those two weapons really didn’t have a lot of boosters.

Max: You make it sound so easy though. Did it never happen that someone came and told you that you were all wrong and that this plan was never going to work?

Matthew: Yeah, but that was restricted to the people who were doing it, and a few really eccentric intellectuals. As evidence of this: in the military, the office which dealt with chemical and biological weapons, the highest rank you could find in that would be a colonel. No general, just a colonel. You don’t get to be a general in the chemical corps. There were a few exceptions, basically old-timers, kind of a leftover from World War I. If you’re a part of the military that never gets to have a general or even a full colonel, you ain’t got much influence, right?

But if you talk about the artillery or the infantry, my goodness, I mean there are lots of generals — including four star generals, even five star generals — who come out of the artillery and infantry and so on, and then Air Force generals, and fleet admirals in the Navy. So that’s one way you can quickly tell whether something is very important or not.

Anyway, we do have these treaties, but it might be very much more difficult to get treaties on war between robots. I don’t know enough about it to really have an opinion. I haven’t thought about it.

Ariel: I want to follow up with a question I think is similar, because one of the arguments that we hear a lot with lethal autonomous weapons, is this fear that if we ban lethal autonomous weapons, it will negatively impact science and research in artificial intelligence. But you were talking about how some of the biological weapons programs were repurposed to help deal with cancer. And you’re a biologist and chemist, but it doesn’t sound like you personally felt negatively affected by these bans in terms of your research. Is that correct?

Matthew: Well, the only technically really important thing — that would have happened anyway — that’s radar, and that was indeed accelerated by the military requirement to detect aircraft at a distance. But usually it’s the reverse. People who had been doing research in fundamental science naturally volunteered or were conscripted to do war work. Francis Crick was working on magnetic torpedoes, not on DNA or hemoglobin. So, the argument that a war stimulates basic science is completely backwards.

Newton, he was director of the mint. Nothing about the British military as it was at the time helped Newton realize that if you shoot a projectile fast enough, it will stay in orbit; he figured that out by himself. I just don’t believe the argument that war makes science advance. It’s not true. If anything, it slows it down.

Max: I think it’s fascinating to compare the arguments that were made for and against a biological weapons ban back then with the arguments that are made for and against a lethal autonomous weapons ban today, because another common argument I hear for why people want lethal autonomous weapons today is because, “Oh, they’re going to be great. They’re going to be so cheap.” That’s like exactly what you were arguing is a very good argument against, rather than for, a weapons class.

Matthew: There’s some similarities and some differences. Another similarity is that even one autonomous weapon in the hands of a terrorist could do things that are very undesirable — even one. On the other hand, we’re already doing something like it with drones. There’s a kind of continuous path that might lead to this, and I know that the military and DARPA are actually very interested in autonomous weapons, so I’m not so sure that you could stop it, because it’s continuous; it’s not like a real break.

Biological weapons are really different; chemical weapons are really different. Whereas autonomous weapons are still working on the ancient, primitive analogy of hitting a man with your fist or shooting a bullet, so long as those autonomous weapons are still using guns, bullets, things like that, and not something like poison that is not native to our biology. Starting with the striking of a blow, you can draw a continuous line all the way through stones, and bows and arrows, and bullets, to drones, and maybe autonomous weapons. So the continuity is different.

Max: That’s an interesting challenge: deciding where exactly one draws the line seems to be harder in this case. Another very interesting analogy, I think, between biological weapons and lethal autonomous weapons is the business of verification. You mentioned earlier that there was a strong verification protocol for the Chemical Weapons Convention, and there have been verification protocols for nuclear arms reduction treaties also. Some people say, “Oh, it’s a stupid idea to ban lethal autonomous weapons because you can’t think of a good verification system.” But couldn’t people have said that also as a critique of the Biological Weapons Convention?

Matthew: That’s a very interesting point, because most people who think that verification can’t work have never been told what the basic underlying idea of verification is. It’s not that you could find everything. Nobody believes that you could find every missile that might exist in Russia. Nobody ever would believe that. That’s not the point. It’s more subtle. The point is that you must have an ongoing attempt to find things. That’s intelligence. And there must be a heavy penalty if you find even one.

So it’s a step back from finding everything, to saying if you find even one then that’s a violation, and then you can take extreme measures. So a country takes a huge risk that another country’s intelligence organization, or maybe someone on its own side who’s willing to squeal, will reveal the possession of even one prohibited object. That’s the point. You may have some secret biological production facility, but if we find even one of them, then you are in violation. It isn’t that we have to find every single blasted one of them.

That was especially an argument that came from the nuclear treaties. It was the nuclear people who thought that up. People like Douglas McEachin at the CIA, who realized that there’s a more sophisticated argument. You just have to have a pretty impressive ability to find one thing out of many, if there’s anything out there. This is not perfect, but it’s a lot different from the argument that you have to know where everything is at all times.
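The logic Matthew describes — you don't need to find everything, only to make finding even one thing likely enough — can be put in a quick back-of-the-envelope model. This is an editorial sketch with hypothetical numbers, not something discussed in the conversation:

```python
# Toy model of "find even one" verification (hypothetical numbers):
# if each clandestine facility is independently detected with
# probability p, the chance that at least one of n facilities is
# exposed grows quickly with n.

def prob_at_least_one_detected(p: float, n: int) -> float:
    """Probability that at least one of n hidden facilities is found."""
    return 1 - (1 - p) ** n

# Even a modest 10% per-facility detection rate makes a large
# cheating program very risky for the violator.
for n in (1, 5, 20):
    print(n, round(prob_at_least_one_detected(0.10, n), 3))
```

Under these made-up figures, one hidden site faces only a 10% chance of exposure, but twenty sites face nearly a 90% chance that at least one is found — and under the treaty logic, one is all it takes.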

Max: So, if I can paraphrase, is it fair to say that you simply want to give the parties to the treaty a very strong incentive not to cheat, because even if they get caught off base one single time, they’re in violation, and moreover, those who don’t have the weapons at that time will also feel that there’s a very, very strong stigma? Today, for example, I find it just fascinating how biology is such a strong brand. If you go ask random students here at MIT what they associate with biology, they will say, “Oh, new cures, new medicines.” They’re not going to say bioweapons. If you ask people when was the last time you read about a bioterrorism attack in the newspaper, they can’t even remember anything typically. Whereas, if you ask them about the new biology breakthroughs for health, they can think of plenty.

So, biology has clearly very much become a science that’s harnessed to make life better for people rather than worse. So there’s a very strong stigma. I think if I or anyone else here at MIT tried to secretly start making bioweapons, we’d have a very hard time even persuading any biology grad student to want to work with us because of the stigma. If one could create a similar stigma against lethal autonomous weapons, the stigma itself would be quite powerful, even absent the ability to do perfect verification. Does that make sense?

Matthew: Yes, it does, perfect sense.

Ariel: Do you think that these stigmas have any effect on the public’s interest or politicians’ interest in science?

Matthew: I think people still have a great fascination with science. Take the exploration of space, for example: lots of people, not just kids — but especially kids — are fascinated by it. Pretty soon, Elon Musk says, in 2022 he’s going to have some people walking around on Mars. He’s just tested that BFR rocket of his that’s going to carry people to Mars. I don’t know if he’ll actually get it done, but people are getting fascinated by the exploration of space, are getting fascinated by lots of medical things, are getting desperate about the need for a cure for cancer. I myself think we need to spend a lot more money on preventing — not curing but preventing — cancer, and I think we know how to do it.

I think the public still has a big fascination, respect, and excitement from science. The politicians, it’s because, see, they have other interests. It’s not that they’re not interested or don’t like science. It’s because they have big money interests, for example. Coal and oil, these are gigantic. Harvard University has heavily invested in companies that deal with fossil fuels. Our whole world runs on fossil fuels mainly. You can’t fool around with that stuff. So it becomes a problem of which is going to win out, your scientific arguments, which are almost certain to be right, but not absolutely like one and one makes two — but almost — or the whole economy and big financial interests. It’s not easy. It will happen, we’ll convince people, but maybe not in time. That’s the sad part. Once it gets bad enough, it’s going to be bad. You can’t just turn around on a dime and take care of disastrous climate change.

Max: Yeah, this is very much the spirit, of course, of the Future of Life Institute, that Ariel’s podcast is run by. Technology, what it really does, is empower us humans to do more, either more good things or more bad things. And technology in and of itself isn’t evil, nor is it morally good; it’s a tool, simply. And the more powerful it becomes, the more crucial it is that we also develop the wisdom to steer the technology for good uses. And I think what you’ve done with your biology colleagues is such an inspiring role model for all of the other sciences, really.

We physicists still feel pretty guilty about giving the world nuclear weapons, but we’ve also given the world a lot of good stuff, from lasers to smartphones and computers. Chemists gave the world a lot of great materials, but they also gave us, ultimately, the internal combustion engine and climate change. Biology, I think more than any other field, has clearly ended up very solidly on the good side. Everybody loves biology for what it does, even though it could have gone very differently, right? We could have had a catastrophic arms race, a race to the bottom, with one superpower outdoing the other in bioweapons, and eventually these cheap weapons being everywhere, and on the black market, and bioterrorism every day. That future didn’t happen; that’s why we all love biology. And I am very honored to get to be on this call here with you, so I could personally thank you for your role in making it this way. We should not take it for granted that it’ll be this way with all sciences, the way it’s become for biology. So, thank you.

Matthew: Yeah. That’s all right.

I’d like to end with one thought. We’re learning how to change the human genome. It won’t really get going for a while, and there are some problems that very few people are thinking about. Not the so-called off-target effects, that’s a well-known problem — but there’s another problem that I won’t go into; it’s called epistasis. Nevertheless, 10 years from now, 100 years from now, 500 years from now, sooner or later we’ll be changing the human genome on a massive scale, making people better in various ways, so-called enhancements.

Now, a question arises. Do we know enough about the genetic basis of what makes us human to be sure that we can keep the good things about being human? What are those? Well, compassion is one. I’d say curiosity is another. Another is the feeling of needing to be needed. That sounds kind of complicated, I guess, but if you don’t feel needed by anybody — there are some people who can go through life and don’t need to feel needed. But doctors, nurses, parents, people who really love each other: the feeling of being needed by another human being, I think, is very pleasurable to many people, maybe to most people, and it’s of the essence of what it means to be human.

Now, where does this all take us? It means that if we’re going to start changing the human genome in any big-time way, we need to know, first of all, what we most value in being human, and that’s a subject for the humanities, for everybody to talk about and think about. And then it’s a subject for the brain scientists to figure out what’s the basis of it. It’s got to be in the brain. But what is it in the brain? And we’re miles and miles and miles away in brain science from being able to figure out what it is in the brain — or maybe we’re not, I don’t know any brain science, I shouldn’t be shooting off my mouth — but we’ve got to understand those things. What is it in our brains that makes us feel good when we are of use to someone else?

We don’t want to fool around with whatever those genes are — do not monkey with those genes unless you’re absolutely sure that you’re making them maybe better — but anyway, don’t fool around. And figure out in the humanities, don’t stop teaching humanities. Learn from Sophocles, and Euripides, and Aeschylus: What are the big problems about human existence? Don’t make it possible for a kid to go through Harvard — as is possible today — without learning a single thing from Ancient Greece. Nothing. You don’t even have to use the word Greece. You don’t have to use the word Homer or any of that. Nothing, zero. Isn’t that amazing?

Before President Lincoln, everybody, to enter Harvard, had to already know Ancient Greek and Latin. These were mainly boys, of course, and they were going to become clergymen. There were no electives, by the way: everyone had to take fluxions, which is differential calculus. Everyone had to take integral calculus. Everyone had to take astronomy, chemistry, physics, as well as moral philosophy, et cetera. Well, there’s nothing like that anymore. We don’t all speak the same language because we’ve all had such different kinds of education, and the humanities just get short shrift. I think that’s very shortsighted.

MIT is pretty good in humanities, considering it’s a technical school. Harvard used to be tops. Harvard is at risk of maybe losing it. Anyway, end of speech.

Max: Yeah, I want to just agree with what you said, and also rephrase it the way I think about it. What I hear you saying is that it’s not enough to just make our technology more powerful. We also need the humanities, and our humanity, for the wisdom of how we’re going to manage our technology and what we’re trying to use it for, because it does no good to have a really powerful tool if you aren’t wise and use it for the right things.

Matthew: If we’re going to change, we might even split into several species. Almost all of the other species have very close other species: neighbors. Especially if you can get them separated — there’s a colony on Mars and they don’t travel back and forth much — species will diverge. It takes a long, long, long, long time, but the idea there, like the Bible says, that we are fixed, nothing will change, that’s of course wrong. Human evolution is going on as we speak.

Ariel: We’ll end part one of our two-part podcast with Matthew Meselson here. Please join us for the next episode which serves as a reminder that weapons bans don’t just magically work. But rather, there are often science mysteries that need to be solved in order to verify whether a group has used a weapon illegally. In the next episode, Matthew will talk about three such scientific mysteries he helped solve, including the anthrax incident in Russia, the yellow rain affair in Southeast Asia, and the research he did that led immediately to the prohibition of Agent Orange. So please join us for part two of this podcast, which is also available now.

As always, if you’ve been enjoying this podcast, please take a moment to like it, share it, and maybe even leave a positive review. It’s a small action on your part, but it’s tremendously helpful for us.

FLI Podcast: AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy

Every January, we like to look back over the past 12 months at the progress that’s been made in the world of artificial intelligence. Welcome to our annual “AI breakthroughs” podcast, 2018 edition.

Ariel was joined for this retrospective by researchers Roman Yampolskiy and David Krueger. Roman is an AI safety researcher and professor at the University of Louisville. He also recently published the book Artificial Intelligence Safety & Security. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He’s also worked with safety teams at the Future of Humanity Institute and DeepMind and has volunteered with 80,000 Hours.

Roman and David shared their lists of 2018’s most promising AI advances, as well as their thoughts on some major ethical questions and safety concerns. They also discussed media coverage of AI research, why talking about “breakthroughs” can be misleading, and why there may have been more progress in the past year than it seems.

Topics discussed in this podcast include:

  • DeepMind progress, as seen with AlphaStar and AlphaFold
  • Manual dexterity in robots, especially QT-Opt and Dactyl
  • Advances in creativity, as with Generative Adversarial Networks (GANs)
  • Feature-wise transformations
  • Continuing concerns about DeepFakes
  • Scaling up AI systems
  • Neuroevolution
  • Google Duplex, the AI assistant that sounds human on the phone
  • The General Data Protection Regulation (GDPR) and AI policy more broadly

Publications discussed in this podcast include:

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi everyone, welcome to the FLI podcast. I’m your host, Ariel Conn. For those of you who are new to the podcast, at the end of each month, I bring together two experts for an in-depth discussion on some topic related to the fields that we at the Future of Life Institute are concerned about, namely artificial intelligence, biotechnology, climate change, and nuclear weapons.

The last couple of years for our January podcast, I’ve brought on two AI researchers to talk about what the biggest AI breakthroughs were in the previous year, and this January is no different. To discuss the major developments we saw in AI in 2018, I’m pleased to have Roman Yampolskiy and David Krueger joining us today.

Roman is an AI safety researcher and professor at the University of Louisville, his new book Artificial Intelligence Safety and Security is now available on Amazon and we’ll have links to it on the FLI page for this podcast. David is a PhD candidate in the Mila Lab at the University of Montreal, where he works on deep learning and AI safety. He’s also worked with teams at the Future of Humanity Institute and DeepMind, and he’s volunteered with 80,000 Hours to help people find ways to contribute to the reduction of existential risks from AI. So Roman and David, thank you so much for joining us.

David: Yeah, thanks for having me.

Roman: Thanks very much.

Ariel: So I think that one thing that stood out to me in 2018 was that the AI breakthroughs seemed less about surprising breakthroughs that really shook the AI community as we’ve seen in the last few years, and instead they were more about continuing progress. And we also didn’t see quite as many major breakthroughs hitting the mainstream press. There were a couple of things that made big news splashes, like Google Duplex, which is a new AI assistant program that sounded incredibly human on phone calls it made during the demos. And there was also an uptick in government policy and ethics efforts, especially with the General Data Protection Regulation, also known as the GDPR, which went into effect in Europe earlier this year.

Now I’m going to want to come back to Google and policy and ethics later in this podcast, but I want to start by looking at this from the research and development side of things. So my very first question for both of you is: do you agree that 2018 was more about impressive progress, and less about major breakthroughs? Or were there breakthroughs that really were important to the AI community that just didn’t make it into the mainstream press?

David: Broadly speaking I think I agree, although I have a few caveats for that. One is just that it’s a little bit hard to recognize always what is a breakthrough, and a lot of the things in the past that have had really big impacts didn’t really seem like some amazing new paradigm shift—it was sort of a small tweak that then made a lot of things work a lot better. And the other caveat is that there are a few works that I think are pretty interesting and worth mentioning, and the field is so large at this point that it’s a little bit hard to know if there aren’t things that are being overlooked.

Roman: So I’ll agree with you, but I think the pattern is more important than any specific breakthrough. We kind of got used to getting something really impressive every month, so relatively it doesn’t sound as good, all the AlphaStar, AlphaFold, AlphaZero happening almost every month. And it used to be it took 10 years to see something like that.

It’s likely it will happen even more frequently. We’ll conquer a new domain once a week or something. I think that’s the main pattern we have to recognize and discuss. There are significant accomplishments in terms of teaching AI to work in completely novel domains. I mean now we can predict protein folding, now we can have multi-player games conquered. That never happened before so frequently. Chess was impressive because it took like 30 years to get there.

David: Yeah, so I think a lot of people were kind of expecting or at least hoping for StarCraft or Dota to be solved—to see, like we did with AlphaGo, AI systems that are beating the top players. And I would say that it’s actually been a little bit of a let down for people who are optimistic about that, because so far the progress has been kind of unconvincing.

So AlphaStar, which was a really recent result from last week, for instance: I’ve seen criticism of it that I think is valid, that it was making more actions than a human could within a very short interval of time. So they carefully controlled the actions-per-minute that AlphaStar was allowed to take, but they didn’t prevent it from doing really short bursts of actions that really helped its micro-game, and that means that it can win without really being strategically superior to its human opponents. And I think the Dota results that OpenAI has had were also criticized as being sort of not the hardest version of the problem, and still the AI is sort of relying on some crutches.

Ariel: So before we get too far into that debate, can we take a quick step back and explain what both of those are?

David: So these are both real-time strategy games that are, I think, actually the two most popular real-time strategy games in the world that people play professionally, and make money playing. I guess that’s all to say about them.

Ariel: So a quick question that I had too about your description then, when you’re talking about AlphaStar and you were saying it was just making more moves than a person can realistically make. Is that it—it wasn’t doing anything else special?

David: I haven’t watched the games, and I don’t play StarCraft, so I can’t say that it wasn’t doing anything special. I’m basing this basically on reading articles and reading the opinions of people who are avid StarCraft players, and I think the general opinion seems to be that it is more sophisticated than what we’ve seen before, but the reason that it was able to win these games was not because it was out-thinking humans, it’s because it was out-clicking, basically, in a way that just isn’t humanly possible.
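David's criticism — that capping average actions-per-minute still permits short superhuman bursts — can be illustrated with a toy rate limiter. This is a hypothetical sketch for illustration, not DeepMind's actual mechanism:

```python
# Toy sliding-window limiter: caps total actions per 60-second window,
# but does nothing to stop all of those actions landing in one burst.
from collections import deque

class WindowedAPMLimiter:
    def __init__(self, max_actions, window_s=60.0):
        self.max_actions = max_actions
        self.window_s = window_s
        self.times = deque()  # timestamps of recent allowed actions

    def try_act(self, now):
        # Forget actions that have aged out of the window.
        while self.times and now - self.times[0] >= self.window_s:
            self.times.popleft()
        if len(self.times) < self.max_actions:
            self.times.append(now)
            return True
        return False

limiter = WindowedAPMLimiter(max_actions=300)  # 300 APM average cap
# 1,000 attempted clicks within a single second:
burst = sum(limiter.try_act(t / 1000) for t in range(1000))
print(burst)  # the full minute's budget of 300 actions fits in one second
```

The average over any minute never exceeds 300 APM, yet the agent can spend the entire budget in a fraction of a second — exactly the kind of "out-clicking" that no human hand could match.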

Roman: I would agree with this analysis, but I don’t see it as a bug, I see it as a feature. That just shows another way machines can be superior to people. Even if they are not necessarily smarter, they can still produce superior performance, and that’s what we really care about. Right? We found a different way, a non-human approach to solving this problem. That’s impressive.

David: Well, I mean, I think if you have an agent that can just click as fast as it wants, then you can already win at StarCraft, before this work. There needs to be something that makes it sort of a fair fight in some sense.

Roman: Right, but think what you’re suggesting: We have to handicap machines to make them even remotely within being comparative to people. We’re talking about getting to superintelligent performance. You can get there by many ways. You can think faster, you can have better memory, you can have better reaction time—as long as you’re winning in whatever domain we’re interested in, you have superhuman performance.

David: So maybe another way of putting this would be if they actually made a robot play StarCraft and made it use the same interface that humans do, such as a screen and mouse, there’s no way that it could have beat the human players. And so by giving it direct access to the game controls, it’s sort of not solving the same problem that a human is when they play this game.

Roman: I feel what you’re saying, I just feel that it is solving it in a different way, and we have pro-human bias saying, well that’s not how you play this game, you have an advantage. Human players usually rely on superior strategy, not just faster movements that may take advantage of it for a few nanoseconds, a couple of seconds. But it’s not a long-term sustainable pattern.

One of the research projects I worked on was this idea of artificial stupidity, we called it—kind of limiting machines to human-level capacity. And I think that’s what we’re talking about here. Nobody would suggest limiting a chess program to just human-level memory, or human memorization of opening moves. But we don’t see it as a limitation. Machines have an option of beating us in ways humans can’t. That’s the whole point, and that’s why it’s interesting, that’s why we have to anticipate such problems. That’s where most of the safety and security issues will show up.

Ariel: So I guess, I think, Roman, your point earlier was sort of interesting that we’ve gotten so used to breakthroughs that stuff that maybe a couple of years ago would have seemed like a huge breakthrough is just run-of-the-mill progress. I guess you’re saying that that’s what this is sort of falling into. Relatively recently this would have been a huge deal, but because we’ve seen so much other progress and breakthroughs, that this is now interesting and we’re excited about it—but it’s not reaching that level of, oh my god, this is amazing! Is that fair to say?

Roman: Exactly! We get disappointed if the system loses one game. It used to be we were excited if it would match amateur players. Now it’s, oh, we played a 100 games and you lost one? This is just not machine-level performance, you disappoint us.

Ariel: David, do you agree with that assessment?

David: I would say mostly no. I guess, I think what really impressed me with AlphaGo and AlphaZero was that it was solving something that had been established as a really grand challenge for AI. And then in the case of AlphaZero, I think the technique that they actually used to solve it was really novel and interesting from a research point of view, and they went on to show that this same technique can solve a bunch of other board games as well.

And my impression from what I’ve seen about how they did AlphaStar and AlphaFold is that there were some interesting improvements and the performance is impressive, but I think it’s neither quite at the point where you can say we’ve solved it, we’re better than everybody, nor, in the case of protein folding, at the point where there’s not a bunch more room for improvement that has practical significance. And it’s also—I don’t see any really clear general algorithmic insights about AI coming out of these works yet. I think that’s partially because they haven’t been published yet, but from what I have heard about the details of how they work, I think it’s less of a breakthrough on the algorithm side than AlphaZero was.

Ariel: So you’ve mentioned AlphaFold. Can you explain what that is real quick?

David: This is the protein folding project that DeepMind did, and I think there’s a competition called C-A-S-P, or CASP, that happens every two years, and they sort of dominated that competition this last year, doing what was described as two CASPs in one, so basically doubling the expected rate of improvement that people have seen historically at these tasks, or at least at the one that is the most significant benchmark.

Ariel: I find the idea of the protein folding thing interesting because that’s something that’s actually relevant to scientific advancement and health as opposed to just being able to play a game. Are we seeing actual applications for this yet?

David: I don’t know about that, but I agree with you that that is a huge difference that makes it a lot more exciting than some of the previous examples. I guess one thing that I want to say about that, though, is that it does look a little bit more to me like continuation of progress that was already happening in the communities. It’s definitely a big step up, but I think a lot of the things that they did there could have really happened over the next few years anyways, even without DeepMind being there. So, one of the articles I read put it this way: If this wasn’t done by DeepMind, if this was just some academic group, would this have been reported in the media? I think the answer is sort of like a clear no, and that says something about the priorities of our reporting and media as well as the significance of the results, but I think that just gives some context.

Roman: I’ll agree with David—the media is terrible in terms of what they report on; we can all agree on that. I think it was quite a breakthrough, I mean, to not just beat the competition, but to actually kind of double the rate of improvement. That’s incredible. And I think anyone who got to that point would not be denied publication in a top journal; it would be considered very important in that domain. I think it’s one of the most important problems in medical research. If you can accurately predict this, the possibilities are really endless in terms of synthetic biology, in terms of curing diseases.

So this is huge in terms of the impact of being able to do it. As far as how applicable it is to other areas, whether it’s a great game-changer for AI research: you can look at how this ability can be combined with the ability to perform in the real-life environments of those multiplayer games. Right? You can do things in the real world you couldn’t do before, both in terms of strategy games, which are basically simulations for economic competition, for wars, for quite a few applications where the impact would be huge.

So all of it is very interesting. It’s easy to say, “Well, if they didn’t do it, somebody else maybe would have done it in a couple of years.” But that’s almost always true for all inventions. If you look at the history of inventions, things like the telephone were invented at the same time by two or three people; radio, by two or three people. It’s just the point where science has enough ingredient technology that, yeah, somebody’s going to do it. But still, we give credit to whoever got there first.

Ariel: So I think that’s actually a really interesting point, because I think for the last few years we have seen sort of these technological advances but I guess we also want to be considering the advances that are going to have a major impact on humanity even if it’s not quite as technologically new.

David: Yeah, absolutely. I think when we talk about AI breakthroughs, it’s a little bit unclear what we mean, and I think a lot of people in the field of AI kind of don’t like how much people talk about it in terms of breakthroughs, because a lot of the progress is gradual and builds on previous work. It’s not like there was some sudden insight that somebody had that just changed everything, although that does happen in some ways.

And I think you can think of the breakthroughs both in terms of like what is the impact—is this suddenly going to have a lot of potential to change the world? You can also think of it, though, from the perspective of researchers as like, is this really different from the kind of ideas and techniques we’ve seen or seen working before? I guess I’m more thinking about the second right now in terms of breakthroughs representing really radical new ideas in research.

Ariel: Okay, well I will take responsibility for being one of the media people who didn’t do a good job with presenting AI breakthroughs. But I think both with this podcast and probably moving forward, I think that is actually a really important thing for us to be doing—is both looking at the technological progress and newness of something but also the impact it could have on either society or future research.

So with that in mind, you guys also have a good list of other things that did happen this year, so I want to start moving into some of that as well. So next on your list is manual dexterity in robots. What did you guys see happening there?

David: So this is something that’s definitely not my area of expertise, so I can’t really comment too much on it. But there are two papers that I think are significant and potentially represent something like a breakthrough in this application. In general, robotics is really difficult, and machine learning for robotics is still, I think, sort of a niche thing; most robotics is using more classical planning algorithms and hasn’t really taken advantage of the new wave of deep learning and everything.

So there’s two works, one is QT-Opt, and the other one is Dactyl, and these are both by people from the Berkeley OpenAI crowd. And these both are showing kind of impressive results in terms of manual dexterity in robots. So there’s one that does a really good job at grasping, which is one of the basic aspects of being able to act in the real world. And then there’s another one that was sort of just manipulating something like a cube with different colored faces on it—that one’s Dactyl; the grasping one is QT-Opt.

And I think this is something that was paid less attention to in the media, because it’s been more of a story of kind of gradual progress I think. But my friend who follows this deep reinforcement learning stuff more told me that QT-Opt is the first convincing demonstration of deep reinforcement learning in the real world, as opposed to all these things we’ve seen in games. The real world is much more complicated and there’s all sorts of challenges with the noise of the environment dynamics and contact forces and stuff like this that have been really a challenge for doing things in the real world. And then there’s also the limited sample complexity where when you play a game you can sort of interact with the game as much as you want and play the game over and over again, whereas in the real world you can only move your robot so fast and you have to worry about breaking it, so that means in the end you can collect a lot less data, which makes it harder to learn things.

Roman: Just to kind of explain maybe what they did: hardware’s expensive and slow, and it’s very difficult to work with. Things don’t go well in real life; it’s a lot easier to create simulations in virtual worlds, train your robot in there, and then just transfer the knowledge into a real robot in the physical world. And that’s exactly what they did, training that virtual hand to manipulate objects. They could run through thousands, millions of situations, something you cannot do with an actual, physical robot at that scale. So I think that’s a very interesting approach, and it’s why lots of people try doing things in virtual environments. Some of the early AGI projects all concentrated on virtual worlds as the domain of learning. So that makes a lot of sense.

David: Yeah, so this was for the Dactyl project, which was OpenAI. And that was really impressive I think, because people have been doing this sim-to-real thing—where you train in simulation and then try and transfer it to the real world—with some success for like a year or two, but this one I think was really kind of impressive in that sense, because they didn’t actually train it in the real world at all, and what they had learned managed to transfer to the real world.
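The sim-to-real recipe Roman and David describe can be sketched in a toy form: train a controller against a simulator whose physics parameters are randomized, so that whatever it learns is robust enough to survive the jump to a "real" system it never saw. Everything below (the one-dimensional dynamics, the grid search over a feedback gain) is an illustrative stand-in, not DeepMind's or OpenAI's actual setup.

```python
import numpy as np

def rollout(gain, dynamics_coeff, steps=20, x0=1.0):
    """Run the feedback controller u = -gain * x on x_next = x + a * u; return final |x|."""
    x = x0
    for _ in range(steps):
        x = x + dynamics_coeff * (-gain * x)
    return abs(x)

def train_with_domain_randomization(candidate_gains, rng, n_sims=200):
    """Pick the gain with the lowest average error across randomized simulators."""
    coeffs = rng.uniform(0.5, 1.5, size=n_sims)  # randomized "physics"
    avg_cost = [np.mean([rollout(k, a) for a in coeffs]) for k in candidate_gains]
    return candidate_gains[int(np.argmin(avg_cost))], avg_cost

rng = np.random.default_rng(0)
gains = np.linspace(0.0, 1.5, 16)
best_gain, costs = train_with_domain_randomization(gains, rng)

# The "real world" has a coefficient the policy never trained on:
real_error = rollout(best_gain, dynamics_coeff=1.12)
print(best_gain, real_error)
```

Because the controller was forced to work across the whole range of randomized coefficients, it also stabilizes the unseen real-world coefficient, which is the essence of the sim-to-real transfer trick.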

Ariel: Excellent. I’m going to keep going through your list. One thing that you both mentioned are GANs. So very quickly, if one of you, or both of you, could explain what a GAN is and what that stands for, and then we’ll get into what happened last year with those.

Roman: Sure, so this is a somewhat new way of doing creative generation of visuals and audio. You have two neural networks competing: one is kind of creating fakes, and the other one is judging them, and you get to a point where they’re kind of 50/50: you can’t tell if it’s fake or real anymore. And it’s a great way to produce artificial faces, cars, whatever. Any type of input you can provide to the networks, they quickly learn to extract the essence of that image or audio and generate artificial data sets full of such images.

And there’s really exciting work on being able to extract properties from those, different styles. So if we talk about faces, for example: there could be a style for hair, a style for skin color, a style for age, and now it’s possible to manipulate them. So I can tell you things like, “Okay, Photoshop, I need a picture of a female, 20 years old, blonde, with glasses,” and it would generate a completely realistic face based on those properties. And we’re starting to see it show up not just in images but transferred to video, to generating whole virtual worlds. It’s probably the closest thing we ever had computers get to creativity: actually kind of daydreaming and coming up with novel outputs.

David: Yeah, I just want to say a little bit about the history of the research on GANs. So the first work on GANs was actually back four or five years ago, in 2014, and it actually didn’t make a huge splash at the time, but maybe a year or two after that it really started to take off. And research in GANs over the last few years has just been incredibly fast-paced, with hundreds of papers submitted and published at the big conferences every year.

If you look just in terms of the quality of what is generated, this is, I think, just an amazing demonstration of the rate of progress in some areas of machine learning. The first paper had these sort of black and white pictures of really blurry faces, and now you can get giant, really high-resolution images of faces (I think 256 by 256, or 512 by 512, or even bigger) that are totally indistinguishable from real photos, to the human eye anyway. So it’s really impressive, and we’ve seen really consistent progress on that, especially in the last couple of years.

Ariel: And also, just real quick, what does it stand for?

David: Oh, generative adversarial network. So it’s generative, because it’s sort of generating things from scratch, or from its imagination or creativity. And it’s adversarial because there are two networks: the one that generates the things, and then the one that tries to tell those fake images apart from real images that we actually collect by taking photos in the world.
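The two-network game David describes maps directly onto code: a generator turns noise into samples, a discriminator scores samples as real or fake, and each is updated against the other. The sketch below is a deliberately tiny version on one-dimensional data, with the gradients worked out by hand; it illustrates the adversarial objective, not any particular paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-np.clip(t, -30.0, 30.0)))

# Generator G(z) = mu + sigma * z tries to imitate real data ~ N(4, 1).
mu, sigma = 0.0, 1.0
# Discriminator D(x) = sigmoid(w * x + b) tries to tell real from fake.
w, b = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = mu + sigma * z

    # Discriminator ascent on: mean log D(real) + mean log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    grad_b = np.mean(1 - d_real) - np.mean(d_fake)
    w, b = w + lr * grad_w, b + lr * grad_b

    # Generator ascent (non-saturating objective): mean log D(fake).
    d_fake = sigmoid(w * fake + b)
    grad_fake = (1 - d_fake) * w          # d log D(fake) / d fake
    mu += lr * np.mean(grad_fake)
    sigma += lr * np.mean(grad_fake * z)

print(f"generator now samples around mu={mu:.2f} (real data is centered at 4.0)")
```

At the 50/50 point Roman mentions, the discriminator outputs about 0.5 on everything, because the generated distribution has matched the real one.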

Ariel: This is an interesting one because it can sort of transition into some ethics stuff that came up this past year, but I’m not sure if we want to get there yet, or if you guys want to talk a little bit more about some of the other things that happened on the research and development side.

David: I guess I want to talk about a few other things that have been making, I would say, sort of steady progress, like GANs. These are ideas that are coming to fruition; even though some of them are not exactly from the last year, they really started to prove themselves and become widely used in the last year.

Ariel: Okay.

David: One thing that I think is actually used in maybe the latest, greatest GAN paper is something called feature-wise transformations. So this is an idea that actually goes back up to 40 years, depending on how you measure it, but has sort of been catching on in specific applications in machine learning in the last couple of years, starting with, I would say, style transfer, which is sort of like what Roman mentioned earlier.

So the idea here is that in a neural network, you have what are called features, which basically correspond to the activations of different neurons in the network. Like how much that neuron likes what it’s seeing, let’s say. And those can also be interpreted as representing different kinds of visual patterns, like different kinds of textures, or colors. And these feature-wise transformations basically just take each of those different aspects of the image, like the color or texture in a certain location, and then allow you to manipulate that specific feature, as we call it, by making it stronger or amplifying whatever was already there.

And so you can sort of view this as a way of specifying what sort of things are important in the image, and that’s why it allows you to manipulate the style of images very easily, because you can sort of look at a certain painting style for instance, and say, oh this person uses a lot of wide brush strokes, or a lot of narrow brush strokes, and then you can say, I’m just going to modulate the neurons that correspond to wide or narrow brush strokes, and change the style of the painting that way. And of course you don’t do this by hand, by looking in and seeing what the different neurons represent. This all ends up being learned end-to-end. And so you sort of have an artificial intelligence model that predicts how to modulate the features within another network, and that allows you to change what that network does in a really powerful way.

So, I mentioned that it has been applied in the most recent GAN papers, and I think they’re just using those kinds of transformations to help them generate images. But other examples where you can explain what’s happening more intuitively, or why it makes sense to try and do this, would be something like visual question answering. So there you can have the modulation of the vision network being done by another network that looks at a question and is trying to help answer that question. And so it can sort of read the question and see what features of images might be relevant to answering that question. So for instance, if the question was, “Is it a sunny day outside?” then it could have the vision network try and pay more attention to things that correspond to signs of sun. Or if it was asked something like, “Is this person’s hair combed?” then you could look for the patterns of smooth, combed hair and look for the patterns of rough, tangled hair, and have those features be sort of emphasized in the vision network. That allows the vision network to pay attention to the parts of the image that are most relevant to answering the question.
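The mechanism David is describing is often called FiLM (feature-wise linear modulation): a conditioning network reads the question (or style) and emits a scale and a shift for each feature channel, which are then applied to the other network's activations. A minimal sketch, with made-up shapes and randomly initialized weights standing in for trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def film(features, gamma, beta):
    """Feature-wise linear modulation: scale and shift each feature channel."""
    return gamma * features + beta   # broadcasts over the batch dimension

# Pretend activations from a vision network: batch of 4, 8 feature channels.
features = rng.normal(size=(4, 8))

# A conditioning network maps a question embedding to per-channel (gamma, beta).
question_embedding = rng.normal(size=16)
W = rng.normal(scale=0.1, size=(16, 2 * 8))    # stand-in for learned weights
gamma_beta = question_embedding @ W
gamma, beta = 1.0 + gamma_beta[:8], gamma_beta[8:]   # gamma near 1 at init

modulated = film(features, gamma, beta)
print(modulated.shape)  # (4, 8)
```

With gamma equal to 1 and beta equal to 0, FiLM is the identity, which is why it is usually initialized near that point: the conditioning network then learns to amplify or suppress whichever features help with the question at hand.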

Ariel: Okay. So, Roman, I want to go back to something on your list quickly in a moment, but first I was wondering if you have anything that you wanted to add to the feature-wise transformations?

Roman: With all of it, you can ask, “Well, why is this interesting? What are the applications for it?” So you are able to generate inputs: inputs for computers, inputs for people; images, sounds, videos. A lot of times they can be adversarial in nature as well—what we call deep fakes. Right? You can make, let’s say, a video of a famous politician say something, or do something.

Ariel: Yeah.

Roman: And this has very interesting implications for elections, for forensic science, for evidence. As those systems get better and better, it becomes harder and harder to tell if something is real or not. And maybe it’s still possible to do some statistical analysis, but it takes time, and we talked about media being not exactly always on top of it. So it may take 24 hours before we realize if this video was real or not, but the election is tonight.

Ariel: So I am definitely coming back to that. I want to finish going through the list of the technology stuff, but yeah I want to talk about deep fakes and in general, a lot of the issues that we’ve seen cropping up more and more with this idea of using AI to fake images and audio and video, because I think that is something that’s really important.

David: Yeah, it’s hard for me to estimate these things, but I would say, in terms of the impact that this is going to have societally, this is sort of the biggest story maybe of the last year. And it’s not something that happened all of a sudden. Again, it’s something that has been building on a lot of progress in generative models and GANs and things like this. And it’s just going to continue; we’re going to see more and more progress like that, and probably some sort of arms race here where—I shouldn’t use that word.

Ariel: A competition.

David: A competition between people who are trying to use that kind of technology to fake things and people who are sort of doing forensics to try and figure out what is real and what is fake. And that also means that people are going to have to trust the people who have the expertise to do that, and believe that they’re actually doing that and not part of some sort of conspiracy or something.

Ariel: Alright, well are you guys ready to jump into some of those ethical questions?

David: Well, there are like two other broad things I wanted to mention, which I think are sort of interesting trends in the research community. One is just the way that people have been continuing to scale up AI systems. So a lot of the progress I think has arguably just been coming from more and more computation and more and more data. And there was a pretty great blog post by OpenAI about this last year that argued that the amount of computation that’s being used to train the most advanced AI systems has been increasing by a factor of 10 every year for the last several years, which is just astounding. But it also suggests that this might not be sustainable for a long time, so to the extent that you think that using more computation is a big driver of progress, we might start to see that slow down within a decade or so.

Roman: I’ll add another one—what I think is also kind of building on existing technology, not so much a breakthrough; we had it for a long time—but neuroevolution is something I’m starting to pay a lot more attention to, and that’s kind of borrowing from biology: trying to evolve weights for neural networks, to optimize neural networks. And it’s producing very impressive results. It’s possible to run it in parallel really well, and it’s competitive with some of the leading alternative approaches.

So, the idea basically is you have this very large neural network, a brain-like structure, but instead of trying to train it, back-propagate errors, teach it in the standard neural network way, you just kind of have a population of those brains competing for who’s doing best on a particular problem, and they share weights between good parents, and after a while you just evolve really well-performing solutions to some of the most interesting problems.

Additionally, you can kind of go meta-level on it and evolve architectures for the neural network itself: how many layers, how many inputs. This is nice because it doesn’t require much human intervention. You’re essentially letting the system figure out what the solutions are. We’ve had some very successful results with genetic algorithms for optimization. We didn’t have much success with genetic programming, and now neuroevolution kind of brings it back, where you’re optimizing intelligent systems, and that’s very exciting.

Ariel: So you’re saying that you’ll have—to make sure I understand this correctly—there’s two or more neural nets trying to solve a problem, and they sort of play off of each other?

Roman: So you create a population of neural networks, and you give it a problem, and you see this one is doing really well, and that one; the others, maybe not so great. So you take weights from those two and combine them, like a mom-and-dad parent situation that produces offspring. And so you have this simulation of evolution where unsuccessful individuals are taken out of the population. Successful ones get to reproduce and procreate, and provide their high-fitness weights to the next generation.
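Roman's description (a population of networks, with the fittest pair contributing weights to the next generation) can be sketched in a few lines. Here the "network" is just a weight vector for a linear model and fitness is negative error on a toy regression task; the elitism, averaging crossover, and Gaussian mutation below are common choices but by no means the only ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: find weights w so that x @ w matches y.
x = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = x @ true_w

def fitness(w):
    return -np.mean((x @ w - y) ** 2)   # higher is better

pop = rng.normal(size=(20, 3))          # population of "brains" (weight vectors)
for generation in range(100):
    scores = np.array([fitness(w) for w in pop])
    order = np.argsort(scores)[::-1]    # best individuals first
    mom, dad = pop[order[0]], pop[order[1]]
    # Crossover: average the two best parents, then mutate with Gaussian noise.
    children = [(mom + dad) / 2 + rng.normal(scale=0.1, size=3)
                for _ in range(len(pop) - 1)]
    pop = np.vstack([pop[order[0]]] + children)   # elitism: keep the best

best = max(pop, key=fitness)
print(best.round(2))
```

Because the best individual is always carried over unchanged (elitism), the best fitness in the population can never get worse from one generation to the next, which makes this kind of search surprisingly robust.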

Ariel: Okay. Was there anything else that you guys saw this year that you want to talk about, that you were excited about?

David: Well, I wanted to give a few examples of the kind of massive improvements in scale that we’ve seen. One of the most significant models and benchmarks in the community is ImageNet, and training image classifiers that can tell you what a picture is a picture of on this dataset. So the whole sort of deep learning revolution was arguably started, or at least really came into the eyes of the rest of the machine learning community, because of huge success on this ImageNet competition. And training the model there took something like two weeks, and this last year there was a paper where you can train a more powerful model in less than four minutes, and they do this by using something like 3,000 graphics cards in parallel.

And then DeepMind also had some progress on parallelism with this model called IMPALA, which basically was in the context of reinforcement learning as opposed to classification, and there they sort of came up with a way that allowed them to do updates in parallel, like learn on different machines and combine everything that was learned in a way that’s asynchronous. So in the past, with the sort of methods that they would use for these reinforcement learning problems, you’d have to wait for all of the different machines to finish their learning on the current problem or instance that they’re learning about, and then combine all of that centrally—whereas the new method allows you, as soon as you’re done computing or learning something, to communicate it to the rest of the system, the other computers that are learning in parallel. And that was really important for allowing them to scale to hundreds of machines working on their problem at the same time.
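David's contrast between waiting at a barrier and pushing updates as soon as they are ready can be sketched with plain Python threads. This is only a toy of the asynchronous-update idea (a lock-protected shared parameter that each "learner" updates as soon as it has a result); IMPALA's actual contribution, an off-policy correction that keeps learning stable under this kind of asynchrony, is well beyond a sketch like this.

```python
import threading

# Shared "parameters": here just a running sum of simulated gradient updates.
params = {"w": 0.0}
lock = threading.Lock()

def learner(n_updates):
    # Each learner pushes its result as soon as it is ready (asynchronous),
    # instead of waiting at a barrier for every other learner (synchronous).
    for _ in range(n_updates):
        with lock:
            params["w"] += 0.01

threads = [threading.Thread(target=learner, args=(100,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(round(params["w"], 2))  # 8.0
```

The lock keeps the shared update consistent; the point of the asynchronous design is that no learner ever sits idle waiting for the slowest one to finish its batch.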

Ariel: Okay, and so that, just to clarify as well, that goes back to this idea that right now we’re seeing a lot of success just scaling up the computing, but at some point that could slow things down essentially, if we had a limit for how much computing is possible.

David: Yeah, and I guess one of my points is also that doing these kinds of scalings of computing requires some amount of algorithmic insight, or breakthrough if you want to be dramatic. So in this DeepMind paper I talked about, they had to devise new reinforcement learning algorithms that would still be stable when they had this real-time asynchronous updating. And so, in a way, a lot of the research that’s interesting right now is on finding ways to make the algorithms scale so that you can keep taking advantage of more and more hardware. And the evolution stuff also fits into that picture to some extent.

Ariel: Okay. I want to start making that transition into some of the concerns that we have for misuse around AI and how easy it is for people to be deceived by things that have been created by AI. But I want to start with something that’s hopefully a little bit more neutral, and talk about Google Duplex, which is the program that Google came out with, I think last May. I don’t know the extent to which it’s in use now, but they presented it, and it’s an AI assistant that can essentially make calls and set up appointments for you. So their examples were it could make a reservation at a restaurant for you, or it could make a reservation for you to get a haircut somewhere. And it got sort of mixed reviews, because on the one hand people were really excited about this, and on the other hand it was kind of creepy because it sounded human, and the people on the other end of the call did not know that they were talking to a machine.

So I was hoping you guys could talk a little bit I guess maybe about the extent to which that was an actual technological breakthrough versus just something—this one being more one of those breakthroughs that will impact society more directly. And then also I guess if you agree that this seems like a good place to transition into some of the safety issues.

David: Yeah, no, I would be surprised if they really told us about the details of how that worked. So it’s hard to know how much of an algorithmic breakthrough or algorithmic breakthroughs were involved. It’s very impressive, I think, just in terms of what it was able to do, and of course these demos that we saw were maybe selected for their impressiveness. But I was really, really impressed personally, just to see a system that’s able to do that.

Roman: It’s probably built on a lot of existing technology, but it’s more about the impact than the technical novelty. And my background is cybersecurity, so I see it as a great tool for automating spear-phishing attacks on a scale of millions. You’re getting a realistic human voice calling you, talking to you, with access to your online data; pretty much everyone’s gonna agree and do whatever the system is asking, whether it’s credit card numbers or social security numbers. So, in many ways it’s going to be a game changer.

Ariel: So I’m going to take that as a definite transition into safety issues. So, yeah, let’s start talking about, I guess, sort of human manipulation that’s happening here. First, the phrase “deep fake” shows up a lot. Can you explain what those are?

David: So “deep fakes” is basically just: you can make a fake video of somebody doing something or saying something that they did not actually do or say. People have used this to create fake videos of politicians, they’ve used it to create porn using celebrities. That was one of the things that got it on the front page of the internet, basically. And Reddit actually shut down the subreddit where people were doing that. But, I mean, there’s all sorts of possibilities.

Ariel: Okay, so I think the Reddit example was technically the very end of 2017. But all of this sort of became more of an issue in 2018. So we’re seeing this increase in the capability to create images that seem real, audio that seems real, video that seems real, and to modify existing images and video and audio in ways that aren’t immediately obvious to a human. What did we see in terms of research to try to protect us from that, or catch that, or defend against that?

Roman: So here’s an interesting observation, I guess. You can develop some sort of a forensic tool to analyze it, and give you a percentage likelihood that it’s real or that it’s fake. But does it really impact people? If you see it with your own eyes, are you going to believe your lying eyes, or some expert statistician on CNN?

So the problem is it will still have tremendous impact on most people. We’re not very successful at convincing people about multiple scientific facts. They simply go outside and say, “It’s cold right now, so global warming is false.” I suspect we’ll see exactly that with, let’s say, fake videos of politicians, where a majority of people easily believe anything they hear or see once, versus any number of peer-reviewed publications disproving it.

David: I kind of agree. I mean, when I try to think about how we would actually solve this kind of problem, I don’t think a technical solution that just allows somebody who has technical expertise to distinguish real from fake is going to be enough. We really need to figure out how to build a better trust infrastructure in our whole society, which is kind of a massive project. I’m not even sure exactly where to begin with that.

Roman: I guess the good news is it gives you plausible deniability. If a video of me comes out doing horrible things, I can claim it’s fake.

Ariel: That’s good for someone. Alright, so, I mean, you guys are two researchers, I don’t know how into policy you are, but I don’t know if we saw as many strong policies being developed. We did see the implementation of the GDPR, and for people who aren’t familiar with the GDPR, it’s essentially European rules about what data companies can collect from your interactions online, and the ways in which you need to give approval for companies to collect your data, and there’s a lot more to it than that. One of the things that I found most interesting about the GDPR is that it’s entirely European based, but it had a very global impact because it’s so difficult for companies to apply something only in Europe and not in other countries. And so earlier this year when you were getting all of those emails about privacy policies, that was all triggered by the GDPR. That was something very specific that happened and it did make a lot of news, but in general I felt that we saw a lot of countries and a lot of national and international efforts for governments to start trying to understand how AI is going to be impacting their citizens, and then also trying to apply ethics and things like that.

I’m sort of curious, before we get too far into anything: just as researchers, what is your reaction to that?

Roman: So I never got as much spam as I did that week when they released this new policy, so that kind of gives you a pretty good summary of what to expect. If you look at history, we have regulations against spam, for example. Computer viruses are illegal. So that’s a very expected result. It’s not gonna solve technical problems. Right?

David: I guess I like that they’re paying attention and they’re trying to tackle these issues. I think the way the GDPR was actually worded has been criticized a lot for being either much too broad, too demanding, or too vague. I’m not sure—there are some aspects of the details of that regulation that I’m not convinced about, or not super happy about. I guess overall it seems like, for people who are making these kinds of decisions, especially when we’re talking about cutting-edge machine learning, it’s just really hard. I mean, even people in the field don’t really know how you would begin to effectively regulate machine learning systems, and I think there’s a lot of disagreement about what a reasonable level of regulation would be or how regulations should work.

People are starting to have that sort of conversation in the research community a little bit more, and maybe we’ll have some better ideas about that in a few years. But I think right now it seems premature to me to even start trying to regulate machine learning in particular, because we just don’t really know where to begin. I think it’s obvious that we do need to think about how we control the use of the technology, because it’s just so powerful and has so much potential for harm and misuse and accidents and so on. But I think how you actually go about doing that is a really unclear and difficult problem.

Ariel: So for me it’s sort of interesting, we’ve been debating a bit today about technological breakthroughs versus societal impacts, and whether 2018 actually had as many breakthroughs and all of that. But I would guess that all of us agree that AI is progressing a lot faster than government does.

David: Yeah.

Roman: That’s almost a tautology.

Ariel: So I guess as researchers, what concerns do you have regarding that? Like do you worry about the speed at which AI is advancing?

David: Yeah, I would say I definitely do. I mean, we were just talking about this issue with fakes and how that’s going to contribute to things like fake news and erosion of trust in media and authority and polarization of society. I mean, if AI wasn’t going so fast in that direction, then we wouldn’t have that problem. And I think the rate that it’s going, I don’t see us catching up—or I should say, I don’t see the government catching up on its own anytime soon—to actually control the use of AI technology, and do our best anyways to make sure that it’s used in a safe way, and a fair way, and so on.

I think in and of itself it’s maybe not bad that the technology is progressing fast. I mean, it’s really amazing; scientifically there are gonna be all sorts of amazing applications for it. But there are going to be more and more problems as well, and I don’t think we’re really well equipped to solve them right now.

Roman: I’ll agree with David; I’m very concerned about the relative rates of progress. AI development progresses a lot faster than anything we see in AI safety. AI safety is just trying to identify problem areas and propose some general directions, but we have very little to show in terms of solved problems.

If you look at work in adversarial fields, maybe a little bit in cryptography, the good guys have always been a step ahead of the bad guys, whereas here you barely have any good guys as a percentage. You have less than 1% of researchers working directly on safety full-time. It’s the same situation with funding. So it’s not a very optimistic picture at this point.

David: I think it’s worth definitely distinguishing the kind of security risks that we’re talking about, in terms of fake news and stuff like that, from long-term AI safety, which is what I’m most interested in, and think is actually even more important, even though I think there’s going to be tons of important impacts we have to worry about already, and in the coming years.

And the long-term safety stuff is really more about artificial intelligence that becomes broadly capable and as smart or smarter than humans across the board. And there, there are maybe a few more signs of hope if I look at how the field might progress in the future, and that’s because there are a lot of problems that are going to be relevant for controlling or aligning or understanding these kinds of generally intelligent systems that are probably going to be necessary anyways in terms of making systems that are more capable in the near future.

So I think we’re starting to see issues with trying to get AIs to do what we want, and failing to, because we just don’t know how to specify what we want. And that’s, I think, basically the core of the AI safety problem—is that we don’t have a good way of specifying what we want. An example of that is what are called adversarial examples, which sort of demonstrate that computer vision systems that are able to do a really amazing job at classifying images and seeing what’s in an image and labeling images still make mistakes that humans just would never make. Images that look indistinguishable to humans can look completely different to the AI system, and that means that we haven’t really successfully communicated to the AI system what our visual concepts are. And so even though we think we have done a good job of telling it what to do, it’s like, “tell us what this picture is of”—the way that it found to do that really isn’t the way that we would do it and actually there’s some very problematic and unsettling differences there. And that’s another field that, along with the ones that I mentioned, like generative models and GANs, has been receiving a lot more attention in the last couple of years, which is really exciting from the point of view of safety and specification.
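
The linear-models intuition behind adversarial examples can be sketched in a few lines of numpy. This is an illustrative toy (the classifier, dimensions, and epsilon are all made up), not any system discussed here: in high dimensions, a tiny coordinated nudge to every pixel—the fast gradient sign method of Goodfellow et al.—can flip a classifier’s decision even though each individual change is negligible.

```python
import numpy as np

# Toy high-dimensional linear classifier: label = sign(w . x).
rng = np.random.default_rng(0)
d = 10_000                          # number of "pixels"
w = rng.normal(size=d)              # fixed model weights
x = rng.normal(size=d)              # a random "image"
x += (100.0 - w @ x) / (w @ w) * w  # rescale so the model scores x at exactly +100

# Fast gradient sign method: step each pixel by epsilon against the gradient.
# For a linear model, the gradient of the score w.r.t. x is just w.
epsilon = 0.05                      # 5% of the pixel scale: visually negligible
x_adv = x - epsilon * np.sign(w)

# Each pixel moved by only 0.05, but the score dropped by roughly
# epsilon * sum(|w_i|), about 0.05 * 8000 = 400, flipping the label.
print(np.sign(w @ x), np.sign(w @ x_adv))  # → 1.0 -1.0
```

The same arithmetic is why real image classifiers with millions of input dimensions are so vulnerable: many imperceptible per-pixel changes sum to a large change in the output.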

Ariel: So, would it be fair to say that you think we’ve had progress or at least seen progress in addressing long-term safety issues, but some of the near-term safety issues, maybe we need faster work?

David: I mean I think to be clear, we have such a long way to go to address the kind of issues we’re going to see with generally intelligent and super intelligent AIs, that I still think that’s an even more pressing problem, and that’s what I’m personally focused on. I just think that you can see that there are going to be a lot of really big problems in the near term as well. And we’re not even well equipped to deal with those problems right now.

Roman: I’ll generally agree with David. I’m more concerned about long-term impacts. They are both more challenging and more impactful. It seems like short-term things may be problematic right now, but the main difficulty is that we didn’t start working on them in time. So problems like algorithmic fairness, bias, technological unemployment, are social issues which are quite solvable; they are not really that difficult from engineering or technical points of view. Whereas long-term control of systems which are more intelligent than you are—very much unsolved at this point in any even toy model. So I would agree with the part about bigger concerns, but I think current problems we have today are already impacting people. The good news is we know how to do better.

David: I’m not sure that we know how to do better exactly. I think for a lot of these problems—the ones that you mentioned—it’s more a problem of willpower and developing political solutions. But with the deep fakes, this is something that I think requires a bit more of a technical solution, in the sense of how we organize our society so that people are either educated enough to understand this stuff, or so that people actually have someone they trust, and have a reason to trust, whose word they can take on it.

Roman: That sounds like a great job, I’ll take it.

Ariel: It almost sounds like something we need to have someone doing in person, though.

So going back to this past year: were there, say, groups that formed, or research teams that came together, or just general efforts that, while maybe they didn’t produce something yet, you think could produce something good, either in safety or AI in general?

David: I think something interesting is happening in terms of the way AI safety is perceived and talked about in the broader AI and machine learning community. It’s a little bit like this phenomenon where once we solve something people don’t consider it AI anymore. So I think machine learning researchers, once they actually recognize the problem that the safety community has been sort of harping on and talking about and saying like, “Oh, this is a big problem”—once they say, “Oh yeah, I’m working on this kind of problem, and that seems relevant to me,” then they don’t really think that it’s AI safety, and they’re like, “This is just part of what I’m doing, making something that actually generalizes well and learns the right concept, or making something that is actually robust, or being able to interpret the model that I’m building, and actually know how it works.”

These are all things that people are doing a lot of work on these days in machine learning that I consider really relevant for AI safety. So I think that’s like a really encouraging sign, in a way, that the community is sort of starting to recognize a lot of the problems, or at least instances of a lot of the problems that are going to be really critical for aligning generally intelligent AIs.

Ariel: And Roman, what about you? Did you see anything sort of forming in the last year that maybe doesn’t have some specific result, but that seemed hopeful to you?

Roman: Absolutely. So I’ve mentioned that there are very few actual AI safety researchers compared to the number of AI developers, researchers directly creating more capable machines. But the growth rate is much better, I think. The number of organizations, the number of people who show interest in it, the number of papers—all are growing at a much faster rate, and it’s encouraging because, as David said, it’s kind of like this convergence, if you will, where more and more people realize, “I cannot say I built an intelligent system if it kills everyone.” That’s just not what an intelligent system is.

So safety and security become integral parts of it. I think Stuart Russell has a great example where he talks about bridge engineering. We don’t talk about safe bridges and secure bridges—there’s just bridges. If it falls down, it’s not a bridge. Exactly the same is starting to happen here: People realize, “My system cannot fail and embarrass the company, I have to make sure it will not cause an accident.”

David: I think that a lot of people are thinking about that way more and more, which is great, but there is a sort of research mindset, where people just want to understand intelligence, and solve intelligence. And I think that’s kind of a different pursuit. Solving intelligence doesn’t mean that you make something that is safe and secure, it just means you make something that’s really intelligent, and I would like it if people who had that mindset were still, I guess, interested in or respectful of or recognized that this research is potentially dangerous. I mean, not right now necessarily, but going forward I think we’re going to need to have people sort of agree on having that attitude to some extent of being careful.

Ariel: Would you agree though that you’re seeing more of that happening?

David: Yeah, absolutely, yeah. But I mean it might just happen naturally on its own, which would be great.

Ariel: Alright, so before I get to my very last question, is there anything else you guys wanted to bring up about 2018 that we didn’t get to yet?

David: So we were talking about AI safety and there’s kind of a few big developments in the last year. I mean, there’s actually too many I think for me to go over all of them, but I wanted to talk about something which I think is relevant to the specification problem that I was talking about earlier.

Ariel: Okay.

David: So, there are three papers in the last year, actually, on what I call superhuman feedback. The idea motivating these works is that even specifying what we want on a particular instance in some particular scenario can be difficult. So typically the way that we would think about training an AI that understands our intentions is to give it a bunch of examples, and say, “In this situation, I prefer if you do this. This is the kind of behavior I want,” and then the AI is supposed to pick up on the patterns there and sort of infer what our intentions are more generally.

But there can be some things that we would like AI systems to be competent at doing, ideally, that are really difficult to even assess individual instances of. An example that I like to use is designing a transit system for a large city, or maybe for a whole country, or the world or something. That’s something that right now is done by a massive team of people. Using that whole team to assess a proposed design that the AI might make would be one example of superhuman feedback, because it’s not just a single human. But you might want to be able to do this with just a single human and a team of AIs helping them, instead of a team of humans. And there are a few proposals for how you could do that that have come out of the safety community recently, which I think are pretty interesting.

Ariel: Why is it called superhuman feedback?

David: Actually, this is just my term for it. I don’t think anyone else is using this term.

Ariel: Okay.

David: Sorry if that wasn’t clear. The reason I use it is because there are three different lines of work here. So there are these two papers from OpenAI on what’s called amplification and debate, and then another paper from DeepMind on reward learning and recursive reward modeling. And I like to view these as all trying to solve the same problem: how can we assist humans and enable them to make good, informed judgements that actually reflect their preferences, when they’re not capable of doing that by themselves, unaided? So it’s superhuman in the sense that it’s better than a single human can do. And these proposals are also aspiring to do things that I think even teams of humans couldn’t do, by having AI helpers that help you do the evaluation.

An example that Jan—who’s the lead author on the DeepMind paper, which I also worked on—gives is assessing an academic paper. So if you yourself aren’t familiar with the field and don’t have the expertise to assess this paper, you might not be able to say whether or not it should be published. But if you can decompose that task into things like: are the proofs valid? Are the experiments following a reasonable protocol? Is it novel? Is it formatted correctly for the venue where it’s submitted? And you got answers to all of those from helpers, then you could make the judgment. You’d just be like, okay, it meets all of the criteria, so it should be published. The idea would be to get AI helpers to do those sorts of evaluations for you across a broad range of tasks, and in that way allow us to explain to AIs, or teach AIs, what we want across a broad range of tasks.
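
The aggregation step of that paper-review example can be caricatured in a few lines. This is only a toy sketch—the actual proposals (amplification, debate, recursive reward modeling) learn the helpers and recurse—and the sub-questions and `helper` function here are hypothetical stand-ins:

```python
# Hypothetical decomposition of "should this paper be published?" into
# sub-judgments, each of which could be delegated to a human or AI helper.
def helper(question, paper):
    # Stand-in for an assistant; here it just reads a toy annotation.
    return paper[question]

def evaluate_paper(paper):
    subquestions = ["proofs_valid", "protocol_reasonable", "novel", "well_formatted"]
    answers = {q: helper(q, paper) for q in subquestions}
    # The top-level judge only needs to combine the sub-answers.
    return all(answers.values()), answers

paper = {"proofs_valid": True, "protocol_reasonable": True,
         "novel": True, "well_formatted": False}
verdict, answers = evaluate_paper(paper)
print(verdict)  # → False: the paper fails the formatting criterion
```

The point is that the top-level judgment becomes easy once the hard sub-questions are answered by helpers; the hard part, which this sketch omits entirely, is training helpers you can trust.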

Ariel: So, okay, and so then were there other things that you wanted to mention as well?

David: I do feel like I should talk about another thing that was, again, not developed last year, but really took off last year: this new kind of neural network architecture called the transformer, which is basically being used in a lot of places where convolutional neural networks and recurrent neural networks were being used before. And those were kind of the two main driving factors behind the deep learning revolution: in vision, where you use convolutional networks, and for things that have a sequential structure, like speech or text, where people were using recurrent neural networks. And this architecture was actually motivated originally by the same sort of scaling considerations, because it allowed them to remove some of the most computationally heavy parts of running these kinds of models in the context of translation, and basically make it a hundred times cheaper to train a translation model. But since then it’s also been used in a lot of other contexts and has been shown to be a really good replacement for these other kinds of models in a lot of applications.

And I guess the way to describe what it’s doing is it’s based on what’s called an attention mechanism, which is basically a way of giving a neural network the ability to pay more attention to some parts of an input than others—to look, say, at the one word that is most relevant to the current translation step. If you’re imagining outputting words one at a time, then because different languages put words in different orders, it doesn’t make sense to just translate the words in order. You want to look through the whole input sentence, like a sentence in English, and find the word that corresponds to whatever word should come next in your output sentence.

And that was sort of the original inspiration for this attention mechanism, but since then it’s been applied in a bunch of different ways, including paying attention to different parts of the model’s own computation, paying attention to different parts of images. And basically just using this attention mechanism in the place of the other sort of neural architectures that people thought were really important to give you temporal dependencies across something sequential like a sentence that you’re trying to translate, turned out to work really well.
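
The mechanism David describes is compact enough to write down directly. Below is a minimal numpy sketch of the scaled dot-product attention at the heart of the transformer (from “Attention Is All You Need”); the toy keys, values, and query are made up purely for illustration:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Each query produces a weighted mix of the values V, with weights
    given by how strongly it matches each key in K."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # shape (n_queries, n_keys)
    return weights @ V, weights

# Translation intuition: when emitting the next output word, the query
# attends to whichever input word it corresponds to, regardless of order.
rng = np.random.default_rng(0)
K = np.eye(5, 8)                 # 5 input words, one 8-dim key each
V = rng.normal(size=(5, 8))      # the information carried by each word
q = 20.0 * K[3:4]                # a query that matches word 3's key
out, weights = attention(q, K, V)
print(weights.round(3))          # nearly all the weight lands on position 3
```

Because the weights are computed for all positions at once with plain matrix multiplications, there is no sequential recurrence to wait on—which is exactly the property that made it so much cheaper to train than recurrent models.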

Ariel: So I want to actually pass this to Roman real quick. Did you have any comments that you wanted to add to either the superhuman feedback or the transformer architecture?

Roman: Sure, so superhuman feedback: I like the idea and I think people should be exploring that, but we can kind of look at similar examples previously. So, for a while we had a situation where teams of human chess players and machines did better than just unaided machines or unaided humans. That lasted about ten years. And then machines became so much better that humans didn’t really contribute anything—consulting them was just an additional bottleneck. I wonder if, long term, this solution will face similar problems. It’s very useful right now, but I don’t know if it will scale.

David: Well I want to respond to that, because I think the idea here isn’t, in my mind, to have something that scales in the way that you’re describing, where it can out-compete pure AI systems. Although I guess some people might be hoping that that’s the case, because that would make the strategic picture better in terms of people’s willingness to use safer systems. But this is more about just: how can we even train systems—if we have the willpower, if people want to build a system that has the human in charge, and ends up doing what the human wants—how can we actually do that for something that’s really complicated?

Roman: Right. And as I said, I think it’s a great way to get there. So this part I’m not concerned about. It’s a long-term game with that.

David: Yeah, no, I mean I agree that that is something to be worried about as well.

Roman: There is a possibility of manipulation if you have a human in the loop, and that itself makes it not safer but more dangerous in certain ways.

David: Yeah, one of the biggest concerns I have for this whole line of work is that the human needs to really trust the AI systems that are assisting it, and I just don’t see that we have good enough mechanisms for establishing trust and building trustworthy systems right now, to really make this scale well without introducing a lot of risk for things like manipulation, or even just compounding of errors.

Roman: But those approaches, like the debate approach, it just feels like they’re setting up humans for manipulation from both sides, and who’s better at breaking the human psychological model.

David: Yep, I think it’s interesting, and I think it’s a good line of work. But I think we haven’t seen anything that looks like a convincing solution to me yet.

Roman: Agreed.

Ariel: So, Roman, was there anything else that you wanted to add about things that happened in the last year that we didn’t get to?

Roman: Well, as a professor, I can tell you that students stop learning after about 40 minutes. So I think at this point we’re just being counterproductive.

Ariel: So for what it’s worth, our most popular podcasts have all exceeded two hours. So, what are you looking forward to in 2019?

Roman: Are you asking about safety or development?

Ariel: Whatever you want to answer. Just sort of in general, as you look toward 2019, what relative to AI are you most excited and hopeful to see, or what do you predict we’ll see?

David: So I’m super excited for people to hopefully pick up on this reward learning agenda that I mentioned, that Jan and I and people at DeepMind worked on. I was actually pretty surprised how little work has been done on this. So the idea of this agenda, at a high level, is just: we want to learn a reward function—which is like a score that tells an agent how well it’s doing—learn reward functions that encode what we want the AI to do, and that’s the way that we’re going to specify tasks to an AI. And I think from a machine learning researcher’s point of view this is kind of the most obvious solution to specification problems and to safety—just learn a reward function. But very few people are really trying to do that, and I’m hoping that we’ll see more people trying to do that, and encountering and addressing some of the challenges that come up.
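
One concrete version of “just learn a reward function” is to fit it to human comparisons between pairs of behaviors, as in OpenAI and DeepMind’s human-preferences work. Here is a minimal numpy sketch under a Bradley–Terry preference model; the features, the simulated “human,” and all the data are synthetic stand-ins, not the agenda’s actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" reward the human implicitly uses; the learner never sees it.
true_w = np.array([2.0, -1.0, 0.5])

# Each behavior is summarized by 3 features; the human compares pairs and
# prefers whichever behavior has higher true reward.
A = rng.normal(size=(200, 3))
B = rng.normal(size=(200, 3))
prefers_A = (A @ true_w > B @ true_w).astype(float)

# Fit a linear reward model r(x) = w . x under the Bradley-Terry model:
# P(human prefers A) = sigmoid(r(A) - r(B)). Plain gradient ascent on the
# log-likelihood of the observed comparisons.
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(A - B) @ w))          # predicted P(prefers A)
    w += 0.1 * (A - B).T @ (prefers_A - p) / len(A)

# The learned reward ranks unseen behaviors like the hidden reward does.
test_A = rng.normal(size=(1000, 3))
test_B = rng.normal(size=(1000, 3))
agree = np.mean((test_A @ w > test_B @ w) == (test_A @ true_w > test_B @ true_w))
print(f"agreement with the hidden reward on held-out pairs: {agree:.0%}")
```

The toy works because comparisons are cheap for the “human” to give yet pin down the reward’s direction; the open challenges David alludes to start when behaviors are too complex for a person to compare unaided.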

Roman: So I think by definition we cannot predict short-term breakthroughs. So what we’ll see is a lot of continuation of 2018 work, and previous work scaling up. So, if you have, let’s say, Texas hold ’em poker: so for two players, we’ll take it to six players, ten players, something like that. And you can make similar projections for other fields, so the strategy games will be taken to new maps, involve more players, maybe additional handicaps will be introduced for the bots. But that’s all we can really predict, kind of gradual improvement.

Protein folding will be even more efficient in terms of predicting actual structures: any accuracy rates that were climbing from 80% to 90% will hit 95, 96. And this is a very useful way of predicting what we can anticipate, and I’m trying to do something similar with accidents. So if we can see historically what was going wrong with systems, we can project those trends forward. And I’m happy to say that there are now at least two or three different teams working on collecting those examples and trying to analyze them and create taxonomies for them. So that’s very encouraging.

David: Another thing that comes to mind is—I mentioned adversarial examples earlier, which are these imperceptible differences to a human that change how the AI system perceives something like an image. And so far, for the most part, the field has been focused on really imperceptible changes. But I think now people are starting to move towards a broader idea of what counts as an adversarial example: basically anything that a human thinks clearly belongs to one class and the AI system thinks clearly belongs to another class, where the input has been deliberately constructed to create that kind of difference.

And I think this going to be really interesting and exciting to see how the field tries to move in that direction, because as I mentioned, I think it’s hard to define how humans decide whether or not something is a picture of a cat or something. And the way that we’ve done it so far is just by giving lots of examples of things that we say are cats. But it turns out that that isn’t sufficient, and so I think this is really going to push a lot of people closer towards thinking about some of the really core safety challenges within the mainstream machine learning community. So I think that’s super exciting.

Roman: It is a very interesting topic, and I am in particular looking at a side subject in that, which is adversarial inputs for humans that machines develop—which I guess is kind of like optical illusions and audio illusions, where a human mislabels inputs in a predictable way, which allows for manipulation.

Ariel: Along very similar lines, I think I want to modify my questions slightly, and also ask: coming up in 2019, what are you both working on that you’re excited about, if you can tell us?

Roman: Sure, so there have been a number of publications looking at particular limitations, either through mathematical proofs or through well-known economic models, and at what is in fact possible from a computational complexity point of view. And I’m trying to kind of integrate those into a single model showing—in principle, not in practice, but even in principle—what can we do with the AI control problem? How solvable is it? Is it solvable? Is it not solvable? Because I don’t think there is a mathematically rigorous proof, or even a rigorous argument, either way. So I think that will be helpful, especially for arguing about the importance of the problem and resource allocation.

David: I’m trying to think what I can talk about. I guess right now I have some ideas for projects that are not super well thought out, so I won’t talk about those. And I have a project that I’m trying to finish off which is a little bit hard to describe in detail, but I’ll give the really high level motivation for it. And it’s about something that people in the safety community like to call capability control. I think Nick Bostrom has these terms, capability control and motivation control. And so what I’ve been talking about most of the time in terms of safety during this podcast was more like motivation control, like getting the AI to want to do the right thing, and to understand what we want. But that might end up being too hard, or sort of limited in some respect. And the alternative is just to make AIs that aren’t capable of doing things that are dangerous or catastrophic.

A lot of people in the safety community worry about capability control approaches failing because if you have a very intelligent agent, it will view these attempts to control it as undesirable, and try to free itself from any constraints that we give it. And I think a way of trying to get around that problem is to look at capability control through the lens of motivation control. So to basically make an AI that doesn’t want to influence certain things, and maybe doesn’t have some of these drives to influence the world, or to influence the future. And so in particular I’m trying to see how we can design agents that really don’t try to influence the future, and really only care about doing the right thing, right now. And if we try to do that in a sort of naïve way, there are ways that can fail, and we can get some sort of emergent drive to still try to optimize over the long term, or to have some influence on the future. And I think to the extent we see things like that, that’s problematic from this perspective of: let’s just make AIs that aren’t capable of or motivated to influence the future.

Ariel: Alright! I think I’ve kept you both on for quite a while now. So, David and Roman, thank you so much for joining us today.

David: Yeah, thank you both as well.

Roman: Thank you so much.

FLI Podcast: Artificial Intelligence: American Attitudes and Trends with Baobao Zhang

Our phones, our cars, our televisions, our homes: they’re all getting smarter. Artificial intelligence is already inextricably woven into everyday life, and its impact will only grow in the coming years. But while this development inspires much discussion among members of the scientific community, public opinion on artificial intelligence has remained relatively unknown.

Artificial Intelligence: American Attitudes and Trends, a report published earlier in January by the Center for the Governance of AI, explores this question. Its authors relied on an in-depth survey to analyze American attitudes towards artificial intelligence, from privacy concerns to beliefs about U.S. technological superiority. Some of their findings—most Americans, for example, don’t trust Facebook—were unsurprising. But much of their data reflects trends within the American public that have previously gone unnoticed.

This month Ariel was joined by Baobao Zhang, lead author of the report, to talk about these findings. Zhang is a PhD candidate in Yale University’s political science department and research affiliate with the Center for the Governance of AI at the University of Oxford. Her work focuses on American politics, international relations, and experimental methods.

In this episode, Zhang spoke about her take on some of the report’s most interesting findings, the new questions it raised, and future research directions for her team. Topics discussed include:

  • Demographic differences in perceptions of AI
  • Discrepancies between expert and public opinions
  • Public trust (or lack thereof) in AI developers
  • The effect of information on public perceptions of scientific issues

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi there. I’m Ariel Conn with the Future of Life Institute. Today, I am doing a special podcast, which I hope will be just the first in a continuing series, in which I talk to researchers about the work that they’ve just published. Last week, a report came out called Artificial Intelligence: American Attitudes and Trends, which is a survey that looks at what Americans think about AI. I was very excited when the lead author of this report agreed to come join me and talk about her work on it, and I am actually now going to just pass this over to her, and let her introduce herself, and just explain a little bit about what this report is and what prompted the research.

Baobao: My name is Baobao Zhang. I’m a PhD candidate in Yale University’s political science department, and I’m also a research affiliate with the Center for the Governance of AI at the University of Oxford. We conducted a survey of 2,000 American adults in June 2018 to look at what Americans think about artificial intelligence. We did so because we believe that AI will impact all aspects of society, and therefore, the public is a key stakeholder. We feel that we should study what Americans think about this technology that will impact them. In this survey, we covered a lot of ground. In the past, surveys about AI tended to have a very specific focus, for instance on automation and the future of work. What we try to do here is cover a wide range of topics, including the future of work, but also lethal autonomous weapons, how AI might impact privacy, and trust in various actors to develop AI.

So one of the things we found is Americans believe that AI is a technology that should be carefully managed. In fact, 82% of Americans feel this way. Overall, Americans express mixed support for developing AI. 41% somewhat support or strongly support the development of AI, while there’s a smaller minority, 22%, that somewhat or strongly opposes it. And in terms of the AI governance challenges that we asked—we asked about 13 of them—Americans think all of them are quite important, although they prioritize preventing AI-assisted surveillance from violating privacy and civil liberties, preventing AI from being used to spread fake news online, preventing AI cyber attacks, and protecting data privacy.

Ariel: Can you talk a little bit about what the difference is between concerns about AI governance and concerns about AI development and more in the research world?

Baobao: In terms of the support for developing AI, we saw that as a general question in terms of support—we didn’t get into the specifics of what developing AI might look like. But in terms of the governance challenges, we gave quite detailed, concrete examples of governance challenges, and these tend to be more specific.

Ariel: Would it be fair to say that this report looks specifically at governance challenges as opposed to development?

Baobao: It’s a bit of both. I think we ask both about the R&D side, for instance we ask about support for developing AI and which actors the public trusts to develop AI. On the other hand, we also ask about the governance challenges. Among the 13 AI governance challenges that we presented to respondents, Americans tend to think all of them are quite important.

Ariel: What were some of the results that you expected, that were consistent with what you went into this survey thinking people thought, and what were some of the results that surprised you?

Baobao: Some of the results that surprised us is how soon the public thinks that high-level machine intelligence will be developed. We find that they think it will happen a lot sooner than what experts predict, although some past research suggests similar results. What didn’t surprise me, in terms of the AI governance challenge question, is how people are very concerned about data privacy and digital manipulation. I think these topics have been in the news a lot recently, given all the stories about hacking or digital manipulation on Facebook.

Ariel: So going back real quick to your point about the respondents expecting high-level AI happening sooner: how soon do they expect it?

Baobao: In our survey, we asked respondents about high-level machine intelligence, and we defined it as when machines are able to perform almost all tasks that are economically relevant today better than the median human today at each task. My co-author, Allan Dafoe, and some of my other team members, we’ve done a survey asking AI researchers—this was back in 2016—a similar question, and there we had a different definition of high-level machine intelligence that required a higher bar, so to speak. So that might have caused some difference. We’re trying to ask this question again to AI researchers this year. We’re doing continuing research, so hopefully the results will be more comparable. Even so, I think the difference is quite large.

I guess one more caveat—we have this in a footnote—is that in a pilot survey of the American public we did ask using the same definition we gave AI experts in 2016, and we also found that the public thinks high-level machine intelligence will happen sooner than experts predict. So it might not just be driven by the definition itself; the public and experts have different assessments. But to answer your question, the median respondent in our American public sample predicts that there’s a 54% probability of high-level machine intelligence being developed within the next 10 years, which is quite a high probability.

Ariel: I’m hesitant to ask this, because I don’t know if it’s a very fair question, but do you have thoughts on why the general public thinks that high-level AI will happen sooner? Do you think it is just a case that there’s different definitions that people are referencing, or do you think that they’re perceiving the technology differently?

Baobao: I think that’s a good question, and we’re doing more research to investigate these results and to probe at it. One thing is that the public might have a different perception of what AI is compared to experts. In future surveys, we definitely want to investigate that. Another potential explanation is that the public lacks understanding of what goes into AI R&D.

Ariel: Have there been surveys that are as comprehensive as this in the past?

Baobao: I’m hesitant to say that there are surveys that are as comprehensive as this. We certainly relied on a lot of past survey research when building our surveys. The Eurobarometer had a couple of good surveys on AI in the past, but I think we cover both sort of the long-term and the short-term AI governance challenges, and that’s something that this survey really does well.

Ariel: Okay. The reason I ask that is I wonder how much people’s perceptions or misperceptions of how fast AI is advancing would be influenced by just the fact that we have had significant advancements just in the last couple of years that I don’t think were quite as common during previous surveys that were presented to people.

Baobao: Yes, that certainly makes sense. One part of our survey tries to track responses over time, so I was able to dig up some surveys going all the way back to the 1980s that were conducted by the National Science Foundation on the question of automation—whether automation will create more jobs or eliminate more jobs. And we find that compared with the historical data, the percentage of people who think that automation will create more jobs than it eliminates—that percentage has decreased, so this result could be driven by people reading in the news about all these advances in AI and thinking, “Oh, AI is getting really good these days at doing tasks normally done by humans,” but again, you would need much more data to sort of track these historical trends. So we hope to do that. We just recently received a grant from the Ethics and Governance of AI Fund, to continue this research in the future, so hopefully we will have a lot more data, and then we can really map out these historical trends.

Ariel: Okay. We looked at those 13 governance challenges that you mentioned. I want to more broadly ask the same two-part question of: looking at the survey in its entirety, what results were most expected and what results were most surprising?

Baobao: In terms of the AI governance challenge question, I think we had expected some of the results. We’d done some pilot surveys in the past, so we were able to have a little bit of a forecast, in terms of the governance challenges that people prioritize, such as data privacy, cyber attacks, surveillance, and digital manipulation. These were also things that respondents in the pilot surveys had prioritized. I think some of the governance challenges that people still think of as important, but don’t view as likely to impact large numbers of people in the next 10 years, such as critical AI systems failure—these questions are sort of harder to ask in some ways. I know that AI experts think about it a lot more than, say, the general public.

Another thing that sort of surprised me is how much people think value alignment—which is sort of an abstract concept—is important, and also likely to impact large numbers of people within the next 10 years. It’s up there with safety of autonomous vehicles or biased hiring algorithms, so that was somewhat surprising.

Ariel: That is interesting. So if you’re asking people about value alignment, were respondents already familiar with the concept, or was this something that was explained to them and they just had time to consider it as they were looking at the survey?

Baobao: We explained to them what it meant: we said that it means making sure that AI systems are safe, trustworthy, and aligned with human values, and we gave a brief paragraph definition. We thought that people might not have heard of this term before, or might find it quite abstract, so we defined it for them.

Ariel: I would be surprised if it was a commonly known term. Then, looking more broadly at the survey as a whole: you looked at lots of different demographics, and you asked other questions too, about things like global risks and about perceptions of AI in general, such as whether AI, or advanced AI, was good or bad, and things like that. So looking at the whole survey, what surprised you the most? Was it still answers within the governance challenges, or did anything else jump out at you as unexpected?

Baobao: Another thing that jumped out at me is that respondents who have computer science or engineering degrees tend to think that the AI governance challenges are less important across the board than people who don’t have those degrees. People with computer science or engineering degrees are also more supportive of developing AI. That result is not totally unexpected, but in the news there is a sense that the people concerned about AI safety, or AI governance challenges, tend to be those with a technical computer background. In reality, what we see is that people who don’t have a tech background are concerned about AI: for instance, women, those with low levels of education, and those who are low-income tend to be the least supportive of developing AI. That’s something that we want to investigate in the future.

Ariel: There’s an interesting graph in here where you’re showing the extent to which the various groups consider an issue to be important, and as you said, people with computer science or engineering degrees typically don’t consider a lot of these issues very important. I’m going to list the issues real quickly. There’s data privacy, cyber attacks, autonomous weapons, surveillance, autonomous vehicles, value alignment, hiring bias, criminal justice bias, digital manipulation, US-China arms race, disease diagnosis, technological unemployment, and critical AI systems failure. So as you pointed out, the people with the CS and engineering degrees just don’t seem to consider those issues nearly as important, but you also have a category here of people with computer science or programming experience, and they have very different results. They do seem to be more concerned. Now, I’m sort of curious what the difference was between someone who has experience with computer science and someone who has a degree in computer science.

Baobao: I don’t have a very good explanation for the difference between the two, except to say that having computer science or programming experience is a lower bar, so more people in the sample meet it: 735, compared to 195 people who have computer science or engineering undergraduate or graduate degrees. Going forward, in future surveys, we want to probe at this a bit more. We might look at what industries various people are working in, or how much experience they have either using AI or developing AI.

Ariel: And then I’m also sort of curious—I know you guys still have more work that you want to do—but I’m curious what you know now about how American perspectives are either different or similar to people in other countries.

Baobao: The most direct comparison that we can make is with respondents in the EU, because we have a lot of data based on the Eurobarometer surveys, and we find that Americans share similar concerns with Europeans about AI. So as I mentioned earlier, 82% of Americans think that AI is a technology that should be carefully managed, and that percentage is similar to what the EU respondents have expressed. Also, we find similar demographic trends, in that women, those with lower levels of income or lower levels of education, tend to be not as supportive of developing AI.

Ariel: I went through this list, and one of the things that was on it is the potential for a US-China arms race. Can you talk a little bit about the results that you got from questions surrounding that? Do Americans seem to be concerned about a US-China arms race?

Baobao: One of the interesting findings from our survey is that Americans don’t necessarily think the US or China is the best at AI R&D, which is surprising, given that these two countries are probably the best. That’s a curious fact that I think we need to be cognizant of.

Ariel: I want to interject there, and then we can come back to my other questions, because I was really curious about that. Is that a case of the way you asked it—it was just, you know, “Is the US in the lead? Is China in the lead?”—as opposed to saying, “Do you think the US or China are in the lead?” Did respondents seem confused by possibly the way the question was asked, or do they actually think there’s some other country where there’s even more research happening?

Baobao: We asked this question in the way that Pew Research Center has asked about general scientific achievements, so we did it as a survey experiment where half of the respondents were randomly assigned to consider the US and half were randomly assigned to consider China. We wanted to ask the question in this manner so that we get a more specific distribution of responses. When you just ask who is in the lead, you’re only allowed to put down one country, whereas we give respondents a number of choices, so a country can be rated best in the world, above average, et cetera.

In terms of people underestimating US R&D, I think this is reflective of the public underestimating US scientific achievements in general. Pew had a similar question in a 2015 survey, and while 45% of the scientists they interviewed think that scientific achievements in the US are the best in the world, only 15% of Americans expressed the same opinion. So this could just be reflecting this general trend.

Ariel: I want to go back to my questions about the US-China arms race, and I guess it does make sense, first, to just define what you are asking about with a US-China arms race. Is that focused more on R&D, or were you also asking about a weapons race?

Baobao: This is actually a survey experiment, where we present different messages to respondents about a potential US-China arms race, and we asked both about investment in AI military capabilities as well as developing AI in a more peaceful manner, and about cooperation between the US and China in terms of general R&D. We found that Americans seem to support the US investing more in AI military capabilities, to make sure that it doesn’t fall behind China, even though that would exacerbate an AI military arms race. On the other hand, they also support the US working hard to cooperate with China to avoid the dangers of an AI arms race, and they don’t seem to recognize that there’s a trade-off between the two.

I think this result is important for policymakers trying not to exacerbate an arms race, or trying to prevent one: when communicating with the public, they need to communicate these trade-offs. We find that messages explaining the risks of an arms race tend to decrease respondent support for the US investing more in AI military capabilities, but the other information treatments don’t seem to change public perceptions.

Ariel: Do you think it’s a misunderstanding of the trade-offs, or maybe just hopeful thinking that there’s some way to maintain military might while still cooperating?

Baobao: I think this is a question that involves further investigation. I apologize that I keep saying this.

Ariel: That’s the downside to these surveys. I end up with far more questions than get resolved.

Baobao: Yes, and we’re one of the first groups who are asking these questions, so we’re just at the beginning stages of probing this very important policy question.

Ariel: With a project like this, do you expect to get more answers or more questions?

Baobao: I think in the beginning stages, we might get more questions than answers, although we are certainly getting some important answers—for instance that the American public is quite concerned about the societal impacts of AI. With that result, then we can probe and get more detailed answers hopefully. What are they concerned about? What can policymakers do to alleviate these concerns?

Ariel: Let’s get into some of the results that you had regarding trust. Maybe you could just talk a little bit about what you asked the respondents first, and what some of their responses were.

Baobao: Sure. We asked two questions regarding trust. We asked about trust in various actors to develop AI, and we also asked about trust in various actors to manage the development and deployment of AI. These actors include parts of the US government, international organizations, companies, and other groups such as universities or nonprofits. We found that among the actors that are most trusted to develop AI, these include university researchers and the US military.

Ariel: That was a rather interesting combination, I thought.

Baobao: I would like to give it some context. In general, trust in institutions is low among the American public. Particularly, there’s a lot of distrust in the government, and university researchers and the US military are the most trusted institutions across the board, when you ask about other trust issues.

Ariel: I would sort of wonder if there’s political sides with which people are more likely to trust universities and researchers versus trust the military. Is that across the board respondents on either side of the political aisle trusted both, or were there political demographics involved in that?

Baobao: That’s something that we can certainly look into with our existing data. I would need to check and get back to you.

Ariel: The other thing that I thought was interesting with that—and we can get into the actors that people don’t trust in a minute—but I know I hear a lot of concern that Americans don’t trust scientists. As someone who does a lot of science communication, I think that concern is overblown: there is actually a significant amount of trust in scientists; there are just certain areas where it’s less. I was sort of wondering what you’ve seen in terms of trust in science, and if the results of this survey have impacted that at all.

Baobao: I would like to add that among the actors that we asked who are currently building AI or planning to build AI, trust is relatively low amongst all these groups.

Ariel: Okay.

Baobao: So, even with university scientists: 50% of respondents say that they have a great amount or a fair amount of confidence in university researchers developing AI in the interest of the public. That’s better than some of these other organizations, but it’s not super high, and that is a bit concerning. In terms of trust in science in general: I used to work in the climate policy space before I moved into AI policy, and there, trust in expertise with regard to climate change is a question we struggle with. I found in my past research that communicating the scientific consensus on climate change is actually an effective messaging tool, so your sense that concerns about distrust in science are overblown could be true. Going forward, in terms of effective scientific communication, having AI researchers deliver an effective message could be important in bringing the public to trust AI more.

Ariel: As someone in science communication, I would definitely be all for that, but I’m also all for more research to understand that better. I also want to go into the organizations that Americans don’t trust.

Baobao: I think in terms of tech companies, they’re not perceived as untrustworthy across the board. I think trust is still relatively high for tech companies, besides Facebook. People really don’t trust Facebook, and that could be because of all the recent coverage of Facebook violating data privacy, the Cambridge Analytica scandal, digital manipulation on Facebook, et cetera. We conducted this survey a few months after the Cambridge Analytica Facebook scandal had been in the news, but we’ve also run some pilot surveys before all that press coverage broke, and we also found that people distrust Facebook. So it might be something particular to the company, although it’s a cautionary tale for other tech companies: they should work hard to make sure that the public trusts their products.

Ariel: So I’m looking at this list, and under the tech companies, you asked about Microsoft, Google, Facebook, Apple, and Amazon. And I guess one question that I have—the trust in the other four, Microsoft, Google, Apple, and Amazon appears to be roughly on par, and then there’s very limited trust in Facebook. But I wonder, do you think it’s just—since you’re saying that Facebook also wasn’t terribly trusted beforehand—do you think that has to do with the fact that we have to give so much more personal information to Facebook? I don’t think people are aware of giving as much data to even Google, or Microsoft, or Apple, or Amazon.

Baobao: That could be part of it. So, I think going forward, we might want to ask more detailed questions about how people use certain platforms, or whether they’re aware that they’re giving data to particular companies.

Ariel: Are there any other reasons that you think could be driving people to not trust Facebook more than the other companies, especially as you said, with the questions and testing that you’d done before the Cambridge Analytica scandal broke?

Baobao: Before the Cambridge Analytica Facebook scandal, there was a lot of news coverage around the 2016 elections of vast digital manipulation on Facebook and on social media, so that could be driving the results.

Ariel: Okay. Just to be consistent and ask you the same question over and over again, with this, what did you find surprising and what was on par with your expectations?

Baobao: I suppose I don’t find the Facebook results that surprising, given the negative press coverage, and also given our pilot results. What I did find surprising is the high level of trust in the US military to develop AI, because I think some of us in the AI policy community are concerned about military applications of AI, such as lethal autonomous weapons. But on the other hand, Americans seem to place a high general level of trust in the US military.

Ariel: Yeah, that was an interesting result. So if you were going to move forward, what are some questions that you would ask to try to get a better feel for why the trust is there?

Baobao: I think I would like to ask some questions about particular uses or applications of AI these various actors are developing. Sometimes people aren’t aware that the US military is perhaps investing in this application of AI that they might find problematic, or that some tech companies are working on some other applications. I think going forward, we might do more of these survey experiments, where we give information to people and see if that increases or decreases trust in the various actors.

Ariel: What did Americans think of high-level machine intelligence?

Baobao: What we found is that the public thinks, on balance, it will be more bad than good: 15% of respondents think it will be extremely bad, possibly leading to human extinction, and that’s a concern. On the other hand, only 5% think it will be extremely good. There’s a lot of uncertainty. To be fair, it is a technology that a lot of people don’t understand, so 18% said, “I don’t know.”

Ariel: What do we take away from that?

Baobao: I think this also reflects our previous findings that I talked about, where Americans expressed concern about where AI is headed: there are people with serious reservations about AI’s impact on society. Certainly, AI researchers and policymakers should take these concerns seriously and invest a lot more in research into how to prevent bad outcomes and make sure that AI can be beneficial to everyone.

Ariel: Were there groups who surprised you by either being more supportive of high-level AI and groups who surprised you by being less supportive of high-level AI?

Baobao: I think the results for support of developing high-level machine intelligence versus support for developing AI, they’re quite similar. The correlation is quite high, so I suppose nothing is entirely surprising. Again, we find that people with CS or engineering degrees tend to have higher levels of support.

Ariel: I find it interesting that people who have higher incomes seem to be more supportive as well.

Baobao: Yes. That’s another result that’s pretty consistent across the two questions. We also performed analysis looking at these different levels of support for developing high-level machine intelligence, controlling for support of developing AI, and what we find there is that those with CS or programming experience have greater support of developing high-level machine intelligence, even controlling for support of developing AI. So there, it seems to be another tech optimism story, although we need to investigate further.

Ariel: And can you explain what you mean when you say that you’re analyzing the support for developing high-level machine intelligence with respect to the support for AI? What distinction are you making there?

Baobao: Sure. So we use a multiple linear regression model, where we’re trying to predict support for developing high-level machine intelligence using all these demographic characteristics, but also including respondents’ support for developing AI, to see if something is driving support for developing high-level machine intelligence even after controlling for support for developing AI. And we find that, controlling for support for developing AI, having CS or programming experience is still correlated with support for developing high-level machine intelligence. I hope that makes sense.
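The “controlling for” analysis Baobao describes can be sketched as a small multiple regression on simulated data. This is purely illustrative: the data, variable names, and coefficients below are invented for the example and are not the study’s actual data, variables, or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated respondents (invented for illustration, not the survey's data):
cs_experience = rng.integers(0, 2, n)                    # 1 = CS/programming experience
support_ai = rng.normal(0, 1, n) + 0.5 * cs_experience   # general support for developing AI

# Suppose support for high-level machine intelligence (HLMI) depends on both:
support_hlmi = 0.8 * support_ai + 0.4 * cs_experience + rng.normal(0, 1, n)

# Multiple linear regression: predict HLMI support from CS experience
# while *controlling for* general AI support, by including it as a column in X.
X = np.column_stack([np.ones(n), cs_experience, support_ai])
coef, *_ = np.linalg.lstsq(X, support_hlmi, rcond=None)

# coef[1] estimates the association of CS experience with HLMI support
# net of general AI support; coef[2] estimates the AI-support effect.
print(coef)
```

If the CS-experience coefficient stays positive even with the control included, the sketch mirrors the kind of finding described here: tech experience predicting HLMI support beyond general support for AI.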

Ariel: For the purposes of the survey, how do you distinguish between AI and high-level machine intelligence?

Baobao: We defined AI as computer systems that perform tasks or make decisions that usually require human intelligence. So that’s a more general definition, versus high-level machine intelligence defined in such a way where the AI is doing most economically relevant tasks at the level of the median human.

Ariel: Were there inconsistencies between those two questions, where you were surprised to find support for one and not support for the other?

Baobao: We can probe it further, to see if there are people who answered differently for those two questions. We haven’t looked into it, but that’s certainly something we can do with our existing data.

Ariel: Were there any other results that you think researchers specifically should be made aware of, that could potentially impact the work that they’re doing in terms of developing AI?

Baobao: I guess here are some general recommendations. I think it’s important for researchers, and people working in adjacent spaces, to do a lot more scientific communication to explain to the public what they’re doing—particularly AI safety researchers, because there’s a lot of hype about AI in the news, either about how scary it is or how great it will be, and I think some more nuanced narratives would help people understand the technology.

Ariel: I’m more than happy to do what I can to try to help there. So for you, what are your next steps?

Baobao: Currently, we’re working on two projects. We’re hoping to run a similar survey in China this year, so we’re currently translating the questions into Chinese and changing the questions to have more local context. So then we can compare our results—the US results with the survey results from China—which will be really exciting. We’re also working on surveying AI researchers about various aspects of AI, both looking at their predictions for AI development timelines, but also their views on some of these AI governance challenge questions.

Ariel: Excellent. Well, I am very interested in the results of those as well, so I hope you’ll keep us posted when those come out.

Baobao: Yes, definitely. I will share them with you.

Ariel: Awesome. Is there anything else you wanted to mention?

Baobao: I think that’s it.

Ariel: Thank you so much for joining us.

Baobao: Thank you. It’s a pleasure talking to you.

Podcast: Existential Hope in 2019 and Beyond

Humanity is at a turning point. For the first time in history, we have the technology to completely obliterate ourselves. But we’ve also created boundless possibilities for all life that could enable just about any brilliant future we can imagine. Humanity could erase itself with a nuclear war or a poorly designed AI, or we could colonize space and expand life throughout the universe. As a species, our future has never been more open-ended.

The potential for disaster is often more visible than the potential for triumph, so as we prepare for 2019, we want to talk about existential hope, and why we should actually be more excited than ever about the future. In this podcast, Ariel talks to six experts–Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark, and Anders Sandberg–about their views on the present, the future, and the path between them.

Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and creator of the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.

We hope you’ll come away feeling inspired and motivated–not just to prevent catastrophe, but to facilitate greatness.

Topics discussed in this episode include:

  • How technology aids us in realizing personal and societal goals.
  • FLI’s successes in 2018 and our goals for 2019.
  • Worldbuilding and how to conceptualize the future.
  • The possibility of other life in the universe and its implications for the future of humanity.
  • How we can improve as a species and strategies for doing so.
  • The importance of a shared positive vision for the future, what that vision might look like, and how a shared vision can still represent a wide enough set of values and goals to cover the billions of people alive today and in the future.
  • Existential hope and what it looks like now and far into the future.

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi everyone. Welcome back to the FLI podcast. I’m your host, Ariel Conn, and I am truly excited to bring you today’s show. This month, we’re departing from our standard two-guest interview format because we wanted to tackle a big and fantastic topic for the end of the year that would require insight from a few extra people. It may seem as if we at FLI spend a lot of our time worrying about existential risks, but it’s helpful to remember that we don’t do this because we think the world will end tragically: We address issues relating to existential risks because we’re so confident that if we can overcome these threats, we can achieve a future greater than any of us can imagine.

And so, as we end 2018 and look toward 2019, we want to focus on a message of hope, a message of existential hope.

I’m delighted to present Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark and Anders Sandberg, all of whom were kind enough to come on the show and talk about why they’re so hopeful for the future and just how amazing that future could be.

Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and she created the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.

Over the course of a few days, I interviewed all six of our guests, and I have to say, it had an incredibly powerful and positive impact on my psyche. We’ve merged these interviews together for you here, and I hope you’ll all also walk away feeling a bit more hope for humanity’s collective future, whatever that might be.

But before we go too far into the future, let’s start with Anthony and Max, who can talk a bit about where we are today.

Anthony: I’m Anthony Aguirre, I’m one of the founders of the Future of Life Institute. And in my day job, I’m a Physicist at the University of California at Santa Cruz.

Max: I am Max Tegmark, a professor doing physics and AI research here at MIT, and also the president of the Future of Life Institute.

Ariel: All right. Thank you so much for joining us today. I’m going to start with sort of a big question. That is, do you think we can use technology to solve today’s problems?

Anthony: I think we can use technology to solve any problem in the sense that I think technology is an extension of our capability: it’s something that we develop in order to accomplish our goals and to bring our will to fruition. So, sort of by definition, when we have goals that we want to achieve — problems that we want to solve — technology should in principle be part of the solution.

Max: Take, for example, poverty. It’s not like we don’t have the technology right now to eliminate poverty. But we’re steering the technology in such a way that there are people who starve to death, and even in America there are a lot of children who just don’t get enough to eat, through no fault of their own.

Anthony: So I’m broadly optimistic that, as it has over and over again, technology will let us do things that we want to do better than we were previously able to do them. Now, that being said, there are things that are more amenable to better technology, and things that are less amenable. And there are technologies that tend to, rather than functioning as kind of an extension of our will, will take on a bit of a life of their own. If you think about technologies like medicine, or good farming techniques, those tend to be sort of overall beneficial and really are kind of accomplishing purposes that we set. You know, we want to be more healthy, we want to be better fed, we build the technology and it happens. On the other hand, there are obviously technologies that are just as useful or even more useful for negative purposes — socially negative or things that most people agree are negative things: landmines, for example, as opposed to vaccines. These technologies come into being because somebody is trying to accomplish their purpose — defending their country against an invading force, say — but once that technology exists, it’s kind of something that is easily used for ill purposes.

Max: Technology simply empowers us to do good things or bad things. Technology isn’t evil, but it’s also not good. It’s morally neutral. Right? You can use fire to warm up your home in the winter or to burn down your neighbor’s house. We have to figure out how to steer it and where we want to go with it. I feel that there’s been so much focus on just making our tech powerful right now — because that makes money, and it’s cool — that we’ve neglected the steering and the destination quite a bit. And in fact, I see this as the core goal of the Future of Life Institute: to help bring focus back to the steering of our technology and its destination.

Anthony: There are also technologies that are really tricky in that they give us what we think we want, but that we sort of regret later, like addictive drugs, or gambling, or cheap sugary foods, or-

Ariel: Social media.

Anthony: … certain online platforms that will go unnamed. We feel like this is what we want to do at the time; we choose to do it. We choose to eat the huge sugary thing, or to spend some time surfing the web. But later, with a different perspective maybe, we look back and say, “Boy, I could’ve used those calories, or minutes, or whatever, better.” So who’s right? Is it the person at the time who’s choosing to eat or play or whatever? Or is it the person later who’s deciding, “Yeah, that wasn’t a good use of my time”? Those technologies I think are very tricky, because in some sense they’re giving us what we want. So we reward them, we buy them, we spend money, the industries develop, the technologies have money behind them. At the same time, it’s not clear that they make us happier.

So I think there are certain social problems, and problems in general, that technology will be tremendously helpful in improving, as long as we can act wisely to balance the effects of dual-use technologies toward the positive, and as long as we can somehow get some perspective on what to do about these technologies that take on a life of their own, and tend to make us less happy, even though we dump lots of time and money into them.

Ariel: This sort of idea of technologies — that we’re using them and as we use them we think they make us happy and then in the long run we sort of question that — is this a relatively modern problem, or are there examples of anything that goes further back that we can learn from in history?

Anthony: I think it goes fairly far back. Certainly drug use goes a fair ways back. I think there have been periods where drugs were used as part of religious or social ceremonies and in other kind of more socially constructive ways. But then, it’s been a fair amount of time where opiates and very addictive things have existed also. Those have certainly caused social problems back at least a few centuries.

I think a lot of these examples of technologies that give us what we seem to want but not really what we want are ones in which we’re applying the technology to a species — us — that developed in a very different set of circumstances, and that contrast between what’s available and what we evolutionarily wanted is causing a lot of problems. The sugary foods are an obvious example where we can now just supply huge plenitudes of something that was very rare and precious back in more evolutionary times — you know, sweet calories.

Drugs are something similar. We have a set of chemistry that helps us out in various situations, and then we’re just feeding those same chemical pathways to make ourselves feel good in a way that is destructive. And violence might be something similar. Violent technologies go way, way back. Those are another example of things that we clearly invent to further our will and accomplish our goals. They’re also things that may at some level be addictive to humans. I think it’s not entirely clear exactly how — there’s a strange mix there, but I think there’s certainly something compelling and built into at least many humans’ DNA that promotes fighting and hunting and all kinds of things that were evolutionarily useful way back when and perhaps less useful now. It had a clear evolutionary purpose with tribes that had to defend themselves, with animals that needed to be killed for food. But consider feeding that desire to run around and hunt and shoot people: most people aren’t doing that in real life, but tons of people are doing it in video games. So there’s clearly some built-in mechanism that’s rewarding that behavior as being fun to do and compelling. Video games are obviously a better way to express that than running around and doing it in real life, but it tells you something about some circuitry that is still there and is left over from early times. So I think there are a number of examples like that — this connection between our biological evolutionary history and what technology makes available in large quantities — where we really have to think carefully about how we want to play that.

Ariel: So, as you look forward to the future, and sort of considering some of these issues that you’ve brought up, how do you envision us being able to use technology for good and maybe try to overcome some of these issues? I mean, maybe it is good if we’ve got people playing video games instead of going around shooting people in real life.

Anthony: Yeah. So there may be examples where some of that technology can fulfill a need in a less destructive way than it might otherwise be. I think there are also plenty of examples where a technology can root out or sort of change the nature of a problem that would be enormously difficult to do something about without a technology. So for example, I think eating meat, when you analyze it from almost any perspective, is a pretty destructive thing for humanity to be doing. Ecologically, ethically in terms of the happiness of the animals, health-wise: so many things are destructive about it. And yet, you really have the sense that it’s going to be enormously difficult — it would be very unlikely for that to change wholesale in a relatively short period of time.

However, there are technologies — clean meat, cultured meat, really good tasting vegetarian meat substitutes — that are rapidly coming to market. And you could imagine if those things were to get cheap and widely available and perhaps a little bit healthier, that could dramatically change that situation relatively quickly. If a non-ecologically destructive, non-suffering-inducing, just as tasty and even healthier product were cheaper, I don’t think people would be eating meat. Very few people, I think, intrinsically like the idea of having an animal suffer in order for them to eat. So I think that’s an example of something that would be really, really hard to change through just social actions, but could be jump-started quite a lot by technology — that’s one of the ones I’m actually quite hopeful about.

Global warming I think is a similar one — it’s on some level a social and economic problem. It’s a long-term planning problem, which we’re very bad at. It’s pretty clear how to solve the global warming issue if we really could think on the right time scales and weigh the economic costs and benefits over decades — it’d be quite clear that mitigating global warming now and doing things about it now might take some overall investment that would clearly pay itself off. But we seem unable to accomplish that.

On the other hand, you could easily imagine a really cheap, really power-dense, quickly rechargeable battery being invented and just utterly transforming that problem into a much, much more tractable one. Or feasible, small-scale nuclear fusion power generation that was cheap. You can imagine technologies that would just make that problem so much easier, even though it is ultimately kind of a social or political problem that could be solved. The technology would just make it dramatically easier to do that.

Ariel: Excellent. And so thinking more hopefully — even when we’re looking at what’s happening in the world today, news is usually focusing on all the bad things that have gone wrong — when you look around the world today, what do you think, “Wow, technology has really helped us achieve this, and this is super exciting?”

Max: Almost everything I love about today is the result of technology. It’s because of technology that we’ve more than doubled the lifespan that we humans had for most of human history. More broadly, I feel that technology is empowering us. Ten thousand years ago, we felt really, really powerless; we were these beings, you know, looking at this great world out there and having very little clue about how it worked — it was largely mysterious to us — and even less ability to actually influence the world in a major way. Then technology enabled science, and vice versa. So the sciences let us understand more and more how the world works, and let us build this technology which lets us shape the world to better suit us: helping us produce much better, much more food, helping keep us warm in the winter, helping make hospitals that can take care of us, and schools that can educate us, and so on.

Ariel: Let’s bring on some of our other guests now. We’ll turn first to Gaia Dempsey. How do you envision technology being used for good?

Gaia: That’s a huge question.

Ariel: It is. Yes.

Gaia: I mean, at its essence I think technology really just means a tool. It means a new way of doing something. Tools can be used to do a lot of good — making our lives easier, saving us time, helping us become more of who we want to be. And I think technology is best used when it supports our individual development in the direction that we actually want to go — when it supports our deeper interests and not just the, say, commercial interests of the company that made it. And I think in order for that to happen, we need for our society to be more literate in technology. And to me that’s not just about understanding how computing platforms work, but also understanding the impact that tools have on us as human beings. Because they don’t just shape our behavior, they actually shape our minds and how we think.

So I think we need to be very intentional about the tools that we choose to use in our own lives, and also the tools that we build as technologists. I’ve always been very inspired by Douglas Engelbart’s work, and I think that — I was revisiting his original conceptual framework on augmenting human intelligence, which he wrote and published in 1962 — and I really think he had the right idea, which is that tools used by human beings don’t exist in a vacuum. They exist in a coherent system and that system involves language: the language that we use to describe the tools and understand how we’re using them; the methodology; and of course the training and education around how we learn to use those tools. And I think that as a tool maker it’s really important to think about each of those pieces of an overarching coherent system, and imagine how they’re all going to work together and fit into an individual’s life and beyond: you know, the level of a community and a society.

Ariel: I want to expand on some of this just a little bit. You mentioned this idea of making sure that the tool, the technology tool, is being used for people and not just for the benefit, the profit, of the company. And that that’s closely connected to making sure that people are literate about the technology. One, just to confirm that that is actually what you were saying. And, two — I mean, one of the reasons I want to confirm this is because that is my own concern: that technology is too focused on making profit and not enough people really understand what’s happening. My question to you is, then, how do we educate people? How do we get them more involved?

Gaia: I think for me, my favorite types of tools are the kinds of tools that support us in developing our thinking and that help us accelerate our ability to learn. But I think that some of how we do this in our society is not just about creating new tools or getting trained on new tools, but really doesn’t have very much to do with technology at all. And that’s in our education system, teaching critical thinking. And teaching, starting at a young age, to not just accept information that is given to you wholesale, but really to examine the motivations and intentions and interests of the creator of that information, and the distributor of that information. And I think these are really just basic tools that we need as citizens in a technological society and in a democracy.

Ariel: That actually moves nicely to another question that I have. Well, I actually think the sentiment might be not quite as strong as it once was, but I do still hear a lot of people who sort of approach technology as the solution to any of today’s problems. And I’m personally a little bit skeptical that technology alone can solve them. I think, again, it comes back to what you were talking about: it’s a tool, so we can use it, but it just seems like there’s more that needs to be involved. I guess, how do you envision using technology as a tool, and still incorporating some of these other aspects like teaching critical thinking?

Gaia: You’re really hitting on sort of the core questions that are fundamental to creating the kind of society that we want to live in. And I think that we would do well to spend more time thinking deeply about these questions. I think technology can do really incredible, tremendous things in helping us solve problems and create new capabilities. But it also creates a new set of problems for us to engage with.

We’ve sort of coevolved with our technology. So it’s easy to point to things in the culture and say, “Well, this never would have happened without technology X.” And I think that’s true for things that are both good and bad. I think, again, it’s about taking a step back and taking a broader view, and really not just teaching critical thinking and critical analysis, but also systems level thinking. And understanding that we ourselves are complex systems, and we’re not perfect in the way that we perceive reality — we have cognitive biases, we cannot necessarily always trust our own perceptions. And I think that’s a lifelong piece of work that everyone can engage with, which is really about understanding yourself first. This is something that Yuval Noah Harari talked about in a couple of his recent books and articles that he’s been writing, which is: if we don’t do the work to really understand ourselves first and our own motivations and interests, and sort of where we want to go in the world, we’re much more easily co-opted and hackable by systems that are external to us.

There are many examples of recommendation algorithms and sentiment analysis — audience segmentation tools that companies are using to be able to predict what we want and present that information to us before we’ve had a chance to imagine that that is something we could want. And while that’s potentially useful and lucrative for marketers, the question is what happens when those tools are then utilized not just to sell us a better toothbrush on Amazon, but when it’s actually used in a political context. And so with the advent of these vast machine learning, reinforcement learning systems that can look at data and look at our behavior patterns and understand trends in our behavior and our interests, that presents a really huge issue if we are not ourselves able to pause and create a gap, and create a space between the information that’s being presented to us within the systems that we’re utilizing and really our own internal compass.

Ariel: You’ve said two things that I think are sort of interesting, especially when they’re brought together. And the first is this idea that we’ve coevolved with technology — which, I actually hadn’t thought of it in that phrase before, and I think it’s a really, really good description. But then when we consider that we’ve coevolved with technology, what does that mean in terms of knowing ourselves? And especially knowing ourselves as our biological bodies, and our limiting cognitive biases? I don’t know if that’s something that you’ve thought about much, but I think that combination of ideas is an interesting one.

Gaia: I mean, I know that I certainly already feel like I’m a cyborg. Part of knowing myself does involve understanding the tools that I use, which feel like extensions of myself. That kind of comes back to the idea of technology literacy, and systems literacy, and being intentional about the kinds of tools that I want to use. For me, my favorite types of tools are the kind that I think are very rare: the kind that support us in developing the capacity for long-term thinking, and for being true to the long-term intentions and goals that I set for myself.

Ariel: Can you give some examples of those?

Gaia: Yeah, I’ll give a couple examples. One example that’s sort of probably familiar to a lot of people listening to this comes from the book Ready Player One. And in this book the main character is interacting with his VR system that he sort of lives and breathes in every single day. And at a certain point the system asks him: do you want to activate your health module? I forgot exactly what it was called. And without giving it too much thought, he kind of goes, “Sure. Yeah, I’d like to be healthier.” And it instantiates a process whereby he’s not allowed to log into the OASIS without going through his exercise routine every morning. To me, what’s happening there is: there is a choice.

And it’s an interesting system design because he didn’t actually do that much deep thinking about, “Oh yeah, this is a choice I really want to commit to.” But the system is sort of saying, “We’re thinking through the way that your decision making process works, and we think that this is something you really do want to consider. And we think that you’re going to need about three months before you make a final decision as to whether this is something you want to continue with.”

So that three month period — and I believe it was three months in the book — is what’s known as an akrasia horizon, which is a term that I learned through a different tool that is sort of a real-life version of that, called Beeminder. And the akrasia horizon is really a time period that’s long enough to circumvent a cognitive bias that we have to prioritize the near term at the expense of the future. And in the case of the Ready Player One example, the near-term desire that would undermine the future — his long-term health — is, “I don’t feel like working out today. I just want to get into my email or I just want to play a video game right now.” And a very similar sort of setup is created in this tool Beeminder, which I love to use to support some goals that I want to make sure I’m really very motivated to meet.

So it’s a tool where you can put in your goals and you can track them either yourself by entering the data manually, or you can connect to a number of different tracking capabilities like RescueTime and others. And if you don’t stay on track with your goals, they charge your credit card. It’s a very effective sort of motivating force. And so I sort of have a nickname for these systems: I call them time bridges, which are really choices made by your long-term thinking self that in some way supersede the gravitational pull toward mediocrity inherent in your short-term impulses.
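[Editor’s note: the commitment-device mechanism described here — a daily target, a penalty for falling behind, and an akrasia horizon that delays any weakening of the goal — can be sketched roughly as below. This is a hypothetical illustration; the class, field names, and pledge logic are assumptions for the sketch, not Beeminder’s actual API or pricing model.]

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class CommitmentGoal:
    """A Beeminder-style goal: log daily progress, get charged if you fall behind."""
    name: str
    daily_target: float          # e.g. minutes of exercise per day
    pledge: float                # dollars charged when you go off track
    start: date
    akrasia_horizon: timedelta = timedelta(days=7)   # goal changes only apply after this delay
    entries: dict = field(default_factory=dict)      # date -> amount logged
    pending_changes: list = field(default_factory=list)  # (effective_date, new_target)

    def log(self, day: date, amount: float) -> None:
        self.entries[day] = self.entries.get(day, 0.0) + amount

    def request_target_change(self, today: date, new_target: float) -> None:
        # You may weaken the goal, but it only takes effect after the akrasia
        # horizon, so today's impulse can't override the long-term plan.
        self.pending_changes.append((today + self.akrasia_horizon, new_target))

    def effective_target(self, day: date) -> float:
        target = self.daily_target
        for effective, new_target in sorted(self.pending_changes):
            if effective <= day:
                target = new_target
        return target

    def charge_due(self, day: date) -> float:
        # Charged the pledge if cumulative progress lags the cumulative target.
        days = (day - self.start).days + 1
        required = sum(self.effective_target(self.start + timedelta(d)) for d in range(days))
        done = sum(v for d, v in self.entries.items() if self.start <= d <= day)
        return self.pledge if done < required else 0.0
```

The key design choice is that `request_target_change` never takes effect immediately: the horizon forces the long-term self’s earlier decision to win over today’s impulse.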

It’s about experimenting too. And this is one particular system that creates consequences and accountability. And I love systems. For me if I don’t have systems in my life that help me organize the work that I want to do, I’m hopeless. That’s why I like to collect and I’m sort of an avid taster of different systems, and I’ll try anything, and really collect and see what works. And I think that’s important. It’s a process of experimentation to see what works for you.

Ariel: Let’s turn to Allison Duettmann now, for her take on how we can use technology to help us become better versions of ourselves and to improve our societal interactions.

Allison: I think there are a lot of technological tools that we can use to aid our reasoning and sense-making and coordination. So I think that technologies can be used to help with reasoning, for example, by mitigating trauma, or bias, or by augmenting our intelligence. That’s the whole point of creating AI in the first place. Technologies can also be used to help with collective sense-making, for example with truth-finding and knowledge management, and I think hypertext and prediction markets — something that Anthony’s working on — are really worthy examples here. I also think technologies can be used to help with coordination. Mark Miller, who I’m currently writing a book with, likes to say that if you lower the risks of cooperation, you’ll get a more cooperative world. And I think that most cooperative interactions may soon be digital.

Ariel: That’s sort of an interesting idea, that there’s risks to cooperation. Can you maybe expand on that a little bit more?

Allison: Yeah, sure. I think that most of our interactions are already digital ones, for some of us at least, and they will be more and more so in the future. So I think that the first step to lowering the risks of cooperation is establishing cybersecurity, because this would decrease the risk of digital coercion. But I do think that’s only part of it, because rather than just freeing us from the restraints that keep us from cooperating, we also need to equip ourselves with the tools to cooperate, right?

Ariel: Yes.

Allison: I think some of those may be smart contracts to allow individuals to credibly commit, but there may be others too. I just think that we have to realize that the same technologies that we’re worried about in terms of risks are also the ones that may augment our abilities to decrease those risks.

Ariel: One of the things that came to mind as you were talking about this, using technology to improve cooperation — when we look at the world today, technology isn’t spread across the globe evenly. People don’t have equal access to these tools that could help. Do you have ideas for how we address various inequality issues, I guess?

Allison: I think inequality is a hot topic to address. I’m currently writing a book with Mark Miller and Christine Peterson on a few strategies to strengthen civilization. In this book we outline a few paths to do so, but also potential positive outcomes. One of the outcomes that we’re outlining is a voluntary world in which all entities can cooperate freely with each other to realize their interests. It’s kind of based on the premise that finding one utopia that works for everyone is hard, and perhaps impossible, but that in the absence of knowing what’s in everyone’s interest, we shouldn’t try to impose any interests by one entity — whether that’s an AI or an organization or a state — but we should try to create a framework in which different entities, with different interests, whether they’re human or artificial, can pursue their interests freely by cooperating. And I think if you look at that strategy, it has worked pretty well so far. If you look at society right now it’s really not perfect, but by allowing humans to cooperate freely and engage in mutually beneficial relationships, civilization already serves our interests quite well. It’s far from perfect, I’m not saying that, but I think as a whole, our civilization at least tends imperfectly to plan for Pareto-preferred paths. We have survived so far, and in better and better ways.

So a few ways that we propose to strengthen this highly involved process is by proposing kind of general recommendations for solving coordination problems, and then a few more specific ideas on reframing a few risks. But I do think that enabling a voluntary world in which different entities can cooperate freely with each other is the best we can do, given our limited knowledge of what is in everyone’s interests.

Ariel: I find that interesting, because I hear lots of people focus on how great intelligence is, and intelligence is great, but it does often seem — and I hear other people say this — that cooperation is also one of the things that our species has gotten right. We fail at it sometimes, but it’s been one of the things, I think, that’s helped.

Allison: Yeah, I agree. I hosted an event last year at the Internet Archive on different definitions of intelligence. Because in the paper that we wrote last year, we have this very grand, or broad, conception of intelligence, which includes civilization as an intelligence. So I think you may be asking yourself the question of what it means to be intelligent, and if what we care about is problem-solving ability, then I think that civilization certainly qualifies as a system that can solve more problems than any individual within it alone. So I do think this is part of the cooperative nature of the individual parts within civilization, and so I don’t think that cooperation and intelligence are mutually exclusive at all. Marvin Minsky wrote this amazing book, Society of Mind, which contains many similar ideas.

Ariel: I’d like to take this idea and turn it around, and this is a question specifically for Max and Anthony: looking back at this past year, how has FLI helped foster cooperation and public engagement surrounding the issues we’re concerned about? What would you say were FLI’s greatest successes in 2018?

Anthony: Let’s see, 2018. What I’ve personally enjoyed the most, I would say, is seeing the technical research and nonprofit communities really starting to engage more with state and federal governments. So for example the Asilomar principles — which were generated at this nexus of business and nonprofit and academic thinkers about AI and related things — I think were great. But that conversation didn’t really include much from people in policy, and governance, and governments, and so on. So, starting to see that thinking, and those recommendations, and those aspirations of the community of people who know about AI and are thinking hard about it and what it should do and what it shouldn’t do — seeing that start to come into the political sphere, and the government sphere, and the policy sphere I think is really encouraging.

That seems to be happening in many places at some level. I think the local one that I’m excited about is the passage by the California legislature of a resolution endorsing the Asilomar principles. That felt really good to see happen, and it was really encouraging that there were people in the legislature that — we didn’t go and lobby them to do that, they came to us and said, “This is really important. We want to do something.” And we worked with them to do that. That was super encouraging, because it really made it feel like there is a really open door, and there’s a desire in the policy world to do something. This thing is getting on people’s radar, that there’s a huge transformation coming from AI.

They see that their responsibility is to do something about that. They don’t intrinsically know what they should be doing, they’re not experts in AI, they haven’t been following the field. So there needs to be that connection, and it’s really encouraging to see how open they are and how much can be produced with honestly not a huge level of effort; just communication and talking through things I think made a significant impact. I was also happy to see how much support there continues to be for controlling the possibility of lethal autonomous weapons.

The thing we’ve done this year, the lethal autonomous weapons pledge, I felt really good about the success of. The idea was to get anybody who’s interested — but especially companies engaged in developing related technologies: drones, or facial recognition, or robotics, or AI in general — to take that step themselves of saying, “No, we want to develop these technologies for good, and we have no interest in developing things that are going to be weaponized and used in lethal autonomous weapons.”

I think having a large number of people and corporations sign on to a pledge like that is useful not so much because they were planning to do all those things and now they signed a pledge, so they’re not going to do it anymore. I think that’s not really the model so much as it’s creating a social and cultural norm that these are things that people just don’t want to have anything to do with, just like biotech companies don’t really want to be developing biological weapons, they want to be seen as forces for good that are building medicines and therapies and treatments and things. Everybody is happy for biotech companies to be doing those things.

If biotech companies were building biological weapons also, you really start to wonder, “Okay, wait a minute, why are we supporting this? What are they doing with my information? What are they doing with all this genetics that they’re getting? What are they doing with the research that’s funded by the government? Do we really want to be supporting this?” So keeping that distinction in the industry between all the things that we all support — better technologies for helping people — versus the military applications, particularly in this rather destabilizing and destructive way: I think that is more the purpose — to really make clear that there are companies that are going to develop weapons for the military, and that’s part of the reality of the world.

We have militaries; we need, at the moment, militaries. I think I certainly would not advocate that the US should stop defending itself, or shouldn’t develop weapons, and I think it’s good that there are companies that are building those things. But there are very tricky issues when the companies building military weapons are the same companies that are handling all of the data of all of the people in the world or in the country. I think that really requires a lot of thought about how we’re going to handle it. And seeing companies engage with those questions — thinking about how the technologies they’re developing are going to be used, for what purposes, and what purposes they don’t want them to be used for — is really, really heartening. It’s been very positive I think to see, at least in certain companies, those sorts of conversations go on with our pledge or just in other ways.

You know, seeing companies come out with, “This is something that we’re really worried about. We’re developing these technologies, but we see that there could be major problems with them.” That’s very encouraging. I don’t think it’s necessarily a substitute for something happening at the regulatory or policy level, I think that’s probably necessary too, but it’s hugely encouraging to see companies being proactive about thinking about the societal and ethical implications of the technologies they’re developing.

Max: There are four things I’m quite excited about. One of them is that we managed to get so many leading companies and AI researchers and universities to pledge not to build lethal autonomous weapons, also known as killer robots. Second is that we were able to channel two million dollars, thanks to Elon Musk, to 10 research groups around the world to help figure out how to make artificial general intelligence safe and beneficial. Third is that the state of California decided to officially endorse the 23 Asilomar Principles. It’s really cool that these are being taken more seriously now, even by policy makers. And the fourth is that we were able to track down the children of Stanislav Petrov in Russia, thanks to whom this year is not the 35th anniversary of World War III, and actually give them the appreciation we feel they deserve.

I’ll tell you a little more about this one because it’s something I think a lot of people still aren’t that aware of. But September 26th, 35 years ago, Stanislav Petrov was on shift and in charge of his Soviet early warning station, which showed five US nuclear missiles incoming, one after the other. Obviously, not what he was hoping would happen at work that day, and a really horribly scary situation where the natural response is to do what that system was built for: namely, warning the Soviet Union so that they would immediately strike back. And if that had happened, then thousands of mushroom clouds later, you know, you and I, Ariel, would probably not be having this conversation. Instead, he, mostly on gut instinct, came to the conclusion that there was something wrong and said, “This is a false alarm.” And we’re incredibly grateful for that level-headed action of his. He passed away recently.

His two children are living on very modest means outside of Moscow and we felt that when someone does something like this, or in his case abstains from doing something, that future generations really appreciate, we should show our appreciation, so that others in his situation later on know that if they sacrifice themselves for the greater good, they will be appreciated. Or if they’re dead, their loved ones will. So we organized a ceremony in New York City and invited them to it and bought air tickets for them and so on. And in a very darkly humorous illustration of how screwed up relations are at the global level now, the US decided that the way to show appreciation for not having gotten nuked was to deny a visa to Stanislav’s son. So he could only join by Skype. Fortunately, his daughter was able to get a visa, even though the waiting period to even get a visa appointment in Moscow was 300 days. We had to fly her to Israel to get her the visa.

But she came and it was her first time ever outside of Russia. She was super excited to come and see New York. It was very touching for me to see all the affection that the New Yorkers there directed at her and see her reaction and her husband’s reaction and to get to give her this $50,000 award, which for them was actually a big deal. Although it’s of course nothing compared to the value for the rest of the world of what their father did. And it was a very sobering reminder that we’ve had dozens of near misses where we almost had a nuclear war by mistake. And even though the newspapers usually make us worry about North Korea and Iran, of course by far the most likely way in which we might get killed by a nuclear explosion is that just another stupid malfunction or error causes the US and Russia to start a war by mistake.

I hope that this ceremony, and the one we did the year before for the family of Vasili Arkhipov, can also help to remind people that hey, you know, what we’re doing here, having 14,000 hydrogen bombs and just relying on luck year after year isn’t a sustainable long-term strategy, and we should get our act together and reduce nuclear arsenals down to the level needed for deterrence and focus our money on more productive things.

Ariel: So I wanted to just add a quick follow-up to that because I had the privilege of attending the ceremony and I got to meet the Petrovs. And one of the things that I found most touching about meeting them was their own reaction to New York, which was in part just an awe of the freedom that they felt. And I think, especially, this is sort of a US centric version of hope, but it’s easy for us to get distracted by how bad things are because of what we see in the news, but it was a really nice reminder of how good things are too.

Max: Yeah. It’s very helpful to see things through other people’s eyes and in many cases, it’s a reminder of how much we have to lose if we screw up.

Ariel: Yeah.

Max: And how much we have that we should be really grateful for and cherish and preserve. It’s even more striking if you just look at the whole planet, you know, in a broader perspective. It’s a fantastic, fantastic place, this planet. There’s nothing else in the solar system even remotely this nice. So I think we have a lot to win if we can take good care of it and not ruin it. And obviously, the quickest way to ruin it would be to have an accidental nuclear war, which would be just by far the most ridiculously pathetic thing humans have ever done, and yet, this isn’t even really a major election issue. Most people don’t think about it. Most people don’t talk about it. This is, of course, the reason that we, with the Future of Life Institute, try to keep focusing on the importance of positive uses of technology, whether it be nuclear technology, AI technology, or biotechnology, because if we use it wisely, we can create such an awesome future, like you said: Take the good things we have, make them even better.

Ariel: So this seems like a good moment to introduce another guest, who just did a whole podcast series exploring existential risks relating to AI, biotech, nanotech, and all of the other technologies that could either destroy society or help us achieve incredible advances if we use them right.

Josh: I’m Josh Clark. I’m a podcaster. And I’m the host of a podcast series called The End of the World with Josh Clark.

Ariel: All right. I am really excited to have you on the show today because I listened to all of The End of the World. And it was great. It was a really, really wonderful introduction to existential risks.

Josh: Thank you.

Ariel: I highly recommend it to anyone who hasn’t listened to it. But now that you’ve just done this whole series about how things can go horribly wrong, I thought it would be fun to bring you on and talk about what you’re still hopeful for after having just done that whole series.

Josh: Yeah, I’d love that, because a lot of people are hesitant to listen to the series because they’re like, well, “it’s got to be such a downer.” And I mean, it is heavy and it is kind of a downer, but there’s also a lot of hope that just kind of emerged naturally from the series just researching this stuff. There is a lot of hope — it’s pretty cool.

Ariel: That’s good. That’s exactly what I want to hear. What prompted you to do that series, The End of the World?

Josh: Originally, it was just intellectual curiosity. I ran across a Bostrom paper in like 2005 or 6, my first one, and just immediately became enamored with the stuff he was talking about — it’s just baldly interesting. Like anyone who hears about this stuff can’t help but be interested in it. And so originally, the point of the podcast was, “Hey, everybody come check this out. Isn’t this interesting? There’s like, people actually thinking about this kind of stuff and talking about it.” And then as I started to interview some of the guys at the Future of Humanity Institute, started to read more and more papers and research further, I realized, wait, this isn’t just like, intellectually interesting. This is real stuff. We’re actually in real danger here.

And so as I was creating the series, I underwent this transition for how I saw existential risks, and then ultimately how I saw humanity’s future, how I saw humanity, other people, and I kind of came to love the world a lot more than I did before. Not like I disliked the world or people or anything like that. But I really love people way more than I did before I started out, just because I see that we’re kind of close to the edge here. And so the point of why I made the series kind of underwent this transition, and you can kind of tell in the series itself where it’s like information, information, information. And then now, that you have bought into this, here’s how we do something about it.

Ariel: So you have two episodes that go into biotechnology and artificial intelligence, which are two — especially artificial intelligence — they’re both areas that we work on at FLI. And in them, what I thought was nice is that you do get into some of the reasons why we’re still pursuing these technologies, even though we do see these existential risks around them. And so, I was curious, as you were doing your research into the series, what did you learn about, where you were like, “Wow, that’s amazing, that I’m so psyched that we’re doing this, even though there are these risks.”

Josh: Basically everything I learned about. I had to learn particle physics to explain what’s going on in the Large Hadron Collider. I had to learn a lot about AI. I realized when I came into it that my grasp of AI was beyond elementary. And it’s not like I could actually put together an AGI myself from scratch or anything like that now, but I definitely know a lot more than I did before. With biotech in particular, there was a lot that I learned that I found particularly jarring, with the number of accidents that are reported every year, and then more than that, the fact that not every lab in the world has to report accidents. I found that extraordinarily unsettling.

So kind of from start to finish, I learned a lot more than I knew going into it, which is actually one of the main reasons why it took me well over a year to make the series: because I would start to research something and then I’d realize I needed to understand the fundamentals of this. So I’d go learn that, and then there’d be something else I had to learn first, before I could learn something the next level up. So I kept having to kind of regressively research, and I ended up learning quite a bit of stuff.

But I think to answer your question, the thing that struck me the most was learning about physics, about particle physics, and how tenuous our understanding of our existence is, but just how much we’ve learned so far in just the last like century or so, when we really dove into quantum physics, particle physics and just what we know about things. One of the things that just knocked my socks off was the idea that there’s no such thing as particles — like particles, as we think of them are just basically like shorthand. But the rest of the world outside of particle physics has said like, “Okay, particles, there’s like protons and neutrons and all that stuff. There’s electrons. And we understand that they kind of all fit into this model, like a solar system. And that’s how atoms work.”

That is not at all how atoms work, like a particle is just a pack of energetic vibrations and everything that we experience and see and feel, and everything that goes on in the universe is just the interaction of these energetic vibrations in force fields that are everywhere at every point in space and time. And just to understand that, like on a really fundamental level, changed my life actually, changed the way that I see the universe and myself and everything actually.

Ariel: I don’t even know where I want to go next with that. I’m going to come back to that because I actually think it connects really nicely to the idea of existential hope. But first I want to ask you a little bit more about this idea of getting people involved more. I mean, I’m coming at this from something of a bubble at this point where I am surrounded by people who are very familiar with the existential risks of artificial intelligence and biotechnology. But like you said, once you start looking at artificial intelligence, if you haven’t been doing it already, you suddenly realize that there’s a lot there that you don’t know.

Josh: Yeah.

Ariel: I guess I’m curious, now that you’ve done that, to what extent do you think everyone needs to? To what extent do you think that’s possible? Do you have ideas for how we can help people understand this more?

Josh: Yeah, you know, that really kind of ties into taking on existential risks in general: just being an interested, curious person who dives into the subject and learns as much as you can. But at this moment in time, as I’m sure you know, that’s easier said than done. Like you really have to dedicate a significant portion of your life to spending time focusing on that one issue, whether it’s AI, biotech, particle physics, or nanotech, whatever. You really have to immerse yourself into it because the existential risks that we’re facing are not a general topic of national or global conversation, and certainly not the existential risks we’re facing from all the technology that everybody’s super happy that we’re coming out with.

And I think that one of the first steps to actually taking on existential risks is for more and more people to start talking about it. Groups like yours, talking to the public, educating the public. I’m hoping that my series did something like that, just arousing curiosity in people, but also raising awareness of these things: these are real things, these aren’t crackpots talking about this stuff. These are real, legitimate issues that are coming down the pike, that are being pointed out by real, legitimate scientists and philosophers and people who have given great thought to this. This isn’t like a chicken little situation; this is quite real. I think if you can pique someone’s curiosity just enough that they stop and listen, do a little research, it sinks in after a minute that this is real. And that, oh, this is something that they want to be a part of doing something about.

And so I think just getting people talking about that just by proxy will interest other people who hear about it, and it will spread further and further out. And I think that that’s step one, is to just make it so it’s an okay thing to talk about, so you’re not nuts to raise this kind of stuff seriously.

Ariel: Well, I definitely appreciate you doing your series for that reason. I’m hopeful that that will help a lot.

Ariel: Now, Allison — you’ve got this website, and my understanding is that you’re trying to get more people involved in this idea that if we focus on these better ideals for the future, we stand a better shot at actually hitting them.

Allison: At ExistentialHope.com, I keep a map of reading, podcasts, organizations, and people that inspire an optimistic long-term vision for the future.

Ariel: You’re clearly doing a lot to try to get more people involved. What is it that you’re trying to do now, and what do you think we all need to be doing more of to get more people thinking this way?

Allison: I do think that it’s up to everyone, really, to try to, again, engage with the fact that we may not be doomed, and what may be on the other side. What I’m trying to do with the website, at least, is generating common knowledge to catalyze more directed coordination toward beautiful futures. I think that there’s a lot of projects out there that are really dedicated to identifying the threats to human existence, but very few really offer guidance on what to do about them. So I think we should try to map the space of both peril and promise which lie before us, but we should really aim for this knowledge to empower each and every one of us to navigate toward the grand future.

For us currently on the website this involves orienting ourselves, so collecting useful models, and relevant broadcasts, and organizations that generate new insights, and then try to synthesize a map of where we came from, and a really kind of long perspective, and where we may go, and then which lenses of science and technology and culture are crucial to consider along the way. Then finally we would like to publish a living document that summarizes those models that are published elsewhere, to outline possible futures, and the idea is that this is a collaborative document. Even already, currently, the website links to a host of different Google docs in which we’re trying to really synthesize the current state of the art in the different focus areas. The idea is that this is collaborative. This is why it’s on Google docs, because everyone can just comment. And people do, and I think this should really be a collaborative effort.

Ariel: What are some of your favorite examples of content that, presumably, you’ve added to your website, that look at these issues?

Allison: There’s quite a host of things on there. I think a good start for people who go on the website is just to go on the overview, because here I list kind of my top 10 lists of short pieces and long pieces. But my personal ones, I think, as a starting ground: I really like the metaethics sequence by Eliezer Yudkowsky. It contains some really good posts, like Existential Angst Factory and Reality as Fixed Computation. For me this is kind of like existentialism 2.0. You have to get your motivations and expectations right. What can I reasonably hope for? Then I think, relatedly, there’s also the Fun Theory sequence, also by Yudkowsky. That, together with, for example, Letter From Utopia by Nick Bostrom, or The Hedonistic Imperative by David Pearce, or the posts on Raikoth by Scott Alexander — they are really a nice next step because they actually lay out a few compelling positive versions of utopia.

Then if you want to get into the more nitty gritty there’s a longer section on civilization, its past and its future — so, what’s wrong and how to improve it. Here Nick Bostrom wrote this piece on the future of human evolution, which lays out two suboptimal paths for humanity’s future, and interestingly enough they don’t involve extinction. A similar one, I think, which probably many people are familiar with, is Scott Alexander’s Meditations On Moloch, and then some that people are less familiar with — Growing Children For Bostrom’s Disneyland. They are really interesting because, like the other pieces of this type, they sketch out competitive and selective pressures that lead toward races to the bottom, as negative futures which don’t involve extinction per se. I think the really interesting thing, then, is that even those futures are only bad if you think that the bottom is bad.

Next to them I list books, for example, Robin Hanson’s Age of Em, which argues that living at subsistence may not be terrible, and in fact it’s pretty much what most of our past lives outside of the current dreamtime have always involved. So I think those are two really different lenses to make sense of the same reality, and I personally found this contrast so intriguing that I hosted a salon last year with Paul Christiano, Robin Hanson, Peter Eckersley, and a few others to kind of map out where we may be racing towards — so, how bad those competitive equilibria actually are. I also link to those from the website.

To me it’s always interesting to map out one possible future vision, and then try to find another that either contradicts or complements it. I think having a good overview of those gives you a good map, or at least a space of possibilities.

Ariel: What do you recommend to people who are interested in trying to do more? How do you suggest they get involved?

Allison: One thing, an obvious thing, would be commenting on the Google Docs, and I really encourage everyone to do that. Another one would be just to join the mailing list. You can kind of indicate whether you want updates from me, or whether you want to collaborate, in which case we may be able to reach out to you. Or if you’re interested in meetups, they would only be in San Francisco so far, but I’m hoping that there may be others. I do think that currently the project is really in its infancy. We are relying on the community to help with this, so there should be a kind of collaborative vision.

I think that one of the main things that I’m hoping that people can get out of it for now is just to give some inspiration on where we may end up if we get it right, and on why work toward better futures, or even work toward preventing existential risks, is both possible and necessary. If you go on the website on the first section — the vision section — that’s what that section is for.

Secondly, then, if you are already opted in, if you’re already committed, I’m hoping that perhaps the project can provide some orientation. If someone would like to help but doesn’t really know where to start, the focus areas are an attempt to map out the different areas that we need to make progress on for better futures. Each area comes with an introductory text, and organizations that are working in that area that one can join or support, and Future of Life is in a lot of those areas.

Then I think finally, just apart from inspiration or orientation, it’s really a place for collaboration. The project is in its infancy and everyone should contribute their favorite pieces to our better futures.

Ariel: I’m really excited to see what develops in the coming year for existentialhope.com. And, naturally, I also want to hear from Max and Anthony about 2019. What are you looking forward to for FLI next year?

Max: For 2019 I’m looking forward to more constructive collaboration on many aspects of this quest for a good future for everyone on earth. At the nerdy level, I’m looking forward to more collaboration on AI safety research and also ways of making the economy, which keeps growing thanks to AI, actually make everybody better off, rather than some people poorer and angrier. And at the most global level, really looking forward to working harder to get past this outdated us versus them attitude that we still have between the US and China and Russia and other major powers. Many of our political leaders are so focused on the zero sum game mentality that they will happily take major risks of nuclear war and AI arms races and other outcomes where everybody would lose, instead of just realizing hey, you know, we’re actually in this together. What does it mean for America to win? It means that all Americans get better off. What does it mean for China to win? It means that the Chinese people all get better off. Those two things can obviously happen at the same time as long as there’s peace, and technology just keeps improving life for everybody.

In practice, I’m very eagerly looking forward to seeing if we can get scientists from around the world — for example, AI researchers — to converge on certain shared goals that are really supported everywhere in the world, including by political leaders and in China and the US and Russia and Europe and so on, instead of just obsessing about the differences. Instead of thinking us versus them, it’s all of us on this planet working together against the common enemy, which is our own stupidity and the tendency to make bad mistakes, so that we can harness this powerful technology to create a future where everybody wins.

Anthony: I would say I’m looking forward to more of what we’re doing now, thinking more about the futures that we do want. What exactly do those look like? Can we really think through pictures of the future that makes sense to us that are attractive, that are plausible, and yet aspirational, and where we can identify things and systems and institutions that we can build now toward the aim of getting us to those futures? I think there’s been a lot of, so far, thinking about what are the major problems that might arise, and I think that’s really, really important, and that project is certainly not over, and it’s not like we’ve avoided all of those pitfalls by any means, but I think it’s important not to just not fall into the pit, but to actually have a destination that we’d like to get to — you know, the resort at the other end of the jungle or whatever.

I find it a bit frustrating when people do what I’m doing now: they talk about talking about what we should and shouldn’t do. But they don’t actually talk about what we should and shouldn’t do. I think the time has come to actually talk about it, in the same way that when there was the first use of CRISPR in an embryo that came to term, everybody was saying, “Well, we need to talk about what we should and shouldn’t do with this. We need to talk about that, we need to talk about it.” Let’s talk about it already.

So I’m excited about upcoming events that FLI will be involved in that are explicitly thinking about: let’s talk about what that future is that we would like to have and let’s debate it, let’s have that discussion about what we do want and don’t want, try to convince each other and persuade each other of different visions for the future. I do think we’re starting to actually build those visions for what institutions and structures in the future might look like. And if we have that vision, then we can think of what are the things we need to put in place to have that.

Ariel: So one of the reasons that I wanted to bring Gaia on is because I’m working on a project with her — and it’s her project — where we’re looking at this process of what’s known as worldbuilding, to sort of look at how we can move towards a better future for all. I was hoping you could describe it, this worldbuilding project that I’m attempting to help you with, or work on with you. What is worldbuilding, and how are you modifying it for your own needs?

Gaia: Yeah. Worldbuilding is a really fascinating set of techniques. It’s a process that has its roots in narrative fiction. You can think of, for example, the entire complex world that J.R.R. Tolkien created for The Lord of the Rings series, for example. And in more contemporary times, some spectacularly advanced worldbuilding is occurring now in the gaming industry. So these huge connected systems of systems that underpin worlds in which millions of people today are playing, socializing, buying and selling goods, engaging in an economy. These are these vast online worlds that are not just contained on paper as in a book, but are actually embodied in software. And over the last decade, world builders have begun to formally bring these tools outside of the entertainment business, outside of narrative fiction and gaming, film and so on, and really into society and communities. So I really define worldbuilding as a powerful act of creation.

And one of the reasons that it is so powerful is that it really facilitates collaborative creation. It’s a collaborative design practice. And in my personal definition of worldbuilding, the way that I’m thinking of it, and using it, is that it unfolds in four main stages. The first stage is: we develop a foundation of shared knowledge that’s grounded in science, and research, and relevant domain expertise. And the second phase is building on that foundation of knowledge. We engage in an exercise where we predict how the interconnected systems that have emerged in this knowledge database — we predict how they will evolve. And we imagine the state of their evolution at a specific point in the future. Then the third phase is really about capturing that state in all its complexity, and making that information useful to the people who need to interface with it. And that can be in the form of interlinked databases and particularly also in the form of visualizations, which help make these sort of abstract ideas feel more present and concrete. And then the fourth and final phase is then utilizing that resulting world as a tool that can be used to support scenario simulation, research, and development in many different areas including public policy, media production, education, and product development.

I mentioned that these techniques are being brought outside of the realm of entertainment. So rather than just designing fantasy worlds for the sole purpose of containing narrative fiction and stories, these techniques are now being used with communities, and Fortune 500 companies, and foundations, and NGOs, and other places, to create plausible future worlds. It’s fascinating to me to see how these are being used. For example, they’re being used to reimagine the mission of an organization. They’re being used to plan for the future, and plan around a collective vision of that future. They’re very powerful for developing new strategies, new programs, and new products. And I think to me one of the most interesting things is really around informing policy work. That’s how I see worldbuilding.

Ariel: Are there any actual examples that you can give or are they proprietary?

Gaia: There are many examples that have created some really incredible outcomes. One of the first examples of worldbuilding that I ever learned about was a project that was done with a native Alaskan tribe. And the comments that came from the tribe about that experience were what really piqued my interest. Because they said things like, “This enabled us to sort of leapfrog over the barriers in our current thinking and imagine possibilities that were sort of beyond what we had considered.” This project brought together several dozen members of the community, again, to engage in this collaborative design exercise, and actually visualize and build out those systems and understand how they would be interconnected. And it ended up resulting in, I think, some really incredible things. Like a partnership with MIT where they brought a digital fabrication lab onto their reservation, and created new education programs around digital design and digital fabrication for their youth. And there’s a lot of other things that are still coming out of that particular worldbuild.

There are other examples where Fortune 500 companies are building out really detailed, long-term worldbuilds that are helping them stay relevant, and imagine how their business model is going to need to transform in order to adapt to really plausible, probable futures that are just around the corner.

Ariel: I want to switch now to what you specifically are working on. The project we’re looking at is looking roughly 20 years into the future. And you’ve sort of started walking through a couple systems yourself while we’ve been working on the project. And I thought that it might be helpful if you could sort of walk through, with us, what those steps are to help understand how this process works.

Gaia: Maybe I’ll just take a quick step back, if that’s okay and just explain the worldbuild that we’re preparing for.

Ariel: Yeah. Please do.

Gaia: This is a project called Augmented Intelligence. The first Augmented Intelligence summit is happening in March in 2019. And our goal with this project is really to engage with and shift the culture, and also our mindset, about the future of artificial intelligence. And to bring together a multidisciplinary group of leaders from government, academia, and industry, and to do a worldbuild that’s focused on this idea of: what does our future world look like with advanced AI deeply integrated into it? And to go through the process of really imagining and predicting that world in a way that’s just a bit further beyond the horizon that we normally see and talk about. And that exercise, that’s really where we’re getting that training for long-term thinking, and for systems level thinking. And the world that results — our hope is that it will allow us to develop better intuitions, to experiment, to simulate scenarios, and really to have a more attuned capacity to engage in many ways with this future. And ultimately explore how we want to evolve our tools and our society to meet that challenge.

What will come out of this process — it really is a generative process that will create assets and systems that are interconnected, that inhabit and embody a world. And this world should allow us to experiment, and simulate scenarios, and develop a more attuned capacity to engage with the future. And that means on both an intuitive level and also in a more formal structured way. And ultimately our goal is to use this tool to explore how we want to evolve as a society, as a community, and to allow ideas to emerge about what solutions and tools will be needed to adapt to that future. Our goal is to really bootstrap a steering mechanism that allows us to navigate more effectively toward outcomes that support human flourishing.

Ariel: I think that’s really helpful. I think an example to walk us through what that looks like would be helpful.

Gaia: Sure. You know, basically what would happen in a worldbuilding process is that you would have some constraints or some sort of seed information that you think is very likely — based on research, based on the literature, based on sort of the input that you’re getting from domain experts in that area. For example, you might say, “In the future we think that education is all going to happen in a virtual reality system that’s going to cover the planet.” Which I don’t think is actually the case, but just to give an example. You might say something like, “If this were true, then what are the implications of that?” And you would build a set of systems, because it’s very difficult to look at just one thing in isolation.

Because as soon as you start to do that — John Muir says, “As soon as you try to look at just one thing, you find that it is irreversibly connected to everything else in the universe.” And I apologize to John Muir for not getting that quote exactly correct, he says it much more eloquently than that. But the idea is there. And that’s sort of what we leverage in a worldbuilding process: where you take one idea and then you start to unravel all of the implications, and all of the interconnecting systems that would be logical, and also possible, if that thing were true. It really does depend on the quality of the inputs. And that’s something that we’re working really, really hard to make sure that our inputs are believable and plausible, but don’t put too much in terms of constraints on the process that unfolds. Because we really want to tap into the creativity in the minds of this incredible group of people that we’re gathering, and that is where the magic will happen.

Ariel: To make sure that I’m understanding this right: if we use your example of, let’s say all education was being taught virtually, I guess questions that you might ask or you might want to consider would be things like: who teaches it, who’s creating it, how do students ask questions, who would their questions be directed to? What other types of questions would crop up that we’d want to consider? Or what other considerations do you think would crop up?

Gaia: You also want to look at the infrastructure questions, right? So if that’s really something that is true all over the world, what do server farms look like in that future, and what’s the impact on the environment? Is there some complementary innovation that has happened in the field of computing that has made computing far more efficient? How have we been able to do this, given the physical limitations that just exist on our planet? If X is true in this interconnected system, then how have we shaped, and molded, and adapted everything around it to make that thing true? You can look at infrastructure, you can look at culture, you can look at behavior, you can look at, as you were saying, communication and representation in that system and who is communicating. What are the rules? I mean, I think a lot about the legal framework, and the political structure that exists around this. So who has power and agency? How are decisions made?

Ariel: I don’t know what this says about me, but I was just wondering what detention looks like in a virtual world.

Gaia: Yeah. It’s a good question. I mean, what are the incentives and what are the punishments in that society? And do our ideas of what incentives and punishments look like actually change in that context? There isn’t a place where you can come on a Saturday if there’s no physical school yard. How is detention even enforced when people can log in and out of the system at will?

Ariel: All right, now you have me wondering what recess looks like.

Gaia: So you can see that there are many different fascinating sort of rabbit holes that you could go down. And of course our goal is to make this process genuinely useful for imagining the way that we want our policies, and our tools, and our education to evolve.

Ariel: I want to ask one more question about … Well, it’s sort of about this but there’s also a broader aspect to it. And that is, I hear a lot of talk — and I’m one of the people saying this because I think it’s absolutely true — that we need to broaden the conversation and get more diverse voices into this discussion about what we want our future to look like. But what I’m finding is that this sounds really nice in theory, but it’s incredibly hard to actually do in practice. I’m under the impression that that is some of what you’re trying to address with this project. I’m wondering if you can talk a little bit about how you envision trying to get more people involved in considering how we want our world to look in the future.

Gaia: Yeah, that’s a really important question. One of the sources of inspiration for me on this point was a conversation with Stuart Russell — an interview with Stuart Russell, I should say — that I listened to. We’ve been really fortunate and we are thrilled that he’s one of our speakers and he’ll be involved in the worldbuilding process. And he talks about this idea that the artificial intelligence researchers, the roboticists, and the technologists who are building these amplifying tools that are increasing in potency year over year are not the only ones who need to have input into the conversation around how they’re utilized and their implications for all of us. And that’s really one of the core philosophies behind this particular project: we really want it to be a multidisciplinary group that comes together, and we’re already seeing that. We have a really wonderful set of collaborators who are thinking about ethics in this space, and who are thinking about a broader definition of ethics, and different cultural perspectives on ethics, and how we can create a conversation that allows space for those to simultaneously coexist.

Allison: I recently had a similar kind of question arise in conversation: why are we lacking positive future visions so much? Why are we all kind of stuck in a snapshot of the current suboptimal macro situation? I do think it’s our inability to really think in larger terms. If you look at our individual human lives, clearly for most of us they’re pretty incredible — we’re able to lead much longer and healthier lives than ever before. If we compare this to how humans used to live, the difference is really unfathomable. I think Yuval Harari said it right: “You wouldn’t want to have lived 100 years ago.” I think that’s correct. On the other hand I also think that we’re not there yet.

I find it, for example, pretty peculiar that we say we value freedom of choice in everything we do, but the one thing that’s the basis of all of our freedoms, which is our very existence, we leave to slowly deteriorate through aging — and that deterioration degrades ourselves and everything we value. I think that every day aging is burning libraries. We’ve come a long way, but we’re not safe, and we are definitely not there yet. I think the same holds true for civilization at large. Thanks to a lot of technologies our living standards have been getting better and better, and the declines in poverty and violence are just a few examples.

We can share knowledge much more easily, and I think everyone who’s read Enlightenment Now will be kind of tired of those graphs, but again, I also think that we’re not there yet. Even though we have fewer wars than ever before, the ability to wipe ourselves out as a species really exists, and in fact this ability is now available to more people. As technologies mature, it may only take a small, well-coordinated group of individuals to cause havoc with catastrophic consequences. If you let that sink in, it’s really absurd that we have no emergency plan for the use of technological weapons. We have no plans to rebuild civilization. We have no plans to back up human life.

I think that current news articles take too much of a short term view. They’re more a snapshot. I think the long-term view, on the one hand, opens up this eye of, “Hey, look how far we’ve come,” but also, “Oh man. We’re here, and we’ve made it so far, but there’s no feasible plan for safety yet.” I do think we need to change that, so I think the long run doesn’t only open up rosy glasses, but also the realization that we ought to do more because we’ve come so far.

Josh: Yeah, one of the things that makes this time so dangerous is we’re at this kind of a fork in the road, where if we go this one way, like say, with figuring out how to develop friendliness in AI, we could have this amazing, astounding future for humanity that stretches for billions and billions and billions of years. One of the things that really opened my eyes was, I always thought that the heat death of the universe will spell the end of humanity. There’s no way we’ll ever make it past that, because that’s just the cessation of everything that makes life happen, right? And we will probably have perished long before that. But let’s say we figured out a way to just make it to the last second and humanity dies at the same time the universe does. There’s still an expiration date on humanity. We still go extinct eventually. But one of the things I ran across when I was doing research for the physics episode is that the concept of growing a universe from seed, basically, in a lab is out there. It’s done. I don’t remember who came up with it. But somebody has sketched out basically how to do this.

It’s 2018. If we think 100 or 200 or 500 or a thousand years down the road and that concept can be built upon and explored, we may very well be able to grow universes from seed in laboratories. Well, when our universe starts to wind down or something goes wrong with it, or we just want to get away, we could conceivably move to another universe. And so we suddenly lose that expiration date for humanity that’s associated with the heat death of the universe, if that is how the universe goes down. And so this idea that we have a future lifetime that spans into at least the multiple billions of years — at least a billion years if we just manage to stay alive on Planet Earth and never spread out but just don’t actually kill ourselves — when you take that into account the stakes become so much higher for what we’re doing today.

Ariel: So, we’re pretty deep into this podcast, and we haven’t heard anything from Anders Sandberg yet, and this idea that Josh brought up ties in with his work. Since we’re starting to talk about imagining future technologies, let’s meet Anders.

Anders: Well, I’m delighted to be on this. I’m Anders Sandberg. I’m a senior research fellow at The Future of Humanity Institute at University of Oxford.

Ariel: One of the things that I love, just looking at your FHI page, you talk about how you try to estimate the capabilities of future technology. I was hoping you could talk a little bit about what that means, what you’ve learned so far, how one even goes about studying the capabilities of future technologies?

Anders: Yeah. It is a really interesting problem because technology is based on ideas. As a general rule, you cannot predict what ideas people will come up with in the future, because if you could, you would already kind of have that idea. So this means that, especially technologies that are strongly dependent on good ideas, are going to be tremendously hard to predict. This is of course why artificial intelligence is a little bit of a nightmare. Similarly, biotechnology is strongly dependent on what we discover in biology and a lot of that is tremendously weird, so again, it’s very unpredictable.

Meanwhile, other domains of technology are advancing at a more sedate pace, where you incrementally improve things. The ideas are certainly needed, but they don’t change everything around. Think of microprocessors: they are getting better, and a lot of the improvements are small, incremental ones. Some of them require a lot of intelligence to come up with, but in the end it all sums together. It’s a lot of small things adding up, so you see a relatively smooth development in the large.

Ariel: Okay. So what you’re saying is we don’t just have each year some major discovery, and that’s what doubles it. It’s lots of little incremental steps.

Anders: Exactly. But if you look at the performance of some software, quite often it goes up smoothly because the computers are getting better, and then somebody has a brilliant idea that can do it not just in 10% less time, but maybe in 10% of the time that it would have taken. For example, the fast Fourier transform that people developed in the 60s and 70s enables the compression we use today for video and audio, and enables multimedia on the internet. Without that speed-up, it would not be practical to do, even with current computers. This is true for a lot of things in computing. You get a surprise insight, and a problem that previously might have been impossible to do efficiently suddenly becomes quite convenient. So the problem is of course: what can we say about the abilities of future technology if these things happen?
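Anders’s FFT example can be made concrete. A naive discrete Fourier transform does on the order of n² operations, while the Cooley-Tukey FFT gets the same answer in roughly n·log n by recursively splitting the signal into even- and odd-indexed samples. A minimal Python sketch (an illustration, not anything from the conversation) of both, checked against each other:

```python
import cmath

def naive_dft(x):
    # O(n^2): each of the n output bins sums over all n input samples
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * t * k / n) for t in range(n))
            for k in range(n)]

def fft(x):
    # Cooley-Tukey radix-2 FFT, O(n log n); len(x) must be a power of two
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

signal = [float(i % 7) for i in range(64)]
error = max(abs(a - b) for a, b in zip(naive_dft(signal), fft(signal)))
print(error)  # the two transforms agree to floating-point rounding error
```

For n = 64 the naive version performs 4,096 complex multiply-accumulates where the FFT performs a few hundred, and the gap widens rapidly with n: exactly the jump from “10% less time” to “10% of the time” that Anders describes.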

One of the nice things you can do is lean on the laws of physics. There are good reasons to think that perpetual motion machines cannot work, because we understand energy conservation and the laws of thermodynamics, which give very strong reasons why this cannot happen. We can be pretty certain that that’s not possible. We can analyze what would be possible if you had perpetual motion machines or faster-than-light transport, and you can see that some of the consequences are really weird, which makes you suspect that this is probably not going to happen. So that’s one way of looking at it. But you can also do the reverse: you can take laws of physics and engineering that you understand really well and make fictional machines — essentially work out all the details and say, “okay, I can’t build this, but were I to build it, what properties would it have?” If I wanted to build, let’s say, a machine made out of atoms, could I make it work? It turns out that this is possible to do in a rigorous way, and it tells you the capabilities of machines that don’t exist yet, and that maybe we will never build, but it shows you what’s possible.

This is what Eric Drexler did for nanotechnology in the 80s and 90s. He basically worked out what would be possible if we could put atoms in the right place. He could demonstrate that this would produce machines of tremendous capability. We still haven’t built them, but he proved that these can be built — and we probably should build them because they are so effective, so environmentally friendly, and so on.

Ariel: So you gave the example of what he came up with a while back. What sort of capabilities have you come across that you thought were interesting that you’re looking forward to us someday pursuing?

Anders: I’ve been working a little bit on the question “is it possible to settle a large part of the universe?” Together with my colleagues, I have been working out a bit of the physical limitations of that. All in all, we found that a civilization doesn’t need an enormous, astronomical amount of matter and energy to settle a very large chunk of the universe. The total amount of matter corresponds to roughly a Mercury-sized planet in a solar system in each of the galaxies. Many people would say that if you want to settle the universe you need enormous spacecraft and so much energy that it would be visible across half of the universe. But we could demonstrate that if you essentially use the matter from a really big asteroid or a small planet, you can get enough solar collectors to launch small spacecraft to all the stars and all the galaxies within reach, and there you again use a bit of asteroids to do it. The laws of physics allow intelligent life to spread across an enormous amount of the universe in a rather quiet way.

Ariel: So does that mean you think it’s possible that there is life out there and it’s reasonable for us not to have found it?

Anders: Yes. If we were looking at the stars, we would probably not notice if one or two stars in remote galaxies were covered with solar collectors. It’s rather easy to miss them among the hundreds of billions of other stars. This was actually the reason we did this paper: we demonstrate that much of the thinking about the Fermi paradox — that annoying question that there ought to be a lot of intelligent life out in the universe, given how large it is and how relatively likely we tend to think it is, yet we don’t see anything — rests on explanations based on the possibility of colonizing just the Milky Way. In this paper, we demonstrate that you actually need to care about all the other galaxies too. In a sense, we made the Fermi paradox between a million and a billion times worse. Of course, this is all in a day’s work for us in the Philosophy Department, making everybody’s headaches bigger.

Ariel: And now it’s just up to someone else to figure out the actual way to do this technically.

Anders: Yeah, because it might actually be a good idea for us to do.

Ariel: So Josh, you’ve mentioned the future of humanity a couple of times, and humanity in the future, and now Anders has mentioned the possibility of colonizing space. I’m curious how you think that might impact humanity. How do you define humanity in the future?

Josh: I don’t know. That’s a great question. It could take any number of different routes. I think — Robin Hanson is an economist who came up with the Great Filter hypothesis, and I talked to him about that very question. His idea was that — and I’m sure it’s not just his, but it’s probably a pretty popular idea — that once we spread out from Earth and start colonizing further and further out into the galaxy, and then into the universe, we’ll undergo speciation events: there will be multiple species of humans in the universe again, just like there was 50,000 years ago, when we shared Earth with multiple species of humans.

The same thing is going to happen as we spread out from Earth. I mean, I guess the question is, which humans are you talking about, in what galaxy? I also think there’s a really good chance — and this could happen among multiple human species — that at least some humans will eventually shed their biological form and upload themselves into some sort of digital format. I think if you just start thinking in efficiencies, that’s just a logical conclusion to life. And then there’s any number of routes we could take and change especially as we merge more with technology or spread out from Earth and separate ourselves from one another. But I think the thing that really kind of struck me as I was learning all this stuff is that we tend to think of ourselves as the pinnacle of evolution, possibly the most intelligent life in the entire universe, right? Certainly the most intelligent on Earth, we’d like to think. But if you step back and look at all the different ways that humans can change, especially like the idea that we might become post-biological, it becomes clear that we’re just a point along a spectrum that keeps on stretching out further and further into the future than it does even into the past.

We’re just at a current situation on that point right now. We’re certainly not like the end-all be-all of evolution. And ultimately, we may take ourselves out of evolution by becoming post-biological. It’s pretty exciting to think about all the different ways that it can happen, all the different routes we can take — there doesn’t have to just be one single one either.

Ariel: Okay, so, I kind of want to go back to some of the space stuff a little bit, and Anders is the perfect person for my questions. I think one of the first things I want to ask is, very broadly, as you’re looking at these different theories about whether or not life might exist out in the universe and that it’s reasonable for us not to have found it, do you connect the possibility that there are other life forms out there with an idea of existential hope for humanity? Or does it cause you concern? Or are they just completely unrelated?

Anders: The existence of extraterrestrial intelligence: if we knew they existed, that would in some sense be hopeful, because we would know the universe allows for more than our kind of intelligence, and that intelligence might survive over long spans of time. If instead we discovered that we’re all alone except for a lot of ruins from extinct civilizations, that would be very bad news for us. But we might also have this weird situation that we currently face: we don’t see anybody, and we don’t notice any ruins. Maybe we’re just really unique, and should perhaps feel a bit proud or lucky, but also responsible for a whole universe. It’s tricky. It seems like we could learn something very important if we understood how much intelligence there is out there. Generally, I have been trying to figure out: is the absence of aliens evidence for something bad? Or might it actually be evidence for something very hopeful?

Ariel: Have you concluded anything?

Anders: Generally, our conclusion has been that the absence of aliens is not surprising. We tend to think that the Fermi Paradox implies “oh, there’s something strange here.” The universe is so big, and if you multiply the number of stars by some reasonable probability, you should get loads of aliens. But the problem here is “reasonable probability.” We normally think of that as something bigger than one chance in a million or so, but there is no reason the laws of physics couldn’t put the probability at one in a googol. It turns out that we’re uncertain enough about the origin of life, the origin of intelligence, and other forms of complexity that it’s not implausible that we are the only life within the visible universe. So we shouldn’t be too surprised about that empty sky.

One possible reason for the great silence is that life is extremely rare. Another possibility might be that life is not rare, but it’s very rare that it evolves into the kind of life with complex nervous systems. Another reason might be that once you get intelligence, it destroys itself relatively quickly; Robin Hanson has called this the Great Filter. We know that one of the terms in the big equation for the number of civilizations in the universe needs to be very small; otherwise, the sky would be full of aliens. But is it one of the early terms, like the origin of life or the origin of intelligence, or the late term, how long intelligence survives? If there is an early Great Filter, this is rather good news for us. We would be very unique and maybe a bit lonely, but it doesn’t tell us anything dangerous about our own chances. Of course, we might still flub it and go extinct because of our own stupidity, but that’s up to us rather than the laws of physics.

On the other hand, if it turns out that there is a late Great Filter, then even though we know the universe might be dangerous, we’re still likely to get wiped out — which is very scary. So, figuring out where the unlikely terms in the big equation are is actually quite important for making a guess about our own chances.

Ariel: Where are we now in terms of that?

Anders: Right now, in my opinion — I have a paper, not published yet but in the review process, where we try to apply proper uncertainty calculations to this. Many people make guesstimates about the probabilities of various things, admit that they’re guesstimates, and then get a number at the end that they also admit is a bit uncertain. But they haven’t actually done a proper uncertainty calculation, so quite a lot of these numbers become surprisingly biased. Instead of saying that maybe there’s one chance in a million that a planet develops life, you should have a full range: what’s the lowest probability there could be for life, what’s the highest, and how do you think it’s distributed between them? If you use that kind of proper uncertainty range, multiply it all together, and do the maths right, then you get a probability distribution for how many alien species there could be in the universe. Even if you start out relatively optimistic about the mean value of all of this, you will still find a pretty big chunk of probability that we’re actually alone in the Milky Way or even the observable universe.
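The kind of calculation Anders describes can be sketched as a Monte Carlo simulation: sample each uncertain factor from a wide range on a log scale, multiply the samples, and look at the resulting distribution rather than a single point estimate. The factor ranges below are illustrative placeholders chosen for the example, not the estimates from his paper:

```python
import math
import random

random.seed(0)

def log_uniform(lo, hi):
    # sample uniformly in log space: every order of magnitude equally likely
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def sample_n_civilizations():
    # Toy Drake-style product; ranges are hypothetical placeholders
    stars = 1e11                        # stars in the galaxy
    f_planet = log_uniform(1e-1, 1.0)   # fraction with suitable planets
    f_life = log_uniform(1e-30, 1.0)    # abiogenesis probability (hugely uncertain)
    f_intel = log_uniform(1e-3, 1.0)    # life -> intelligence
    f_detect = log_uniform(1e-2, 1.0)   # intelligence -> detectable civilization
    return stars * f_planet * f_life * f_intel * f_detect

samples = [sample_n_civilizations() for _ in range(100_000)]
p_alone = sum(1 for n in samples if n < 1) / len(samples)
print(f"P(no other detectable civilization in the galaxy) ~ {p_alone:.2f}")
```

Because one factor spans thirty orders of magnitude, the arithmetic mean of the samples can still be optimistic while a large share of the probability mass lands below one civilization per galaxy, which is exactly the effect Anders describes: the empty sky stops being surprising.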

In some sense, this is just common sense. But it’s a very nice thing to be able to quantify the common sense, and then start saying: so what happens if we for example discover that there is life on Mars? What will that tell us? How will that update things? You can use the math to calculate that, and this is what we’ve done. Similarly, if we notice that there doesn’t seem to be any alien super civilizations around the visible universe, that’s a very weak update but you can still use that to see that this updates our estimates of the probability of life and intelligence much more than the longevity of civilizations.

Mathematically this gives us a reason to think that the Great Filter might be early. The absence of life might actually be rather good news for us, because it means that once you get intelligence, there’s no reason it can’t persist for a long time and grow into something very flourishing. That is a really good cause for existential hope. It’s really promising, but of course we need to do our observations. We actually need to look for life; we need to look out at the sky and see. We may find alien civilizations. In the end, any amount of mathematics and armchair astrobiology can be overturned by a single observation.

Ariel: That comes back to a question that came to mind a bit earlier. As you’re looking at all of this stuff and especially as you’re looking at the capabilities of future technologies, once we figure out what possibly could be done, can you talk a little bit about what our limitations are today from actually doing it? How impossible is it?

Anders: Well, impossible is a really tricky word. When I hear somebody say “it’s impossible,” I immediately ask “do you mean against the laws of physics and logic” or “we will not be able to do this for the foreseeable future” or “we can’t do it within the current budget”?

Ariel: I think maybe that’s part of my question. I’m guessing a lot of these things probably are physically possible, which is why you’ve considered them, but yeah, what’s the difference between what we’re technically capable of today and what, for whatever reason, we can’t budget into our research?

Anders: We have a domain of technologies that we have already been able to construct. Some of them are maybe too expensive to be very useful. Some of them still require a bunch of grad students holding them up and patching them as they break all the time, but we can kind of build them. Then there’s technology that we are very robustly good at: we have been making cog wheels and combustion engines for decades now, and we’re really good at that. And then there are the technologies where we can do exploratory engineering to demonstrate that if we actually had cog wheels made out of pure diamond, or a Dyson shell surrounding the sun collecting energy, they could do the following things.

So they don’t exist as practical engineering. You can work out blueprints for them and in some sense of course, once we have a complete enough blueprint, if you asked could you build the thing, you could do it. The problem is of course normally you need the tools and resources for that, and you need to make the tools to make the tools, and the tools to make those tools, and so on. So if we wanted to make atomically precise manufacturing today, we can’t jump straight to it. What we need to make is a tool that allows us to build things that are moving us much closer.

The Wright Brothers’ airplane was really lousy as an airplane but it was flying. It’s a demonstration, but it’s also a tool that allows you to make a slightly better tool. You would want to get through this and you’d probably want to have a roadmap and do experiments and figure out better tools to do that.

This is typically where scientists have to give way to engineers, because engineers care about solving a problem rather than being the most elegant about it. In science, we want to have a beautiful explanation of how everything works; then we do experiments to test whether it’s true and refine our explanation. But in the end, the paper that gets published is going to be the one with the most elegant understanding. In engineering, the thing that actually sells and changes the world is not going to be the most elegant thing but the most useful thing. The AK-47 is in many ways not a very precise piece of engineering, but that’s the point: it should be possible to repair it in the field.

The reason our computers are working so well was we figured out the growth path where you use photolithography to etch silicon chips, and that allowed us to make a lot of them very cheaply. As we learned more and more about how to do that, they became cheaper and more capable and we developed even better ways of etching them. So in order to build molecular nanotechnology, you would need to go through a somewhat similar chain. It might be that you start out with using biology to make proteins, and then you use the proteins to make some kind of soft machinery, and then you use that soft machinery to make hard machinery, and eventually end up with something like the work of Eric Drexler.

Ariel: I actually want to step back to the present now. You mentioned computers and that we’re doing them very well. But computers — or maybe software, I suppose, is the better example — are also an example of technology that works today but often fails. Especially when we’re considering things like AI safety in the future, what should we make of the fact that we’re not designing software to be more robust? If we look at something like airplanes, which are quite robust, we can see that it could be done, but we’re still choosing not to.

Anders: Yeah, nobody would want to fly with an airplane that crashed as often as a word processor.

Ariel: Exactly.

Anders: It’s true that the earliest airplanes were very crash-prone — in fact most of them were probably as bad as our current software is. But the main reason we’re not making software better is that most of the time we’re not willing to pay for that quality. There are also some very hard problems in engineering complexity. Making a very hard material is not easy, but in some sense it’s a straightforward problem. If, on the other hand, you have literally billions of moving pieces that all need to fit together, then it gets tricky to make sure that everything always works as it should. But it can be done.

People have been working on mathematical proofs that certain pieces of software are correct and secure. It’s just that up until recently, it’s been so expensive and tough that nobody really cared to do it except maybe some military groups. Now it’s starting to become more and more essential because we’ve built our entire civilization on a lot of very complex systems that are unfortunately very insecure, very unstable, and so on. Most of the time we get around it by making backup copies and whenever a laptop crashes, well, we reboot it, swear a bit and hopefully we haven’t lost too much work.

That’s not always a bad solution — a lot of biology is like that too. Cells in our bodies are failing all the time but they’re just getting removed and replaced and then we try again. But this, of course, is not enough for certain sensitive applications. If we ever want to have brain-to-computer interfaces, we certainly want to have good security so we don’t get hacked. If we want to have very powerful AI systems, we want to make sure that their motivations are constrained in such a way that they’re helpful. We also want to make sure that they don’t get hacked or develop weird motivations or behave badly because their owners told them to behave badly. Those are very complex problems: It’s not just like engineering something that’s simply safe. You’re going to need entirely new forms of engineering for that kind of learning system.

This is something we’re learning. We haven’t been building things like software for very long and when you think about the sheer complexity of a normal operating system, even a small one running on a phone, it’s kind of astonishing that it works at all.

Allison: I think Eliezer Yudkowsky once said that the problem of our complex civilization is its complexity. It does seem that technology is outpacing our ability to make sense of it. But I think we have to remind ourselves of why we developed those technologies in the first place, and of the tremendous promise if we get it right. Of course, solving the problems that are created by technologies — for example, existential risks, or at least some of them — requires some non-technological capacities, especially human reasoning, sense-making, and coordination.

And I’m not saying that we have to focus on one conception of the good. There are many conceptions of the good. There’s transhumanist futures, there’s cosmist futures, there’s extropian futures, and many, many more, and I think that’s fine. I don’t think we have to agree on a common conception just yet — in fact we really shouldn’t. But the point is not that we ought to settle soon, but that we have to allow into our lives again the possibility that things can be good, that good things are possible — not guaranteed, but they’re possible. I think to use technologies for good we really need a change of mindset, from pessimism to at least conditional optimism. And we need a plethora of those, right? It’s not going to be just one of them.

I do think that in order to use technologies for good purposes, we really have to remind ourselves that they can be used for good, and that there are good outcomes in the first place. I genuinely think that often in our research, we put the cart before the horse in focusing solely on how catastrophic human extinction would be. I think this often misses the point that extinction is really only so bad because the potential value that could be lost is so big.

Josh: If we can just make it to this point — Nick Bostrom, whose ideas a lot of The End of the World is based on, calls it technological maturity. It’s kind of a play on something that Carl Sagan said about the point we’re at now: “technological adolescence” is what Sagan called it, which is this point where we’re starting to develop this really intense, amazingly powerful technology that will one day be able to guarantee a wonderful, amazing existence for humanity, if we can survive to the point where we’ve mastered it safely. That’s the next hundred or 200 or maybe 300 years stretching out ahead of us. That’s the challenge that we have in front of us. If we can make it to technological maturity, if we figure out how to make an artificial general intelligence that is friendly to humans, that basically exists to make sure that humanity is well cared for, there’s just no telling what we’ll be able to come up with and just how vastly improved the life of the average human would be in that situation.

We’re talking — honestly, this isn’t some crazy, far-out, far-future idea. This is conceivably something that we could get done as humans in the next century or two or three. Even if you talk out to 1000 years, that sounds far away. But really, that’s not a very long time when you consider just how far of a lifespan humanity could have stretching out ahead of it. The stakes almost give me a panic attack when I think of just how close that kind of a future is for humankind and just how close to the edge we’re walking right now in developing that very same technology.

Max: The way I see the future of technology as we go towards artificial general intelligence, and perhaps beyond — it could totally make life the master of its own destiny, which makes this a very important time to stop and think: what do we want this destiny to be? The clearer and more positive a vision we can formulate, the more likely, I think, we are to get that destiny.

Allison: We often seem to think that rather than optimizing for good outcomes, we should aim for maximizing the probability of an okay outcome, but I think for many people it’s more motivational to act on a positive vision, rather than one that is steered by risks only. To be for something rather than against something. To work toward a grand goal, rather than an outcome in which survival is success. I think a good strategy may be to focus on good outcomes.

Ariel: I think it’s incredibly important to remember all of the things that we are hopeful for for the future, because these are the precise reasons that we’re trying to prevent the existential risks, all of the ways that the future could be wonderful. So let’s talk a little bit about existential hope.

Allison: The term existential hope was coined by Owen Cotton-Barratt and Toby Ord to describe the chance of something extremely good happening, as opposed to an existential risk, which is a chance of something extremely terrible occurring. Kind of like describing a eucatastrophe instead of a catastrophe. I personally really agree with this line, because I think for me really it means that you can ask yourself this question of: do you think you can save the future? I think this question may appear at first pretty grandiose, but I think it’s sometimes useful to ask yourself that question, because I think if your answer is yes then you’ll likely spend your whole life trying, and you won’t rest, and that’s a pretty big decision. So I think it’s good to consider the alternative, because if the answer is no then you perhaps may be able to enjoy the little bit of time that you have on Earth rather than trying to spend it on making a difference. But I am not sure if you could actually enjoy every blissful minute right now if you knew that there was just a slight chance that you could make a difference. I mean, could you actually really enjoy this? I don’t think so, right?

I think perhaps we fail — and we do our best, but at the final moment something comes along that makes us go extinct anyway. But if we imagine the opposite scenario, in which we have not tried, and it turns out that we could have done something — an idea we may have had or a skill we could have contributed was missing, and now it’s too late — I think that’s a much worse outcome.

Ariel: Is it fair for me to guess, then, that you think for most people the answer is that yes, there is something that we can do to achieve a more existential hope type future?

Allison: Yeah, I think so. I think that for most people there is at least something that we can be doing if we are not solving the wrong problems. But I do also think that this question is a serious question. If the answer for yourself is no, then I think you can really try to focus on having a life that is as good as it could be right now. But I do think that if the answer is yes, and if you opt in, then I think that there’s no space any more to focus on how terrible everything is. Because we’ve just confessed to how terrible everything is, and we’ve decided that we’re still going to do it. I think that if you opt in, really, then you can take that bottle of existential angst and worries that I think is really pestering us, and put it to the side for a moment. Because that’s an area you’ve dealt with and decided we’re still going to do it.

Ariel: The sentiment that’s been consistent is this idea that the best way to achieve a good future is to actually figure out what we want that future to be like and aim for it.

Max: On one hand, it should be a no-brainer, because that’s how we think about life as individuals. Right? I often get students walking into my office at MIT for career advice, and I always ask them about their vision for the future, and they always tell me something positive. They don’t walk in there and say, “Well, maybe I’ll get murdered. Maybe I’ll get cancer. Maybe I’ll …” because they know that that’s a really ridiculous approach to career planning. Instead, they envision a positive future, the things they aspire to, so that they can constructively think about the challenges, the pitfalls to be avoided, and a good strategy for getting there.

Yet, as a species, we do exactly the opposite. We go to the movies and we watch Terminator, or Blade Runner, or yet another dystopic future vision that just fills us with fear and sometimes paranoia or hypochondria, when what we really need to do, as a species, is the same thing we need to do as individuals: envision a hopeful, inspiring future that we want to rally around. It’s a well known historical fact, right, that the secret to getting more constructive collaboration is to develop a shared positive vision. Why is Silicon Valley in California and not in Uruguay or Mongolia? Well, it’s because in the 60s, JFK articulated this really inspiring vision — going to space — which led to massive investments in STEM research and ultimately gave the US the best universities in the world and these amazing high tech companies. It came from a positive vision.

Similarly, why is Germany now unified into one country instead of fragmented into many? Or Italy? Because of a positive vision. Why are the US states working together instead of having more civil wars against each other? Because of a positive vision of how much greater we’ll be if we work together. And if we can develop a more positive vision for the future of our planet, where we collaborate and everybody wins by getting richer and better off, we’re again much more likely to get that than if everybody just keeps spending their energy and time thinking about all the ways they can get screwed by their neighbors and all the ways in which things can go wrong — causing some self-fulfilling prophecy basically, where we get a future with war and destruction instead of peace and prosperity.

Anders: One of the things I’m envisioning is that you can make a world where everybody’s connected, but also connected on their own terms. Right now, we don’t have a choice. My smartphone gives me a lot of things, but it also reports my location, and a lot of little apps are sending my personal information to companies and institutions I have no clue about and I don’t trust. I think one important development might actually be privacy-enhancing technologies. Many of the little near-field microchips we carry around are also indiscriminately reporting to nearby antennas what we’re doing. But you could imagine having a little personal firewall that actually blocks signals that you don’t approve of. You could actually have firewalls and ways of controlling the information leaving your smartphone or your personal space. And I think we actually need to develop that, both for security purposes but also to feel that we actually are in charge of our private lives.

Some of that privacy is a social convention. We agree on what is private and not: this is why we have certain rules about what you are allowed to do with a cell phone in a restaurant. You’re not going to have a loud phone conversation at the table — that’s rude. And others are not supposed to listen to the conversations you have with people in the restaurant, even though technically, of course, it’s trivial. I think we are going to develop new interesting rules and new technologies to help implement these social rules.

Another area I’m really excited about is the ability to capture energy, for example using solar collectors. Solar collectors are getting exponentially better and are becoming competitive with traditional energy sources in a lot of domains. But the most beautiful thing is that they can be made small and used in a distributed manner. You don’t need the big central solar farm, even though it might be very effective. You can actually have little solar panels on your house, or even on gadgets, if they’re energy efficient enough. That means that you both reduce the risk of a collective failure and get a lot of devices that can now function independently of the grid.

Then I think we are probably going to be able to combine this to fight a lot of emerging biological threats. Right now, we still have the problem that it takes a long time to identify a new pathogen. But I think we’re going to see more and more distributed sensors that can help us identify it quickly, global networks that make the medical profession aware that something new has shown up, and hopefully also ways of very quickly brewing up vaccines in an automated manner when something new shows up.

My vision is that within one or two decades, if something nasty shows up, the next morning everybody could essentially have a little home vaccine machine manufacture the antibodies to make them resistant against that pathogen — whether that was a bioweapon or something nature accidentally brewed up.

Ariel: I never even thought about our own personalized vaccine machines. Is that something people are working on?

Anders: Not that much yet.

Ariel: Oh.

Anders: You need to manufacture antibodies cheaply and effectively. This is going to require some fairly advanced biotechnology or nanotechnology. But it’s very foreseeable. Basically, you want to have a specialized protein printer. This is something we’re moving in the direction of. I don’t think anybody’s right now doing it but I think it’s very clearly in the path where we’re already moving.

So right now, in order to make a vaccine, you need a very time-consuming process. For example, in the case of the flu vaccine: you identify the virus, you multiply the virus, you inject it into chicken eggs to get the antibodies and the antigens, you develop a vaccine, and if you did it all right, you have a vaccine out in a few months, just in time for the winter flu — and hopefully for the version of the flu that was actually making the rounds. If you were unlucky, it was a different one.

But what if you could instead take the antigen and sequence it — that’s just going to take you a few hours — generate all the proteins, run them through various software and biological screens to remove the ones that don’t fit, find the ones that are likely to be good targets for the immune system, automatically generate the antibodies, automatically screen out the ones that might be bad for patients, and then test them. Then you might be able to make a vaccine within weeks or days.
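The rapid pipeline Anders describes boils down to a generate-and-filter loop: propose candidate peptides from the sequenced antigen, screen them in software, and only physically test the survivors. A minimal toy sketch of that control flow — the antigen sequence, the screening rules, and the “harmful motif” list below are all invented purely for illustration; real antigen screening uses trained immunogenicity and toxicity models, not string heuristics:

```python
# Toy sketch of the "sequence -> generate -> screen -> test" vaccine pipeline.
# All sequences and rules are invented for illustration only.

def generate_candidates(antigen: str, window: int = 5) -> list[str]:
    """Slide a window over the antigen sequence to propose peptide candidates."""
    return [antigen[i:i + window] for i in range(len(antigen) - window + 1)]

def software_screen(candidates: list[str]) -> list[str]:
    """Keep candidates passing a made-up diversity rule (>= 3 distinct residues)."""
    return [c for c in candidates if len(set(c)) >= 3]

def safety_screen(candidates: list[str], known_bad: set[str]) -> list[str]:
    """Drop candidates containing any (hypothetical) harmful motif."""
    return [c for c in candidates if not any(bad in c for bad in known_bad)]

antigen = "MKTAYIAKQR"  # invented antigen sequence
candidates = generate_candidates(antigen)
survivors = safety_screen(software_screen(candidates), known_bad={"AYI"})
print(survivors)  # the short list that would go on to physical testing
```

The point of the sketch is the shape of the process, not the chemistry: each stage cheaply narrows the candidate pool so that the slow, expensive step (actual lab testing) only runs on a handful of survivors.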

Ariel: I really like your vision for the near term future. I’m hoping that all of that comes true. Now, to end, as you look further out into the future — which you’ve clearly done a lot of — what are you most hopeful for?

Anders: I’m currently working on writing a book about what I call “Grand Futures.” Assuming humanity survives and gets its act together, however we’re supposed to do that, then what? How big could the future possibly be? It turns out that the laws of physics certainly allow us to do fantastic things. We might be able to spread literally over billions of light years. Settling space is definitely physically possible, but so is surviving, even as a normal biological species, on earth for literally hundreds of millions of years — and that’s already not stretching it. It might be that if we go post-biological, we can survive up until proton decay, somewhere north of 10^30 years in the future. And as for the amount of intelligence that could be generated, human brains are probably just the start.

We could probably develop ourselves or Artificial Intelligence to think enormously bigger, enormously much more deeply, enormously more profoundly. Again, this is stuff that I can analyze. There are questions about what the meaning of these thoughts would be, how deep the emotions of the future could be, et cetera, that I cannot possibly answer. But it looks like the future could be tremendously grand, enormously much bigger, just like our own current society would strike our stone age ancestors as astonishingly wealthy, astonishingly knowledgeable and interesting.

I’m looking at: what about the stability of civilizations? Historians have been going on a lot about the decline and fall of civilizations. Does that tell us an ultimate limit on what we can plan for? Eventually I got fed up reading historians and did some statistics and got some funny conclusions. But even if our civilization lasts long, it might become something very alien over time, so how do we handle that? How do you even make a backup of your civilization?

And then of course there are questions like “how long can we survive on earth?” and “when the biosphere starts failing in about a billion years, couldn’t we fix that?” What are the environmental ethics issues surrounding that? What about settling the solar system? How do you build and maintain your Dyson sphere? Then of course there’s stellar settlement, intergalactic settlement, and then the ultimate limits of physics. What can we say about them, and in what ways could physics be really different from what we expect, and what does that do for our chances?

It all leads back to this question: so, what should we be doing tomorrow? What are the near term issues? Some of them are interesting like, okay, so if the future is super grand, we should probably expect that we need to safeguard ourselves against existential risk. But we might also have risks — not just going extinct, but causing suffering and pain. And maybe there are other categories we don’t know about. I’m looking a little bit at all the unknown super important things that we don’t know about yet. How do we search for them? If we discover something that turns out to be super important, how do we coordinate mankind to handle that?

Right now, this sounds totally utopian. Would you expect all humans to get together and agree on something philosophical? That sounds really unlikely. Then again, a few centuries ago the United Nations and the internet would also sound totally absurd. The future is big — we have a lot of centuries ahead of us, hopefully.

Max: When I look really far into the future, I also look really far into space, and I see this vast cosmos, which is 13.8 billion years old. And most of it, despite what the UFO enthusiasts say, is actually looking pretty dead: wasted opportunity. And if we can help life flourish not just on earth, but ultimately throughout much of this amazing universe, making it come alive and teeming with these fascinating and inspiring developments, that makes me feel really, really inspired.

This is something I hope we can contribute to, we denizens of this planet, right now, here, in our lifetime. Because I think this is the most important time and place probably in cosmic history. After 13.8 billion years on this particular planet, we’ve actually developed enough technology, almost, to either drive ourselves extinct or to create super intelligence, which can spread out into the cosmos and do either horrible things or fantastic things. More than ever, life has become the master of its own destiny.

Allison: For me this pretty specific vision would really be a voluntary world, in which different entities, whether they’re AI or humans, can cooperate freely with each other to realize their interests. I do think that we don’t know where we want to end up. If you look back 100 years, it’s not only that you wouldn’t have wanted to live there, but also that many of the things that were regarded as moral back then are not regarded as moral anymore by most of us, and we can expect the same to hold true 100 years from now. I think rather than locking in any specific types of values, we ought to leave the space of possible values open.

Maybe right now you could try to do something like coherent extrapolated volition, a term coined by Eliezer Yudkowsky in AI safety to describe a goal function of a superintelligence that would execute your goals if you were more the person you wish you were, if we lived closer together, if we had more time to think and collaborate — kind of a perfect version of human morality. I think that perhaps we could do something like that for humans, because we all come from the same evolutionary background. We all share a few evolutionary cornerstones, at least, that make us value family, or make us value a few others of those values, and perhaps we could do something like a coherent extrapolated volition of some basic, very boiled down values that most humans would agree to. I think that may be possible; I’m not sure.

On the other hand, in a future where we succeed, at least in my version of that, we live not only with humans but with a lot of different mind architectures that don’t share our evolutionary background. For those mind architectures it’s not enough to try to do something like coherent extrapolated volition, because given that they have very different starting conditions, they will also end up valuing very different value sets. In the absence of us knowing what’s in their interests, I think really the only thing we can reasonably do is try to create a framework in which very different mind architectures can cooperate freely with each other, and engage in mutually beneficial relationships.

Ariel: Honestly, I really love that your answer of what you’re looking forward to is that it’s something for everybody. I like that.

Anthony: When you think about what life used to be for most humans, we really have come a long way. I mean, slavery was just fully accepted for a long time. Complete subjugation of women and sexism was just totally accepted for a really long time. Poverty was just the norm. Zero political power was the norm. We are in a place where, although imperfectly, many of these things have dramatically changed, even if they’re not fully implemented. Our ideals and our beliefs about human rights and human dignity and equality have completely changed, and we’ve implemented a lot of that in our society.

So what I’m hopeful about is that we can continue that process, and that the way that culture and society work 100 years from now is something we would look at from now and say, “Oh my God, they really have their shit together. They have figured out how to deal with differences between people, how to strike the right balance between collective desires and individual autonomy, between freedom and constraint, and how people can feel liberated to follow their own path while not trampling on the rights of others.” These are not in principle impossible things to do, and we fail to do them right now in large part, but I would like to see our technological development be leveraged into a cultural and social development that makes all those things happen. I think that really is what it’s about.

I’m much less excited about more fancy gizmos, more financial wealth for everybody, more power to have more stuff and accomplish more and higher and higher GDP. Those are useful things, but I think they’re things toward an end, and that end is the sort of happiness and fulfillment and enlightenment of the conscious living beings that make up our world. So, when I think of a positive future, it’s very much one filled with a culture that honestly will look back on ours now and say, “Boy, they really were screwed up, and I’m glad we’ve gotten better and we still have a ways to go.” And I hope that our technology will be something that will in various ways make that happen, as technology has made possible the cultural improvements we have now.

Ariel: I think as a woman I do often look back at the way technology enabled feminism to happen. We needed technology to sort of get a lot of household chores accomplished — to a certain extent, I think that helped.

Anthony: There are pieces of cultural progress that don’t require technology, as we were talking about earlier, but are just made so much easier by it. Labor-saving devices helped with feminism; industrialization, I think, helped end serfdom and slavery — we didn’t have to have a huge number of people working in abject poverty and total control in order for some to have a decent lifestyle; we could spread that around. I think something similar is probably true of animal suffering and meat. It could happen without that — I mean, I fully believe that 100 years from now, or 200 years from now, people will look back at eating meat as just a crazy thing that people used to do. I think that’s just the truth of what’s going to happen.

But it’ll be much, much easier if we have technologies that make that economically viable and easy, rather than pulling teeth and a huge cultural fight and everything, which I think will be hard and long. We should be thinking about, if we had some technological magic wand, what are the social problems that we would want to solve with it, and then let’s look for that wand once we identify those problems. If we could make some social problem much better if we only had such and such technology, that’s a great thing to know, because technologies are something we’re pretty good at inventing. If they don’t violate the laws of physics, and there’s some motivation, we can often generate those things. So let’s think about what they are: what would it take to solve this sort of political and informational mess where nobody knows what’s true and everybody is polarized?

That’s a social problem. It has a social solution. But there might be technologies that would be enormously helpful in making those social solutions easier. So what are those technologies? Let’s think about them. So I don’t think there’s a kind of magic bullet for a lot of these problems. But having that extra boost that makes it easier to solve the social problem I think is something we should be looking for for sure.

And there are lots of technologies that really do help — worth keeping in mind, I guess, as we spend a lot of our time worrying about the ill effects of them, and the dangers and so on. There is a reason we keep pouring all this time and money and energy and creativity into developing new technologies.

Ariel: I’d like to finish with one last question for everyone, and that is: what does existential hope mean for you?

Max: For me, existential hope is hoping for and envisioning a really inspiring future, and then doing everything we can to make it so.

Anthony: It means that we really give ourselves the space and opportunity to continue to progress our human endeavor — our culture, our society — to build a society that really is backstopping everyone’s freedom and actualization, compassion, enlightenment, in a kind of steady, ever-inventive process. I think we don’t often give ourselves as much credit as we should for how much cultural progress we’ve really made in tandem with our technological progress.

Anders: My hope for the future is that we get this enormous open-ended future. It’s going to contain strange and frightening things, but I also believe that most of it is going to be fantastic. It’s going to be roaring onward far, far, far into the long term future of the universe, probably changing a lot of the aspects of the universe.

When I use the term “existential hope,” I contrast that with existential risk. Existential risks are things that threaten to curtail our entire future, to wipe it out, to make it too much smaller than it could be. Existential hope, to me, means that maybe the future is grander than we expect. Maybe we have chances we’ve never seen. And I think we are going to be surprised by many things in the future and some of them are going to be wonderful surprises. That is the real existential hope.

Gaia: When I think about existential hope, I think it’s sort of an unusual phrase. But to me it’s really about the idea of finding meaning, and the potential that each of us has to experience meaning in our lives. And I think that the idea of existential hope, and I should say, the existential part of that, is the concept that that fundamental capability is something that will continue in the very long-term and will not go away. You know, I think it’s the opposite of nihilism, it’s the opposite of the idea that everything is just meaningless and our lives don’t matter and nothing that we do matters.

If I’m feeling — if I’m questioning that, I like to go and read something like Viktor Frankl’s book Man’s Search for Meaning, which really reconnects me to these incredible, deep truths about the human spirit. That’s a book that tells the story of his time in a concentration camp at Auschwitz. And even in those circumstances, the ability that he found within himself and that he saw within people around him to be kind, and to persevere, and to really give of himself, and others to give of themselves. And there’s just something impossible, I think, to capture in language. Language is a very poor tool, in this case, to try to encapsulate the essence of what that is. I think it’s something that exists on an experiential level.

Allison: For me, existential hope is really trying to choose to make a difference, knowing that success is not guaranteed, but making a difference because we simply can’t do it any other way. Because not trying is really not an option. It’s the first time in history that we’ve created the technologies for our destruction and for our ascent. I think they’re both within our hands, and we have to decide how to use them. So I think existential hope is transcending existential angst, and transcending our current limitations rather than trying to create meaning within them, and I think it’s the right mindset for the time that we’re in.

Ariel: And I still love this idea that existential hope means that we strive toward everyone’s personal ideal, whatever that may be. On that note, I cannot thank my guests enough for joining the show, and I also hope that this episode has left everyone listening feeling a bit more optimistic about our future. I wish you all a happy holiday and a happy new year!