Podcast: Governing Biotechnology, From Avian Flu to Genetically-Modified Babies with Catherine Rhodes

A Chinese researcher recently made international news with claims that he had created the first gene-edited human babies using CRISPR. In doing so, he violated international ethics standards, and he appears to have acted without his funders or his university knowing. But this is only the latest example of biological research triggering ethical concerns. Gain-of-function research on avian flu a few years ago, which made the virus more transmissible between mammals, also sparked controversy when scientists tried to publish their work. And there’s been extensive debate globally about the ethics of human cloning.

As biotechnology and other emerging technologies become more powerful, the dual-use nature of research — that is, research that can have both beneficial and risky outcomes — is increasingly important to address. How can scientists and policymakers work together to ensure regulations and governance of technological development will enable researchers to do good with their work, while decreasing the threats?

On this month’s podcast, Ariel spoke with Catherine Rhodes about these issues and more. Catherine is a senior research associate and deputy director of the Centre for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance. She has particular expertise in the international governance of biotechnology, including biosecurity and broader risk management issues.

Topics discussed in this episode include:

  • Gain-of-function research, the H5N1 virus (avian flu), and the risks of publishing dangerous information
  • The roles of scientists, policymakers, and the public to ensure that technology is developed safely and ethically
  • The controversial Chinese researcher who claims to have used CRISPR to edit the genome of twins
  • How scientists can anticipate whether the results of their research could be misused by someone else
  • To what extent does risk stem from technology, and to what extent does it stem from how we govern it?

You can listen to this podcast above, or read the full transcript below. And feel free to check out our previous podcast episodes on SoundCloud, iTunes, Google Play and Stitcher.

 

Ariel: Hello. I’m Ariel Conn with the Future of Life Institute. Now I’ve been planning to do something about biotechnology this month anyway, since it would go along so nicely with the new resource we just released, which highlights the benefits and risks of biotech. I was very pleased when Catherine Rhodes agreed to be on the show. Catherine is a senior research associate and deputy director of the Centre for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance, or a lack of it.

But she has particular expertise in international governance of biotechnology, including biosecurity and broader risk management issues. The timing of Catherine as a guest is also especially fitting given that just this week the science world was shocked to learn that a researcher out of China is claiming to have created the world’s first genetically edited babies.

Now neither she nor I have had much of a chance to look at this case too deeply but I think it provides a very nice jumping-off point to consider regulations, ethics, and risks, as they pertain to biology and all emerging sciences. So Catherine, thank you so much for being here.

Catherine: Thank you.

Ariel: I also want to add that we did have another guest scheduled to join us today who is unfortunately ill, and unable to participate, so Catherine, I am doubly grateful to you for being here today.

Before we get too far into any discussions, I was hoping to just go over some basics to make sure we’re all on the same page. In my readings of your work, you talk a lot about biorisk and biosecurity, and I was hoping you could just quickly define what both of those words mean.

Catherine: Yes, in terms of thinking about both biological risk and biological security, I think about the objects that we’re trying to protect. It’s about the protection of human, animal, and plant life and health, in particular. Some of that extends to protection of the environment. The risks are the risks to those objects and security is securing and protecting those.

Ariel: Okay. I’d like to start this discussion where we’ll talk about ethics and policy, looking first at the example of the gain-of-function experiments that caused another stir in the science community a few years ago. That was research done, I believe, on the H5N1 virus, also known as avian flu, and I believe it made the virus more virulent. First, can you just explain what gain-of-function means? And then I was hoping you could talk a bit about what that research was, and what the scientific community’s reaction to it was.

Catherine: Gain-of-function’s actually quite a controversial term to have selected to describe this work, because a lot of what biologists do is work that would add a function to the organism that they’re working on, without that actually posing any security risk. In this context, it was a gain of a function that would make it perhaps more desirable for use as a biological weapon.

In this case, it was things like an increase in its ability to transmit between mammals. In particular, they were getting it adapted to be transmissible between ferrets in a laboratory, and ferrets are a model for transmission between humans.

Ariel: You actually bring up an interesting point that I hadn’t thought about. To what extent does our choice of terminology affect how we perceive the ethics of some of these projects?

Catherine: I think in this case it was more that the use of that term, which came more from the security and policy community side, made the conversation with scientists more difficult, as it was felt that this was mislabeling their research, pulling in research that shouldn’t really come into this kind of conversation about security. So I think that was where it maybe caused some difficulties.

But I think the understanding also needs to go the other way as well: it’s not necessarily the case that all policymakers are going to have that level of detail about what they mean when they’re talking about the science.

Ariel: Right. What was the reaction then that we saw from the scientific community and the policymakers when this research was published?

Catherine: There was firstly a stage of debate about whether those papers should be published or not. There was some guidance given by what’s called the National Science Advisory Board for Biosecurity in the US, that those papers should not be published in full. So, actually, the first part of the debate was about that stage of ‘should you publish this sort of research where it might have a high risk of misuse?’

That was something that the security community had been discussing for at least a decade, that there were certain experiments where they felt that they would meet a threshold of risk, where they shouldn’t be openly published or shouldn’t be published with their methodological details in full. I think for the policy and security community, it was expected that these cases would arise, but this hadn’t perhaps been communicated to the scientific community particularly well, and so I think it came as a shock to some of those researchers, particularly because the research had been approved initially, so they were able to conduct the research, but suddenly they would find that they can’t publish the research that they’ve done. I think that was where this initial point of contention came about.

It then became a broader issue. More generally, how do we handle these sorts of cases? Are there times when we should restrict publication? Or is open publication actually going to be a better way of protecting ourselves, because we’ll all know about the risks as well?

Ariel: Like you said, these scientists had gotten permission to pursue this research, so it’s not like they had any reason to think it was too questionable to begin with. And yet, I guess there is that issue of how can scientists think about some of these questions more long term and maybe recognize in advance that the public or policymakers might find their research concerning? Is that something that scientists should be trying to do more of?

Catherine: Yes, and I think that’s part of this point about the communication between the scientific and policy communities, so that these things don’t come as a surprise or a shock. Yes, I think there was something in this: if we’re allowed to do the research, should we not have had more conversation at the earlier stages? I think in general I would say that’s where we need to get to, because if you’re trying to intervene at the stage of publication, it’s probably already too late to really contain the risk, because, for example, if you’ve submitted a journal article online, that information’s already out there.

So yes, trying to take it further back in the process, so that at the beginning stages of designing research projects these things are considered, is important. That has been pushed forward by funders, so there are now some clauses along the lines of ‘have you reviewed the potential consequences of your research?’ That is one way of triggering that thinking about it. But I think there’s been a broader question further back about education and awareness.

It’s all right if you’re being asked that question, but do you actually have information that helps you know what would be a security risk? And what elements might you be looking for in your work? So there’s this question more generally of how we build awareness amongst the scientific community that these issues might arise, and train them to be able to spot some of the security concerns that may be there.

Ariel: Are we taking steps in that direction to try to help educate both budding scientists and also researchers who have been in the field for a while?

Catherine: Yes, there have been quite a lot of efforts in that area, again probably over the last decade or so, done by academic groups and civil society. It’s been something that’s been encouraged by states parties to the Biological Weapons Convention, and also by the World Health Organization, which has a document on responsible life sciences research that also encourages education and awareness-raising efforts.

I think that those have further to go, and I think some of the barriers to those being taken up are the familiar things: it’s very hard to find space in a scientific curriculum for that teaching, and more resources are needed in terms of materials you can go to. That is being built up.

I think we’re also then talking about the scientific curricula at maybe the undergraduate and postgraduate level, but how do you extend this throughout scientific careers as well? There needs to be a way of reaching scientists at all levels.

Ariel: We’re talking a lot about the scientists right now, but in your writings, you mention that there are three groups who have responsibility for ensuring that science is safe and ethical. Those are one, obviously the scientists, but then also you mention policymakers, and you mention the public and society. I was hoping you could talk a little bit about how you see the roles for each of those three groups playing out.

Catherine: I think these sorts of issues, they’re never going to be just the responsibility of one group, because there are interactions going on. Some of those interactions are important in terms of maybe incentives. So we talked about publication. Publication is of such importance within the scientific community and within their incentive structures. It’s so important to publish, that again, trying to intervene just at that stage, and suddenly saying, “No, you can’t publish your research” is always going to be a big problem.

It’s to do with the norms and the practices of science, but some of that, again, comes from the outside. One way of thinking about it is: are there ways we can reshape those sorts of structures to be more useful? I think we need clear signals from policymakers as well, about when to take threats seriously or not. If we’re not hearing from policymakers that there are significant security concerns around some forms of research, then why should we expect the scientists to be aware of it?

Yes, and policy does have control and governance mechanisms within it, so it can be very useful in terms of deciding what research can be done; that’s often done by funders and government bodies, and not by the research community themselves. Then, trying to think more broadly about how to bring in the public dimension, I think what I mean there is that it’s about all of us being aware of this. It shouldn’t be isolating one particular community and saying, “Well, if things go wrong, it was you.”

Socially, we’ve got decisions to make about how we feel about certain risks and benefits and how we want to manage them. In the gain-of-function case, the research that was done had the potential for real benefits for understanding avian influenza, which could produce a human pandemic, and therefore there could be great public health benefits associated with some of this research that also poses great risks.

Again, when we’re dealing with something that for society, could bring both risks and benefits, society should play a role in deciding what balance it wants to achieve.

Ariel: I guess I want to touch on this idea of how we can make sure that policymakers and the public – this comes down to a three-way communication. I guess my question is, how do we get scientists more involved in policy, so that policymakers are informed and there is more of that communication? I guess maybe part of the reason I’m fumbling over this question is it’s not clear to me how much responsibility we should be putting specifically on scientists for this, versus how much responsibility goes to the other groups.

Catherine: On scientists becoming more involved in policy: part of thinking about the relationship between science and policy, and science and society, is that we expect policymakers to consider how to have regulation and governance that’s appropriate to scientific practice and to emerging technologies. If governance is to keep up with advances in science and technology, then policymakers need information from the scientific community about those things. There’s a responsibility of policymakers to seek some of that information, but also for scientists to be willing to engage in the other direction.

I think that’s the main answer to how they could be more informed and how there could be more communication. I think one of the useful ways that’s done at the moment is by having, say, meetings with a horizon-scanning element, so that scientists can have input on where we might see advances going. If you also include in the participation policymakers, and maybe people who know more about things like technology transfer, startups, and investment, then they can see what’s going on in terms of where the money’s going. Bringing those groups together to look at where the future might be going is quite a good way of capturing some of those advances.

And it helps inform the whole group, so I think those sorts of processes are good, and there are some examples of those, and there are some examples where the international science academies come together to do some of that sort of work as well, so that they would provide information and reports that can go forward to international policy processes. They do that for meetings at the Biological Weapons Convention, for example.

Ariel: Okay, so I want to come back to this broadly in a little bit, but first I want to touch on biologists and ethics and regulation a little more generally. Because I guess I keep thinking of the famous Asilomar meeting from, I think, the mid-’70s, in which biologists got together, recognized some of the risks in their field, and chose to pause the work that they were doing, because there were ethical issues. I tend to credit them with being more ethically aware than a lot of other scientific fields.

But it sounds like maybe that’s not the case. Was that just a special example in which scientists were unusually proactive? I guess, should we be worried about scientists and biosecurity, or is it just a few bad apples like we saw with this recent Chinese researcher?

Catherine: I think in terms of ethical awareness, it’s not that I don’t think biologists are ethically aware, but it is that there can be a lot of different things coming onto their agendas in that, and again, those can be pushed out by other practices within your daily work. So, I think for example, one of the things in biology, often it’s quite close to medicine, and there’s been a lot over the last few decades about how we treat humans and animals in research.

There’s ethics and biomedical ethics, there’s practices to do with consent and participation of human subjects, that people are aware of. It’s just that sometimes you’ve got such an overload of all these different issues you’re supposed to be aware of and responding to, so sustainable development and environmental protection is another one, that I think it’s going to be the case that often things will fall off the agenda or knowing which you should prioritize perhaps can be difficult.

I do think there’s this lack of awareness of the past history of biological warfare programs, and the fact that scientists have always been involved with them, and then, looking forward, how much easier it may be, because of the trends in technology, for more actors to have access to such technologies, and the implications that might have.

I think that picks up on what you were saying about, are we just concerned about the bad apples? Are there some rogue people out there that we should be worried about? I think there’s two parts to that, because there may be some things that are more obvious, where you can spot, “Yeah, that person’s really up to something they shouldn’t be.” I think there are probably mechanisms where people do tend to be aware of what’s going on in their laboratories.

Although, as you mentioned, in the recent Chinese case of potentially CRISPR gene-edited babies, it seems clear that people within that person’s laboratory didn’t know what was going on, the funders didn’t know what was going on, the government didn’t know what was going on. So yes, there will be some cases where there’s something very obvious that someone is doing wrong.

I think that’s probably an easier thing to handle and to conceptualize. But we’re now getting these questions where you can be doing scientific work and research that’s for clear benefits, and you’re doing it for those beneficial purposes, but how do you work out whether the results of that could be misused by someone else? How do you frame whether you have any responsibility for how someone else would use it when they may well not be anywhere near you in a laboratory? They may be very remote, you probably have no contact with them at all, so how can you judge and assess how your work may be misused, and then try and make some decision about how you should proceed with it? I think that’s a more complex issue.

That does probably, as you say, speak to ‘are there things in scientific cultures and working practices that might assist with dealing with that, or might make it problematic?’ I think I’ve picked up on this a few times already, but there’s a lot going on in terms of the sorts of incentive structures that scientists are working in, which more broadly meet up with global economic incentives. Again, not knowing the full details of the recent Chinese CRISPR case, there can often be almost racing dynamics between countries to have done some of this research and to be ahead in it.

I think that did happen with the gain-of-function experiments, so that when the US had a moratorium on doing them, China ramped up its experiments in the same area. There are all these kinds of incentive structures going on as well, and I think those do affect wider scientific and societal practices.

Ariel: Okay. Quickly touching on some of what you were talking about, in terms of researchers who are doing things right, in most cases I think what happens is this case of dual use, where the research could go either way. I think I’m going to give scientists the benefit of the doubt and say most of them are actually trying to do good with their research. That doesn’t mean that someone else can’t come along later and then do something bad with it.

This is I think especially a threat with biosecurity, and so I guess, I don’t know that I have a specific question that you haven’t really gotten into already, but I am curious if you have ideas for how scientists can deal with the dual use nature of their research. Maybe to what extent does more open communication help them deal with it, or is open communication possibly bad?

Catherine: Yes, I think it’s possibly good and possibly bad. Again, it’s a difficult question without putting the practice into context. It shouldn’t be that just the scientist has to think through these issues of dual use and whether their work can be misused; they also need information about how serious a threat might be. Do we know that this is being pursued by any terrorist group? Do we know why it might be of particular concern?

I think another interesting thing is that you might get combinations of technology that have developed in different areas, so you might get someone who does something that helps with the dispersal of an agent, that’s entirely disconnected from someone who might be working on an agent, that would be useful to disperse. Knowing about the context of what else is going on in technological development, and not just within your own work is also important.

Ariel: Just to clarify, what are you referring to when you say agent here?

Catherine: In this case, again, thinking of biology, so that might be a microorganism. If you were to be developing a biological weapon, you don’t just need to have a nasty pathogen. You would need some way of dispersing, disseminating that, for it to be weaponized. Those components may be for beneficial reasons going on in very different places. How would scientists be able to predict where those might combine and come together, and create a bigger risk than just their own work?

Ariel: Okay. And then I really want to ask you about the idea of races, but I don’t have a specific question, to be honest. It’s a concerning idea, and it’s something that we look at in artificial intelligence, and it’s clearly a problem with nuclear weapons. I guess, what are the concerns we have when we look at biological races?

Catherine: It may not even be specific to biological races. Again, not even thinking of military uses of technology, it’s about how we have very strong drivers for economic growth, and technology advances will be really important to innovation and economic growth.

So, I think this does provide a real barrier to collective state action against some of these threats, because if a country can see an advantage of not regulating an area of technology as strongly, then they’ve got a very strong incentive to go for that. It’s working out how you might maybe overcome some of those economic incentives, and try and slow down some of the development of technology, or application of technology perhaps, to a pace where we can actually start doing these things like working out what’s going on, what the risks might be, how we might manage those risks.

But that is a hugely controversial thing to put forward, because the idea of slowing down technology, which is clearly going to bring us these great benefits and is linked to progress and economic growth, is a difficult sell to many states.

Ariel: Yeah, that makes sense. I think I want to turn back to the Chinese case very quickly. I think this is an example of what a lot of people fear, in that you have this scientist who isn’t being open with the university that he’s working with, isn’t being open with his government about the work he’s doing. It sounds like even the people who are working for him in the lab, and possibly even the parents of the babies that are involved may not have been fully aware of what he was doing.

We don’t have all the information, but at the moment, at least what little we have sounds like an example of a scientist gone rogue. How do we deal with that? What policies are in place? What policies should we be considering?

Catherine: I think I share where the concerns in this are coming from, because it looks like there’s multiple failures of the types of layers of systems that should have maybe been able to pick this up and stop it, so yes, we would usually expect that a funder of the research, or the institution the person’s working in, the government through regulation, the colleagues of a scientist would be able to pick up on what’s happening, have some ability to intervene, and that doesn’t seem to have happened.

Knowing that these multiple things can all fall down is worrying. I think actually an interesting thing about how we deal with this that there seems to be a very strong reaction from the scientific community working around those areas of gene editing, to all come together and collectively say, “This was the wrong thing to do, this was irresponsible, this is unethical. You shouldn’t have done this without communicating more openly about what you were doing, what you were thinking of doing.”

I think it’s really interesting to see that community pushback. If I were a scientist working in a similar area, I’d be really put off by that, thinking, “Okay, I should stay in line with what the community expects me to do.” I think that is important.

It’s also going to kick in from the more top-down regulatory side as well, so whether China will now get some new regulation in place, or do some more checks down through the institutional levels, I don’t know. Likewise, I don’t know whether internationally it will bring a further push for coordination on how we want to regulate those experiments.

Ariel: I guess this also brings up the question of international standards. It does look like we’re getting very broad international agreement that this research shouldn’t have happened. But how do we deal with cases where maybe most countries are opposed to some type of research and another country says, “No, we think it could be possibly ethical so we’re going to allow it?”

Catherine: I think this is, again, the challenging situation. It’s interesting to me that this picks up on the international debates about human cloning, maybe 15-20 years ago, about whether there should be a ban on human cloning. There is a UN declaration against human cloning, but it fell down in terms of actually being more than a declaration, having something stronger in terms of an international law on this, basically because of the differences between states’ views of the status of the embryo.

Regulating human reproductive research at the international level is very difficult because of some of those issues where, like you say, there can be quite significant differences in the ethical approaches taken by different countries. In this case, I think what’s been interesting is, “Okay, if we’re going to come across a difficulty in getting an agreement between states at the governmental level, are there things that the scientific community or other groups can do to make sure those debates are happening, and that some common ground is being found on how we should pursue research in these areas, and when we should decide it’s maybe safe enough to go down some of these lines?”

I think another point about this case in China was that it’s just not known whether it’s safe to be doing gene editing on humans yet. That’s actually one of the reasons why people shouldn’t be doing it regardless. I hope that gets some way to the answer. I think it is very problematic that we often will find that we can’t get broad international agreement on things, even when there seems to be some level of consensus.

Ariel: We’ve been talking a lot about all of these issues from the perspective of biological sciences, but I want to step back and also look at some of these questions more broadly. There’s two sides that I want to look at. One is just this question of how do we enable scientists to basically get into policy more? I mean, how can we help scientists understand how policymaking works and help them recognize that their voices in policy can actually be helpful? Or, do you think that we are already at a good level there?

Catherine: I would say we’re certainly not at an ideal level yet of science in policy. It does vary across different areas, of course. The thing that comes to mind is climate change, for example, where the Intergovernmental Panel on Climate Change does its reports every few years. There’s a good, collaborative, international evidence base and a good science policy process in that area.

But in other areas there’s a big deficit I would say. I’m most familiar with that internationally, but I think some of this scales down to the national level as well. Part of it is going in the other direction almost. When I spoke earlier about needs perhaps for education and awareness raising among scientists about some of these issues around how their research may be used, I think there’s also a need for people in policy to become more informed about science.

That is important. I’m trying to think what are the ways maybe scientists can do that? I think there’s some attempts, so when there’s international negotiations going on, to have … I think I’ve heard them described as mini universities, so maybe a week’s worth of quick updates on where the science is at before a negotiation goes on that’s relevant to that science.

I think one of the key things to say is that there are ways for scientists and the scientific community to have influence both on how policy develops and how it’s implemented, and a lot of this will go through intermediary bodies, in particular the professional associations and academies that represent scientific communities. For example, thinking in the UK context, but I think this is similar in the US, there may be a consultation by parliament on how we should address a particular issue.

There was one in the UK a couple of years ago, how should we be regulating genetically modified insects? If a consultation like that’s going on and they’re asking for advice and evidence, there’s often ways of channeling that through academies. They can present statements that represent broader scientific consensus within their communities and input that.

The reason for mentioning them as intermediaries, again, is that it’s a lot of burden to put on individual scientists to say, “You should all be getting involved in policy and informing policy, as another part of what you should be doing as part of your role.” But realizing that you can do that as a collective, rather than it just having to be an individual thing, I think is valuable.

Ariel: Yeah, there is the issue of, “Hey, in your free time, can you also be doing this?” It’s not like scientists have lots of free time. But I get the impression that scientists are sometimes a little concerned about getting involved with policymaking because they fear overregulation, and that it could harm their research and the good that they’re trying to do with their research. Is this fear justified? Are scientists hampered by policies? Are they helped by policies?

Catherine: Yeah, so it’s both. It’s important to know that the mechanisms of policy can play facilitative roles, they can promote science, as well as setting constraints and limits on it. Again, most governments are recognizing that the life sciences and biology and artificial intelligence and other emerging technologies are going to be really key for their economic growth.

They are doing things to facilitate and support that, and fund it, so it isn’t only about the constraints. However, I guess for a lot of scientists, the way you come across regulation, you’re coming across the bits that are the constraints on your work, or there are things that make you fill in a lot of forms, so it can just be perceived as something that’s burdensome.

But I would also say that certainly something I’ve noticed in recent years is that we shouldn’t think that scientists and technology communities aren’t sometimes asking for areas to be regulated, asking for some guidance on how they should be managing risks. Switching back to a biology example, but with gene drive technologies, the communities working on those have been quite proactive in asking for some forms of, “How do we govern the risks? How should we be assessing things?” Saying, “These don’t quite fit with the current regulatory arrangements, we’d like some further guidance on what we should be doing.”

I can understand that there might be this fear about regulation, but you also asked whether this could be the source of the reluctance to engage with policy, and I think an important thing to say there is that if you’re not engaging with policy, it’s more likely that the regulation is going to work in ways that, while not intentional, could be restricting scientific practice. I think that’s really important as well: maybe the regulation is created in a very well-intended way, and it just doesn’t match up with scientific practice.

I think at the moment, internationally this is becoming a discussion around how we might handle the digital nature of biology now, when most regulation is to do with materials. But if we’re going to start regulating the digital versions of biology, so gene sequencing information, that sort of thing, then we need to have a good understanding of what the flows of information are, in which ways they have value within the scientific community, whether it’s fundamentally important to have some of that information open, and we should be very wary of new rules that might enclose it.

I think that’s something again, if you’re not engaging with the processes of regulation and policymaking, things are more likely to go wrong.

Ariel: Okay. We’ve been looking a lot about how scientists deal with the risks of their research, how policymakers can help scientists deal with the risks of their research, et cetera, but it’s all about the risks coming from the research and from the technology, and from the advances. Something that you brought up in a separate conversation before the podcast is to what extent does risk stem from technology, and to what extent can it stem from how we govern it? I was hoping we could end with that question.

Catherine: That’s a really interesting question to me, and I’m trying to work that out in my own research. One of the interesting and perhaps obvious things to say is it’s never down to the technology. It’s down to how we develop it, use it, implement it. The human is always playing a big role in this anyway.

But yes, I think a lot of the time governance mechanisms are perhaps lagging behind the development of science and technology, and I think some of the risk is coming from the fact that we may just not be governing something properly. I think this comes down to things we’ve been mentioning earlier. We need collectively both in policy, in the science communities, technology communities, and society, just to be able to get a better grasp on what is happening in the directions of emerging technologies that could have both these very beneficial and very destructive potentials, and what is it we might need to do in terms of really rethinking how we govern these things?

Yeah, I don’t have any answer for where the sources of risk are coming from, but I think it’s an interesting place to look, is that intersection between the technology development, and the development of regulation and governance.

Ariel: All right, well yeah, I agree. I think that is a really great question to end on, for the audience to start considering as well. Catherine, thank you so much for joining us today. This has been a really interesting conversation.

Catherine: Thank you.

Ariel: As always, if you’ve been enjoying the show, please take a moment to like it, share it, and follow us on your preferred podcast platform.

US Government Releases Its Latest Climate Assessment, Demands Immediate Action

At the end of last week, amidst the flurry of holiday shopping, the White House quietly released Volume II of the Fourth National Climate Assessment (NCA4). The comprehensive report, which was compiled by the United States Global Change Research Program (USGCRP), is the culmination of decades of environmental research conducted by scientists from 13 different federal agencies. The scope of the work is truly striking, representing more than 300 authors and encompassing thousands of scientific studies.

Unfortunately, the report is also rather grim.

If climate change continues unabated, the assessment asserts that it will cost the U.S. economy hundreds of billions a year by the close of the century — causing some $155 billion in annual damages to labor and another $118 billion in damages to coastal property. In fact, the report notes that, unless we immediately launch “substantial and sustained global mitigation and regional adaptation efforts,” the impact on the agricultural sector alone will reach billions of dollars in losses by the middle of the century.

Notably, the NCA4 authors emphasize that these aren’t just warnings for future generations, pointing to several areas of the United States that are already grappling with the high economic cost of climate change. For example, a powerful heatwave that struck the Northeast left local fisheries devastated, and similar events in Alaska have dramatically slashed fishing quotas for certain stocks. Meanwhile, human activity is exacerbating Florida’s red tide, killing fish populations along the southwest coast.

Of course, the economy won’t be the only thing that suffers.

According to the assessment, climate change is increasingly threatening the health and well-being of the American people, and emission reduction efforts could ultimately save thousands of lives. Young children, pregnant women, and aging populations are identified as most at risk; however, the authors note that waterborne infectious diseases and global food shortages threaten all populations.

As with the economic impact, the toll on human health is already visible. For starters, air pollution is driving a rise in the number of deaths related to heart and lung problems. Asthma diagnoses have increased, and rising temperatures are causing a surge in heatstroke and other heat-related illnesses. And the report makes it clear that the full extent of the risk extends well beyond either the economy or human health, plainly stating that climate change threatens all life on our planet.

Ultimately, the authors emphasize the immediacy of the issue, noting that without immediate action, no system will be left untouched:

“Climate change affects the natural, built, and social systems we rely on individually and through their connections to one another….extreme weather and climate-related impacts on one system can result in increased risks or failures in other critical systems, including water resources, food production and distribution, energy and transportation, public health, international trade, and national security. The full extent of climate change risks to interconnected systems, many of which span regional and national boundaries, is often greater than the sum of risks to individual sectors.”

Yet, the picture painted by the NCA4 assessment is not entirely bleak. The report suggests that, with a concerted and sustained effort, the most dire damage can be undone and ultimate catastrophe averted. The authors note that this will require international cooperation centered on a dramatic reduction in global carbon dioxide emissions.

The 2015 Paris Agreement, in which 195 countries put forth emission reduction pledges, represented a landmark in the international effort to curtail global warming. The agreement was designed to cap warming at 2 degrees Celsius, a limit scientists then believed would prevent the most severe and irreversible effects of climate change. That limit has since been lowered to 1.5 degrees Celsius. Unfortunately, current models predict that even if countries hit their current pledges, temperatures will still climb to 3.3 degrees Celsius by the end of the century. The Paris Agreement offers a necessary first step, but in light of these new predictions, pledges must be strengthened.

Scientists hope the findings in the National Climate Assessment will compel the U.S. government to take the lead in updating their climate commitments.

Handful of Countries – Including the US and Russia – Hamper Discussions to Ban Killer Robots at UN

This press release was originally released by the Campaign to Stop Killer Robots and has been lightly edited.

Geneva, 26 November 2018 – Reflecting the fragile nature of multilateralism today, countries have agreed to continue their diplomatic talks on lethal autonomous weapons systems—killer robots—next year. But the discussions will continue with no clear objective and participating countries will have even less time dedicated to making decisions than they’ve had in the past. The outcome at the Convention on Conventional Weapons (CCW) annual meeting—which concluded at 11:55 PM on Friday, November 23—has again demonstrated the weakness of the forum’s decision-making process, which enables a single country or small group of countries to thwart more ambitious measures sought by a majority of countries.

Killer robots are weapons systems that would select and attack targets without meaningful human control over the process — that is, a weapon that could target and kill people without sufficient human oversight.

“We’re dismayed that countries could not agree on a more ambitious mandate aimed at negotiating a treaty to prevent the development of fully autonomous weapons,” said Mary Wareham of Human Rights Watch, coordinator of the Campaign to Stop Killer Robots. “This weak outcome underscores the urgent need for bold political leadership and for consideration of another route to create a new treaty to ban these weapons systems, which would select and attack targets without meaningful human control.”

“The security of the world and future of humanity hinges on achieving a preemptive ban on killer robots,” Wareham added.

The Campaign to Stop Killer Robots urges all countries to heed the call of the UN Secretary-General and prohibit these weapons, which he has deemed “politically unacceptable and morally repugnant.”

Since the first CCW meeting on killer robots in 2014, most of the participating countries have concluded that current international humanitarian and human rights law will need to be strengthened to prevent the development, production, and use of fully autonomous weapons. This includes 28 countries seeking to prohibit fully autonomous weapons. This past week, El Salvador and Morocco added their names to the list of countries calling for a ban. Austria, Brazil, and Chile have formally proposed the urgent negotiation of “a legally-binding instrument to ensure meaningful human control over the critical functions” of weapons systems.

None of the 88 countries participating in the CCW meeting objected to continuing the formal discussions on lethal autonomous weapons systems. However, Russia, Israel, Australia, South Korea, and the United States have indicated they cannot support negotiation of a new treaty via the CCW or any other process. And Russia alone successfully lobbied to limit the amount of time that states will meet in 2019, reducing the talks from 10 days to just 7 days.

Seven days is insufficient for the CCW to tackle this challenge, and for the Campaign to Stop Killer Robots, the fact the CCW talks on killer robots will proceed next year is no guarantee of a meaningful outcome.

“It seems ever more likely that concerned countries will consider other avenues to create a new international treaty to prohibit fully autonomous weapons,” said Wareham. “The Campaign to Stop Killer Robots stands ready to work to secure a new treaty through any means possible.”

The CCW is not the only group within the United Nations that can pass a legally binding international treaty. In the past, the CCW has been tasked with banning antipersonnel landmines, cluster munitions, and nuclear weapons, but in each case, because the CCW requires consensus among all participating countries, the group was never able to prohibit the weapons in question. Instead, fueled by mounting public pressure, concerned countries turned to other bodies within the UN to finally establish treaties that banned each of these inhumane weapons. But even then, these diplomatic efforts only succeeded because of the genuine partnerships between like-minded countries, UN agencies, the International Committee of the Red Cross, and dedicated coalitions of non-governmental organizations.

This past week’s CCW meeting approved Mr. Ljupco Jivan Gjorgjinski of the Former Yugoslav Republic of Macedonia to chair next year’s deliberations on LAWS, which will be divided into two meetings: March 25-29 and August 20-21. The CCW’s annual meeting, at which decisions will be made about future work on autonomous weapons, will be held on November 13-15.

“Over the coming year our dynamic campaigners around the world are intensifying their outreach at the national and regional levels,” said Wareham. “We encourage anyone concerned by the disturbing trend towards killer robots to express their strong desire for their government to endorse and work for a ban on fully autonomous weapons without delay. Only with the public’s support will the ban movement prevail.”

To learn more about how you can help, visit autonomousweapons.org.

Benefits & Risks of Biotechnology

“This is a whole new era where we’re moving beyond little edits on single genes to being able to write whatever we want throughout the genome.”

-George Church, Professor of Genetics at Harvard Medical School

What is biotechnology?

How are scientists putting nature’s machinery to use for the good of humanity, and how could things go wrong?

Biotechnology is nearly as old as humanity itself. The food you eat and the pets you love? You can thank our distant ancestors for kickstarting the agricultural revolution, using artificial selection for crops, livestock, and other domesticated animals. When Edward Jenner developed the first vaccine and Alexander Fleming discovered penicillin, the first widely used antibiotic, they were harnessing the power of biotechnology. And, of course, modern civilization would hardly be imaginable without the fermentation processes that gave us beer, wine, and cheese!

When he coined the term in 1919, the agriculturalist Karl Ereky described ‘biotechnology’ as “all lines of work by which products are produced from raw materials with the aid of living things.” In modern biotechnology, researchers modify DNA and proteins to shape the capabilities of living cells, plants, and animals into something useful for humans. Biotechnologists do this by sequencing, or reading, the DNA found in nature, and then manipulating it in a test tube – or, more recently, inside of living cells.

In fact, the most exciting biotechnology advances of recent times are occurring at the microscopic level (and smaller!) within the membranes of cells. After decades of basic research into decoding the chemical and genetic makeup of cells, biologists in the mid-20th century launched what would become a multi-decade flurry of research and breakthroughs. Their work has brought us the powerful cellular tools at biotechnologists’ disposal today. In the coming decades, scientists will use the tools of biotechnology to manipulate cells with increasing control, from precision editing of DNA to synthesizing entire genomes from their basic chemical building blocks. These cells could go on to become bomb-sniffing plants, miracle cancer drugs, or ‘de-extincted’ wooly mammoths. And biotechnology may be a crucial ally in the fight against climate change.

But rewriting the blueprints of life carries an enormous risk. To begin with, the same technology being used to extend our lives could instead be used to end them. While researchers might see the engineering of a supercharged flu virus as a perfectly reasonable way to better understand and thus fight the flu, the public might see the drawbacks as equally obvious: the virus could escape, or someone could weaponize the research. And the advanced genetic tools that some are considering for mosquito control could have unforeseen effects, possibly leading to environmental damage. The most sophisticated biotechnology may be no match for Murphy’s Law.

While the risks of biotechnology have been fretted over for decades, the increasing pace of progress – from low-cost DNA sequencing to rapid gene synthesis to precision genome editing – suggests biotechnology is entering a new realm of maturity regarding both beneficial applications and more worrisome risks. Adding to concerns, DIY scientists are increasingly taking biotech tools outside of the lab. For now, many of the benefits of biotechnology are concrete while many of the risks remain hypothetical, but it is better to be proactive and cognizant of the risks than to wait for something to go wrong first and then attempt to address the damage.

How does biotechnology help us?

Satellite images make clear the massive changes that mankind has made to the surface of the Earth: cleared forests, massive dams and reservoirs, millions of miles of roads. If we could take satellite-type images of the microscopic world, the impact of biotechnology would be no less obvious. The majority of the food we eat comes from engineered plants, which are modified – either via modern technology or by more traditional artificial selection – to grow without pesticides, to require fewer nutrients, or to withstand the rapidly changing climate. Manufacturers have substituted petroleum-based ingredients with biomaterials in many consumer goods, such as plastics, cosmetics, and fuels. Your laundry detergent? It almost certainly contains biotechnology. So do nearly all of your cotton clothes.

But perhaps the biggest application of biotechnology is in human health. Biotechnology is present in our lives before we’re even born, from fertility assistance to prenatal screening to the home pregnancy test. It follows us through childhood, with immunizations and antibiotics, both of which have drastically improved life expectancy. Biotechnology is behind blockbuster drugs for treating cancer and heart disease, and it’s being deployed in cutting-edge research to cure Alzheimer’s and reverse aging. The scientists behind the technology called CRISPR/Cas9 believe it may be the key to safely editing DNA for curing genetic disease. And one company is betting that organ transplant waiting lists can be eliminated by growing human organs in chimeric pigs.

What are the risks of biotechnology?

Along with excitement, the rapid progress of research has also raised questions about the consequences of biotechnology advances. Biotechnology may carry more risk than other scientific fields: microbes are tiny and difficult to detect, but the dangers are potentially vast. Further, engineered cells could divide on their own and spread in the wild, with the possibility of far-reaching consequences. Biotechnology could most likely prove harmful either through the unintended consequences of benevolent research or from the purposeful manipulation of biology to cause harm. One could also imagine messy controversies, in which one group engages in an application for biotechnology that others consider dangerous or unethical.

 

1. Unintended Consequences

Sugarcane farmers in Australia in the 1930s had a problem: cane beetles were destroying their crop. So they reasoned that importing a natural predator, the cane toad, could be a natural form of pest control. What could go wrong? Well, the toads became a major nuisance themselves, spreading across the continent and eating the local fauna (except for, ironically, the cane beetle).

While modern biotechnology solutions to society’s problems seem much more sophisticated than airdropping amphibians into Australia, this story should serve as a cautionary tale. To avoid blundering into disaster, the errors of the past should be acknowledged.

  • In 2014, the Centers for Disease Control and Prevention came under scrutiny after repeated errors led to scientists being exposed to Ebola, anthrax, and the flu. And a professor in the Netherlands came under fire in 2011 when his lab engineered a deadly, airborne version of the flu virus, mentioned above, and attempted to publish the details. These and other labs study viruses or toxins to better understand the threats they pose and to try to find cures, but their work could set off a public health emergency if a deadly material is released or mishandled as a result of human error.
  • Mosquitoes are carriers of disease – including harmful and even deadly pathogens like Zika, malaria, and dengue – and they seem to play no productive role in the ecosystem. But civilians and lawmakers are raising concerns about a mosquito control strategy that would genetically alter and destroy disease-carrying species of mosquitoes. Known as a ‘gene drive,’ the technology is designed to spread a gene quickly through a population by sexual reproduction; a brief simulation sketch after this list illustrates why such a drive spreads so much faster than an ordinary gene. For example, to control mosquitoes, scientists could release males into the wild that have been modified to produce only sterile offspring. Scientists who work on gene drives have performed risk assessments and built safeguards into their designs to make the trials as safe as possible. But, since a man-made gene drive has never been tested in the wild, it’s impossible to know for certain the impact that a mosquito extinction could have on the environment. Additionally, there is a small possibility that the gene drive could mutate once released in the wild, spreading genes that researchers never planned for. Even armed with strategies to reverse a rogue gene drive, scientists may find gene drives difficult to control once they spread outside the lab.
  • When scientists went digging for clues in the DNA of people who are apparently immune to HIV, they found that the resistant individuals had mutated a protein that serves as the landing pad for HIV on the surface of blood cells. Because these patients were apparently healthy in the absence of the protein, researchers reasoned that deleting its gene in the cells of infected or at-risk patients could be a permanent cure for HIV and AIDS. With the arrival of the new tool, a set of ‘DNA scissors’ called CRISPR/Cas9, that holds the promise of simple gene surgery for HIV, cancer, and many other genetic diseases, the scientific world started to imagine nearly infinite possibilities. But trials of CRISPR/Cas9 in human cells have produced troubling results, with mutations showing up in parts of the genome that shouldn’t have been targeted for DNA changes. While a bad haircut might be embarrassing, the wrong cut by CRISPR/Cas9 could be much more serious, making you sicker instead of healthier. And if those edits were made to embryos, instead of fully formed adult cells, then the mutations could permanently enter the gene pool, meaning they will be passed on to all future generations. So far, prominent scientists and prestigious journals are calling for a moratorium on gene editing in viable embryos until the risks, ethics, and social implications are better understood.
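To make the gene drive mechanism above more concrete, here is a minimal, illustrative sketch of why a homing drive spreads so much faster than an ordinary gene. It is a toy deterministic model under simplifying assumptions (random mating, no fitness costs, no resistance alleles), not a description of any real release program; the starting frequency and homing efficiency below are hypothetical numbers chosen only to show the super-Mendelian dynamics.

```python
# Toy model: spread of a homing gene drive allele vs. an ordinary Mendelian allele.
# Assumptions (illustrative only): random mating, no fitness cost, no resistance.

def next_generation(p: float, homing: float) -> float:
    """Drive-allele frequency after one generation of random mating.

    p      -- current frequency of the drive allele D
    homing -- probability that, in a D/d heterozygote, the drive converts the
              wild-type allele, so the heterozygote transmits D with probability
              (1 + homing) / 2 instead of the Mendelian 1/2.
    """
    homozygotes = p * p              # D/D parents always transmit D
    heterozygotes = 2 * p * (1 - p)  # D/d parents
    return homozygotes + heterozygotes * (1 + homing) / 2


def simulate(p0: float, homing: float, generations: int) -> list[float]:
    """Track allele frequency over a number of generations."""
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_generation(freqs[-1], homing))
    return freqs


if __name__ == "__main__":
    start = 0.01  # hypothetical: drive carriers make up 1% of the population at release
    mendelian = simulate(start, homing=0.0, generations=20)   # ordinary allele
    gene_drive = simulate(start, homing=0.95, generations=20) # efficient homing drive
    for gen, (m, d) in enumerate(zip(mendelian, gene_drive)):
        print(f"gen {gen:2d}  mendelian {m:.3f}  gene drive {d:.3f}")
```

Run as written, the ordinary allele stays near 1% because a neutral Mendelian gene has no tendency to spread, while the drive allele climbs toward fixation within roughly ten generations. That super-Mendelian spread is exactly what makes gene drives attractive for mosquito control and, equally, what makes them hard to recall once released.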

 

2. Weaponizing biology

The world recently witnessed the devastating effects of disease outbreaks, in the form of Ebola and the Zika virus – but those were natural in origin. The malicious use of biotechnology could mean that future outbreaks are started on purpose. Whether the perpetrator is a state actor or a terrorist group, the development and release of a bioweapon, such as a poison or infectious disease, would be hard to detect and even harder to stop. Unlike a bullet or a bomb, deadly cells could continue to spread long after being deployed. The US government takes this threat very seriously, and the threat of bioweapons to the environment should not be taken lightly either.

Both developed nations and some impoverished ones have the resources and know-how to produce bioweapons. For example, North Korea is rumored to have assembled an arsenal containing "anthrax, botulism, hemorrhagic fever, plague, smallpox, typhoid, and yellow fever," ready in case of attack. It's not unreasonable to assume that terrorists or other groups are trying to get their hands on bioweapons as well. Indeed, numerous instances of chemical or biological weapon use have been recorded, including the anthrax attacks shortly after 9/11, which left 5 dead after letters laced with anthrax spores were sent through the mail. And new gene editing technologies are increasing the odds that a hypothetical bioweapon targeted at a certain ethnicity, or even a single individual like a world leader, could one day become a reality.

While attacks using traditional weapons may require far less expertise, the dangers of bioweapons should not be ignored. It might seem impossible to make bioweapons without plenty of expensive materials and scientific knowledge, but recent advances in biotechnology may make it easier for bioweapons to be produced outside of a specialized research lab. The cost to chemically manufacture strands of DNA is falling rapidly, meaning it may one day be affordable to 'print' deadly proteins or cells at home. And the openness of science publishing, which has been crucial to rapid research advances, also means that anyone can freely Google the chemical details of deadly neurotoxins. In fact, the most controversial aspect of the supercharged influenza case was not that the experiments had been carried out, but that the researchers wanted to openly share the details.

On a more hopeful note, scientific advances may allow researchers to find solutions to biotechnology threats as quickly as they arise. Recombinant DNA and biotechnology tools have enabled the rapid invention of new vaccines which could protect against new outbreaks, natural or man-made. For example, less than 5 months after the World Health Organization declared Zika virus a public health emergency, researchers got approval to enroll patients in trials for a DNA vaccine.

The ethics of biotechnology

Biotechnology doesn’t have to be deadly, or even dangerous, to fundamentally change our lives. While humans have been altering genes of plants and animals for millennia — first through selective breeding and more recently with molecular tools and chimeras — we are only just beginning to make changes to our own genomes (amid great controversy).

Cutting-edge tools like CRISPR/Cas9 and DNA synthesis raise important ethical questions that are increasingly urgent to answer. Some question whether altering human genes means “playing God,” and if so, whether we should do that at all. For instance, if gene therapy in humans is acceptable to cure disease, where do you draw the line? Among disease-associated gene mutations, some come with virtual certainty of premature death, while others put you at higher risk for something like Alzheimer’s, but don’t guarantee you’ll get the disease. Many others lie somewhere in between. How do we determine a hard limit for which gene surgery to undertake, and under what circumstances, especially given that the surgery itself comes with the risk of causing genetic damage? Scholars and policymakers have wrestled with these questions for many years, and there is some guidance in documents such as the United Nations’ Universal Declaration on the Human Genome and Human Rights.

And what about ways that biotechnology may contribute to inequality in society? Early work in gene surgery will no doubt be expensive – for example, Novartis plans to charge $475,000 for a one-time treatment of their recently approved cancer therapy, a drug which, in trials, has rescued patients facing certain death. Will today’s income inequality, combined with biotechnology tools and talk of ‘designer babies’, lead to tomorrow’s permanent underclass of people who couldn’t afford genetic enhancement?

Advances in biotechnology are escalating the debate, from questions about altering life to creating it from scratch. For example, a recently announced initiative called GP-Write has the goal of synthesizing an entire human genome from chemical building blocks within the next 10 years. The project organizers have many applications in mind, from bringing back wooly mammoths to growing human organs in pigs. But, as critics pointed out, the technology could make it possible to produce children with no biological parents, or to recreate the genome of another human, like making cellular replicas of Einstein. “To create a human genome from scratch would be an enormous moral gesture,” write two bioethicists regarding the GP-Write project. In response, the organizers of GP-Write insist that they welcome a vigorous ethical debate, and have no intention of turning synthetic cells into living humans. But this doesn’t guarantee that rapidly advancing technology won’t be applied in the future in ways we can’t yet predict.

What are the tools of biotechnology?

 

1. DNA Sequencing

It’s nearly impossible to imagine modern biotechnology without DNA sequencing. Since virtually all of biology centers around the instructions contained in DNA, biotechnologists who hope to modify the properties of cells, plants, and animals must speak the same molecular language. DNA is made up of four building blocks, or bases, and DNA sequencing is the process of determining the order of those bases in a strand of DNA. Since the publication of the complete human genome in 2003, the cost of DNA sequencing has dropped dramatically, making it a simple and widespread research tool.

Benefits: Sonia Vallabh had just graduated from law school when her mother died from a rare and fatal genetic disease. DNA sequencing showed that Sonia carried the fatal mutation as well. But far from resigning herself to her fate, Sonia and her husband Eric decided to fight back, and today they are graduate students at Harvard, racing to find a cure. DNA sequencing has also allowed Sonia to become pregnant, since doctors could screen her eggs for ones that don't carry the mutation. While most people's genetic blueprints don't contain deadly mysteries, our health is increasingly supported by the medical breakthroughs that DNA sequencing has enabled. For example, researchers were able to track the 2014 Ebola epidemic in real time using DNA sequencing. And pharmaceutical companies are designing new anti-cancer drugs targeted to people with a specific DNA mutation. Entire new fields, such as personalized medicine, owe their existence to DNA sequencing technology.

Risks: Simply reading DNA is not harmful, but it is foundational for all of modern biotechnology. As the saying goes, knowledge is power, and the misuse of DNA information could have dire consequences. While DNA sequencing alone cannot make bioweapons, it's hard to imagine waging biological warfare without being able to analyze the genes of infectious or deadly cells or viruses. And although a person's DNA has traditionally been considered personal and private – containing information about their ancestors, family, and medical conditions – governments and corporations increasingly include a person's DNA signature in the information they collect. Some warn that such databases could be used to track people or discriminate on the basis of private medical records – a dystopian vision of the future familiar to anyone who's seen the movie GATTACA. Even supplying patients with their own genetic information has come under scrutiny when it's done without proper context, as evidenced by the dispute between the FDA and the direct-to-consumer genetic testing service 23andMe. Finally, DNA testing opens the door to sticky ethical questions, such as whether to carry to term a pregnancy after the fetus is found to have a genetic mutation.

 

2. Recombinant DNA

The modern field of biotechnology was born when scientists first manipulated – or 'recombined' – DNA in a test tube, and today almost all aspects of society are impacted by so-called 'rDNA'. Recombinant DNA tools allow researchers to choose a protein they think may be important for health or industry, and then isolate the gene that encodes it from its original context. Once isolated, the gene can be studied in a species that's simple to manipulate, such as the bacterium E. coli. This lets researchers produce the protein in vast quantities, engineer it for improved properties, and/or transplant it into a new species. Modern biomedical research, many best-selling drugs, most of the clothes you wear, and many of the foods you eat rely on rDNA biotechnology.

Benefits: Simply put, our world has been reshaped by rDNA. Modern medical advances are unimaginable without the ability to study cells and proteins with rDNA and the tools used to make it, such as PCR, which helps researchers ‘copy and paste’ DNA in a test tube. An increasing number of vaccines and drugs are the direct products of rDNA. For example, nearly all insulin used in treating diabetes today is produced recombinantly. Additionally, cheese lovers may be interested to know that rDNA provides ingredients for a majority of hard cheeses produced in the West. Many important crops have been genetically modified to produce higher yields, withstand environmental stress, or grow without pesticides. Facing the unprecedented threats of climate change, many researchers believe rDNA and GMOs will be crucial in humanity’s efforts to adapt to rapid environmental changes.

Risks: The inventors of rDNA themselves warned the public and their colleagues about the dangers of this technology. For example, they feared that rDNA derived from drug-resistant bacteria could escape from the lab, threatening the public with infectious superbugs. And recombinant viruses, useful for introducing genes into cells in a petri dish, might instead infect the human researchers. Some of the initial fears were allayed when scientists realized that genetic modification is much trickier than initially thought, and once the realistic threats were identified – like recombinant viruses or the handling of deadly toxins – safety and regulatory measures were put in place. Still, there are concerns that rogue scientists or bioterrorists could produce weapons with rDNA. For instance, it took researchers just three years to synthesize poliovirus from scratch in the early 2000s, and today the same could be accomplished in a matter of weeks. Recent flu epidemics have killed over 200,000 people, and the malicious release of an engineered virus could be much deadlier – especially if preventative measures, such as vaccine stockpiles, are not in place.

3. DNA Synthesis

Synthesizing DNA has the advantage of offering total researcher control over the final product. With many of the mysteries of DNA still unsolved, some scientists believe the only way to truly understand the genome is to build one from its basic building blocks. Building DNA from scratch has traditionally been too expensive and inefficient to be very practical, but in 2010, researchers did just that, completely synthesizing the genome of a bacterium and injecting it into a living cell. Since then, scientists have synthesized bigger and bigger genomes, and recently the GP-Write project launched with the intention of tackling perhaps the ultimate goal: chemically fabricating an entire human genome. Meeting this goal – and within a 10-year timeline – will require new technology and an explosion in manufacturing capacity. But the project's success could signal the impact of synthetic DNA on the future of biotechnology.

Benefits: Plummeting costs and technical advances have made the goal of total genome synthesis seem much more immediate. Scientists hope these advances, and the insights they enable, will ultimately make it easier to make custom cells to serve as medicines or even bomb-sniffing plants. Fantastical applications of DNA synthesis include human cells that are immune to all viruses or DNA-based data storage. Prof. George Church of Harvard has proposed using DNA synthesis technology to ‘de-extinct’ the passenger pigeon, wooly mammoth, or even Neanderthals. One company hopes to edit pig cells using DNA synthesis technology so that their organs can be transplanted into humans. And DNA is an efficient option for storing data, as researchers recently demonstrated when they stored a movie file in the genome of a cell.

Risks: DNA synthesis has sparked significant controversy and ethical concerns. For example, when the GP-Write project was announced, some criticized the organizers for the troubling possibilities that synthesizing genomes could open up, likening it to playing God. Would it be ethical, for instance, to synthesize Einstein's genome and transplant it into cells? The technology to do so does not yet exist, and GP-Write leaders have backed away from making human genomes in living cells, but some are still demanding that the ethical debate happen well in advance of the technology's arrival. Additionally, cheap DNA synthesis could one day democratize the ability to make bioweapons or other nuisances, as one virologist demonstrated when he made the horsepox virus (a relative of the virus that causes smallpox) with DNA he ordered over the Internet. (It should be noted, however, that making the horsepox virus also required specialized equipment and deep technical expertise.)

 

4. Genome Editing

Many diseases have a basis in our DNA, and until recently, doctors had very few tools to address these root causes. That appears to have changed with the recent discovery of a DNA editing system called CRISPR/Cas9. (A note on terminology – CRISPR is a bacterial immune system, while Cas9 is one protein component of that system, but both terms are often used to refer to the protein.) It operates in cells like a pair of DNA scissors, cutting the genome at precise locations where scientists can insert their own sequence. While the capability of cutting DNA wasn't unprecedented, Cas9 outperforms earlier tools in effectiveness and ease of use. Even though it's a biotech newcomer, much of the scientific community has already caught 'CRISPR fever,' and biotech companies are racing to turn genome editing tools into the next blockbuster pharmaceutical.
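To make the 'DNA scissors' picture a bit more concrete, the sketch below scans a DNA string for candidate target sites of the commonly used Cas9 from Streptococcus pyogenes, which recognizes a 20-base sequence followed by an 'NGG' motif (the PAM) and cuts about three bases away from it. Those targeting rules are well established; the example sequence and function names are invented, and a real guide-design tool would also check the reverse strand and score off-target matches, which this sketch ignores.

```python
import re

# Candidate SpCas9 target: a 20-base protospacer immediately followed by an
# 'NGG' PAM. The lookahead lets overlapping candidates be found.
PATTERN = re.compile(r"(?=([ACGT]{20})([ACGT]GG))")

def find_cas9_sites(dna: str):
    """Return (cut_position, protospacer, pam) tuples for the forward strand."""
    dna = dna.upper()
    sites = []
    for match in PATTERN.finditer(dna):
        protospacer, pam = match.group(1), match.group(2)
        cut_position = match.start() + 17  # the break falls about 3 bases from the PAM
        sites.append((cut_position, protospacer, pam))
    return sites

if __name__ == "__main__":
    toy_sequence = "ATGCTGACCTTGAAGGTCCATGGAATCCAGGCTTACGGTACGTTAGG"
    for cut, spacer, pam in find_cas9_sites(toy_sequence):
        print(f"cut near index {cut}: protospacer={spacer} PAM={pam}")
```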

Benefits: Genome editing may be the key to solving currently intractable genetic diseases such as cystic fibrosis, which is caused by a single genetic defect. If Cas9 can somehow be inserted into a patient’s cells, it could fix the mutations that cause such diseases, offering a permanent cure. Even diseases caused by many mutations, like cancer, or caused by a virus, like HIV/AIDS, could be treated using genome editing. Just recently, an FDA panel recommended a gene therapy for cancer, which showed dramatic responses for patients who had exhausted every other treatment. Genome editing tools are also used to make lab models of diseases, cells that store memories, and tools that can detect epidemic viruses like Zika or Ebola. And as described above, if a gene drive, which uses Cas9, is deployed effectively, we could eliminate diseases such as malaria, which kills nearly half a million people each year.

Risks: Cas9 has generated nearly as much controversy as it has excitement, because genome editing carries both safety issues and ethical risks. Cutting and repairing a cell’s DNA is not risk-free, and errors in the process could make a disease worse, not better. Genome editing in reproductive cells, such as sperm or eggs, could result in heritable genetic changes, meaning dangerous mutations could be passed down to future generations. And some warn of unethical uses of genome editing, fearing a rise of ‘designer babies’ if parents are allowed to choose their children’s traits, even though there are currently no straightforward links between one’s genes and their intelligence, appearance, etc. Similarly, a gene drive, despite possibly minimizing the spread of certain diseases, has the potential to create great harm since it is intended to kill or modify an entire species. A successful gene drive could have unintended ecological impacts, be used with malicious intent, or mutate in unexpected ways. Finally, while the capability doesn’t currently exist, it’s not out of the realm of possibility that a rogue agent could develop genetically selective bioweapons to target individuals or populations with certain genetic traits.

 


 

Podcast: Can We Avoid the Worst of Climate Change? with Alexander Verbeek and John Moorhead

“There are basically two choices. We’re going to massively change everything we are doing on this planet, the way we work together, the actions we take, the way we run our economy, and the way we behave towards each other and towards the planet and towards everything that lives on this planet. Or we sit back and relax and we just let the whole thing crash. The choice is so easy to make, even if you don’t care at all about nature or the lives of other people. Even if you just look at your own interests and look purely through an economical angle, it is just a good return on investment to take good care of this planet.” – Alexander Verbeek

On this month’s podcast, Ariel spoke with Alexander Verbeek and John Moorhead about what we can do to avoid the worst of climate change. Alexander is a Dutch diplomat and former strategic policy advisor at the Netherlands Ministry of Foreign Affairs. He created the Planetary Security Initiative where representatives from 75 countries meet annually on the climate change-security relationship. John is President of Drawdown Switzerland, an act tank to support Project Drawdown and other science-based climate solutions that reverse global warming. He is a blogger at Thomson Reuters, The Economist, and sciencebasedsolutions.com, and he advises and informs on climate solutions that are economy, society, and environment positive.

Topics discussed in this episode include:

  • Why the difference between 1.5 and 2 degrees C of global warming is so important, and why we can’t exceed 2 degrees C of warming
  • Why the economy needs to fundamentally change to save the planet
  • The inequality of climate change
  • Climate change’s relation to international security problems
  • How we can avoid the most dangerous impacts of climate change: runaway climate change and a “Hothouse Earth”
  • Drawdown’s 80 existing technologies and practices to solve climate change
  • “Trickle up” climate solutions — why individual action is just as important as national and international action
  • What all listeners can start doing today to address climate change

Publications and initiatives discussed in this episode include:

You can listen to this podcast above, or read the full transcript below. And feel free to check out our previous podcast episodes on SoundCloud, iTunes, Google Play and Stitcher.

 

Ariel: Hi everyone, Ariel Conn here with the Future of Life Institute. Now, this month’s podcast is going live on Halloween, so I thought what better way to terrify our listeners than with this month’s IPCC report. If you’ve been keeping up with the news this month, you’re well aware that the report made very dire predictions about what a future warmer world will look like if we don’t keep global temperatures from rising more than 1.5 degrees Celsius. Then of course there were all of the scientists’ warnings that came out after the report about how the report underestimated just how bad things could get.

It was certainly enough to leave me awake at night in a cold sweat. Yet the report wasn’t completely without hope. The authors seem to still think that we can take action in time to keep global warming to 1.5 degrees Celsius. So to consider this report, the current state of our understanding of climate change, and how we can ensure global warming is kept to a minimum, I’m excited to have Alexander Verbeek and John Moorhead join me today.

Alexander is a Dutch environmentalist, diplomat, and former strategic policy advisor at the Netherlands Ministry of Foreign Affairs. Over the past 28 years, he has worked on international security, humanitarian, and geopolitical risk issues, and the linkage to the Earth’s accelerating environmental crisis. He created the Planetary Security Initiative held at The Hague’s Peace Palace where representatives from 75 countries meet annually on the climate change-security relationship. He spends most of his time speaking and advising on planetary change to academia, global NGOs, private firms, and international organizations.

John is President of Drawdown Switzerland in addition to being a blogger at Thomson Reuters, The Economist, and sciencebasedsolutions.com. He advises and informs on climate solutions that are economy, society, and environment positive. He effects change by engaging on the solutions to global warming with youth, business, policy makers, investors, civil society, government leaders, et cetera. Drawdown Switzerland is an act tank to support Project Drawdown and other science-based climate solutions that reverse global warming, in Switzerland and internationally, by investment at scale in Drawdown solutions. So John and Alexander, thank you both so much for joining me today.

Alexander: It’s a pleasure.

John: Hi Ariel.

Ariel: All right, so before we get too far into any details, I want to just look first at the overall message of the IPCC report. That was essentially: two degrees warming is a lot worse than 1.5 degrees warming. So, I guess my very first question is why did the IPCC look at that distinction as opposed to anything else?

Alexander: Well, I think it's a direct follow up from the negotiations in the Paris Agreement, where, at a very late stage, after all the talk about two degrees, the text included a reference to aiming for 1.5 degrees. At that moment, it invited the IPCC to produce a report by 2018 about what the difference actually is between 1.5 and 2 degrees. Another major conclusion is that it is still possible to stay below 1.5 degrees, but then we have to really urgently do a lot, and that is basically to cut our carbon pollution by 45% in the next 12 years. So that means we have no day to lose, and governments, basically everybody, business and people, everybody should get in action. The house is on fire. We need to do something right now.

John: In addition to that, we’re seeing a whole body of scientific study that’s showing just how difficult it would be if we were to get to 2 degrees and what the differences are. That was also very important. Just for your US listeners, I just wanted to clarify because we’re going to be talking in degrees centigrade, so for the sake of argument, if you just multiply by two, every time you hear one, it’s two degrees Fahrenheit. I just wanted to add that.

Ariel: Okay great, thank you. So before we talk about how to address the problem, I want to get more into what the problem actually is. And so first, what is the difference between 1.5 degrees Celsius and 2 degrees Celsius in terms of what impact that will have on the planet?

John: So far we’ve already seen a one degree C increase. The impacts that we’re seeing, they were all predicted by the science, but in many cases we’ve really been quite shocked at just how quickly global warming is happening and the impacts it’s having. I live here in Switzerland, and we’re just now actually experiencing another drought, but in the summer we had the worst drought in eastern Switzerland since 1847. Of course we’ve seen the terrible hurricanes hitting the United States this year and last. That’s one degree. So 1.5 degrees increase, I like to use the analogy of our body temperature: If you’re increasing your body temperature by two degrees Fahrenheit, that’s already quite bad, but if you then increase it by three degrees Fahrenheit, or four, or five, or six, then you’re really ill. That’s really what happens with global warming. It’s not a straight line.

For instance, the difference between 1.5 degrees and two degrees is that heat waves are forecast to increase by over 40%. There was another study that showed that fresh water supply would decrease by 9% in the Mediterranean for 1.5 degrees, but it would decrease by 17% if we got to two degrees. So that’s practically doubling the impact for a change of 1.5 degrees. I can go on. If you look at wheat production, the difference between two and 1.5 degrees is a 70% loss in yield. Sea level rise would be 50 centimeters versus 40 centimeters, and 10 centimeters doesn’t sound like that much, but it’s a huge amount in terms of increase.

Alexander: Just to illustrate that a bit: if you have just a 10 centimeter increase, that means an extra 10 million people will be on the move. Or to put it another way, I remember when Hurricane Sandy hit New York and the subway flooded. At that moment we had, and that's where we now are more or less, some 20 centimeters of sea level rise since the industrial revolution. If we didn't have those 20 centimeters, the subways would not have flooded. So it sounds like nothing, but it has a lot of impacts. I think another one that I saw that was really striking is the impact on nature, the impact on insects or on coral reefs. If you have two degrees, there's hardly any coral reef left in the world, whereas if it were 1.5 degrees, we would still lose 70-90%, but there could still be some coral reefs left.

John: That’s a great example I would say, because currently it’s 50% of coral reefs at one degree increase have already died off. So at 1.5, we could reach 90%, and two degrees we will have practically wiped off all coral reefs.

Alexander: And the humanitarian aspects are massive. I mean, John just mentioned water. I think one of the things we will see in the next decade or two is a lot of water-related problems. The number of people who will not have access to water is increasing rapidly. It may double in the next decade. So any indication in the report of how many more problems we will see with water if we have that half degree extra is a very good warning. Look at the impact of not having enough water on people's quality of life, on people going on the move, on increased urbanization, on more tensions in the cities because there they also have problems getting enough water – and of course water is related to energy and especially food production. So the humanitarian impact of just that half degree extra is massive.

Then one last thing here: we're talking about a global average. In some areas, if globally it gets, let's say, two degrees warmer, it will go much faster – in landlocked countries, for instance – and in the Arctic it goes roughly twice as fast, with enormous impacts and potential positive feedback loops that we might end up with.

Ariel: That was something interesting for me to read. I’ve heard about how the global average will increase 1.5 to two degrees, but I hadn’t heard until I read this particular report that that can mean up to 3.5 degrees Celsius in certain places, that it’s not going to be equally distributed, that some places will get significantly hotter. Have models been able to predict where that’s likely to happen?

John: Yeah, and not only that, it's already happening. That's also one of the problems we face when we describe global warming in terms of one number, an average number: it doesn't portray the big differences that we're seeing in terms of global warming. For instance, in the case of Switzerland, we're already at a two degree centigrade increase, and that's had huge implications for Switzerland already. We're a landlocked country. We have beautiful mountains, as you know, and beautiful lakes as well, but we're currently seeing things that we hadn't seen before: some of our lakes are starting to dry out in this current drought period. Lake levels have dropped very significantly – not in the major ones that are fed by glaciers, but the glaciers themselves: out of 80 glaciers that are tracked in Switzerland, 79 are retreating. They're losing mass.

That’s having impacts, and in terms of extreme weather, just this last summer we saw these incredible – what Al Gore calls water bombs – that happened in Lausanne and Eschenz, two of our cities, where we saw centimeters, months worth of rain, fall in the space of just a few minutes. This is caused all sorts of damages as well.

Just a last point about temperature differences: in northern Europe this last summer, for instance, we saw temperatures four, five degrees warmer, which caused so much drying out that we saw forest fires that we hadn't seen before in places like Sweden or Finland. We also saw in February of this year what the scientists call a temperature anomaly of 20 degrees, which meant that for a few days it was warmer at the North Pole than it was in Poland. Averages help us understand the overall trends, but they also hide differences that are important to consider as well.

Alexander: Maybe the term global warming is, let's say for the general public, not the right word, because it sounds a bit like "a little bit warmer," and if it's now two degrees warmer than yesterday, I don't care so much. Maybe "climate weirding" or "climate chaos" are better, because we will just get more extremes. Let's say you follow, for instance, how the jet stream is moving: it used to move rather quickly around the planet at the height where the jets like to fly, at about 10 kilometers. Now, because there's less temperature difference between the equator and the poles, it's getting slower. It's getting a bit lazy.

That means two things. On the one hand, once you have a certain weather pattern, it sticks around longer. On the other hand, with this lazy jet stream – compare it to a river that enters the floodplains and starts to meander – the waves are getting bigger. It used to be that the jet stream brought cold air from Iceland to the Netherlands, where I'm from; since it is now wavier, it brings cold weather all the way from Greenland, and the same with warm weather. It comes from further down south and it sticks longer in that pattern, so you get longer droughts, you get longer periods of rain, it all gets more extreme. So a country like the Netherlands, which is a delta where we always deal with too much water, now experiences drought like many other countries in the world, which is something we're not used to. We have to ask foreign experts how to deal with drought, because we always tried to pump the water out.

John: Yeah I think the French, as often is the case, have the best term for it. It’s called dérèglement climatique which is this idea of climate disruption.

Ariel: I’d like to come back to some of the humanitarian impacts because one of the things that I see a lot is this idea that it’s the richer, mostly western but not completely western countries that are causing most of the problems, and yet it’s the poorer countries that are going to suffer the most. I was wondering if you guys could touch on that a little bit?

Alexander: Well, I think everything related to climate change comes down to the fact that it is unfair. It is created by countries that are generally less impacted by it now. We started, let's say, in western Europe with the industrial revolution, followed by the US, which took over – historically the US has produced the most. Then you have different groups of countries. Take a country in the Sahel like Burkina Faso, for instance: they contributed practically zero to the whole problem, but the impact falls much more on their side. Then there's a group of countries in between – say, a country like China that for a long time did not contribute much to the problem and is now rapidly catching up. Then you get this difficult "tragedy of the commons" behavior where everybody points at somebody else for their part, what they have done. Either because they did it in the past or because they do it now, everybody can use the statistics to their advantage – apart from these really, really poor countries that are getting the worst of it.

I mean, a country like Tuvalu is just disappearing. That's one of those low-lying island states in the Pacific. They contributed absolutely zero and their country is drowning. They can point at everybody else and nobody will point at them. So this is an absolutely globalized problem that you can only solve by respecting each other, by cooperating, and by understanding that if you help other countries, it's not only your moral obligation but it's also in your own interest to help the others to solve this.

John: Yeah. Your listeners would most likely also be aware of the sustainable development goals, which are the objectives the UN set for 2030. There are 17 of them. They include things like no poverty, zero hunger, health, education, gender equality, et cetera. If you look at who is being impacted by a 2 degree and a 1.5 degree world, then you can see that it’s particularly in the developing and the least developed countries that the impact is felt the most, and that these SDGs are much more difficult if not impossible to reach in a 2 degree world. Which again is why it’s so important for us to stay within 1.5 degrees.

Ariel: And so looking at this from more of a geopolitical perspective, in terms of trying to govern and address… I guess this is going to be a couple questions. In terms of trying to prevent climate change from getting too bad, what do countries broadly need to be doing? I want to get into specifics about that question later, but broadly for now what do they need to be doing? And then, how do we deal with a lot of the humanitarian impacts at a government level if we don’t keep it below 1.5 degrees?

Alexander: A broad answer would be two things. Get rid of the carbon pollution that we're producing every day as soon as possible – so phase out fossil fuels. The other broad answer parallels what John was just talking about: we have the Agenda 2030, we have those 17 sustainable development goals. If we would all really follow that and live up to it, we'd actually get a much better world, because all of these things are integrated. If you just look at climate change in isolation, you are not going to get there. It's highly integrated with all those related problems.

John: Yeah, just in terms of what needs to be done broadly speaking, it's the adoption of renewable energy – scaling up massively the way we produce electricity using renewables. The IPCC suggests getting to something like 85% renewables, and there are others who say we can even get to 100% renewables by 2050. The other side is everything to do with land use and food; our diet has a huge impact as well. On the one hand, as Alexander said very well, we need to cut down on emissions caused by industry and fossil fuel use, but on the other hand what's really important is to preserve the natural ecosystems that protect us, and add forest, not deforest. We need to naturally scale up the capture of carbon dioxide. Those are the two pieces of the puzzle.

Alexander: Don’t want to go too much into details, but all together it ultimately asks for a different kind of economy. In our latest elections when I looked at the election programs, every party whether left or right or in the middle, they all promise something like, “when we’re in government, they’ll be something like 3% of economic growth every year.” But if you grow 3% every year, that means that every 20 years you double your economy. That means every 40 years you quadruple your economy, which might be nice if it will be only the services industry, but if you talk about production we can not let everything grow in the amount of resources that we use and the amount of waste we produce, when the Earth itself is not growing. So apart from moving to renewables, it is also changing the way how we use everything around and how we consume.

You don’t have to grow when you have it this good already, but it’s so much in the system that we have used the past 200, 250 years. Everything is based on growth. And as the Club of Romes said in the early ’70s, there’s limits to growth unless our planet would be something like a balloon that somebody would blow air in and it would be growing, then you would have different system. But as long as that is not the case and as long as there’s no other planets where we can fly to, that is the question where it’s very hard to find an answer. You can conclude that we can not grow, but how do we change that? That’s probably a completely different podcast debate, but it’s something I wanted to flag here because at the end of today you always end up with this question.

Ariel: This is actually, this is very much something that I wanted to come back to, especially in terms of what individuals can do, I think consuming less is one of the things that we can do to help. So I want to come back to that idea. I want to talk a little bit more though about some of the problems that we face if we don’t address the problem, and then come back to that. So, first going back to the geopolitics of addressing climate change if it happens, I think, again, we’ve talked about some of the problems that can arise as a result of climate change, but climate change is also thought of as a threat multiplier. So it could trigger other problems. I was hoping you could talk a little bit about some of the threats that governments need to be aware of if they don’t address climate change, both in terms of what climate change could directly cause and what it could indirectly cause.

Alexander: There’s so much we can cover here. Let’s start with security, it’s maybe the first one you think of. You’ll read in the paper about climate wars and water wars and those kind of popular words, which of course is too simplified. But, there is a clear correlation between changing climates and security.

We’ve seen it in many places. You see it in the place where we’re seeing more extreme weather now, so let’s say in the Sahel area, or in the Middle East, there’s a lot of examples where you just see that because of rising temperatures and because of less rainfall which is consistently going on now, it’s getting worse now. The combination is worse. You get more periods of drought, so people are going on the move. Where are they going to? Well normally, unlike many populists like to claim in some countries, they’re not immediately going to the western countries. They don’t go too far. People don’t want to move too far so they go to an area not too far away, which is a little bit less hit by this drought, but by the fact that they arrived there, they increased pressures on the little water and food and other resources that they have. That creates, of course, tensions with the people that are already there.

So think, for instance, about the nomadic herdsmen and the more agricultural farmers, and the kind of tension between them. They all need a little bit of water, so you see a lot of examples. There's this well known graph where you see the world's food prices over the past 10 years. There were two big spikes where suddenly the food prices, as well as the energy prices, rapidly went up. The most well known is in late 2010. If you then plot on that graph the revolutions and uprisings and unrest in the world, you see that as soon as the world food price index gets above, let's say, 200, there is so much more unrest. The 2010 spike was soon followed by the Arab Spring – which is not an automatic connection. In some countries that had the same drought there was no unrest, so it's not a one-to-one connection.

So I think you used the right term: a threat multiplier. On top of all the other problems they have with bad governance and fragile economies and all kinds of other development issues – which you find back in those same SDGs that were mentioned – if you add the climate change problem, you will get a lot of unrest.

But let me add one last thing here. It's not just about security. There's also, for example, what happened when Bangkok was flooding and a factory that produced chips was flooded. Chip prices worldwide suddenly rose by something like 10%, and there was a factory in the UK that produced cars that were perfectly ready to sell – the only thing they were missing was this few-centimeters-big electronic chip that needed to be in the car. So they had to close the factory for something like six weeks because of flooding in Bangkok. That just shows that in this interconnected worldwide economy, you're nowhere in the world safe from the impacts of climate change.

Ariel: I’m not sure if it was the same flood, but I think Apple had a similar problem, didn’t they? Where they had a backlog of problems with hard drives or something because the manufacturer, I think in Thailand, I don’t remember, flooded.

But anyway, one more problem that I want to bring up, and that is: at the moment we’re talking about actually taking action. I mean even if we only see global temperatures rise to two degrees Celsius, that will be because we took action. But my understanding is, on our current path we will exceed two degrees Celsius. In fact, the US National Highway Traffic Safety Administration Report that came out recently basically says that a 4 degree increase is inevitable. So I want to talk about what the world looks like at that level, and then also what runaway climate change is and whether you think we’re on a path towards runaway climate change, or if that’s still an extreme that hopefully won’t happen.

John: There’s a very important discussion that’s going on around at what point we will reach that tipping point where because of positive feedback loops, it’s just going to get worse and worse and worse. There’s been some very interesting publications lately that were trying to understand at what level that would happen. It turns out that the assessment is that it’s probably around 2 degrees. At the moment, if you look at the Paris Agreement and what all the countries have committed to and you basically take all those commitments which, you were mentioning the actions that already have been started, and you basically play them out until 2030, we would be on a track that would take us to 3 degrees increase, ultimately.

Ariel: And to clarify, that’s still with us taking some level of action, right? I mean, when you talk about that, that’s still us having done something?

John: Yeah, if you add up all the countries' plans that they committed to and they fully implement them, it's not sufficient. We would get to 3 degrees. But that's just to say how much action is required; we really need to step up the effort dramatically. That's basically what the 1.5 degrees IPCC report tells us. If we were to get to 2 degrees – let's not talk about 3 degrees for the moment – what could happen is that we would reach this tipping point into what scientists are describing as a "Hothouse Earth." What that means is that you get so much ice melting. Now, the ice and snow serve an important protective function: because they're white, they reflect a lot of the heat back out. If all that melts and is replaced by much darker land mass or ocean, then that heat is going to be absorbed, not reflected. So that's one positive feedback loop that constantly makes it even warmer, and that melts more ice, et cetera.

Another one is the permafrost, which, as its name suggests, is frozen ground in the northern latitudes. The risk is that it starts to melt. It's not the permafrost itself that's the problem; it's all the methane it contains – a very powerful greenhouse gas – which would then get released. That leads to warmer temperatures, which melts even more of the permafrost, et cetera.

That’s the whole idea of runaway, then we completely lose control, all the natural cooling systems, the trees and so on start to die back as well, and so we get four, five, six … But as I mentioned earlier, 4 could be 7 in some parts of the world and it could be 2 or 3 in others. It would make large parts of the world basically uninhabitable if you take it to the extreme of where it could all go.

Ariel: Do we have ideas of how long that could take? Is that something that we think could happen in the next 100 years or is that something that would still take a couple hundred years?

John: Whenever we talk about the temperature increases, we’re looking at the end of the century, so that’s 2100, but that’s less than 100 years.

Ariel: Okay.

Alexander: The problem with looking at the end of the century is that it always comes back to "the end of the century." It sounds so far away, but it's just 82 years. I mean, if you flip back, you're in 1936. My father was a boy of 10 years old, and it's not that far away. My daughter might still live in 2100, but by that time she'll have children and maybe grandchildren who have to live through the next century. It's not that once we are at the year 2100 the problem suddenly stops. We're talking about an accelerating problem. If you stay on the business-as-usual scenario and you mitigate hardly anything, then it's 4 degrees at the end of the century, but the temperatures keep rising.

As we already said, 4 degrees at the end of the century is an average. In the worst case scenario, it might as well be 6. It could also be less. And in the Arctic it could be anywhere between, let's say, 6 or maybe even 11. It's typically the Arctic where you have this methane that John was just talking about, so we don't want to end up with some kind of Venus, you know. This is exactly the world we do not want. That is why it's so extremely important to take measures now, because anything you do now is a fantastic investment in the future.

If you look at how we treat other risks: Dick Cheney said a couple of years ago that if there's only a 1% chance that terrorists will get weapons of mass destruction, we should act as if they have them. Why don't we do that in this case? If there's only a 1% chance that we would get complete destruction of the planet as we know it, we have to take urgent action. So why do it for the one risk that – however bad terrorism is – hardly kills people if you look at the big numbers, while for a potential massive killer of millions of people we just say, "Yeah, well, you know, only a 50% chance that we get into this scenario or that scenario."

What would you do if you were sitting in a plane and at takeoff the pilot says, “Hi guys. Happy to be on board. This is how you buckle and unbuckle your belt. And oh by the way, we have 50% chance that we’re gonna make it today. Hooray, we’re going to take off.” Well you would get out of the plane. But you can’t get out of this planet. So we have to take action urgently, and I think the report that came out is excellent.

The problem is, if everybody is focusing too much on this report now, you get into this energetic mood like, "Hey. We can do it!" We only talk about corals, we only talk about this, because suddenly we're not talking about the three or four or five degree scenarios – which is good for a change, because it gives hope. I know that in talks like this I always try to give as much hope as I can and show the possibilities, but we shouldn't forget how serious the thing is that we're actually talking about. So now we go back to the positive side.

Ariel: Well I am all for switching to the positive side. I find myself getting increasingly cynical about our odds of success, so let’s try to fix that in whatever time we have left.

John: Can I just add briefly, Alex, because I think that's a great comment. It's something that I'm also confronted with sometimes by fellow climate change folks, after they've heard me talk about what the solutions are. They tell me, "Don't make it sound too easy either." I think it's a question of balance: we will talk about the solutions and hear about them, but do bear in mind just how much change is involved. I mean, it is really very significant change that we need to embark on to stay within 1.5 degrees, let alone beyond.

Alexander: There’s basically two choices. We’re going to massively change everything we are doing on this planet, the way we work together, the actions we take, the way we run our economy, and the way we behave towards each other and towards the planet and towards everything that lives on this planet. Or we sit back and relax and we just let the whole thing crash. The choice is so easy to make, even if you don’t care at all about nature or the lives of other people. Even if you just look at your own interests and look purely through an economical angle, it is just a good return on investment to take good care of this planet.

It is only because those who have so much political power are so closely connected to the big corporations that look for short-term profits – certainly not all of them, but the ones that are really influential, and I'm certainly thinking about the country of our host today. They have so much impact on the policies that are made, and their sole interest is just the next quarterly financial report that comes out. That is not in the interest of the people of this planet.

Ariel: So this is actually a good transition to a couple of questions that I have. I actually did start looking at the book Drawdown, which talks about, what is it, 80 solutions? Is that what they discuss?

John: Yeah, 80 existing solutions, technologies, or practices, and then there are 20 of what they call coming attractions, which would be in addition to that. But it's the 80 we're talking about, yeah.

Ariel: Okay, so I started reading that and I read the introduction and the first chapter and felt very, very hopeful. I started reading about some of the technologies and I still felt hopeful. Then as I continued reading it and began to fully appreciate just how many technologies have to be implemented, I started to feel less hopeful. And so, going back, before we talk too much about the specific technologies, I think as someone who’s in the US, one of the questions that I have is even if our federal government isn’t going to take action, is it still possible for those of us who do believe that climate change is an issue to take enough action that we can counter that?

John: That’s an excellent question and it’s a very apropos question as well. My take on this is I had the privilege of being at the Global Climate Action Summit in San Francisco. You’re living it, but I think it’s two worlds basically in the United States at the moment, at least two worlds. What really impressed me, however, was that you had people of all political persuasions, you had indigenous people, you had the head of the union, you had mayors, city leaders. You also had some country leaders as well who were there, particularly those who are gonna be most impacted by climate change. What really excited me was the number of commitments that were coming at us throughout the days of, one city that’s gonna go completely renewable and so on.

We had so many examples of those. And in particular, if you're talking about the US: California, which if it were its own country would be the fifth largest economy, I believe – they're committed to achieving 100% renewable energy by 2050. There was also the mayor of Houston, for instance, who explained how quickly he wanted to achieve 100% renewables as well. That's very exciting, and that movement I think is very important. It would of course be much, much better to have national leaders fully back this as well, but I think that there's a trickle-up aspect, and I don't know if this is the right time to talk about the exponential growth that can happen. Maybe when we talk about the specific solutions we can talk about just how quickly they can go, particularly when you have a popular movement around saving the climate.

A couple of weeks ago I was in Geneva. There was a protest there. Geneva is quite a conservative city actually – you've got some wonderful chocolate, as you know, but also a lot of banks and so on. At the march, there were, according to the organizers, 7,000 people. It was really impressive to see that in Geneva, which is not that big a city. The year before, at the same march, there were 500. So we're increasing the numbers by more than a factor of 10, and I think there are a lot of communities and citizens being affected who are saying, "I don't care what the federal government's doing. I'm gonna put a solar panel on my roof. I'm going to change my diet, because it's cheaper, it saves me money, and it also is much healthier to do that, and it gives much more resilience" – when a hurricane comes around, for instance.

Ariel: I think now is a good time to start talking about what some of the solutions are. I wanna come back to the idea of trickle up, because I’m still gonna ask you guys more questions about individual action as well, but first let’s talk about some of the things that we can be doing now. What are some of the technological developments that exist today that have the most promise that we should be investing more in and using more?

John: What I perhaps wanted to do is just take a little step back, because the IPCC does talk about some very unpleasant things that could happen to our planet, but they also talk about what the steps are to stay within 1.5 degrees. Then there are some other plans we can discuss that also achieve that. So what does the IPCC tell us? You mentioned it earlier. First of all, we need to cut carbon dioxide and other greenhouse gas emissions roughly in half every decade. That's something called the Carbon Law. It's very convenient, because you can define your objective and say, okay, every 10 years I need to cut emissions in half. That's number one.

Number two is that we need to move dramatically to renewables. There's no other way: because of the emissions they produce, fossil fuels will no longer be an option. We have to go renewable as quickly as possible. It can be done by 2050. There's a professor at Stanford called Mark Jacobson who, with an international team, has mapped out the way to get to 100% renewables for 139 countries. It's called The Solutions Project. Number three has to do with fossil fuels: what the IPCC says is that there should be practically no coal being used in 2050. That's where there are some differences.

Basically, as I mentioned earlier, on the one hand you have your emissions, and on the other hand you have the capture – the sequestration of carbon by soils and by vegetation. The two sit in a balance: one is putting CO2 into the air, and the other is taking it out, so obviously we need to favor the sequestration. It's an area-under-the-curve problem. You have a certain budget that's associated with a given temperature increase; if you emit more, you need to absorb more. There's just no two ways about it.

The IPCC is actually in that respect quite conservative, because they’re saying there still will be coal around. Whereas there are other plans such as Drawdown and the Exponential Climate Action Roadmap, as well as The Solutions Project which I just mentioned, which get us to 100% renewables by 2050, and so zero emissions for sake of argument.

The other difference with the IPCC, I would say, comes from this tremendous problem of all the carbon dioxide we need to take out of the atmosphere – which is where the name Drawdown comes from: the term means drawing the carbon dioxide back out of the atmosphere. There's this technology around, basically called energy crops: you grow crops for energy. That gives us a little bit of an issue, because it encourages politicians to think that there's a magic wand we'll be able to use in the future to all of a sudden remove the carbon dioxide. I'm not saying we may not very well have to get there; what I am saying is that we can get there with, for instance, Drawdown's 80 solutions.

Now, in terms of the promise, the thing I think is important is that the thinking has to evolve from the magic-bullet syndrome we all live with every day – we always want to find that one magic solution that'll solve everything – to thinking more holistically about the whole of the Earth's planetary system, how its parts interact, and how we can achieve solutions that way.
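For readers who want to see what the 'Carbon Law' halving rule described above implies in practice, here is a minimal sketch. The starting figure of roughly 40 gigatonnes of CO2 per year in 2020 is an assumption for illustration, not a number quoted in the podcast.

```python
def carbon_law_path(start_year: int, end_year: int, start_emissions: float) -> dict:
    """Emissions trajectory if global emissions are halved every decade.

    A back-of-the-envelope illustration of the 'Carbon Law' rule; the starting
    value is an assumption, not a figure from the podcast.
    """
    return {
        year: start_emissions * 0.5 ** ((year - start_year) / 10)
        for year in range(start_year, end_year + 1, 10)
    }

if __name__ == "__main__":
    # Hypothetical example: roughly 40 GtCO2 per year in 2020.
    for year, emissions in carbon_law_path(2020, 2050, 40.0).items():
        print(year, f"{emissions:.1f} GtCO2/yr")
```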

Alexander: Can I ask something, John? Can you confirm that Drawdown, with its 80 technologies, relies completely on proven technology, whereas in the recent 1.5 report I have the impression that for practically every solution they come up with, they rely on technologies that are still unproven, still on the drawing board or tested only at a very small scale? Is there a difference between those two approaches?

John: Not exactly. I think there's actually a lot of overlap. A lot of the same solutions that are in Drawdown are in all the climate plans, so we come back to the same set, which is actually very reassuring, because that's the way science works: it empirically tests and models all the different solutions. So what I always find very reassuring is that whenever I read different approaches, I look back at Drawdown and say, "Okay yes, that's in the 80 solutions." So I think there is actually a lot of overlap. A lot of the IPCC is Drawdown solutions, but the IPCC works a bit differently, because the scientists have to work with governments in coming up with proposals, so there is a process of negotiation over how far they can take this, which scientists such as the Project Drawdown scientists are unfettered by.

They just go out and they look for what's best. They don't care if it's politically sensitive or not; they will say what they need to say. But I think the big area of concern is this famous bio-energy with carbon capture and storage (BECCS), these energy crops that you grow and then you capture the carbon dioxide, so you actually are capturing carbon dioxide. There's a moral hazard, because politicians will say, "Okay, I'm just going to wait until BECCS comes round and that will solve all our problems," on the one hand. On the other hand it poses some serious questions about competition for land between growing crops for food and growing crops for energy.

Ariel: I actually want to follow up on Alexander's question really quickly, because I've gotten a similar impression that some of the stuff in the IPCC report relies on technologies that are still in development. But my understanding is that the Drawdown solutions are, in theory at least, if not in practice, ready to scale up.

John: They’re existing technologies, yeah.

Ariel: So when you say there’s a lot of overlap, is that me or us misunderstanding the IPCC report or are there solutions in the IPCC report that aren’t ready to be scaled up?

John: The approaches are a bit different. The approach that Drawdown takes is a bottom-up approach. They basically unleashed 65 scientists to go out and look for the best solutions. So they go out and they look at all the literature. And it just so happens that nuclear energy is one of them. It doesn't produce greenhouse gas emissions; it is a way of producing energy that doesn't cause climate change. A lot of people don't like that of course, because of all the other problems we have with nuclear. But let me just reassure you very quickly that there are three scenarios for Drawdown. It goes from so-called "Plausible," which I don't like as a name because it suggests that the other ones might not be plausible, but it's the most conservative one. Then the second one is "Drawdown." Then the third one is "Optimum."

Optimum doesn’t include solutions that are called with regrets, such as nuclear. So when you go optimum, basically it’s 100% renewable. There’s no nuclear energy in there either in the mix. That’s very positive. But in terms of the solutions, what they look at, what IPCC looks at is the trajectory that you could achieve given the existing technologies. So they talk about renewables, they talk about fossil fuels going down to net zero, they talk about natural climate solutions, but perhaps they don’t talk about, for instance, educating girls, which is one of the most important Drawdown solutions because of the approach that Drawdown takes where they look at everything. Sorry, that’s a bit of a long answer to your question.

Alexander: That’s actually part of the beauty of Drawdown, that they look so broadly, that educating girls… So a girl leaving school at 12 got on average like five children and a girl that you educate leaving school at the age of 18 on average has about two children, and they will have a better quality of life. They will put much less pressure on the planet. So this more holistic approach of Drawdown I like very much and I think it’s good to see so much overlap between Drawdown and IPCC. But I was struck by IPCC that it relies so heavily on still unproven technologies. I guess we have to bet on all our horses and treat this a bit as a kind of wartime economy. If you see the creativity and the innovation that we saw during the second World War in the field of technology as well as government by the way, and if you see, let’s say, the race to the moon, the amazing technology that was developed in such a short time.

Once you really dedicate all your knowledge and your creativity and your finances and your political will into solving this, we can solve this. That is what Drawdown is saying and that is also what the IPCC 1.5 is saying. We can do it, but we need the political will and we need to mobilize the strengths that we have. Unfortunately, when I look around worldwide, the trend is in many countries exactly the opposite. I think Brazil might soon be the latest one that we should be worried about.

John: Yeah.

Ariel: So this is, I guess where I’m most interested in what we can do and also possibly the most cynical, and this comes back to this trickle up idea that you were talking about. That is, we don’t have the political will right now. So what do those of us who do have the will do? How do we make that transition of people caring to governments caring? Because I do, maybe this is me being optimistic, but I do think if we can get enough people taking individual action, that will force governments to start taking action.

John: So trickle up, grassroots, I think we're in the same sort of idea. I think it's really important, and then we will get into the solutions, to talk about these not just as solutions to global warming, but to a lot of other problems as well, such as air pollution, our health, and the pollution that we see in the environment. And actually, Alexander, you were talking earlier about the huge transformation. But transformation does not necessarily always have to mean sacrifice. Some of it certainly is a good idea: I think you were going to ask a question about flying, and to fly less, there's no doubt about that, or to perhaps not buy the 15th set of clothes and so on and so forth.

So there certainly is an element of that, although the positive side of that is the circular economy. In fact, with these solutions, it's not a question of no growth or less growth, but a question of different growth. I think in terms of the discussion on climate change, one mistake that we have made is to emphasize the "don't do this" too much. I think that's also what's really interesting about Drawdown: there are no real judgments in there. They're basically saying, "These are the facts." If you have a plant-based diet, you will have a huge impact on the climate versus if you eat steak every day, right? But it's not making a judgment. Rather than "don't eat meat," it's saying eat plant-based foods.

Ariel: So instead of saying don’t drive your car, try to make it a competition to see who can bike the furthest each week or bike the most miles?

John: For example, yeah. Or consider buying an electric car if you absolutely have to have a car. I mean in the US it’s more indispensable than in Europe.

Alexander: It means in the US that when you build new cities, try to build them in a more clever way than the US has been doing up until now because if you’re in America and you want to buy whatever, a new toothbrush, you have to get in your car to go there. When I’m in Europe, I just walk out of the door and within 100 meters I can buy a toothbrush somewhere. I walk or I go on a bicycle.

John: That might be a longer-term solution.

Alexander: Well actually it's not. In the next 30 years, the amount of investment that will be placed in new cities is around 90 trillion dollars. The city patterns that we have in Europe were developed in the Middle Ages, in the centers of cities, so although it is urgent and we have to do a lot of things now, you should also think about the investments that you make today, because they will be followed for hundreds of years. We shouldn't keep repeating the mistakes from the past. These are the kinds of things we should also talk about. But to come back to your question on what we can do individually, I think there is so much that you can do that helps the planet.

Of course, you’re only one out of seven billion people, although if you listen to this podcast it is likely that you are in that elite out of that seven billion that is consuming much more of the planet, let’s say, than your quota that you should be allowed to. But it means, for instance, changing your diet, and then if you go to a plant-based diet, the perks are not only that it is good for the planet, it is good for yourself as well. You live longer. You have less chance of developing cancer or heart disease or all kinds of other things you don’t want to have. You will live longer. You will have for a longer time a healthier life.

It means actually that you discover all kinds of wonderful recipes that you had never heard of before when you were still eating steak every day, and it is actually a fantastic contribution for the animals that are daily, on an unimaginable scale, tortured all over the world, locked up in small cages. You don't see it when you buy meat at a butcher, but you are responsible, because they do that because you are the consumer. So stop doing that. Better for the planet. Better for the animals. Better for yourself. The same goes for using your bicycle and walking more. I still have a car. It is 21 years old. It's the only car I ever bought in my life, and I use it at most 20 minutes per month. I'm not even buying an electric vehicle because I still have the old one. There's a lot that you can do, and it has more advantages than just those for the planet.

John: Absolutely. Actually, walkable cities is one of the Drawdown solutions. Maybe I can just mention this very quickly: out of the 80 solutions, there was a very interesting study that showed there are 30 of them we could put into place today, and that those add up to about 40% of the greenhouse gases we'd be able to remove.

I’ll just list them quickly. The ones at the end, they’re more, if you are in an agricultural setting, which of course is probably not the case for many of your listeners. But: reduced food waste, plant-rich diets, clean cookstoves, composting, electric vehicles we talked about, ride sharing, mass transit, telepresence (basically video conferencing, and there’s a lot of progress being made there which means we perhaps don’t need to take that airplane.) Hybrid cars, bicycle infrastructure, walkable cities, electric bicycles, rooftop solar, solar water (so that’s heating your hot water using solar.) Methane digesters (it’s more in an agricultural setting where you use biomass to produce methane.) Then you have LED lighting, which is a 90% gain compared to incandescent. Household water saving, smart thermostats, household recycling and recyclable paper, micro wind (there are some people that are putting a little wind turbine on their roof.)

Now these have to do with agriculture, so they're things like silvopasture, tropical staple trees, tree intercropping, regenerative agriculture, farmland restoration, managed grazing, farmland irrigation and so on. If you add all those up, it's already 37% of the solution, and I suspect that the first 20, the non-agricultural ones, are probably a good 20% of that. Those are things you can do tomorrow — today.

Ariel: Those are helpful, and we can find those all at drawdown.org; that’ll also list all 80. So you’ve brought this up a couple times, so let’s talk about flying. This was one of those things that really hit home for me. I’ve done the carbon footprint thing and I have an excellent carbon footprint right up until I fly and then it just explodes. As soon as I start adding the footprint from my flights it’s just awful. I found it frustrating that one, so many scientists especially have … I mean it’s not even that they’re flying, it’s that they have to fly if they want to develop their careers. They have to go to conferences. They have to go speak places. I don’t even know where the responsibility should lie, but it seems like maybe we need to try to be cutting back on all of this in some way, that people need to be trying to do more. I’m curious what you guys think about that.

Alexander: Well, start by paying tax, for instance. Why is it — well I know why it is — but it's absurd that when you fly an airplane you don't pay tax. You can fly all across Europe for like 50 euros or 50 dollars. That is crazy. If you did the same by car, you'd pay tax on the petrol that you buy. And worse, you are not charged for the pollution that you cause. We know that airplanes are heavily polluting. It's not only the CO2 that they produce, but where and how they produce it: the warming effect is roughly three to four times that of the same CO2 produced by driving your car. So we know how bad it is; then make people pay for it. Just make flying more expensive. Pay for the carbon you produce. When I produce waste at home, I pay my municipality because they pick it up and have to take care of my garbage, but if I put garbage into the atmosphere, somehow I don't pay. In fact, in all sorts of strange ways it's actually subsidized, because you don't pay tax on it, so worldwide there are something like five or six times as much in subsidies for fossil fuels as there is for renewables.

We completely have to change the system. Give people a budget, maybe. I don't know, there could be many solutions. You could say that everybody has the right to a certain budget for flying or for carbon, and maybe you can trade it or swap it or whatever. There are some NGOs that do this, I think the World Wildlife Fund, but correct me if I'm wrong: all the people working there get not only a budget for their projects, they also get a carbon budget. You just have to choose, am I going to this conference or to that conference, or should I take the train, and you just keep track of what you are doing. That's something we should maybe roll out on a much bigger scale, and make flying more expensive.

John: Yeah, the whole idea of a carbon tax, I think, is key. I think that's really important. Some other thoughts: definitely reduce. Do you really, absolutely need to make that trip? Think about it. Now with webcasting and video conferencing, we can do a lot more without flying. The other thing I suggest is that when, at some point, you absolutely do have to travel, try to combine it with as many other things as possible, perhaps things that are not directly professional. If you are already in the climate change field, then at least you're traveling for a reason. Then it's a question of the offsets: using calculators you can see what the emissions were and pay for what's called an offset. That's another option as well.

Ariel: I’ve heard mixed things about offsets. In some cases I see that yes, you should absolutely buy them, and you should. If you fly, you should get them. But that in a lot of cases they’re a bandaid or they might be making it seem like it’s okay to do this when it’s still not the solution. I’m curious what your thoughts on that are.

John: For me, something like an offset should, as much as possible, be a last resort. You absolutely have to make the trip, it's really important, and so you offset it: you pay for some trees to be planted in the rainforest, for instance. There are loads of different possibilities for doing so. But relying on offsets as a plan is not a good idea. Unfortunately Switzerland's plan, for instance, includes a lot of getting others to reduce emissions. You can argue that it's cheaper to do it that way, that somebody else can do it more cheaply for you, so to speak: it's cheaper to plant a tree, and it will have more impact in the rainforest than in Switzerland. But on the other hand, it's something which I think we really have to avoid, also because in the end the green economy is where the future lies and what we need to transform into. If we're constantly getting others to do the decarbonization for us, then we'll be stuck with an industry which will ultimately become very expensive. That's not a good idea either.

Alexander: I think also the prices are absolutely unrealistic. If you fly, let's say, from London to New York, your personal share, just the fact that you were on the plane, not all the other people, is responsible for about three square meters of the Arctic melting. You can offset that by paying something like, what is it, 15 or 20 dollars for that flight. That makes ice in the Arctic extremely cheap: a square meter would be worth something like seven dollars. Well, I personally believe it's worth much more.

Then the thing is, they're going to plant a tree that takes a lot of time to grow. By the time it's big and pulling CO2 out of the air, are they going to cut it down and make newspapers out of it, which you then burn in a fireplace? Then the carbon is right back where it was. So you need to think really carefully about what you're doing. I feel it is very much like going to a priest and saying, "I have flown. Oh, I have sinned, but I can now do a few prayers and pay these $20 and now it's fine. I can book my next flight." That is not the way it should be. Make people pay up front when they buy the ticket. Pay the price for the pollution and for the harm that you are causing to this planet and to your fellow citizens on this planet.
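As a quick check on the arithmetic in Alexander's example, the implied price of Arctic ice is just the offset price divided by the melt area. Both figures in the snippet below are the ones he mentions, taken at face value rather than independently verified.

```python
# Implied price of Arctic sea ice under a typical flight offset,
# using the figures quoted above (taken at face value).
ice_melted_m2 = 3.0      # square meters of Arctic melt per passenger, London to New York
offset_price_usd = 20.0  # rough offset price quoted for that flight

print(f"Implied price of Arctic ice: ${offset_price_usd / ice_melted_m2:.2f} per square meter")
# -> about $6.67, i.e. the "something like seven dollars" mentioned above
```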

John: Couldn’t agree more. But there are offset providers in the US, look them up. See which one you like the best and perhaps buy more offsets. Economy is half the carbon than Business class, I hate to say.

Alexander: Something you mentioned there: I decided long ago, six, seven years ago, that I would never ever in my life fly business again. Even as somebody who has had a thrombosis and whose doctors advised me to fly business, I don't. I still fly. I'm very much like Ariel in that my footprint is okay until the moment I start adding flying, because I do that a lot for my job. Let's say in the next few weeks, I have a meeting in the Netherlands, and only 20 days later a meeting in England. I stay in the Netherlands, and in between I do all my travel to Belgium and France and the UK by train. The only flight is going back from London to Stockholm, because I couldn't find any reasonable way to go back otherwise. I wonder why we don't have high-speed train connections all the way up to Stockholm here.

Ariel: We talked a lot about taxing carbon. I had an interesting experience recently. I'm doing what I can to try not to drive when I'm in town, trying to either bike or take the bus. What often happens is that works great until I'm running late for something, and then I just drive because it's easier. But the other week, I was giving a little talk on the campus at CU Boulder, and the parking at CU Boulder is just awful. There is absolutely no way, no matter how late I'm running, that it's more convenient for me to take my car. It never even once dawned on me to take the car. I took a bus. It's that much easier. I thought that was really interesting, because I don't care how expensive you make gas or parking; if I'm running late I'm probably gonna pay for it. Whereas if you make it so inconvenient that it just makes me later, I won't do that. I was wondering, how can we do more things like that, where there's also this inconvenience factor?

Alexander: Have a look at Europe. Well coincidentally I know CU Boulder and I know how difficult the parking is. That’s the brilliance of Boulder where I see a lot of brilliant things. It’s what we do in Europe. I mean one of the reasons why I never ever use a car in Stockholm is that I have no clue how or where to park it, nor can I read the signs because my Swedish is so bad. I’m afraid of a ticket. I never use the car here. Also because we have such perfect public transport. The latest thing they have here is the VOI that just came out like last month, which is, I don’t know the word, we call it “step” in Dutch. I don’t know what you call that in English, whether it’s the same word or not, but it’s like these two-wheeled things that kids normally have. You know?

They are now electric here, so you download an app on your mobile phone and you see one of them in the street, because they're everywhere now. Type in a code and it unlocks, and then it starts counting your time: for every minute, you pay like 15 cents. All these little electric things are free-floating, so you just ride all around town and drop them wherever you like. When you need one, you look on your app and it shows you where the nearest one is. It's an amazing new way of transport. A month ago you saw just one or two; now they are everywhere. You're on the streets, you see one. It's very popular. It just works on electricity, and it makes everywhere in the city so much easier to reach, because you go at least twice as fast as walking.

John: There was a really interesting article in The Economist about parking. Do you know how many parking spots The Shard, the brand new skyscraper in London, has? Eight. The point being made, in terms of what you were just asking about inconvenience, is that in Europe, in most cases, it really doesn't make any sense at all to take a car into the city. It's a nightmare.

Before we talk more about personal solutions, I did want to make some points about the economics of all these solutions because what’s really interesting about Drawdown as well is that they looked at both what you would save and what it would cost you to save that over the 30 years that you would put in place those solutions. They came up with some things which at first sight are really quite surprising, because you would save 74.4 trillion dollars for an investment or a net cost of 29.6 trillion.

Now that’s not for all the solutions, so it’s not exactly that. In some of the solutions it’s very difficult to estimate. For instance, the value of educating girls. I mean it’s inestimable. But the point that’s also made is that if you look at The Solutions Project, Professor Jacobson, they also looked at savings, but they looked at other savings that I think are much more interesting and much more important as well. You would basically see a net increase of over 24 million long-term jobs that you would see an annual decrease in four to seven million air pollution deaths per year.

You would also see the stabilization of energy prices (think of how the price of oil swings from one day to the next), and annual savings of over 20 trillion in health and climate costs. Which comes back to this: when you're implementing those solutions, you are also saving money, but more importantly you are saving people's lives, the tragedy of the commons, right? So I think it's really important to think about those solutions. I mean, we know very well why we are still using fossil fuels: it's because of the massive subsidies and support that they get, and the fact that vested interests are going to defend their interests.
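For readers following the figures, the net arithmetic behind the Drawdown numbers John quotes is a one-line calculation; the sketch below only restates the two totals he gives and does not draw on any additional Drawdown data.

```python
# Net arithmetic for the Drawdown figures quoted above
# (trillions of US dollars over roughly 30 years, as quoted).
savings = 74.4   # operational savings
cost = 29.6      # net implementation cost

print(f"Net benefit: ${savings - cost:.1f} trillion")               # -> $44.8 trillion
print(f"Return on the investment: roughly {savings / cost:.1f}x")   # -> roughly 2.5x
```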

I think that’s really important to think about in terms of those solutions. They are becoming more and more possible. Which leads me to the other point that I’m always asked about, which is, it’s not going fast enough. We’re not seeing enough renewables. Why is that? Because even though we don’t tax fuel, as you mentioned Alexander, because we’ve produced now so many solar panels, the cost is getting to be much cheaper. It’ll get cheaper and cheaper. That’s linked to this whole idea of exponential growth or tipping points, where all of a sudden all of us start to have a solar panel on our roof, where more and more of us become vegetarians.

I’ll just tell you a quick anecdote on that. We had some out of town guests who absolutely wanted to go to actually a very good steakhouse in Geneva. So along we went. We didn’t want to offend them and say “No, no, no. We’re certainly not gonna go to a steakhouse.” So we went along. It was a group of seven of us. Imagine the surprise when they came to take our orders and three out of seven of us said, “I’m afraid we’re vegetarians.” It was a bit of a shock. I think those types of things start to make others think as well, “Oh, why are you vegetarian,” and so on and so forth.

That sort of reflection means that certain business models are gonna go out of business, perhaps much faster than we think. On the more positive side, there are gonna be many more vegetarian restaurants, you can be sure, in the future.

Ariel: I want to ask about what we’re all doing individually to address climate change. But Alexander, one of the things that you’ve done that’s probably not what just a normal person would do, is start the Planetary Security Initiative. So before we get into what individuals can do, I was hoping you could talk a little bit about what that is.

Alexander: That was not so much an individual thing. I was at Yale University for half a year when I started this, but then when I came back to the Ministry of Foreign Affairs for one more year, I had some ideas and I got support from the ministers for bringing together the experts in the world who work on the impact that climate change will have on security. So the idea was to create an annual meeting where all these experts in the world come together, because that didn't exist yet, and to encourage more scientists and researchers in the world to study how this relationship works. But more importantly, the idea was also to connect the knowledge and the insights of these experts on how the changing climate, and the impacts it has on water and food and our changing planetary conditions, is affecting geopolitics.

I have a background both in security and in environment. Those used to be two completely different tracks that weren't really interacting. The more I worked on those two things, the more I saw that the changing environment is actually directly impacting our security situation. It's already happening, and you can be pretty sure that the impact is going to be much greater in the future. So we started with a meeting in the Peace Palace in The Hague. There were some 75 countries present the first time, along with the key experts in the world. It's now an annual meeting. For anybody who's interested, contact me and I will provide you with the right contact. It is growing now into all kinds of other initiatives, other involvement, and more studies that are taking place.

So the issue is really taking off, and that is mainly because more and more people see the need for better insights into the impact that all of these changes we've been discussing will have on security, whether that's the human security of individuals or geopolitical security. Imagine that when so much is changing, when economies are changing so rapidly, when people's interests change and when people start going on the move, tensions will rise for a number of reasons, partly related to climate change. It's very much a situation where climate change takes an already fragile situation and makes it worse. So that is the Planetary Security Initiative. The government of the Netherlands has been very strong on this, working closely together with some other governments. Sweden, for instance, where I'm living, has in the past year been focusing very much on strengthening the United Nations, so that you would have experts at the relevant high level in New York who can connect the dots, connect the people and the issues, and not just raise awareness for the issue but make sure that these issues are also taken into account in the policies that are made, because you'd better do it up front than repair the damage afterwards if you haven't taken care of these issues.

It’s a rapidly developing field. There is a new thing as, for instance, using AI and data, I think the World Resources Institute in Washington is very good at that, where they combine let’s say, the geophysical data, let’s say satellite and other data on increasing drought in the world, but also deforestation and other resource issues. They are connecting that now with the geopolitical impacts with AI and with combining all these completely different databases. You get much better insight on where the risks really are, and I believe that in the years to come, WRI in combination with several other think tanks can do brilliant work where the world is really waiting for the kind of insights. International policies will be so much more effective if you know much better where the problems are really going to hit first.

Ariel: Thank you. All right, so we are starting to get a little bit short on time, and I want to finish the discussion with things that we've personally been doing. I'm gonna include myself in this one because I think the more examples the better. So: what have we personally been doing to change our lifestyles, not as sacrifice but for the better, to address climate change? And also, to keep us all human, where are we falling short and wishing we were doing better?

I can go ahead and start. I am trying to not use my car in town. I’m trying to stick to biking or taking public transportation. I have dropped the temperature in our house by another degree, so I’m wearing more sweaters. I’m going to try to be stricter about flying, only if I feel that I will actually be having a good impact on the world will I fly, or a family emergency, things like that.

I’m pretty sure our house is on wind power. I work remotely, so I work from home. I don’t have to travel for work. I those are some of the big things, and as I said, flying is still a problem for me so that’s something I’m working on. Food is also an issue for me. I have lots of food issues so cutting out meat isn’t something that I can do. But I have tried to buy most of my food from local farms, I’m trying to buy most of my meat from local farms where they’re taking better care of the animals as well. So hopefully that helps a little bit. I’m also just trying to cut back on my consumption in general. I’m trying to not buy as many things, and if I do buy things I’m trying to get them from companies that are more environmentally-conscious. So I think food and flying are sort of where I’m failing a little bit, but I think that’s everything on my end.

Alexander: I think one of the big changes I made is that I became vegetarian years ago, for a number of good reasons. I am now practically vegan; sometimes when I travel it's a bit too difficult. I hardly ever use the car. I guess it's just five or six times a year that I actually use my car; I use bicycles and public transport. The electricity at our home is all wind power. In the Netherlands, that's relatively easy to arrange nowadays, there are a lot of offers for it, so I deliberately buy wind power, and did so even in the times when wind power was still more expensive than other power. I also think about consumption: when I buy food, I try to buy more local food. There's the occasional kiwi, which I always wonder how it arrives in Europe, but that's another thing you can think of. Apart from flying, I really do my best with my footprint. Flying is the difficult thing, because with my work I need to fly. It is about personal contacts, it is about meeting a lot of people, it's about teaching.

I do teach online. I use Skype for teaching to classrooms, and I do many Skype conferences all the time, but yes, I'm still flying. I refuse to fly business class; I started that some six, seven years ago. Just today a business class ticket was offered to me for a very long flight and I refused it. I said I will fly economy. But yes, the flying is what adds to my footprint. I try to combine trips, I try to stay longer at a certain place and then go by train to all kinds of other places. But when you're based here in Stockholm, it's quite difficult to get anywhere by means other than flying. Once I'm, let's say, in the Netherlands or Brussels or Paris or London or Geneva, you can do all those things by train, but it gets a bit more difficult out here.

John: Pretty much the same as Alexander, except that I'm very local. I travel very little and I keep the travel down. If I do have to travel, I have managed to do seven-hour trips by train. That's a possibility in Europe, but that sort of gets you to the middle of Germany. Then the other thing is I've become vegetarian recently. I'm pretty close to vegan, although it's difficult with such good cheese as we have in this country. But the way it came about is interesting as well. It's not just me; it's myself, my wife, my daughter, and my son. The third child is never gonna become vegetarian, I don't think. But that's not bad, four out of five.

In terms of what I think you can do, it also points to something we perhaps don't think about as a contribution: being a voice vis-a-vis others in our own communities and explaining why you do what you do in terms of biking and so on and so forth. I think that really encourages others to do the same. It can grow a lot like that. In that vein, I teach as much as I can to high school students. I talk to them about Drawdown. I talk to them about solutions and so on. They get it. They are very, very switched on about this. I really enjoy that. You really see, it's their future, it's their generation. They don't have very much choice, unfortunately. On a more positive note, I think they can really take it away in terms of a lot of actions which we haven't done enough of.

Ariel: Well I wanted to mention this stuff because going back to your idea, this trickle up, I’m still hopeful that if people take action that that will start to force governments to. One final question on that note, did you guys find yourselves struggling with any of these changes or did you find them pretty easy to make?

Alexander: I think all of them were easy. Switching your energy to wind power, et cetera. Buying more consciously. It comes naturally. I was already vegetarian, and then for moving to vegan, you just go online and read about it and how to do it. I remember when I was a kid that hardly anybody was vegetarian. Then I once discussed it with my mother and she said, "Oh, it's really difficult, because then you need to totally balance your food and be in touch with your doctor," whatever. I've never spoken to any doctor. I just stopped eating meat, and years ago I swore off all dairy. I've never been ill. I don't feel ill. Actually, I feel better. It is not complicated. The rather complicated thing is flying. There are times I have to make difficult choices, like being away from home for a long time; I've saved quite a bit that way. That's sometimes more complicated, or, like soon, I'll be on a nearly eight-hour train ride for something I could have flown in an hour.

John: I totally agree. I mean I enjoy being in a train, being able to work and not be worried about some truck running into you or the other foibles of driving which I find very very … I’ve got to a point where I’m becoming actually quite a bad driver. I drive so little that, I hope not, but I might have an accident.

Ariel: Well, fingers crossed that doesn't happen. And good, that's been my experience so far too. The changes that I've been trying to make haven't been difficult. I hope that's an important point for people to realize. Anything else either of you want to add?

Alexander: I think there’s just one thing that we didn’t touch on, on what you can do individually. That’s perhaps the most important one for us in democratic countries. That is vote. Vote for the best party that actually takes care of our long-term future, a party that aims for taking rapidly the right climate change measures. A party that wants to invest in a new economy that sees that if you invest now, you can be a leader later.

In some countries you have a lot of parties and there are all kinds of nuances. In other countries you have to deal with basically two parties, where one party is absolutely denying science and doing exactly the wrong things, basically aiming to ruin the planet as soon as possible, whereas the other party is actually looking for solutions. Well, if you live in a country like that, and coincidentally there are elections coming up soon, vote for the party that takes the best positions on this, because it is about the future of your children. It is the single most important, influential thing that you can do, certainly if you live in a country whose emissions are still among the highest in the world. Vote. Take people with you to do it.

Ariel: Yeah, so to be more specific about that, as I mentioned at the start of this podcast, it's coming out on Halloween, which means in the US, elections are next week. Please vote.

John: Yeah. Perhaps something else is how you invest, where your money is going. That's one that can have a lot of impact as well. All I can say is, I hate to come back to Drawdown, but go through the Drawdown list and think about your investments and say, okay, renewables, whether it's LEDs or whatever technology it is, if it's in Drawdown, make sure it's in your investment portfolio. If it's not, you might want to get out of it, particularly the ones that we already know are causing the problem in the first place.

Ariel: That’s actually, that’s a good reminder. That’s something that has been on my list of things to do. I know I’m guilty of not investing in the proper companies at the moment. That’s something I’ve been wanting to fix.

Alexander: And tell your pension funds: divest from fossil fuels and invest in renewables and all kinds of good things that we need in the new economy.

John: But not necessarily because you're doing it as a charitable cause, but really because these are the businesses of the future. We talked earlier about the growth that these different businesses can see. Another factor that's really important is efficiency. For instance, I'm sure you have heard of the Impossible Burger. It's a plant-based burger. Now what do you think is the difference in terms of the amount of cropland required to produce a beef burger versus an Impossible Burger?

Alexander: I would say one in 25 or one in 35, somewhere in that range.

John: Yeah, so it’s one in 20. The thing is that when you look at that type of gain in efficiency, it’s just a question of time. A cow simply can’t compete. You have to cut down the trees to grow the animal feed that you ship to the cow, that the cow then eats. Then you have to wait a number of years, and that’s that 20 factor difference in efficiency. Now our capitalist economic system doesn’t like inefficient systems. You can try to make that cow as efficient as possible, you’re never going to be able to compete with a plant-based burger. Anybody who thinks that that plant-based burger isn’t going to displace the meat burger should really think again.

Ariel: All right, I think we’re ending on a nice hopeful note. So I want to thank you both for coming on today and talking about all of these issues.

Alexander: Thanks Ariel. It was nice to talk.

John: Thank you very much.

Ariel: If you enjoyed this podcast, please take a moment to like it and share it, and maybe even leave a positive review. And of course, if you haven't already, please follow us. You can find the FLI podcast on iTunes, Google Play, SoundCloud, and Stitcher.

Trump to Pull US Out of Nuclear Treaty


Last week, U.S. President Donald Trump confirmed that the United States will be pulling out of the landmark Intermediate-Range Nuclear Forces (INF) Treaty. The INF Treaty, signed in 1987, banned ground-launched missiles with a range of 500 km to 5,500 km (310 to 3,400 miles). Although the agreement covers land-based missiles that carry both nuclear and conventional warheads, it doesn't cover any air-launched or sea-launched weapons.

Nonetheless, when it was signed by then-U.S. President Ronald Reagan and Soviet leader Mikhail Gorbachev, it led to the elimination of nearly 2,700 short- and medium-range missiles. More significantly, it helped bring an end to a dangerous nuclear standoff between the two nations, and the trust that it fostered played a critical part in defusing the Cold War.

Now, as a result of the recent announcements from the Trump administration, all of this may be undone. As Malcolm Chalmers, deputy director general of the Royal United Services Institute, stated in an interview with The Guardian, “This is the most severe crisis in nuclear arms control since the 1980s. If the INF treaty collapses, and with the New Start treaty on strategic arms due to expire in 2021, the world could be left without any limits on the nuclear arsenals of nuclear states for the first time since 1972.”

Of course, the U.S. isn’t the only player that’s contributing to unravelling an arms treaty that helped curb competition and contributed to bringing an end to the Cold War.

Reports indicate that Russia has been violating the INF treaty since at least 2014, a fact that was previously acknowledged by the Obama administration and which President Trump cited in his INF withdrawal announcement last week. “Russia has violated the agreement. They’ve been violating it for many years, and I don’t know why President Obama didn’t negotiate or pull out,” Trump stated. “We’re not going to let them violate a nuclear agreement and do weapons and we’re not allowed to.…so we’re going to terminate the agreement. We’re going to pull out,” he continued.

Trump also noted that China played a significant role in his decision to pull the U.S. out of the INF treaty. Since China was not a part of the negotiations and is not a signatory, the country faces no limits when it comes to developing and deploying intermediate-range nuclear missiles — a fact that China has exploited in order to amass a robust missile arsenal. Trump noted that the U.S. will have to develop those weapons, "unless Russia comes to us and China comes to us and they all come to us and say, 'let's really get smart and let's none of us develop those weapons,' but if Russia's doing it and if China's doing it, and we're adhering to the agreement, that's unacceptable."

 

A Growing Concern

Concerns over Russian missile systems that breach the INF treaty are real and valid. Equally valid are the concerns over China’s weapons strategy. However, experts note that President Trump’s decision to leave the INF treaty doesn’t set us on the path to the negotiating table, but rather, toward another nuclear arms race.

Russian officials have been clear in this regard, with Leonid Slutsky, who chairs the foreign affairs committee in Russia’s lower house of parliament, stating this week that a U.S. withdrawal from the INF agreement “would mean a real new Cold War and an arms race with 100 percent probability” and “a collapse of the planet’s entire nonproliferation and disarmament regime.”

This is precisely why many policy experts assert that withdrawal is not a viable option and, in order to achieve a successful resolution, negotiations must continue. Wolfgang Ischinger, the former German ambassador to the United States, is one such expert. In a statement issued over the weekend, he noted that he is “deeply worried” about President Trump’s plans to dismantle the INF treaty and urged the U.S. government to, instead, work to expand the treaty. “Multilateralizing this agreement would be a lot better than terminating it,” he wrote on Twitter.

Even if the U.S. government is entirely disinterested in negotiating, and the Trump administration seeks only to respond with increased weaponry, policy experts assert that withdrawing from the INF treaty is still an unavailing and unnecessary move. As Jeffrey Lewis, the director of the East Asia nonproliferation program at the Middlebury Institute of International Studies at Monterey, notes, the INF doesn’t prohibit sea- or air-based systems. Consequently, the U.S. could respond to Russian and Chinese political maneuverings with increased armament without escalating international tensions by upending longstanding treaties.

Indeed, since President Trump made his announcement, a number of experts have condemned the move and called for further negotiations. EU spokeswoman Maja Kocijancic said that the U.S. and Russia “need to remain in a constructive dialogue to preserve this treaty” as it “contributed to the end of the Cold War, to the end of the nuclear arms race and is one of the cornerstones of European security architecture.”  

Most notably, in a statement that was issued Monday, the European Union cautioned the U.S. against withdrawing from the INF treaty, saying, “The world doesn’t need a new arms race that would benefit no one and on the contrary, would bring even more instability.”

AI Alignment Podcast: On Becoming a Moral Realist with Peter Singer

Are there such things as moral facts? If so, how might we be able to access them? Peter Singer started his career as a preference utilitarian and a moral anti-realist, and then over time became a hedonic utilitarian and a moral realist. How does such a transition occur, and which positions are more defensible? How might objectivism in ethics affect AI alignment? What does this all mean for the future of AI?

On Becoming a Moral Realist with Peter Singer is the sixth podcast in the AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

In this podcast, Lucas spoke with Peter Singer. Peter is a world-renowned moral philosopher known for his work on animal ethics, utilitarianism, global poverty, and altruism. He’s a leading bioethicist, the founder of The Life You Can Save, and currently holds positions at both Princeton University and The University of Melbourne.

Topics discussed in this episode include:

  • Peter’s transition from moral anti-realism to moral realism
  • Why emotivism ultimately fails
  • Parallels between mathematical/logical truth and moral truth
  • Reason’s role in accessing logical spaces, and its limits
  • Why Peter moved from preference utilitarianism to hedonic utilitarianism
  • How objectivity in ethics might affect AI alignment
In this interview we discuss ideas contained in the work of Peter Singer. You can learn more about Peter’s work here and find many of the ideas discussed on this podcast in his work The Point of View of the Universe: Sidgwick and Contemporary Ethics. You can listen to the podcast above or read the transcript below.

Lucas: Hey, everyone, welcome back to the AI Alignment Podcast series. I’m Lucas Perry, and today, we will be speaking with Peter Singer about his transition from being a moral anti-realist to a moral realist. In terms of AI safety and alignment, this episode primarily focuses on issues in moral philosophy.

In general, I have found the space of moral philosophy to be rather neglected in discussions of AI alignment where persons are usually only talking about strategy and technical alignment. If it is unclear at this point, moral philosophy and issues in ethics make up a substantial part of the AI alignment problem and have implications in both strategy and technical thinking.

In terms of technical AI alignment, it has implications in preference aggregation and its methodology, in inverse reinforcement learning, and in preference learning techniques in general. It affects how we ought to proceed with inter-theoretic comparisons of value, with idealizing persons or agents in general and what it means to become realized, how we deal with moral uncertainty, and how robust preference learning versus moral reasoning systems should be in AI systems. It has very obvious implications in determining the sort of society we are hoping for right before, during, and right after the creation of AGI.

In terms of strategy, strategy has to be directed at some end and all strategies smuggle in some sort of values or ethics, and it’s just good here to be mindful of what those exactly are.

And with regards to coordination, we need to be clear, on a descriptive account, of different cultures or groups’ values or meta-ethics and understand how to move from the state of all current preferences and ethics onwards given our current meta-ethical views and credences. All in all, this barely scratches the surface, but it’s just a point to illustrate the interdependence going on here.

Hopefully this episode does a little to nudge your moral intuitions around a little bit and impacts how you think about the AI alignment problem. In coming episodes, I’m hoping to pivot into more strategy and technical interviews, so if you have any requests, ideas, or persons you would like to see interviewed, feel free to reach out to me at lucas@futureoflife.org. As usual, if you find this podcast interesting or useful, it’s really a big help if you can help share it on social media or follow us on your preferred listening platform.

As many of you will already know, Peter is a world-renowned moral philosopher known for his work on animal ethics, utilitarianism, global poverty, and altruism. He’s a leading bioethicist, the founder of The Life You Can Save, and currently holds positions at both Princeton University and The University of Melbourne. And so, without further ado, I give you Peter Singer.

Thanks so much for coming on the podcast, Peter. It’s really wonderful to have you here.

Peter: Oh, it’s good to be with you.

Lucas: So just to jump right into this, it would be great if you could just take us through the evolution of your metaethics throughout your career. As I understand, you began giving most of your credence to being an anti-realist and a preference utilitarian, but then over time, it appears that you’ve developed into a hedonic utilitarian and a moral realist. Take us through the evolution of these views and how you developed and arrived at your new ones.

Peter: Okay, well, when I started studying philosophy, which was in the 1960s, I think the dominant view, at least among people who were not religious and didn’t believe that morals were somehow an objective truth handed down by God, was what was then referred to as an emotivist view, that is the idea that moral judgments express our attitudes, particularly, obviously from the name, emotional attitudes, that they’re not statements of fact, they don’t purport to describe anything. Rather, they express attitudes that we have and they encourage others to share those attitudes.

So that was probably the first view that I held, siding with people who were non-religious. It seemed like a fairly obvious option. Then I went to Oxford and I studied with R.M. Hare, who was a professor of moral philosophy at Oxford at the time and a well-known figure in the field. His view was also in this general ballpark of non-objectivist or, as we would now say, non-realist theories; non-cognitivist was another term used for them. They didn't purport to be about knowledge.

But his view was that when we make a moral judgment, we are prescribing something. So his idea was that moral judgments fall into the general family of imperative judgments. So if I tell you shut the door, that’s an imperative. It doesn’t say anything that’s true or false. And moral judgments were a particular kind of imperative according to Hare, but they had this feature that they had to be universalizable. So by universalizable, Hare meant that if you were to make a moral judgment, your prescription would have to hold in all relevantly similar circumstances. And relevantly similar was defined in such a way that it didn’t depend on who the people were.

So, for example, if I were to prescribe that you should be my slave, the fact that I'm the slave master and you're the slave isn't a relevant difference. If there's somebody just like me and somebody just like you, but I happen to occupy your place, then the person who is just like me would also be entitled to be my slave master, 'cause now I'm in the position of the slave.

Obviously, if you think about moral judgments that way, that does put a constraint on what moral judgments you can accept because you wouldn’t want to be a slave, presumably. So I liked this view better than the straightforwardly emotivist view because it did seem to give more scope for argument. It seemed to say look, there’s some kind of constraint that really, in practice, means we have to take everybody’s interests into account.

And I thought that was a good feature of this view, and I drew on it in various kinds of applied contexts where I wanted to make moral arguments. So that was my position, I guess, after I was at Oxford, and for some decades after that, but I was never completely comfortable with it. And the reason I was not completely comfortable with it was that there was always a question you could ask of Hare's view: where does this universalizability constraint on our moral judgments come from? And Hare's answer was, well, it's a feature of moral language. It's implied in, say, using the terms ought or good or bad or duty or obligation. It's implied that the judgments you are making are universalizable in this way.

And that, in itself, was plausible enough, but it was open to the response: well, in that case, I'm just not gonna use moral language. If moral language requires me to make universalizable prescriptions, and that means that I can't do all sorts of things or can't advocate all sorts of things that I would want to advocate, then I just won't use moral language to justify my conduct. I'll use some other kind of language, maybe prudential language, the language of furthering my self-interest. And moreover, what's wrong with doing that? It's not just that they can do that, but tell me what's wrong with them doing that.

So this is a kind of question about why act morally. And it wasn't obvious from Hare's view what the answer to that would be, and, in particular, it didn't seem that there would be any kind of answer along the lines of "that's irrational" or "you're missing something." It seemed, really, as if you had an open choice whether to use moral language or not.

So as I got further into the problem, I tried to develop arguments that would show that it was a requirement of reason, not just a requirement of moral language, that we universalize our judgments.

And yet, it was obviously a problem fitting that into Hare's framework, which, as I've been saying, was a framework within this general non-cognitivist family. For Hare, the idea that there are objective reasons for action didn't really make sense. There were just these desires that we had, which led to us making prescriptions, and then the constraint that we universalize our prescriptions; but he explicitly talked about the possibility of objective prescriptions and said that that was a kind of nonsense, which I think comes out of the general background of the kind of philosophy that came out of logical positivism and the verificationist idea that things you couldn't verify were nonsense, and so on. And that's why I was pretty uncomfortable with this, but I didn't really see a better alternative to it for some time.

And then, I guess, gradually, I was persuaded by a number of philosophers whom I respected that Hare was wrong about rejecting the idea of objective truth in morality. I talked to Tom Nagel, and probably most significant was the work of Derek Parfit, especially his work On What Matters, volumes one and two, which I saw in advance in draft form. He circulated drafts of his books to lots of people who he thought might give him some useful criticism. And so I saw that many years before it came out, and the arguments did seem, to me, pretty strong, particularly the objections to the kind of view that I’d held, which, by this time, was no longer usually called emotivism but expressivism, though I think it’s basically a similar view, a view in the same ballpark.

And so I came to the conclusion that there is a reasonable case for saying that there are objective moral truths, and this is not just a matter of our attitudes or of our preferences universalized, but there’s something stronger going on, and it’s, in some ways, more like the objectivity of mathematical truths or perhaps of logical truths. It’s not an empirical thing. This is not something in the world, the natural world of our senses, that you can find or prove empirically. It’s rather something that is rationally self-evident, I guess, to people who reflect on it properly and think about it carefully. So that’s how I gradually made the move towards an objectivist metaethic.

Lucas: I think here, it would be really helpful if you could thoroughly unpack what your hedonistic utilitarian objectivist meta-ethics actually looks like today, specifically getting into the most compelling arguments that you found in Parfit and in Nagel that led you to this view.

Peter: First off, I think we should be clear that being an objectivist about metaethics is one thing. Being a hedonist rather than a preference utilitarian is a different thing. There is some connection between them, as I’ll describe in a moment, but I could easily have become an objectivist and remained a preference utilitarian, or held some other kind of normative moral view.

Lucas: Right.

Peter: So the metaethic view is separate from that. What were the most compelling arguments here? I think one of the things that had stuck with me for a long time and that had restrained me from moving in this direction was the idea that it’s hard to know what you mean when you say that something is an objective truth outside the natural world. So in terms of saying that things are objectively true in science, the truths of scientific investigation, we can say well, there’s all of this evidence for it. No rational person would refuse to believe this once they were acquainted with all of this evidence. So that’s why we can say that that is objectively true.

But that’s clearly not going to work for truths in ethics, assuming of course that we’re not naturalists, that we don’t think this can be deduced from some examination of human nature or the world. I certainly don’t think that, and the people who were influential on me, Nagel and Parfit in particular, also didn’t think that.

So the only remaining question was, well, what could this really amount to? I had known, going back to the intuitionists in the early 20th century, people like W.D. Ross or, earlier, Henry Sidgwick, who was a utilitarian objectivist philosopher, that people made the parallel with mathematical proofs: that there are mathematical truths that we see as true by direct insight, by their self-evidence. But I had been concerned about this. I’d never really done a deep study of the philosophy of mathematics, but I’d been concerned about this because I thought there’s a case for saying that mathematical truths are analytic truths, that they’re truths in virtue of the meanings of the terms and of the way we define what we mean by the numbers and by equals and the various other terms that we use in mathematics, so that it’s basically just the unpacking of an analytic system.

The philosophers that I respected didn’t think this. The view had been more popular at the time when I was a student, and it had stuck with me for a while, and although it’s not disappeared, I think it’s perhaps not as widely held a view now as it was then. So there was that, plus the arguments that were being made about how we understand mathematical truths and how we understand the truths of logical inference. We grasp these as self-evident. We find them undeniable, yet this is, again, a truth that is not part of the empirical world, but it doesn’t seem that it’s just an analytic truth either. It doesn’t seem that it’s just the meanings of the terms. It does seem that we know something when we know the truths of logic or the truths of mathematics.

On this basis, it started to seem like the idea that there are these non-empirical truths in ethics as well might be more plausible than I thought it was before. And I also went back and read Henry Sidgwick who’s a philosopher that I greatly admire and that Parfit also greatly admired, and looked at his arguments about what he saw as, what he called, moral axioms, and that obviously makes the parallel with axioms of mathematics.

I looked at them, and they did seem to me difficult to deny. There is, for example, the claim that there’s no reason for preferring one moment of our existence to another in itself; in other words, that we shouldn’t discount the future, except for things like uncertainty, but otherwise the future is just as important as the present. Then there is an idea somewhat similar to Hare’s universalizability, but somewhat differently stated by Sidgwick: that if something is right for someone, then it’s right independently of the identities of the people involved. But for Sidgwick, as I say, that was, again, a truth of reason, not simply an implication of the use of particular moral terms. Thinking about that, that started to seem right to me, too.

And, I guess, finally, there is Sidgwick’s claim that the interests of one individual are no more important than the interests of another, assuming that the goods involved, that is, the good that can be done to each person, the extent of their interests, are similar. Sidgwick’s claim was that people who reflect carefully on these truths can see that they’re true, and I thought about that, and it did seem to me pretty difficult to deny them; not that nobody will deny them, but they do have a self-evidence about them. That seemed to me to be a better basis for ethics than the views that I’d been holding up to that point, the views that came out of, originally, emotivism and then prescriptivism.

There was a reasonable chance that that was right. As you say, I should give it more credence than I have. It’s not that I’m 100% certain that it’s right by any means, but that’s a plausible view that’s worth defending and trying to see what objections people make to it.

Lucas: I think there are three things here that would be helpful for us to dive in more on. The first is this non-naturalism versus naturalism argument, and this isn’t a part of metaethics that I’m particularly well acquainted with, so, potentially, you can help guide us through this part a little bit more. Your view, I believe you’re claiming, is a non-naturalist view: you’re claiming that you cannot deduce the axioms of ethics, or the basis of ethics, from a descriptive or empirical account of the universe?

Peter: That’s right. There certainly are still naturalists around. I guess Peter Railton is a well-known contemporary philosophical naturalist. Perhaps Frank Jackson, my Australian friend and colleague. And some of the naturalist views have become more complicated than they used to be. I suppose the original idea of naturalism that people might be more familiar with is simply the claim that there is a human nature and that acting in accordance with that human nature is the right thing to do, so you describe human nature and then you draw from that the characteristics that we ought to follow.

That, I think, just simply doesn’t work. I think it has its origins in a religious framework in which you think that God has created our nature with particular purposes that we should behave in certain ways. But the naturalists who defend it, going back to Aquinas even, maintain that it’s actually independent of that view.

If, in fact, you take an evolutionary view of human nature, as I think we should, then our nature is morally neutral. You can’t derive any moral conclusions from what our nature is like. It might be relevant to know what our nature is like in order to know that if you do one thing, that might lead to certain consequences, but it’s quite possible that, for example, our nature is to seek power and to use force to obtain power, that that’s an element of human nature, or, on a group level, to go to war in order to have power over others, and yet naturalists wouldn’t wanna say that those are the right things. They would try and give some account as to how some of that’s a corruption of human nature.

Lucas: Putting aside naturalist accounts that involve human nature, what about a purely descriptive or empirical understanding of the world, which includes, for example, sentient beings and suffering, and suffering is like a substantial and real ontological fact of the universe and the potential of deducing ethics from facts about suffering and what it is like to suffer? Would that not be a form of naturalism?

Peter: I think you have to be very careful about how you formulate this. What you said sounds a little bit like what Sam Harris says in his book, The Moral Landscape, which does seem to be a kind of naturalism because he thinks that you can derive moral conclusions from science, including exactly the kinds of things that you’ve talked about, but I think there’s a gap there, and the gap has to be acknowledged. You can certainly describe suffering and you can describe happiness conversely, but you need to get beyond description if you’re going to have a normative judgment. That is if you’re gonna have a judgment that says what we ought to do or what’s the right thing to do or what’s a good thing to do, there’s a step that’s just being left out.

If somebody says sentient beings can suffer pain or they can be happy, this is what suffering pain is like, this is what being happy is like; therefore, we ought to promote happiness. This goes back to David Hume, who pointed out that various moral arguments describe the world using “is, is, is, this is the case,” and then, suddenly, but without any explanation, they say “and therefore, we ought.” It needs to be explained how you get from the “is” statements to the “ought” statements.

Lucas: It seems that reason, whatever reason might be and however you might define that, seems to do a lot of work at the foundation of your moral view because it seems that reason is what leads you towards the self-evident truth of certain foundational ethical axioms. Why might we not be able to pull the same sort of move with a form of naturalistic moral realism like Sam Harris develops by simply stating that given a full descriptive account of the universe and given first person accounts of suffering and what suffering is like, that it is self-evidently true that built into the nature of that sort of property or part of the universe is that it ought to be diminished?

Peter: Well, if you’re saying that … There is a fine line, maybe this is what you’re suggesting, between saying that from the description we can deduce what we ought to do, and saying that when we reflect on what suffering is and what happiness is, we can see that it is self-evident that we ought to promote happiness and reduce suffering. I regard the latter as a non-naturalist position, but you’re right that the two come quite close together.

In fact, this is one of the interesting features of volume three of Parfit’s On What Matters, which was only published posthumously, but was completed before he died, and in that, he responds to essays that are in a book that I edited called Does Anything Really Matter. The original idea was that he would respond in that volume, but, as often happened with Parfit, he wrote responses at such length that they needed to be a separate volume. It would’ve made the work too bulky to put them together. Peter Railton had an essay in Does Anything Really Matter, and Parfit responded to it, and then he invited Railton to respond to his response, and, essentially, they are saying that yeah, their views have become closer, there’s been a convergence, which is pretty unusual in philosophy because philosophers tend to emphasize the differences between their views.

Between what Parfit calls his non-natural objectivist view and Railton’s naturalist view, because Railton’s is a more sophisticated naturalist view, the line starts to become a little thin, I agree. But, to me, the crucial thing is that you’re not just saying here’s this description, therefore we ought to do this. You’re saying that if we understand what we’re talking about here, we can have, as an intuition of self-evidence, the proposition that it’s good to promote this or it’s good to try to prevent this. So that’s the moral proposition, that it is good to do this. And to get to that proposition you have to take some other step. You can say it’s self-evident, but you still have to take some step beyond simply saying this is what suffering is like.

Lucas: Just to sort of capture and understand your view a bit more here, and going back to, I think, mathematics and reason and what reason means to you and how it operates at the foundation of your ethics, I think that a lot of people will sort of get lost or potentially feel it is maybe an arbitrary or cheap move to …

When thinking about the foundations of mathematics, there are foundational axioms which are self-evidently true, which no one will deny; but in translating that move to the foundations of ethics, to determining what we ought to do, it seems like a lot of people would get lost there, and there would be a lot of foundational disagreement. When is it permissible or okay or rational to make that sort of move? What does it mean to say that these really foundational parts of ethics are self-evidently true? How is it not the case that that’s simply an illusion or simply a byproduct of evolution, that we’re confused and these certain fictions that we’ve evolved merely seem self-evidently true?

Peter: Firstly, let me say, as I’ve mentioned before, I don’t claim that we can be 100% certain about moral truths, but I do think that it’s a plausible view. One reason relates to what you just mentioned, being a product of evolution, and this is something that I argued with my co-author Katarzyna de Lazari-Radek in the 2014 book we wrote called The Point of View of the Universe, which is, in fact, a phrase from Sidgwick. That argument is that there are many moral judgments that we make that we know have evolutionary origins, so lots of the things that we think of as wrong came to be thought of that way because it would not have helped us to survive, or would not have helped a small tribal group to survive, to allow certain kinds of conduct. And some of those, we might wanna reject today.

We might think, for example, of our instinctive repugnance toward incest, but Jonathan Haidt has shown that even if you describe a case where an adult brother and sister choose to have sex, and nothing bad happens as a result of that, their relationship remains as strong as ever, and they have fun, and that’s the end of it, people still say oh, somehow that’s wrong. They try to make up reasons why it’s wrong. That, I think, is an example of an evolved impulse which, perhaps, is no longer really apposite, because we have effective contraception, and so the evolutionary reasons why we might want to avoid incest are not necessarily there.

But in the case of the kinds of things that I’m talking about and that Sidgwick is talking about, like the idea that everyone’s good is of equal significance, it is hard to see why we would’ve evolved to have that attitude, because, in fact, it seems harmful to our prospects of survival and reproduction to give equal weight to the interests of complete strangers.

The fact that people do think this, and if you look at a whole lot of different independent, historical, ethical traditions in different cultures and different parts of the world at different times, you do find many thinkers who converge on something like this idea in various formulations. So why do they converge on this given that it doesn’t seem to have that evolutionary justification or explanation as to why it would’ve evolved?

I think that suggests that it may be a truth of reason. Of course, you may then say, well, but reason has also evolved, and indeed it has, but I think reason may be a little different, in that we evolved a capacity to reason for various specific problem-solving needs, and it helped us to survive in lots of circumstances. But it may then enable us to see things that have no survival value, just as simple arithmetic no doubt has a survival value but understanding the truths of higher mathematics doesn’t really have one. So maybe similarly in ethics there are some of these more abstract universal truths that don’t have a survival value, but the best explanation for why many people seem to come to these views nevertheless is that they’re truths of reason, and once we’re capable of reasoning, we’re capable of understanding these truths.

Lucas: Let’s start off at reason and reason alone. When moving from reason and thinking, I guess, alongside here about mathematics for example, how is one moving specifically from reason to moral realism and what is the metaphysics of this kind of moral realism in a naturalistic universe without anything supernatural?

Peter: I don’t think that it needs to have a very heavyweight metaphysical presence in the universe. Parfit actually avoided the term realism in describing his view. He called it non-naturalistic normative objectivism, because he thought that realism carried this idea that moral truths are part of the furniture of the universe, as philosophers say: that the universe consists of the various material objects, but, in addition to that, it consists of moral truths, as if they’re somehow sort of floating there out in space, and that’s not the right way to think about it.

I’d say, rather, the right way to think about it is as we do with logical and mathematical truths: once beings have become capable of a certain kind of thought, they will move towards these truths. They have the potential and capacity for thinking along these lines. One of the claims that I would make as a consequence of my acceptance of objectivism in ethics, a rationally based objectivism, is that the morality that we humans have developed on Earth, at this more abstract, universal level anyway, is something that aliens from another galaxy could also have achieved if they had similar capacities of thought, or maybe greater capacities of thought. It’s always a possible logical space, you could say, or a rational space, that is there, that beings may be able to discover once they develop those capacities.

You can see mathematics in that way, too. It’s one of a number of possible ways of seeing mathematics and of seeing logic: they’re just timeless things, in some way, truths or laws, if you like, but they don’t exist in the sense in which the physical universe exists.

Lucas: I think that’s really a very helpful way of putting it. So the claim here is that through reason, one can develop the axioms of mathematics and then eventually develop quantum physics and other things. And similarly, when reason is applied to thinking about what one ought to do, or to thinking about the status of sentient creatures, one is applying logic and reason to this rational space, and this rational space has truths in the same way that mathematics does?

Peter: Yes, that’s right. It has perhaps only a very small number of truths, and fairly abstract truths; Sidgwick came up with three axioms. But they are truths. That’s the important aspect. They’re not just particular attitudes which beings who evolved as homo sapiens are all likely to understand and accept, but which beings who evolved in a different galaxy in a quite different way would not accept. My claim is that if they are also capable of reasoning, if evolution had again produced rational beings, they would be able to see the truths in the same way as we can.

Lucas: So the idea of spaces of rational thought and of logic, which can or cannot be explored, seems very conceptually queer to me, such that I don’t even really know how to think about it. I think that one would worry that one is applying reason, whatever reason might be, to a fictional space. I mean, you were discussing earlier that some people believe mathematics to be simply the formalization of what is analytically true about the terms and judgments and the axioms, and that it’s just a systematization of that and an unpacking of it from beginning into infinity. And so, I guess, it’s unclear to me how one can discern spaces of rational inquiry which are real from ones which are anti-real or fictitious. Does that make sense?

Peter: It’s a problem. I’m not denying that there is something mysterious, I think maybe my former professor, R.M. Hare, would have said queer … No, it was John Mackie, actually, John Mackie was also at Oxford when I was there, who said these must be very queer things if there are some objective moral truths. I’m not denying that it’s something that, in a way, would be much simpler if we could explain everything in terms of empirical examination of the natural world and say there’s only that plus there are formal systems. There are analytic systems.

But I’m not persuaded that that’s a satisfactory explanation of mathematics or logic either. Those who are convinced that this is a satisfactory way of explaining logic and mathematics may well think that they don’t need this explanation of ethics either; but if we need to appeal to something outside the natural realm to understand some of the other things about the way we reason, then perhaps ethics is another candidate for this.

Lucas: So just drawing parallels again here with mathematics ’cause I think it’s the most helpful. Mathematics is incredible for helping us to describe and predict the universe. The president of the Future of Life Institute, Max Tegmark, develops an idea of potential mathematical Platonism or realism where the universe can be understood primarily as, and sort of ontologically, a mathematical object within, potentially, a multiverse because as we look into the properties and features of quarks and the building blocks of the universe, all we find is more mathematical properties and mathematical relationships.

So within the philosophy of math, there are certainly, it seems, open questions about what math is and what the relation of mathematics is to the fundamental metaphysics and ontology of the universe and potential multiverse. So in terms of ethics, what information or insight do you think we’re missing that could further inform our view that there potentially is objective morality, whatever that means, or inform us that there is a space of moral truths which can be arrived at by non-anthropocentric minds, like the alien minds you said could also arrive at the moral truths as they could arrive at mathematical truths?

Peter: So what further insight would show that this was correct, other, presumably, than the arrival of aliens who start swapping mathematical theorems with us?

Lucas: And have arrived at the same moral views. For example, if they show up and they’re like hey, we’re hedonistic consequentialists and we’re really motivated to-

Peter: I’m not saying they’d necessarily be hedonistic consequentialists, but they would-

Lucas: I think they should be.

Peter: That’s a different question, right?

Lucas: Yeah, yeah, yeah.

Peter: We haven’t really discussed steps to get there yet, so I think they’re separate questions. My idea is that they would be able to see that if we had similar interests to the ones that they did, then those interests ought to get similar weight, that they shouldn’t ignore our interests just because we’re not members of whatever civilization or species they are. I would hope that if they are rationally sophisticated, they would at least be able to see that argument, right?

Some of them, just as with us, might see the argument and then say yes, but I love the tastes of your flesh so much I’m gonna kill you and eat you anyway. So, like us, they may not be purely rational beings. We’re obviously not purely rational beings. But if they can get here and contact us somehow, they should be sufficiently rational to be able to see the point of the moral view that I’m describing.

But that wasn’t a very serious suggestion about waiting for the aliens to arrive, and I’m not sure that I can give you much of an answer as to what further insights are relevant here. Maybe it’s interesting to try and look at this cross-culturally, as I was saying, and to examine the way that great thinkers of different cultures and different eras have converged on something like this idea despite the fact that it seems unlikely to have been directly produced by evolution in the way that our other, more emotionally driven moral reactions were.

Peter: I don’t know that the argument can go any further, and it’s not completely conclusive, but I think it remains plausible. You might say well, that’s a stalemate. Here are some reasons for thinking morality’s objective and other reasons for rejecting that, and that’s possible. That happens in philosophy. We get down to bedrock disagreements and it’s hard to move people with different views.

Lucas: What is reason? One could also view reason as some human-centric bundle of both logic and intuitions, and one can be mindful that the intuitions, which are sort of bundled with this logic, are almost arbitrary consequences of evolution. So what is reason fundamentally and what does it mean that other reasonable agents could explore spaces of math and morality in similar ways?

Peter: Well, I would argue that there are common principles that don’t depend on our specific human nature and don’t depend on the path of our evolution. I accept that the path of our evolution has given us the capacity to solve various problems through thought, that that is what our reason amounts to, and that therefore we have insight into these truths that we would not have if we did not have that capacity. But this kind of reasoning, you can think of as something that goes beyond specific problem-solving skills to insights into laws of logic, laws of mathematics, and laws of morality as well.

Lucas: When we’re talking about the axiomatic parts of mathematics and logic and, potentially, ethics here, as you were claiming with this moral realism, how is it that reason allows us to arrive at the correct axioms in these rational spaces?

Peter: We have developed the ability, when we’re presented with these things, to consider whether we can deny them or not, whether they are truly self-evident. We can reflect on them, we can talk to others about them, we can consider biases that we might have that might explain why we believe them and see whether there are any such biases, and once we’ve done all that, we’re left with the insight that some things we can’t deny.

Lucas: I guess I’m just sort of poking at this idea of self-evidence here, which is doing a lot of work in the moral realism. Whether or not something is self-evident, at least to me, seems like a feeling: I just look at the thing and I’m like, clearly that’s true, and if I get a little bit meta, I ask, okay, why is it that I think that this thing is obviously true? Well, I don’t really know, it just seems self-evidently true. It just seems so, and this, potentially, is just a consequence of evolution and of being imbued with whatever reason is. So I don’t know if I can always trust my intuitions about things being self-evidently true. I’m not sure how to navigate my intuitions and views of what is self-evident in order to come upon what is true.

Peter: As I said, it’s possible that we’re mistaken, that I’m mistaken in these particular instances. I can’t exclude that possibility, but it seems to me that there’s the hypothesis that we hold these views because they are self-evident; we can look for evolutionary explanations instead and, as I’ve said, I’ve not really found them, so that’s as far as I can go with that.

Lucas: Just moving along here a little bit, and I’m becoming increasingly mindful of your time, would you like to cover briefly this sort of shift that you had from preference utilitarianism to hedonistic utilitarianism?

Peter: So, again, let’s go back to my autobiographical story. For Hare, the only basis for making moral judgments was to start from our preferences and then to universalize them. There could be no arguments about something else being intrinsically good or bad, whether it was happiness or whether it was justice or freedom or whatever because that would be to import some kind of objective claims into this debate that just didn’t have a place in his framework, so all I could do was take my preferences and prescribe them universally, and, as I said, that involved putting myself in the position of the others affected by my action and asking whether I could still accept it.

When you do that, and let’s say your action affects many people, not just you and one other, what you’re really doing is trying to sum up how this would be from the point of view of every one of these people. So if I put myself in A’s position, would I be able to accept this? But then I’ve gotta put myself in B’s position as well, and C, and D, and so on. And to say, can I accept this prescription universalized, is to say, if I were living the lives of all of those people, would I want this to be done or not? And that’s a kind of summing, as they say, of the extent to which doing this satisfies everyone’s preferences, net on balance, after deducting, of course, the ways in which it thwarts or frustrates or is contrary to their preferences.

So this seemed to be the only way in which you could go further with Hare’s view as he eventually worked it out, and he changed it a little bit over the years, but this was his later formulation of it. So it was a kind of preference utilitarianism that it led to, and I was reasonably happy with that, and I accepted the idea that this meant that what we ought to be doing is to maximize the satisfaction of preferences and avoid thwarting them.

And it gives you, in many cases, of course, somewhat similar conclusions to what you would say if what we wanna do is maximize happiness and minimize suffering or misery, because for most people, happiness is something that they very much desire and misery is something that they don’t want. Some people might have different preferences that are not related to that, but for most people, their preferences will probably come down, some way or other, to how things relate to their well-being, their interests.

There are certainly objections to this, and some of the objections relate to preferences that people have when they’re not fully informed about things. And Hare’s view was that, in fact, the preferences that we should universalize are the preferences people would have when they are fully informed and thinking calmly; when they’re not, let’s say, angry with somebody and therefore have a strong preference to hit him in the face, even though this will be bad for them and bad for him.

So the preference view sort of then took this further step of saying it’s the preferences that you would have if you were well informed and rational and calm, and that seemed to solve some problems with preference utilitarianism, but it gave rise to other problems. One of the problems was: does this mean that if somebody is misinformed, in a way that you can be pretty confident is never going to be corrected, you should still do what they would want if they were correctly informed?

An example of this might be someone who’s a very firm religious believer and has been all their life, and let’s say one of their religious beliefs is that having sex outside marriage is wrong because God has forbidden it, it’s contrary to the commandments or whatever. But let’s just assume there is no God, and therefore no commandments that God made against sex outside marriage, and given that if they didn’t believe in God, they would be happy to have sex outside marriage, and this would make them happier, and would make their partner happy as well, should I somehow try to wangle things so that they do have sex outside marriage even though, as they are now, they prefer not to?

And that seems a bit of a puzzle, really. It seems highly paternalistic to override the preferences they actually have, given their beliefs, even though you’re convinced that those beliefs are false. So there are puzzles and paradoxes like that. And then there was another argument that does actually, again, come out of Sidgwick, although I didn’t find it in Sidgwick until I read it in other philosophers later.

Again, I think Peter Railton is one who uses this. And that argument is that if you’re really asking what people would do if they were rational and fully informed, you have to make judgments about what a rational and fully informed view would be in this situation. And that might even involve the views that we’ve just been discussing: that if you were rational, you would know what the objective truth was and you would want to do it. So, at that level, a preference view actually seems to amount to a different view, an objectivist view, where you would have to actually know what things were good.

So, as I say, it had a number of internal problems, even if you just assume the metaethic that I was taking from Hare originally. But if, as happened with me, you then become convinced that there can be objective moral truths, this, in some ways, opens up the field to other possible ideas as to what is intrinsically good, because now you could argue that something was intrinsically good even if it was not something that people preferred. In that light, I went back to reading some of the classical utilitarians, particularly Sidgwick and his arguments for why happiness, rather than the satisfaction of desires, is the ultimate value, something that is of intrinsic value, and it did seem to overcome these problems with preference utilitarianism that had been troubling me.

It certainly had some paradoxes of its own, some things that it seemed not to handle as well, but after thinking about it, again, I decided that it was more likely than not that a hedonistic view was the right view. I wouldn’t put it more strongly than that. I still think preference utilitarianism has some things to be said for it, and there are also, of course, views that say yes, happiness is intrinsically good and suffering is intrinsically bad, but they’re not the only things that are intrinsically good or bad; there are things like justice or freedom or whatever. There are various other candidates that people have put forward, many of them, in fact, as being objectively good or bad. So those are also possibilities.

Lucas: When you mentioned that happiness or certain sorts of conscious states of sentient creatures can be seen as intrinsically good or valuable, keeping in mind the moral realism that you hold, what is the metaphysical status of experiences in the universe given this view? Is it that happiness is good based off of the application of reason and the rational space of ethics? Unpack the ontology of happiness and the metaphysics here a bit.

Peter: Well, of course it doesn’t change what happiness is to say that it’s of intrinsic value, but that is the claim that I’m making: that the world is a better place if it has more happiness in it and less suffering in it. That’s a judgment that I’m making about the state of the universe. Obviously, there have to be beings who can be happy or can be miserable, and that requires a conscious mind, but the judgment that the universe is better with more happiness and less suffering is mind independent. I think … Let’s imagine that there were beings that could feel pain and pleasure but could not make any judgments about anything of value, like some non-human animals, I guess. It would still be the case that the universe was better if those non-human animals suffered less and had more pleasure.

Lucas: Right. Because it would be a sort of intrinsic quality or property of the experience that it be valuable or disvaluable. So yeah, thanks so much for your time, Peter. It’s really been wonderful and informative. If people would like to follow you or check you out somewhere, where can they go ahead and do that?

Peter: I have a website, which actually I’m in the process of reconstructing a bit, but it’s Petersinger.info. There’s a Wikipedia page. If they wanna look at things that I’m involved in, they can look at thelifeyoucansave.org, which is the nonprofit organization that I’ve founded that recommends effective charities that people can donate to. That probably gives people a bit of an idea. There are books that I’ve written that discuss these things. I’ve probably mentioned The Point of View of the Universe, which goes into the things we’ve discussed today probably more thoroughly than anything else. For people who don’t wanna read a big book, there’s also the book on utilitarianism in Oxford University Press’ Very Short Introduction series, again co-authored with the same co-author as The Point of View of the Universe, Katarzyna de Lazari-Radek, and that’s just a hundred-page version of some of these arguments we’ve been discussing.

Lucas: Wonderful. Well, thanks again, Peter. We haven’t ever met in person, but hopefully I’ll catch you around the Effective Altruism conference track sometime soon.

Peter: Okay, hope so.

Lucas: Alright, thanks so much, Peter.

Hey, it’s post-podcast Lucas here, and I just wanted to chime in with some of my thoughts and tie this all into AI thinking. For me, the most consequential aspect of moral thought in this space, and of moral philosophy generally, is how much disagreement there is between people who’ve thought long and hard about these issues, what an enormous part of AI alignment this makes up, and the effects that different moral and metaethical views have on preferred AI alignment methodology.

Current general estimates by AI researchers put human-level AI on the decade-to-century timescale, with about a 50% probability by mid-century and with that probability obviously increasing over time, and it’s quite obvious that moral philosophy, ethics, and issues of value and meaning will not be solved on that timescale. So assume the worst-case success story, where technical alignment, coordination, and strategy issues continue in their standard, rather morally messy way, the way we currently and unreflectively deal with things, where moral information isn’t taken very seriously. In that case, I’m really hoping that technical alignment and coordination succeed well enough for us to create a minimally aligned system that we’re able to pull the brakes on while we work hard on issues of value, ethics, and meaning: the end towards which that AGI will be aimed. Otherwise, it seems very clear that, given all of this shared moral uncertainty, we risk value drift or catastrophically suboptimal or even negative futures.

Turning to Peter’s views that we discussed here today: if axioms of morality are accessible through reason alone, as the axioms of mathematics appear to be, then we ought to consider the implications for how we want to progress with AI systems and AI alignment more generally.

If we take human beings to be agents of limited or semi-rationality, then we could expect that some of us, or some fraction of us, have gained access to what might potentially be core axioms of the logical space of morality. When AI systems are trained on human data in order to infer and learn human preferences, given Peter’s view, this could be seen as a way of learning the moral thinking of imperfectly rational beings. This, or any empirical investigation, given Peter’s views, would not be able to arrive at any clear moral truth; rather, it would find areas where semi-rational beings like ourselves generally tend to converge in this space.

This would be useful, or at least passable, up until AGI, but if such a system is to be fully autonomous and safe, then a more robust form of alignment is necessary. Putting aside whatever reason might be and how it gives rational creatures access to self-evident truths and rational spaces: if the AGI we create is one day a fully rational agent, then it would perhaps arrive at the self-evident truths of mathematics and logic, and even morality, just as aliens on another planet might if they were fully rational, as is Peter’s view. If so, this would potentially be evidence of the view being true, and we can also reflect here that an AGI able to use reason to gain insight into the core truths of logical spaces could reason much better and more impartially than any human, and so fully explore and realize universal truths of morality.

At this point, we would essentially have a perfect moral reasoner on our hands with access to timeless universal truths. Now the question would be: could we trust it, and what reasoning or explanation given to humans by this moral oracle would ever be sufficient to satisfy and satiate our appetite to know moral truth and to be sure that we have arrived at it?

It’s above my pay grade what rationality or reason actually is, whether it might be prior to certain logical and mathematical axioms, how such a truth-seeking meta-awareness can grasp these truths as self-evident, or whether the self-evidence of the truths of mathematics and logic is programmed into us by evolution trying and failing over millions of years. But maybe that’s an issue for another time. Regardless, we’re doing philosophy, computer science, and political science on a deadline, so let’s keep working on getting it right.

If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

An image of Hurricane Michael making landfall October 10, 2018. Photo courtesy of NASA.

IPCC 2018 Special Report Paints Dire — But Not Completely Hopeless — Picture of Future


On Wednesday, October 10, the panhandle of Florida was struck by Hurricane Michael, which has already claimed over 30 lives and destroyed communities, homes and infrastructure across multiple states. Michael is the strongest hurricane in recorded history to make landfall in that region. And in coming years, it’s likely that we’ll continue to see an increase in record breaking storms — as well as record-breaking heat waves, droughts, floods, and wildfires.

Only two days before Michael unleashed its devastation on the United States, the United Nations Intergovernmental Panel on Climate Change (IPCC) released a dire report on the prospects for limiting global temperature rise to 1.5°C—and why we must meet this challenge head on.

In 2015, roughly during the time that the Paris Climate Agreement was being signed, global temperatures reached 1°C above pre-industrial levels. And we’re already feeling the impacts of this increase in the form of bigger storms, bigger wildfires, higher temperatures, melting arctic ice, etc.

The recent IPCC report concludes that, if society continues on its current trajectory — and even if the world abides by the Paris Climate Agreement — the planet will hit 1.5°C of warming in a matter of decades, and possibly in the next 12 years. And every half degree more that temperatures rise is expected to bring on even more extreme effects. Even if we can limit global warming to 1.5°C, the report predicts we’ll lose most coral reefs, sea levels will rise and flood many coastal communities, more people around the world will experience extreme heat waves, and other natural disasters can be expected to increase.

As global temperatures rise, they don’t rise evenly across the globe. Air over land is expected to reach higher temperatures than air over the oceans, so what could be a 1.5°C increase on average across Earth might be a 3-4.5°C increase in some parts of the world. This has the potential to trigger deadly heat waves, wildfires and droughts, which would also negatively impact local ecosystems and farmland.

But what about if we reach 2°C? This level of temperature increase is often floated as the highest limit the world can handle without too much suffering – but how much worse will it be than 1.5°C?

A difference of 0.5°C may not seem like much, but it could mean the difference between a world with some surviving coral reefs, and a world in which they — and many other species — are all destroyed. Two degrees could lead to an extra 420 million people experiencing extreme and possibly deadly heat waves. Some regions of the world will see increases in temperatures as high as 4-6°C. Sea levels are predicted to rise an extra 10 centimeters at 2°C versus 1.5°C, which could impact an extra 10 million people along coastal areas.

Meanwhile, human health will deteriorate; diseases like malaria and dengue fever could become more prevalent and spread into new regions with this increase in temperature. Farmland for many staple crops could decrease, and even livestock are expected to be adversely affected as feed quality and water availability may decrease.

The list goes on and on. But perhaps one of the greatest threats of climate change is that those who will likely be the hardest hit by increasing temperatures are those who are already among the poorest and most vulnerable.

Yet we’re not quite out of time. As the report highlights, all of these problems arise as a result of society taking little to no action. But what if we did start taking steps to reduce global warming? What if we could get governments and corporations to recognize the need to reduce emissions and switch to clean, alternative, renewable energy sources? What if individuals made changes to their own lifestyles while also encouraging their government leaders to take action?

The report suggests that under those circumstances, if we can achieve global net-zero emissions — that is, such low levels of carbon or other pollutants are emitted that they can be absorbed by trees and soil — then we can still prevent temperatures from exceeding 1.5°C. Temperatures will still increase somewhat as a result of current emissions, but there’s still time to curtail the most severe effects.

There are other organizations that believe we can achieve global net-zero emissions as well. For example, this summer, the Exponential Climate Action Roadmap was released, which offers a roadmap to achieve the goals of the Paris Climate Agreement by 2030. Or there’s The Solutions Project, which maps out steps to quickly achieve 100% renewable energy. And Drawdown provides 80 steps we can take to reduce emissions.

We don’t have much time left, but it’s not too late. The prospects are dire if we continue on our current trajectory, but if society can recognize the urgency of the situation and come together to take action, there’s still hope of keeping the worst effects of climate change at bay.

An edited version of this article was originally published on Metro. Photo courtesy of NASA.

Genome Editing and the Future of Biowarfare: A Conversation with Dr. Piers Millett

In both 2016 and 2017, genome editing made it into the annual Worldwide Threat Assessment of the US Intelligence Community. (Update: it was also listed in the 2019 Threat Assessment.) One of biotechnology’s most promising modern developments, it had now been deemed a danger to US national security – and then, after two years, it was dropped from the list again. All of which raises the question: what, exactly, is genome editing, and what can it do? 

Most simply, the phrase “genome editing” represents tools and techniques that biotechnologists use to edit the genome; that is, the DNA or RNA of plants, animals, and bacteria. Though the earliest versions of genome editing technology have existed for decades, the introduction of CRISPR in 2013 “brought major improvements to the speed, cost, accuracy, and efficiency of genome editing.”

CRISPR, or Clustered Regularly Interspaced Short Palindromic Repeats, is actually an ancient mechanism used by bacteria to remove viruses from their DNA. In the lab, researchers have discovered they can replicate this process by creating a synthetic RNA strand that matches a target DNA sequence in an organism’s genome. The RNA strand, known as a “guide RNA,” is attached to an enzyme that can cut DNA. After the guide RNA locates the targeted DNA sequence, the enzyme cuts the genome at this location. DNA can then be removed, and new DNA can be added. CRISPR has quickly become a powerful tool for editing genomes, with research taking place in a broad range of plants and animals, including humans.
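To make the “find, cut, and replace” logic of that description concrete, here is a deliberately toy sketch in Python. It treats the genome as a plain text string and the guide RNA as a search pattern; the sequences and the function name are invented for illustration, and nothing about the real molecular biology (delivery into cells, DNA repair pathways, off-target cuts) is modeled.

```python
# Toy illustration of the guide-RNA "find, cut, replace" idea described above.
# All sequences and names are hypothetical; this is string matching, not biology.

def edit_sequence(genome: str, guide: str, replacement: str) -> str:
    """Locate the site matching the guide, cut it out, and splice in new DNA."""
    site = genome.find(guide)        # the guide "locates" its target sequence
    if site == -1:
        return genome                # no matching site: nothing gets cut
    # Cut out the targeted sequence and add the replacement DNA in its place.
    return genome[:site] + replacement + genome[site + len(guide):]

if __name__ == "__main__":
    genome = "ATGCCGTAAGCTTACGGATC"
    edited = edit_sequence(genome, guide="AAGCTT", replacement="GGGCCC")
    print(edited)  # ATGCCGTGGGCCCACGGATC
```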

A significant percentage of genome editing research focuses on eliminating genetic diseases. However, with tools like CRISPR, it also becomes possible to alter a pathogen’s DNA to make it more virulent and more contagious. Other potential uses include the creation of “‘killer mosquitos,’ plagues that wipe out staple crops, or even a virus that snips at people’s DNA.”

But does genome editing really deserve a spot among the ranks of global threats like nuclear weapons and cyber hacking? To many members of the scientific community, its inclusion felt like an overreaction. Among them was Dr. Piers Millett, a science policy and international security expert whose work focuses on biotechnology and biowarfare.

Millett wasn’t surprised that biotechnology in general made it into these reports: what he didn’t expect was for one specific tool, genome editing, to be called out. In his words: “I would personally be much more comfortable if it had been a broader sentiment to say ‘Hey, there’s a whole bunch of emerging biotechnologies that could destabilize our traditional risk equation in this space, and we need to be careful with that.’ …But calling out specifically genome editing, I still don’t fully understand any rationale behind it.”

This doesn’t mean, however, that the misuse of genome editing is not cause for concern. Even proper use of the technology often involves the genetic engineering of biological pathogens, research that could very easily be weaponized. Says Millett, “If you’re deliberately trying to create a pathogen that is deadly, spreads easily, and that we don’t have appropriate public health measures to mitigate, then that thing you create is amongst the most dangerous things on the planet.”

 

Biowarfare Before Genome Editing

A medieval depiction of the Black Plague.

Developments such as CRISPR present new possibilities for biowarfare, but biological weapons caused concern long before the advent of gene editing. The first recorded use of biological pathogens in warfare dates back to 600 BC, when Solon, an Athenian statesman, poisoned enemy water supplies during the siege of Krissa. Many centuries later, during the 1346 AD siege of Caffa, the Mongol army catapulted plague-infested corpses into the city, which is thought to have contributed to the 14th century Black Death pandemic that wiped out up to two thirds of Europe’s population.

Though the use of biological weapons in war was internationally banned by the 1925 Geneva Protocol, state biowarfare programs continued and in many cases expanded during World War II and the Cold War. In 1972, as evidence of these violations mounted, 103 nations signed a treaty known as the Biological Weapons Convention (BWC). The treaty bans the creation of biological arsenals and outlaws offensive biological research, though defensive research is permissible. Each year, signatories are required to submit certain information about their biological research programs to the United Nations, and violations reported to the UN Security Council may result in an inspection.

But inspections can be vetoed by the permanent members of the Security Council, and there are no firm guidelines for enforcement. On top of this, the line that separates permissible defensive biological research from its offensive counterpart is murky and remains a subject of controversy. And though the actual numbers remain unknown, pathologist Dr. Riedel asserts that “the number of state-sponsored programs has increased significantly during the last 30 years.”

 

Dual Use Research

So biological warfare remains a threat, and it’s one that genome editing technology could hypothetically escalate. Genome editing falls into a category of research and technology that’s known as “dual-use” – that is, it has the potential both for beneficial advances and harmful misuses. “As an enabling technology, it enables you to do things, so it is the intent of the user that determines whether that’s a positive thing or a negative thing,” Millett explains.

And ultimately, what’s considered positive or negative is a matter of perspective. “The same activity can look positive to one group of people, and negative to another. How do we decide which one is right and who gets to make that decision?” Genome editing could be used, for example, to eradicate disease-carrying mosquitoes, an application that many would consider positive. But as Millett points out, some cultures view such blatant manipulation of the ecosystem as harmful or “sacrilegious.”

Millett believes that the most effective way to deal with dual-use research is to get the researchers engaged in the discussion. “We have traditionally treated the scientific community as part of the problem,” he says. “I think we need to move to a point where the scientific community is the key to the solution, where we’re empowering them to be the ones who identify the risks, the ones who initiate the discussion about what forms this research should take.” A good scientist, he adds, is one “who’s not only doing good research, but doing research in a good way.”

 

DIY Genome Editing

But there is a growing worry that dangerous research might be undertaken by those who are not scientists at all. There are already a number of do-it-yourself (DIY) genome editing kits on the market today, and these relatively inexpensive kits allow anyone, anywhere to edit DNA using CRISPR technology. Do these kits pose a real security threat? Millett explains that risk level can be assessed based on two distinct criteria: likelihood and potential impact. Where the “greatest” risks lie will depend on the criterion.

“If you take risk as a factor of likelihood and impact, the most likely attacks will come from low-powered actors, but have a minimal impact and be based on traditional approaches, existing pathogens, and well characterized risks and threats,” Millett explains. DIY genome editors, for example, may be great in number but are likely unable to produce a biological agent capable of causing widespread harm.

“If you switch it around and say where are the most high impact threats going to come from, then I strongly believe that that requires a level of sophistication and technical competency and resources that are not easy to acquire at this point in time,” says Millett. “If you’re looking for advanced stuff: who could misuse genome editing? States would be my bet in the foreseeable future.”
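One way to read Millett’s framing is as scoring each class of actor on two separate axes, likelihood and impact, and noting that which risks look “greatest” depends on which axis you sort by. The short sketch below only illustrates that bookkeeping; the actor categories echo the discussion above, but the numeric scores are invented for illustration and carry no empirical weight.

```python
# Illustrative only: ranking hypothetical actor classes by likelihood vs. impact.
# The scores are made up; the point is that the two orderings differ.

actors = {
    # actor class: (likelihood, potential impact), each on an arbitrary 1-10 scale
    "DIY genome editor":             (8, 2),
    "Sophisticated non-state group": (4, 5),
    "State program":                 (2, 9),
}

by_likelihood = sorted(actors, key=lambda a: actors[a][0], reverse=True)
by_impact = sorted(actors, key=lambda a: actors[a][1], reverse=True)

print("Most likely sources of attack:", by_likelihood)   # low-powered actors first
print("Highest-impact sources:      ", by_impact)        # state programs first
```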

State Bioweapons Programs

Large-scale bioweapons programs, such as those run by states, pose a double threat: there is always the possibility of accidental release alongside the potential for malicious use. Millett believes that these threats are roughly equal, a conclusion backed by a thousand page report from Gryphon Scientific, a US defense contractor.

Historically, both accidental release and malicious use of biological agents have caused damage. In 1979, there was the accidental release of aerosolized anthrax from the Sverdlovsk bioweapons production facility in the Soviet Union – a clogged air filter in the facility had been removed, but had not been replaced. Ninety-four people were affected by the incident and at least 64 died, along with a number of livestock. The Soviet secret police attempted a cover-up and it was not until years later that the administration admitted the cause of the outbreak.

More recently, Millett says, a US biodefense facility “failed to kill the anthrax that it sent out for various lab trials, and ended up sending out really nasty anthrax around the world.” Though no one was infected, a 2015 government investigation revealed that “over the course of the last decade, 86 facilities in the United States and seven other countries have received low concentrations of live spore samples… thought to be completely inactivated.”

These incidents pale, however, in comparison with Japan’s intentional use of biological weapons during the 1930s and 40s. There is “a published history that suggests up to 30,000 people were killed in China by the Japanese biological weapons program during the lead up to World War II. And if that data is accurate, that is orders of magnitude bigger than anything else,” Millett says.

Given the near-impossibility of controlling the spread of disease, a deliberate attack may have accidental effects far beyond what was intended. The Japanese, for example, may have meant to target only a few Chinese villages, only to unwittingly trigger an epidemic. There are reports, in fact, that thousands of Japan’s own soldiers became infected during a biological attack in 1941.

Despite the 1972 ban on biological weapons programs, Millett believes that many countries still have the capacity to produce biological weapons. As an example, he explains that the Soviets developed “a set of research and development tools that would answer the key questions and give you all the key capabilities to make biological weapons.”

The BWC only bans offensive research, and “underneath the umbrella of a defensive program,” Millett says, “you can do a whole load of research and development to figure out what you would want to weaponize if you were going to make a weapon.” Then, all a country needs to start producing those weapons is “the capacity to scale up production very, very quickly.” The Soviets, for example, built “a set of state-based commercial infrastructure to make things like vaccines.” On a day-to-day basis, they were making things the Soviet Union needed. “But they could be very radically rebooted and repurposed into production facilities for their biological weapons program,” Millett explains. This is known as a “breakout program.”

Says Millett, “I believe there are many, many countries that are well within the scope of a breakout program … so it’s not that they necessarily at this second have a fully prepared and worked-out biological weapons program that they can unleash on the world tomorrow, but they might well have all of the building blocks they need to do that in place, and a plan for how to turn their existing infrastructure towards a weapons program if they ever needed to. These components would be permissible under current international law.”

 

Biological Weapons Convention

This unsettling reality raises questions about the efficacy of the BWC – namely, what does it do well, and what doesn’t it do well? Millett, who worked for the BWC for well over a decade, has a nuanced view.

“The very fact that we have a ban on these things is brilliant,” he says. “We’re well ahead on biological weapons than many other types of weapons systems. We only got the ban on nuclear weapons – and it was only joined by some tiny number of countries – last year. Chemical weapons, only in 1995. The ban on biological weapons is hugely important. Having a space at the international level to talk about those issues is very important.” But, he adds, “we’re rapidly reaching the end of the space that I can be positive about.”

The ban on biological weapons was motivated, at least in part, by the sense that – unlike chemical weapons – they weren’t particularly useful. Traditionally, chemical and biological weapons were dealt with together. The 1925 Geneva Protocol banned both, and the original proposal for the Biological Weapons Convention, submitted by the UK in 1969, would have dealt with both. But the chemical weapons ban was ultimately dropped from the BWC, Millett says, “because that was during Vietnam, and so there were a number of chemical agents that were being used in Vietnam that weren’t going to be banned.” Once the scope of the ban had been narrowed, however, both the US and the USSR signed on.

Millett describes the resulting document as “aspirational.” He explains, “The Biological Weapons Convention is four pages long, whereas the Chemical Weapons Convention is 200 pages long, give or take.” And the difference “is about the teeth in the treaty.”

“The BWC is…a short document that’s basically a commitment by states not to make these weapons. The Chemical Weapons Convention is an international regime with an organization, with an inspection regime intended to enforce that. Under the BWC, if you are worried about another state, you’re meant to try to resolve those concerns amicably. But if you can’t do that, we move onto Article Six of the Convention, where you report it to the Security Council. The Security Council is meant to investigate it, but of course if you’re a permanent member of the Security Council, you can veto that, so that doesn’t happen.”

 

De-escalation

One easy way that states can avoid raising suspicion is to be more transparent. As Millett puts it, “If you’re not doing naughty things, then it’s on you to demonstrate that you’re not.” This doesn’t mean revealing everything to everybody. It means finding ways to show other states that they don’t need to worry.

As an example, Millett cites the heightened security culture that developed in the US after 9/11. Following the 2001 anthrax letter attacks, as well as a large investment in US biodefense programs, an initiative was started to prevent foreigners from working in those biodefense facilities. “I’m very glad they didn’t go down that path,” says Millett, “because the greatest risk, I think, was not that a foreign national would sneak in.” Rather, “the advantage of having foreign nationals in those programs was at the international level, when country Y stands up and accuses the US of having an illicit bioweapons program hidden in its biodefense program, there are three other countries that can stand up and say, ‘Well, wait a minute. Our scientists are in those facilities. We work very closely with that program, and we see no evidence of what you’re saying.’”

Historically, secrecy surrounding bioweapons programs has led other countries to begin their own research. Before World War I, the British began exploring the use of bioweapons. The Germans were aware of this. By the onset of the war, the British had abandoned the idea, but the Germans, not knowing this, began their own bioweapons program in an attempt to keep up. By World War II, Germany no longer had a bioweapons program. But the Allies believed they still did, and the U.S. bioweapons program was born of such fears.

 

What now?

Asked if he believes genome editing is a bioweapons “game changer,” Millett says no. “I see it as an enabling technology in the short to medium term, then maybe with longer-term implications, but then we’re out into the far distance of what we can reasonably talk about and predict,” he says. “Certainly for now, I think its big impact is it makes it easier, faster, cheaper, and more reliable to do things that you could do using traditional approaches.”

But as biotechnology continues to evolve, so too will biowarfare. For example, it will eventually be possible for governments to alter specific genes in their own populations. “Imagine aerosolizing a lovely genome editor that knocks out a specifically nasty gene in your population,” says Millett. “It’s a passive thing. You breathe it in, and it retroactively alters the population.”

A government could use such technology to knock out a gene linked to cancer or other diseases. But, Millett says, “what would happen if you came across a couple of genes that at an individual level were not going to have an impact, but at a population level were connected with something, say, like IQ?” With the help of a genome editor, a government could make their population smarter, on average, by a few IQ points.

“There’s good economic data that says that is … statistically important,” Millett says. “The GDP of the country will be noticeably affected if we could just get another two or three percent IQ points. There are direct national security implications of that. If, for example, Chinese citizens got smarter on average over the next couple of generations by a couple of IQ points per generation, that has national security implications for both the UK and the US.”

For now, such an endeavor remains in the realm of science fiction. But technology is evolving at a breakneck speed, and it’s more important than ever to consider the potential implications of our advancements. That said, Millett is optimistic about the future. “I think the key is the distribution of bad actors versus good actors,” he says. As long as the bad actors remain the minority, there is more reason to be excited for the future of biotechnology than there is to be afraid of it.

Dr. Piers Millett holds fellowships at the Future of Humanity Institute, the University of Oxford, and the Woodrow Wilson Center for International Policy and works as a consultant for the World Health Organization. He also served at the United Nations as the Deputy Head of the Biological Weapons Convention.  

Podcast: Martin Rees on the Prospects for Humanity: AI, Biotech, Climate Change, Overpopulation, Cryogenics, and More

How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks?

In this special podcast episode, Ariel speaks with Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity’s existential risks and the role that technology plays in determining our collective future. Martin is a cosmologist and space scientist based at the University of Cambridge. He has been director of The Institute of Astronomy and Master of Trinity College, and he was president of The Royal Society, which is the UK’s Academy of Science, from 2005 to 2010. In 2005 he was also appointed to the UK’s House of Lords.

Topics discussed in this episode include:

  • Why Martin remains a technical optimist even as he focuses on existential risks
  • The economics and ethics of climate change
  • How AI and automation will make it harder for Africa and the Middle East to economically develop
  • How high expectations for health care and quality of life also put society at risk
  • Why growing inequality could be our most underappreciated global risk
  • Martin’s view that biotechnology poses greater risk than AI
  • Earth’s carrying capacity and the dangers of overpopulation
  • Space travel and why Martin is skeptical of Elon Musk’s plan to colonize Mars
  • The ethics of artificial meat, life extension, and cryogenics
  • How intelligent life could expand into the galaxy
  • Why humans might be unable to answer fundamental questions about the universe

Books and resources discussed in this episode include

You can listen to the podcast above and read the full transcript below. Check out our previous podcast episodes on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel: Hello, I am Ariel Conn with The Future of Life Institute. Now, our podcasts lately have dealt with artificial intelligence in some way or another, and with a few focusing on nuclear weapons, but FLI is really an organization about existential risks, and especially x-risks that are the result of human action. These cover a much broader field than just artificial intelligence.

I’m excited to be hosting a special segment of the FLI podcast with Martin Rees, who has just come out with a book that looks at the ways technology and science could impact our future both for good and bad. Martin is a cosmologist and space scientist. His research interests include galaxy formation, active galactic nuclei, black holes, gamma ray bursts, and more speculative aspects of cosmology. He’s based in Cambridge where he has been director of The Institute of Astronomy, and Master of Trinity College. He was president of The Royal Society, which is the UK’s Academy of Science, from 2005 to 2010. In 2005 he was also appointed to the UK’s House of Lords. He holds the honorary title of Astronomer Royal. He has received many international awards for his research and belongs to numerous academies, including The National Academy of Sciences, the Russian Academy, the Japan Academy, and the Pontifical Academy.

He’s on the board of The Princeton Institute for Advanced Study, and has served on many bodies connected with international collaboration and science, especially threats stemming from humanity’s ever heavier footprint on the planet and the runaway consequences of ever more powerful technologies. He’s written seven books for the general public, and his most recent book is about these threats. It’s the reason that I’ve asked him to join us today. First, Martin thank you so much for talking with me today.

Martin: Good to be in touch.

Ariel: Your new book is called On the Future: Prospects for Humanity. In his endorsement of the book Neil deGrasse Tyson says, “From climate change, to biotech, to artificial intelligence, science sits at the center of nearly all decisions that civilization confronts to assure its own survival.”

I really liked this quote, because I felt like it sums up what your book is about. Basically science and the future are too intertwined to really look at one without the other. And whether the future turns out well, or whether it turns out to be the destruction of humanity, science and technology will likely have had some role to play. First, do you agree with that sentiment? Am I accurate in that description?

Martin: No, I certainly agree, and that’s truer of this century than ever before, because of the greater scientific knowledge we have and the greater power to use it for good or ill; tremendously advanced technologies can now be misused by a small number of people.

Ariel: You’ve written in the past about how you think we have essentially a 50/50 chance of some sort of existential risk. One of the things that I noticed about this most recent book is you talk a lot about the threats, but to me it felt still like an optimistic book. I was wondering if you could talk a little bit about, this might be jumping ahead a bit, but maybe what the overall message you’re hoping that people take away is?

Martin: Well, I describe myself as a technical optimist, but political pessimist because it is clear that we couldn’t be living such good lives today with seven and a half billion people on the planet if we didn’t have the technology which has been developed in the last 100 years, and clearly there’s a tremendous prospect of better technology in the future. But on the other hand what is depressing is the very big gap between the way the world could be, and the way the world actually is. In particular, even though we have the power to give everyone a decent life, the lot of the bottom billion people in the world is pretty miserable and could be alleviated a lot simply by the money owned by the 1,000 richest people in the world.

We have a very unjust society, and the politics is not optimizing the way technology is used for human benefit. My view is that it’s the politics which is an impediment to the best use of technology, and the reason this is important is that as time goes on we’re going to have a growing population which is ever more demanding of energy and resources, putting more pressure on the planet and its environment and its climate, but we are also going to have to deal with this if we are to allow people to survive and avoid some serious tipping points being crossed.

That’s the problem of the collective effect of us on the planet, but there’s another effect, which is that these new technologies, especially bio, cyber, and AI allow small groups of even individuals to have an effect by error or by design, which could cascade very broadly, even globally. This, I think, makes our society very brittle. We’re very interdependent, and on the other hand it’s easy for there to be a breakdown. That’s what depresses me, the gap between the way things could be, and the downsides if we collectively overreach ourselves, or if individuals cause disruption.

Ariel: You mentioned actually quite a few things that I’m hoping to touch on as we continue to talk. I’m almost inclined, before we get too far into some of the specific topics, to bring up an issue that I personally have. It’s connected to a comment that you make in the book. I think you were talking about climate change at the time, and you say that if we heard that there was 10% chance that an asteroid would strike in 2100 people would do something about it.

We wouldn’t say, “Oh, technology will be better in the future so let’s not worry about it now.” Apparently I’m very cynical, because I think that’s exactly what we would do. And I’m curious, what makes you feel more hopeful that even with something really specific like that, we would actually do something and not just constantly postpone the problem to some future generation?

Martin: Well, I agree. We might not even in that case, but the reason I gave that as a contrast to our response to climate change is that there you could imagine a really sudden catastrophe happening if the asteroid does hit, whereas the problem with climate change is really that it’s first of all, the effect is mainly going to be several decades in the future. It’s started to happen, but the really severe consequences are decades away. But also there’s an uncertainty, and it’s not a sort of sudden event we can easily visualize. It’s not at all clear therefore, how we are actually going to do something about it.

In the case of the asteroid, it would be clear what the strategy would be to try and deal with it, whereas in the case of climate there are lots of ways, and the problem is that the consequences are decades away, and they’re global. Most of the political focus obviously is on short-term worry, short-term problems, and on national or more local problems. Anything we do about climate change will have an effect which is mainly for the benefit of people in quite different parts of the world 50 years from now, and it’s hard to keep those issues up the agenda when there are so many urgent things to worry about.

I think you’re maybe right that even if there was a threat of an asteroid, there may be the same sort of torpor, and we’d fail to deal with it, but I thought that’s an example of something where it would be easier to appreciate that it would really be a disaster. In the case of the climate it’s not so obviously going to be a catastrophe that people are motivated now to start thinking about it.

Ariel: I’ve heard it go both ways that either climate change is yes, obviously going to be bad but it’s not an existential risk so therefore those of us who are worried about existential risk don’t need to worry about it, but then I’ve also heard people say, “No, this could absolutely be an existential risk if we don’t prevent runaway climate change.” I was wondering if you could talk a bit about what worries you most regarding climate.

Martin: First of all, I don’t think it is an existential risk, but it’s something we should worry about. One point I make in my book is that I think the debate, which makes it hard to have an agreed policy on climate change, stems not so much from differences about the science — although of course there are some complete deniers — but from differences about ethics and economics. There are some people of course who completely deny the science, but most people accept that CO2 is warming the planet, and most people accept there’s quite a big uncertainty, in fact a true uncertainty, about how much warmer you get for a given increase in CO2.

But even among those who accept the IPCC projections of climate change, and the uncertainties therein, I think there’s a big debate, and the debate is really between people who apply a standard economic discount rate, where you discount the future at a rate of, say, 5%, and those who think we shouldn’t do it in this context. If you apply a 5% discount rate as you would if you were deciding whether it’s worth putting up an office building or something like that, then of course you don’t give any weight to what happens after about, say, 2050.

As Bjorn Lomborg, the well-known environmentalist, argues, we should therefore give a lower priority to dealing with climate change than to helping the world’s poor in other more immediate ways. He is consistent given his assumptions about the discount rate. But many of us would say that in this context we should not discount the future so heavily. We should care about the life chances of a baby born today as much as we should care about the life chances of those of us who are now middle-aged and won’t be alive at the end of the century. We should also be prepared to pay an insurance premium now in order to remove or reduce the risk of the worst-case climate scenarios.

I think the debate about what to do about climate change is essentially about ethics. Do we want to discriminate on grounds of date of birth and not care about the life chances of those who are now babies, or are we prepared to make some sacrifices now in order to reduce a risk which they might encounter in later life?
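(A rough illustration of the arithmetic behind this point, assuming a constant 5% annual discount rate and a climate benefit arriving roughly 80 years from now, around 2100:

$$
\text{present-value weight} = \frac{1}{(1 + 0.05)^{80}} \approx 0.02
$$

On that assumption, a dollar of benefit in 2100 counts for only about two cents today, which is why a standard commercial discount rate gives little weight to outcomes much beyond mid-century, whereas a rate near zero treats the life chances of future generations roughly on a par with our own.)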

Ariel: Do you think the risks are only going to be showing up that much later? We are already seeing these really heavy storms striking. We’ve got Florence in North Carolina right now. A super typhoon hit southern China and the Philippines. We had Maria, and I’m losing track of all the hurricanes that we’ve had. We’ve had these huge hurricanes over the last couple of years. We saw California and much of the west coast of the US in flames this year. Do you think we really need to wait that long?

Martin: I think it’s generally agreed that extreme weather is now happening more often as a consequence of climate change and the warming of the ocean, and that this will become a more serious trend, but by the end of the century of course it could be very serious indeed. And the main threat is of course to people in the disadvantaged parts of the world. If you take these recent events, it’s been far worse in the Philippines than in the United States because they’re not prepared for it. Their houses are more fragile, etc.

Ariel: I don’t suppose you have any thoughts on how we get people to care more about others? Because it does seem to be in general that sort of worrying about myself versus worrying about other people. The richer countries are the ones who are causing more of the climate change, and it’s the poorer countries who seem to be suffering more. Then of course there’s the issue of the people who are alive now versus the people in the future.

Martin: That’s right, yes. Well, I think most people do care about their children and grandchildren, and so to that extent they do care about what things will be like at the end of the century, but as you say, the extra political problem is that the cause of the CO2 emissions is mainly what’s happened in the advanced countries, and the downside is going to be more seriously felt by those in remote parts of the world. It’s easy to overlook them, and hard to persuade people that we ought to make a sacrifice which will be mainly for their benefit.

I think incidentally that’s one of the other things that we have to ensure happens, is a narrowing of the gap between the lifestyles and the economic advantages in the advanced and the less advanced parts of the world. I think that’s going to be in everyone’s interest because if there continues to be great inequality, not only will the poorer people be more subject to threats like climate change, but I think there’s going to be massive and well-justified discontent, because unlike in the earlier generations, they’re aware of what they’re missing. They all have mobile phones, they all know what it’s like, and I think there’s going to be embitterment leading to conflict if we don’t narrow this gap, and this requires I think a sacrifice on the part of the wealthy nations to subsidize developments in these poorer countries, especially in Africa.

Ariel: That sort of ties into another question that I had for you, and that is, what do you think is the most underappreciated threat that maybe isn’t quite as obvious? You mentioned the fact that we have these people in poorer countries who are able to more easily see what they’re missing out on. Inequality is a problem in and of itself, but also just that people are more aware of the inequality seems like a threat that we might not be as aware of. Are there others that you think are underappreciated?

Martin: Yes. Just to go back, that threat is of course very serious because by the end of the century there might be 10 times as many people in Africa as in Europe, and of course they would then have every justification in migrating towards Europe with the result of huge disruption. We do have to care about those sorts of issues. I think there are all kinds of reasons apart from straight ethics why we should ensure that the less developed countries, especially in Africa, do have a chance to close the gap.

Incidentally, one thing which is a handicap for them is that they won’t have the route to prosperity followed by the so called “Asian tigers,” which were able to have high economic growth by undercutting the labor cost in the west. Now what’s happening is that with robotics it’s possible to, as it were, re-shore lots of manufacturing industry back to wealthy countries, and so Africa and the Middle East won’t have the same opportunity the far eastern countries did to catch up by undercutting the cost of production in the west.

This is another reason why it’s going to be a big challenge. That’s something which I think we don’t worry about enough, and need to worry about, because if the inequalities persist when everyone is able to move easily and knows exactly what they’re missing, then that’s a recipe for a very dangerous and disruptive world. I would say that is an underappreciated threat.

Another thing I would count as important is that we are as a society very brittle, and very unstable because of high expectations. I’d like to give you another example. Suppose there were to be a pandemic, not necessarily a genetically engineered terrorist one, but a natural one. Contrast that with what happened in the 14th century, when the Bubonic Plague, the Black Death, killed nearly half the people in certain towns and the rest went on fatalistically. If we had some sort of plague which affected even 1% of the population of the United States, there’d be complete social breakdown, because that would overwhelm the capacity of hospitals, and people, unless they are wealthy, would feel they weren’t getting their entitlement of healthcare. And if that was a matter of life and death, that’s a recipe for social breakdown. I think given the high expectations of people in the developed world, we are far more vulnerable to the consequences of these breakdowns, and pandemics, and the failures of electricity grids, et cetera, than in the past when people were more robust and more fatalistic.

Ariel: That’s really interesting. Is it essentially because we expect to be leading these better lifestyles, just that expectation could be our downfall if something goes wrong?

Martin: That’s right. And of course, if we know that there are cures available to some disease and there’s not the hospital capacity to offer it to all the people who are afflicted with the disease, then naturally that’s a matter of life and death, and that is going to promote social breakdown. This is a new threat which is of course a downside of the fact that we can at least cure some people.

Ariel: There’s two directions that I want to go with this. I’m going to start with just transitioning now to biotechnology. I want to come back to issues of overpopulation and improving healthcare in a little bit, but first I want to touch on biotech threats.

One of the things that’s been a little bit interesting for me is that when I first started at FLI three years ago we were very concerned about biotechnology. CRISPR was really big. It had just sort of exploded onto the scene. Now, three years later I’m not hearing quite as much about the biotech threats, and I’m not sure if that’s because something has actually changed, or if it’s just because at FLI I’ve become more focused on AI and therefore stuff is happening but I’m not keeping up with it. I was wondering if you could talk a bit about what some of the risks you see today are with respect to biotech?

Martin: Well, let me say I think we should worry far more about bio threats than about AI in my opinion. I think as far as the bio threats are concerned, then there are these new techniques. CRISPR, of course, is a very benign technique if it’s used to remove a single damaging gene that gives you a particular disease, and also it’s less objectionable than traditional GM because it doesn’t cross the species barrier in the same way, but it does allow things like a gene drive where you make a species extinct by making it sterile.

That’s good if you’re wiping out a mosquito that carries a deadly virus, but there’s a risk of some effect which distorts the ecology and has a cascading consequence. There are risks of that kind, but more important, I think, there is a risk of the misuse of these techniques, and not just CRISPR, but for instance the gain-of-function techniques that were used in 2011 in Wisconsin and in Holland to make influenza virus both more virulent and more transmissible, things like that which can be done in a more advanced way now I’m sure.

These are clearly potentially dangerous, even if experimenters have a good motive, then the viruses might escape, and of course they are the kinds of things which could be misused. There have, of course, been lots of meetings, you have been at some, to discuss among scientists what the guidelines should be. How can we ensure responsible innovation in these technologies? These are modeled on the famous Conference in Asilomar in the 1970s when recombinant DNA was first being discussed, and the academics who worked in that area, they agreed on a sort of cautious stance, and a moratorium on some kinds of experiments.

But now they’re trying to do the same thing, and there’s a big difference. One is that these scientists are now more global. It’s not just a few people in North America and Europe. They’re global, and there are strong commercial pressures, and the techniques are far more widely understood. Bio-hacking is almost a student recreation. This means, in my view, that there’s a big danger, because even if we have regulations about certain things that can’t be done because they’re dangerous, enforcing those regulations globally is going to be as hopeless as it is now to enforce the drug laws, or to enforce the tax laws globally. Something which can be done will be done by someone somewhere, whatever the regulations say, and I think this is very scary. Consequences could cascade globally.

Ariel: Do you think that the threat is more likely to come from something happening accidentally, or intentionally?

Martin: I don’t know. I think it could be either. Certainly it could be something accidental from gene drive, or releasing some dangerous virus, but I think if we can imagine it happening intentionally, then we’ve got to ask what sort of people might do it? Governments don’t use biological weapons because you can’t predict how they will spread and who they’d actually kill, and that would be an inhibiting factor for any terrorist group that had well-defined aims.

But my worst nightmare is some person, and there are some, who think that there are too many human beings on the planet, and if they combine that view with the mindset of extreme animal rights people, etc, they might think it would be a good thing for Gaia, for Mother Earth, to get rid of a lot of human beings. They’re the kind of people who, with access to this technology, might have no compunction in releasing a dangerous pathogen. This is the kind of thing that worries me.

Ariel: I find that interesting because it ties into the other question that I wanted to ask you about, and that is the idea of overpopulation. I’ve read it both ways, that overpopulation is in and of itself something of an existential risk, or a catastrophic risk, because we just don’t have enough resources on the planet. You actually made an interesting point, I thought, in your book where you point out that we’ve been thinking that there aren’t enough resources for a long time, and yet we keep getting more people and we still have plenty of resources. I thought that was sort of interesting and reassuring.

But I do think at some point that does become an issue. And then at the same time we’re seeing this huge push, understandably, for improved healthcare, and expanding life spans, and trying to save as many lives as possible, and making those lives last as long as possible. How do you resolve those two sides of the issue?

Martin: It’s true, of course, as you imply, that the population has doubled in the last 50 years, and there were doomsters who in the 1960s and ’70s thought that there would be mass starvation by now, and there hasn’t been because food production has more than kept pace. If there are famines today, as of course there are, it’s not because of overall food shortages. It’s because of wars, or maldistribution of money to buy the food. Up until now things have gone fairly well, but clearly there are limits to the food that can be produced on the earth.

All I would say is that we can’t really say what the carrying capacity of the earth is, because it depends so much on the lifestyle of people. As I say in the book, the world couldn’t sustainably have 2 billion people if they all lived like present day Americans, using as much energy, and burning as much fossil fuels, and eating as much beef. On the other hand you could imagine lifestyles which are very sort of austere, where the earth could carry 10, or even 20 billion people. We can’t set an upper limit, but all we can say is that given that it’s fairly clear that the population is going to rise to about 9 billion by 2050, and it may go on rising still more after that, we’ve got to ensure that the way in which the average person lives is less profligate in terms of energy and resources, otherwise there will be problems.

I think we should also do what we can to ensure that after 2050 the population turns around and goes down. The base scenario is one where it goes on rising, as it may if people choose to have large families even when they have the choice. That could happen, and of course as you say, life extension is going to have an effect on society generally, but obviously on the overall population too. I think it would be more benign if the population of 9 billion in 2050 was a peak and it started going down after that.

And it’s not hopeless, because the actual number of births per year has already started going down. The reason the population is still going up is because more babies survive, and most of the people in the developing world are still young, and if they live as long as people in advanced countries do, then of course that’s going to increase the population even for a steady birth rate. That’s why, unless there’s a real disaster, we can’t avoid the population rising to about 9 billion.

But I think policies can have an effect on what happens after that. I think we do have to try to make people realize that having large numbers of children has negative externalities, as it were in economic jargon, and that it is going to put extra pressure on the world and affect our environment in a detrimental way.

Ariel: As I was reading this, especially as I was reading your section about space travel, I want to ask you about your take on whether we can just start sending people to Mars or something like that to address issues of overpopulation. As I was reading your section on that, news came out that Elon Musk and SpaceX had their first passenger for a trip around the moon, which is now scheduled for 2023, and the timing was just entertaining to me, because like I said you have a section in your book about why you don’t actually agree with Elon Musk’s plan for some of this stuff.

Martin: That’s right.

Ariel: I was hoping you could talk a little bit about why you’re not as big a fan of space tourism, and what you think of humanity expanding into the rest of the solar system and universe?

Martin: Well, let me say that I think it’s a dangerous delusion to think we can solve the earth’s problems by escaping to Mars or elsewhere. Mass emigration is not feasible. There’s nowhere in the solar system which is as comfortable to live in as the top of Everest or the South Pole. The idea of mass emigration, which was promulgated by Elon Musk and Stephen Hawking, is, I think, a dangerous delusion. The world’s problems have to be solved here; dealing with climate change is a dawdle compared to terraforming Mars. So I don’t think that’s true.

Now, two other things about space. The first is that the practical need for sending people into space is getting less as robots get more advanced. Everyone has seen pictures of the Curiosity Probe trundling across the surface of Mars, and maybe missing things that a geologist would notice, but future robots will be able to do much of what a human will do, and to manufacture large structures in space, et cetera, so the practical need to send people to space is going down.

On the other hand, some people may want to go simply as an adventure. It’s not really tourism, because tourism implies it’s safe and routine. It’ll be an adventure like Steve Fossett or the guy who fell supersonically from a high-altitude balloon. It’d be crazy people like that, and maybe this Japanese tourist is in the same style, who want to have a thrill, and I think we should cheer them on.

I think it would be good to imagine that there are a few people living on Mars, but it’s never going to be as comfortable as our Earth, and we should just cheer on people like this.

And I personally think it should be left to private money. If I was an American, I would not support the NASA space program. It’s very expensive, and it could be undercut by private companies which can afford to take higher risks than NASA could inflict on publicly funded civilians. I don’t think NASA should be doing manned space flight at all. Of course, some people would say, “Well, it’s a national aspiration, a national goal to show superpower pre-eminence by a massive space project.” That was, of course, what drove the Apollo program, and the Apollo program cost about 4% of the US federal budget. Now NASA has 0.6% or thereabouts. I’m old enough to remember the Apollo moon landings, and of course if you had asked me back then, I would have expected that there might have been people on Mars within 10 or 15 years at that time.

There would have been, had the program been funded, but of course there was no motive, because the Apollo program was driven by superpower rivalry. And having beaten the Russians, it wasn’t pursued with the same intensity. It could be that the Chinese will, for prestige reasons, want to have a big national space program, and leapfrog what the Americans did by going to Mars. That could happen. Otherwise I think the only manned space flight will, and indeed should, be privately funded by adventurers prepared to go on cut price and very risky missions.

But we should cheer them on. The reason we should cheer them on is that if in fact a few of them do provide some sort of settlement on Mars, then they will be important for life’s long-term future, because whereas we are, as humans, fairly well adapted to the earth, they will be in a place, Mars, or an asteroid, or somewhere, for which they are badly adapted. Therefore they would have every incentive to use all the techniques of genetic modification, and cyber technology to adapt to this hostile environment.

A new species, perhaps quite different from humans, may emerge as progeny of those pioneers within two or three centuries. I think this is quite possible. They, of course, may download themselves to be electronic. We don’t know how it’ll happen. We all know about the possibilities of advanced intelligence in electronic form. But I think this’ll happen on Mars, or in space, and of course if we think about going further and exploring beyond our solar system, then of course that’s not really a human enterprise because of human lifetimes being limited, but it is a goal that would be feasible if you were a near-immortal electronic entity. That’s a way in which our remote descendants will perhaps penetrate beyond our solar system.

Ariel: As you’re looking towards these longer term futures, what are you hopeful that we’ll be able to achieve?

Martin: You say we, I think we humans will mainly want to stay on the earth, but I think intelligent life, even if it’s not out there already in space, could spread through the galaxy as a consequence of what happens when a few people who go into space and are away from the regulators adapt themselves to that environment. Of course, one thing which is very important is to be aware of different time scales.

Sometimes you hear people talk about humans watching the death of the sun in five billion years. That’s nonsense, because the timescale for biological evolution by Darwinian selection is about a million years, thousands of times shorter than the lifetime of the sun, but more importantly the time scale for this new kind of intelligent design, when we can redesign humans and make new species, that time scale is a technological time scale. It could be only a century.

It would only take one, or two, or three centuries before we have entities which are very different from human beings if they are created by genetic modification, or downloading to electronic entities. They won’t be normal humans. I think this will happen, and this of course will be a very important stage in the evolution of complexity in our universe, because we will go from the kind of complexity which has emerged by Darwinian selection, to something quite new. This century is very special: it is a century where we might be triggering or jump-starting a new kind of technological evolution which could spread from our solar system far beyond, on a timescale very short compared to the timescale for Darwinian evolution and the timescale for astronomical evolution.
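(A quick check of the scales Martin is comparing, using the figures he cites: Darwinian evolution works over roughly a million years, while the sun has around five billion years left, so

$$
\frac{5 \times 10^{9}\ \text{years}}{10^{6}\ \text{years}} = 5{,}000,
$$

which is the “thousands of times shorter” he mentions; a technological timescale of one to three centuries would be shorter again by a further factor of thousands.)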

Ariel: All right. In the book you spend a lot of time also talking about current physics theories and how those could evolve. You spend a little bit of time talking about multiverses. I was hoping you could talk a little bit about why you think understanding that is important for ensuring this hopefully better future?

Martin: Well, it’s only peripherally linked to it. I put that in the book because I was thinking about, what are the challenges, not just challenges of a practical kind, but intellectual challenges? One point I make is that there are some scientific challenges which we are now confronting which may be beyond human capacity to solve, because there’s no particular reason to think that the capacity of our brains is matched to understanding all aspects of reality any more than a monkey can understand quantum theory.

It’s possible that there are some fundamental aspects of nature that humans will never understand, and they will be a challenge for post-humans. I think those challenges are perhaps more likely to be in the realm of complexity, understanding the brain for instance, than in the context of cosmology, although there are challenges in cosmology, such as understanding the very early universe, where we may need a new theory like string theory with extra dimensions, et cetera, and we need a theory like that in order to decide whether our big bang was the only one, or whether there were other big bangs and a kind of multiverse.

It’s possible that in 50 years from now we will have such a theory, we’ll know the answers to those questions. But it could be that there is such a theory and it’s just too hard for anyone to actually understand and make predictions from. I think these issues are relevant to the intellectual constraints on humans.

Ariel: Is that something that you think, or hope, that things like more advanced artificial intelligence or however we evolve in the future, that that evolution will allow “us” to understand some of these more complex ideas?

Martin: Well, I think it’s certainly possible that machines could actually, in a sense, create entities based on physics which we can’t understand. This is perfectly possible, because obviously we know they can vastly out-compute us at the moment, so it could very well be, for instance, that there is a variant of string theory which is correct, and it’s just too difficult for any human mathematician to work out. But it could be that computers could work it out, so we get some answers.

But of course, you then come up against a more philosophical question about whether competence implies comprehension, whether a computer with superhuman capabilities is necessarily going to be self-aware and conscious, or whether it is going to be just a zombie. That’s a separate question which may not affect what it can actually do, but I think it does affect how we react to the possibility that the far future will be dominated by such things.

I remember when I wrote an article in a newspaper about these possibilities, the reaction was bimodal. Some people thought, “Isn’t it great there’ll be these even deeper intellects than human beings out there,” but others who thought these might just be zombies thought it was very sad if there was no entity which could actually appreciate the beauties and wonders of nature in the way we can. It does matter, in a sense, to our perception of this far future, if we think that these entities which may be electronic rather than organic, will be conscious and will have the kind of awareness that we have and which makes us wonder at the beauty of the environment in which we’ve emerged. I think that’s a very important question.

Ariel: I want to pull things back to a little bit more shorter term I guess, but still considering this idea of how technology will evolve. You mentioned that you don’t think it’s a good idea to count on going to Mars as a solution to our problems on Earth because all of our problems on Earth are still going to be easier to solve here than it is to populate Mars. I think in general we have this tendency to say, “Oh, well in the future we’ll have technology that can fix whatever issue we’re dealing with now, so we don’t need to worry about it.”

I was wondering if you could sort of comment on that approach. To what extent can we say, “Well, most likely technology will have improved and can help us solve these problems,” and to what extent is that a dangerous approach to take?

Martin: Well, clearly technology has allowed us to live much better, more complex lives than we could in the past, and on the whole the net benefits outweigh the downsides, but of course there are downsides, and they stem from the fact that we have some people who are disruptive, and some people who can’t be trusted. If we had a world where everyone could trust everyone else, we could get rid of about a third of the economy I would guess, but I think the main point is that we are very vulnerable.

We have huge advances, clearly, in networking via the Internet, and computers, et cetera, and we may have the Internet of Things within a decade, but of course people worry that this opens up a new kind of even more catastrophic potential for cyber terrorism. That’s just one example, and ditto for biotech which may allow the development of pathogens which kill people of particular races, or have other effects.

There are these technologies which are developing fast, and they can be used to great benefit, but they can be misused in ways that will provide new kinds of horrors that were not available in the past. It’s by no means obvious which way things will go. Will there be a continued net benefit of technology, as I think we’ve said there has been up ’til now despite nuclear weapons, et cetera, or will at some stage the downside run ahead of the benefits?

I do worry about the latter being a possibility, particularly because of this amplification factor, the fact that it only takes a few people in order to cause disruption that could cascade globally. The world is so interconnected that we can’t really have a disaster in one region without its affecting the whole world. Jared Diamond has this book called Collapse where he discusses five collapses of particular civilizations, whereas other parts of the world were unaffected.

I think if we really had some catastrophe, it would affect the whole world. It wouldn’t just affect parts. That’s something which is a new downside. The stakes are getting higher as technology advances, and my book is really aimed to say that these developments are very exciting, but they pose new challenges, and I think particularly they pose challenges because a few dissidents can cause more trouble, and I think it’ll make the world harder to govern. It’ll make cities and countries harder to govern, and create a stronger tension between three things we want to achieve, which are security, privacy, and liberty. I think that’s going to be a challenge for all future governments.

Ariel: Reading your book I very much got the impression that it was essentially a call to action to address these issues that you just mentioned. I was curious: what do you hope that people will do after reading the book, or learning more about these issues in general?

Martin: Well, first of all I hope that people can be persuaded to think long term. I mentioned that religious groups, for instance, tend to think long term, and the papal encyclical in 2015 I think had a very important effect on the opinion in Latin America, Africa, and East Asia in the lead up to the Paris Climate Conference, for instance. That’s an example where someone from outside traditional politics would have an effect.

What’s very important is that politicians will only respond to an issue if it’s prominent in the press, and prominent in their inbox, and so we’ve got to ensure that people are concerned about this. Of course, I ended the book saying, “What are the special responsibilities of scientists,” because scientists clearly have a special responsibility to ensure that their work is safe, and that the public and politicians are made aware of the implications of any discovery they make.

I think that’s important, even though they should be mindful that their expertise doesn’t extend beyond their special area. That’s a reason why scientific understanding, in a general sense, is something which really has to be universal. This is important for education, because if we want to have a proper democracy where debate about these issues rises above the level of tabloid slogans, then given that the important issues that we have to discuss involve health, energy, the environment, climate, et cetera, which have scientific aspects, then everyone has to have enough feel for those aspects to participate in a debate, and also enough feel for probabilities and statistics to be not easily bamboozled by political arguments.

I think an educated population is essential for proper democracy. Obviously that’s a platitude. But the education needs to include, to a greater extent, an understanding of the scope and limits of science and technology. I make this point at the end and hope that it will lead to a greater awareness of these issues, and of course for people in universities, we have a responsibility because we can influence the younger generation. It’s certainly the case that students and people under 30, who may be alive towards the end of the century, are more mindful of these concerns than the middle-aged and old.

It’s very important that these activities like the Effective Altruism movement, 80,000 Hours, and these other movements among students should be encouraged, because they are going to be important in spreading an awareness of long-term concerns. Public opinion can be changed. We can see the change in attitudes to drunk driving and things like that, which have happened over a few decades, and I think perhaps we can develop a greater environmental sensitivity, so that it becomes regarded as rather naff or tacky to waste energy and to be extravagant in consumption.

I’m hopeful that attitudes will change in a positive way, but I’m concerned simply because the politics is getting very difficult, because with social media, panic and rumor can spread at the speed of light, and small groups can have a global effect. This makes it very, very hard to ensure that we can keep things stable given that only a few people are needed to cause massive disruption. That’s something which is new, and I think is becoming more and more serious.

Ariel: We’ve been talking a lot about things that we should be worrying about. Do you think there are things that we are currently worrying about that we probably can just let go of, that aren’t as big of risks?

Martin: Well, I think we need to ensure responsible innovation in all new technologies. We’ve talked a lot about bio, and we are very concerned about the misuse of cyber technology. As regards AI, of course there are a whole lot of concerns to be had. I personally think that the takeover by AI would be rather slower than many of the evangelists suspect, but of course we do have to ensure that humans are not victimized by some algorithm which they can’t have explained to them.

I think there is an awareness of this, and I think that what’s being done by your colleagues at MIT has been very important in raising awareness of the need for responsible innovation and ethical application of AI, and also what your group has recognized is that the order in which things happen is very important. If some computer is developed and goes rogue, that’s bad news, whereas if we have a powerful computer which is under our control, then it may help us to deal with these other problems, the problems of the misuse of biotech, et cetera.

The order in which things happen is going to be very important, but I must say I don’t completely share these concerns about machines running away and taking over, ’cause I think there’s a difference in that, for biological evolution, there’s been a drive toward intelligence being favored, but so has aggression. In the case of computers, they may drive towards greater intelligence, but it’s not obvious that that is going to be combined with aggression, because they are going to be evolving by intelligent design, not the struggle of the fittest, which is the way that we evolved.

Ariel: What about concerns regarding AI just in terms of being mis-programmed, and AI just being extremely competent? Poor design on our part, poor intelligent design?

Martin: Well, I think in the short term obviously there are concerns about AI making decisions that affect people, and I think most of us would say that we shouldn’t be deprived of our credit rating, or put in prison on the basis of some AI algorithm which can’t be explained to us. We are entitled to have an explanation if something is done to us against our will. That is why it is worrying if too much is going to be delegated to AI.

I also think that the development of self-driving cars, and things of that kind, is going to be constrained by the fact that they become vulnerable to hacking of various kinds. I think it’ll be a long time before we will accept a driverless car on an ordinary road. Controlled environments, yes. In particular lanes on highways, yes. On an ordinary road in a traditional city, it’s not clear that we will ever accept a driverless car. I think I’m frankly less bullish than maybe some of your colleagues about the speed at which the machines will really take over and be accepted, and that we can trust ourselves to them.

Ariel: As I mentioned at the start, and as you mentioned at the start, you are a techno optimist, for as much as the book is about things that could go wrong it did feel to me like it was also sort of an optimistic look at the future. What are you most optimistic about? What are you most hopeful for looking at both short term and long term, however you feel like answering that?

Martin: I’m hopeful that biotech will have huge benefits for health, will perhaps extend human life spans a bit, but that’s something about which we should feel a bit ambivalent. So, I think health, and also food. If you asked me, what is one of the most benign technologies, it’s to make artificial meat, for instance. It’s clear that we can more easily feed a population of 9 billion on a vegetarian diet than on a traditional diet like Americans consume today.

To take one benign technology, I would say artificial meat is one, and more intensive farming so that we can feed people without encroaching too much on the natural part of the world. I’m optimistic about that. If we think about very long term trends then life extension is something which obviously if it happens too quickly is going to be hugely disruptive, multi-generation families, et cetera.

Also, even though we will have the capability within a century to change human beings, I think we should constrain that on earth and just let that be done by the few crazy pioneers who go away into space. But if this does happen, then as I say in the introduction to my book, it will be a real game changer in a sense. I make the point that one thing that hasn’t changed over most of human history is human character. Evidence for this is that we can read the literature written by the Greeks and Romans more than 2,000 years ago and resonate with the people, and their characters, and their attitudes and emotions.

It’s not at all clear that, on some scenarios, people 200 years from now will resonate in anything other than an algorithmic sense with the attitudes we have as humans today. That will be a fundamental, and very fast, change in the nature of humanity. The question is, can we do something to at least constrain the rate at which that happens, or at least constrain the way in which it happens? But it is going to be almost certainly possible to completely change human mentality, and maybe even human physique, over that time scale. One has only to listen to people like George Church to realize that it’s not crazy to imagine this happening.

Ariel: You mentioned in the book that there’s lots of people who are interested in cryogenics, but you also talked briefly about how there are some negative effects of cryogenics, and the burden that it puts on the future. I was wondering if you could talk really quickly about that?

Martin: There are some people, I know some, who have a medallion around their neck which is an injunction that, if they drop dead, they should be immediately frozen, their blood drained and replaced by liquid nitrogen, and that they should then be stored — there’s a company called Alcor in Arizona that does this — and allegedly revived at some stage when technology has advanced. I find it hard to take this seriously, but they say that, well, the chance may be small, but if they don’t invest this way, then the chance of a resurrection is zero.

But I actually think that even if it worked, even if the company didn’t go bust, and sincerely maintained them for centuries and they could then be revived, I still think that what they’re doing is selfish, because they’d be revived into a world that was very different. They’d be refugees from the past, and they’d therefore be imposing an obligation on the future.

We obviously feel an obligation to look after some asylum seeker or refugee, and we might feel the same if someone had been driven out of their home in the Amazonian forest for instance, and had to find a new home, but these refugees from the past, as it were, they’re imposing a burden on future generations. I’m not sure that what they’re doing is ethical. I think it’s rather selfish.

Ariel: I hadn’t thought of that aspect of it. I’m a little bit skeptical of our ability to come back.

Martin: I agree. I think the chances are almost zero. Even if they were stored and so on, one would like to see this technology tried on some animal first, to see if we could freeze animals at liquid nitrogen temperatures and then revive them. I think it’s pretty crazy. Then of course, the number of people doing it is fairly small, and some of the companies doing it (there’s one in Russia) are real ripoffs I think, and won’t survive. But as I say, even if these companies did keep going for a couple of centuries, or however long is necessary, then it’s not clear to me that it’s doing good. I also quoted this nice statement: “What happens if we clone, and create a Neanderthal? Do we put him in a zoo or send him to Harvard?” said the professor from Stanford.

Ariel: Those are ethical considerations that I don’t see very often. We’re so focused on what we can do that sometimes we forget. “Okay, once we’ve done this, what happens next?”

I appreciate you being here today. Those were my questions. Was there anything else that you wanted to mention that we didn’t get into?

Martin: One thing we didn’t discuss, which is a serious issue, is the limits of medical treatment, because you can make extraordinary efforts to keep people alive long after they would otherwise have died naturally, and to keep alive babies that will never live a normal life, et cetera. I certainly feel that that’s gone too far at both ends of life.

One should not devote so much effort to extremely premature babies, and one should allow people to die more naturally. Actually, if you asked me about predictions I’d make for the next 30 or 40 years: first, more vegetarianism; secondly, more euthanasia.

Ariel: I support both, vegetarianism, and I think euthanasia should be allowed. I think it’s a little bit barbaric that it’s not.

Martin: Yes.

I think we’ve covered quite a lot, haven’t we?

Ariel: I tried to.

Martin: I’d just like to mention that my book touches a lot of bases in a fairly short space. I hope it will be read not just by scientists. It’s not really a science book, although it emphasizes how scientific ideas are what’s going to determine how our civilization evolves. I’d also like to say that, for those of us in universities, we know students are only with us for an interim period, but universities like MIT and my University of Cambridge have convening power to gather people together to address these questions.

I think the value of the centers which we have in Cambridge, and you have at MIT, is that they are groups which are trying to address these very, very big issues, these threats and opportunities. The stakes are so high that if our efforts can really reduce the risk of a disaster by one part in 10,000, we’ve more than earned our keep. I’m very supportive of our Centre for Existential Risk in Cambridge, and also the Future of Life Institute which you have at MIT.

Given the huge numbers of people who are thinking about small risks like which foods are carcinogenic, and the threats of low radiation doses, et cetera, it’s not at all inappropriate that there should be some groups who are focusing on the more extreme, albeit perhaps rather improbable threats which could affect the whole future of humanity. I think it’s very important that these groups should be encouraged and fostered, and I’m privileged to be part of them.

Ariel: All right. Again, the book is On the Future: Prospects for Humanity by Martin Rees. I do want to add, I agree with what you just said. I think this is a really nice introduction to a lot of the risks that we face. I started taking notes about the different topics that you covered, and I don’t think I got all of them, but there’s climate change, nuclear war, nuclear winter, biodiversity loss, overpopulation, synthetic biology, genome editing, bioterrorism, biological errors, artificial intelligence, cyber technology, cryogenics, and the various topics in physics, and as you mentioned the role that scientists need to play in ensuring a safe future.

I highly recommend the book as a really great introduction to the potential risks, and the hopefully much greater potential benefits, that science and technology hold for the future. Martin, thank you again for joining me today.

Martin: Thank you, Ariel, for talking to me.

Cognitive Biases and AI Value Alignment: An Interview with Owain Evans


At the core of AI safety lies the value alignment problem: how can we teach artificial intelligence systems to act in accordance with human goals and values?

Many researchers interact with AI systems to teach them human values, using techniques like inverse reinforcement learning (IRL). In theory, with IRL, an AI system can learn what humans value and how to best assist them by observing human behavior and receiving human feedback.
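To make the IRL idea concrete, here is a minimal sketch of one simple Bayesian formulation: infer which candidate reward function best explains observed behavior, given an assumed model of how a person chooses. The states, the two reward hypotheses, the softmax "rationality" parameter, and the observed behavior below are all invented for illustration; they are not taken from any specific system or paper discussed in this article.

```python
# A minimal sketch of Bayesian inverse reinforcement learning (IRL) on a toy
# choice problem. Everything here is illustrative, not a real IRL system.

import numpy as np

STATES = ["gym", "home", "donut_shop"]          # places the person can choose to go
ACTIONS = list(range(len(STATES)))              # action i means "go to state i"

# Candidate reward functions: hypotheses about what the person values.
REWARD_HYPOTHESES = {
    "values_health":  np.array([1.0, 0.0, -0.5]),   # prefers the gym
    "values_comfort": np.array([-0.5, 0.5, 1.0]),   # prefers the donut shop
}

def action_probs(reward, beta=2.0):
    """Softmax ('Boltzmann-rational') choice model: higher-reward options are
    chosen more often, but not always. This is a common way to model noisy
    human behavior in IRL."""
    prefs = np.exp(beta * reward)
    return prefs / prefs.sum()

def posterior_over_rewards(observed_actions):
    """Bayesian IRL: P(reward | behavior) is proportional to
    P(behavior | reward) * P(reward), with a uniform prior over hypotheses."""
    scores = {}
    for name, reward in REWARD_HYPOTHESES.items():
        probs = action_probs(reward)
        scores[name] = np.prod([probs[a] for a in observed_actions])
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Observed behavior: two gym trips, then one donut-shop trip.
observed = [0, 0, 2]
print(posterior_over_rewards(observed))
```

The important assumption is buried in `action_probs`: the inference is only as good as the assumed model of human choice, which is exactly the difficulty the next paragraphs describe.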

But human behavior doesn’t always reflect human values, and human feedback is often biased. We say we want healthy food when we’re relaxed, but then we demand greasy food when we’re stressed. Not only do we often fail to live according to our values, but many of our values contradict each other. We value getting eight hours of sleep, for example, but we regularly sleep less because we also value working hard, caring for our children, and maintaining healthy relationships.

AI systems may be able to learn a lot by observing humans, but because of our inconsistencies, some researchers worry that systems trained with IRL will be fundamentally unable to distinguish between value-aligned and misaligned behavior. This could become especially dangerous as AI systems become more powerful: inferring the wrong values or goals from observing humans could lead these systems to adopt harmful behavior.

 

Distinguishing Biases and Values

Owain Evans, a researcher at the Future of Humanity Institute, and Andreas Stuhlmüller, president of the research non-profit Ought, have explored the limitations of IRL in teaching human values to AI systems. In particular, their research exposes how cognitive biases make it difficult for AIs to learn human preferences through interactive learning.

Evans elaborates: “We want an agent to pursue some set of goals, and we want that set of goals to coincide with human goals. The question then is, if the agent just gets to watch humans and try to work out their goals from their behavior, how much are biases a problem there?”

In some cases, AIs will be able to understand patterns of common biases. Evans and Stuhlmüller discuss the psychological literature on biases in their paper, Learning the Preferences of Ignorant, Inconsistent Agents, and in their online book, agentmodels.org. An example of a common pattern discussed in agentmodels.org is “time inconsistency.” Time inconsistency is the idea that people’s values and goals change depending on when you ask them. In other words, “there is an inconsistency between what you prefer your future self to do and what your future self prefers to do.”

Examples of time inconsistency are everywhere. For one, most people value waking up early and exercising if you ask them before bed. But come morning, when it’s cold and dark out and they didn’t get those eight hours of sleep, they often value the comfort of their sheets and the virtues of relaxation. From waking up early to avoiding alcohol, eating healthy, and saving money, humans tend to expect more from their future selves than their future selves are willing to do.
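One standard way to model this pattern is hyperbolic discounting, under which an agent's preference between a smaller, sooner reward and a larger, later reward can flip as the sooner option gets close. The sketch below uses made-up reward values and a made-up discount rate purely for illustration; it is not a model of any real person.

```python
# A minimal sketch of time inconsistency via hyperbolic discounting.
# All numbers are illustrative.

def hyperbolic_value(reward, delay, k=0.2):
    """Perceived value of a reward arriving `delay` steps in the future."""
    return reward / (1.0 + k * delay)

def preferred_option(now):
    # Option A: sleep in (smaller reward, felt at t=10).
    # Option B: get up and exercise (larger reward, but only felt at t=15).
    value_sleep_in = hyperbolic_value(reward=5.0, delay=10 - now)
    value_exercise = hyperbolic_value(reward=8.0, delay=15 - now)
    return "exercise" if value_exercise > value_sleep_in else "sleep in"

for t in [0, 5, 9, 10]:
    print(f"at t={t}: the agent plans to {preferred_option(t)}")
# From a distance (t=0, t=5) the agent plans to exercise; once the sooner
# comfort is imminent (t=9, t=10) the preference reverses.
```

From far away the agent plans to exercise, but as the moment approaches the near-term comfort wins: an instance of the "inconsistency between what you prefer your future self to do and what your future self prefers to do."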

With systematic, predictable patterns like time inconsistency, IRL could make progress with AI systems. But often our biases aren’t so clear. According to Evans, deciphering which actions coincide with someone’s values and which actions spring from biases is difficult or even impossible in general.

“Suppose you promised to clean the house but you get a last minute offer to party with a friend and you can’t resist,” he suggests. “Is this a bias, or your value of living for the moment? This is a problem for using only inverse reinforcement learning to train an AI — how would it decide what are biases and values?”

 

Learning the Correct Values

Despite this conundrum, understanding human values and preferences is essential for AI systems, and developers have a very practical interest in training their machines to learn these preferences.

Already today, popular websites use AI to learn human preferences. With YouTube and Amazon, for instance, machine-learning algorithms observe your behavior and predict what you will want next. But while these recommendations are often useful, they have unintended consequences.

Consider the case of Zeynep Tufekci, an associate professor at the School of Information and Library Science at the University of North Carolina. After watching videos of Trump rallies to learn more about his voter appeal, Tufekci began seeing white nationalist propaganda and Holocaust denial videos on her “autoplay” queue. She soon realized that YouTube’s algorithm, optimized to keep users engaged, predictably suggests more extreme content as users watch more videos. This led her to call the website “The Great Radicalizer.”
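A toy simulation can show how this kind of feedback loop arises from a perfectly ordinary objective. Everything in the sketch below (the one-dimensional "extremeness" scale, the engagement model, the user update rule) is invented; it is not a description of YouTube's actual system, only of the greedy engagement-optimization pattern described above.

```python
# A toy simulation of an engagement-maximizing recommender drifting toward
# extreme content. The model is entirely made up for illustration.

def predicted_watch_time(extremeness, user_state):
    # Invented engagement model: content slightly more extreme than what the
    # user has already been watching scores highest.
    return 1.0 - abs(extremeness - (user_state + 0.1))

def recommend(user_state, catalog):
    """Greedy optimization: pick the video with the highest predicted engagement."""
    return max(catalog, key=lambda e: predicted_watch_time(e, user_state))

catalog = [i / 100 for i in range(101)]   # videos rated 0.0 (mild) to 1.0 (extreme)
user_state = 0.2                          # the user starts with fairly mild tastes
for step in range(8):
    pick = recommend(user_state, catalog)
    user_state = pick                     # watching shifts the user's baseline
    print(f"step {step}: recommended extremeness {pick:.2f}")
# Each recommendation is locally "optimal" for engagement, yet the sequence
# ratchets toward the extreme end of the catalog: the short-term objective
# diverges from what the user would endorse on reflection.
```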

This value misalignment in YouTube algorithms foreshadows the dangers of interactive learning with more advanced AI systems. Instead of optimizing advanced AI systems to appeal to our short-term desires and our attraction to extremes, designers must be able to optimize them to understand our deeper values and enhance our lives.

Evans suggests that we will want AI systems that can reason through our decisions better than humans can, understand when we are making biased decisions, and “help us better pursue our long-term preferences.” However, this may mean that AIs sometimes suggest things that seem bad to humans at first blush.

One can imagine an AI system suggesting a brilliant, counterintuitive modification to a business plan, and the human just finds it ridiculous. Or maybe an AI recommends a slightly longer, stress-free driving route to a first date, but the anxious driver takes the faster route anyway, unconvinced.

To help humans understand AIs in these scenarios, Evans and Stuhlmüller have researched how AI systems could reason in ways that are comprehensible to humans and can ultimately improve upon human reasoning.

One method (invented by Paul Christiano) is called “amplification,” where humans use AIs to help them think more deeply about decisions. Evans explains: “You want a system that does exactly the same kind of thinking that we would, but it’s able to do it faster, more efficiently, maybe more reliably. But it should be a kind of thinking that if you broke it down into small steps, humans could understand and follow.”

A second concept is called “factored cognition” – the idea of breaking sophisticated tasks into small, understandable steps. According to Evans, it’s not clear how generally factored cognition can succeed. Sometimes humans can break down their reasoning into small steps, but often we rely on intuition, which is much more difficult to break down.
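As a structural illustration only, here is a minimal sketch of the factored-cognition pattern: a top-level task is decomposed into small subtasks, each handled by a "worker" that sees nothing but its own bounded context, and the answers are recombined. Real proposals apply this to open-ended reasoning; the arithmetic example and all of its names are invented to keep the sketch self-contained and checkable.

```python
# A minimal sketch of factored cognition as task decomposition.
# The example problem and names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Subtask:
    question: str
    context: str   # the only information this worker is allowed to see

def worker(subtask: Subtask) -> float:
    """Answers one small, self-contained question (here: price * quantity)."""
    price, quantity = (float(x) for x in subtask.context.split(","))
    return price * quantity

def decompose(shopping_list):
    """Top-level task -> small subtasks, each understandable on its own."""
    return [
        Subtask(question=f"What do {qty} x {item} cost?", context=f"{price},{qty}")
        for item, price, qty in shopping_list
    ]

def combine(answers):
    """Recombine subtask answers into an answer to the original question."""
    return sum(answers)

shopping = [("apples", 0.5, 6), ("bread", 2.2, 1), ("coffee", 7.0, 2)]
total = combine(worker(s) for s in decompose(shopping))
print(f"Total cost: {total:.2f}")
```

The key design property is that no single worker needs the global context, which is what makes each step individually checkable by a human.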

 

Specifying the Problem

Evans and Stuhlmüller have started a research project on amplification and factored cognition, but they haven’t solved the problem of human biases in interactive learning – rather, they’ve set out to precisely lay out these complex issues for other researchers.

“It’s more about showing this problem in a more precise way than people had done previously,” says Evans. “We ended up getting interesting results, but one of our results in a sense is realizing that this is very difficult, and understanding why it’s difficult.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Podcast: AI and Nuclear Weapons – Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

In 1983, Soviet military officer Stanislav Petrov prevented what could have been a devastating nuclear war by trusting his gut instinct that the algorithm in his early-warning system wrongly sensed incoming missiles. In this case, we praise Petrov for choosing human judgment over the automated system in front of him. But what will happen as the AI algorithms deployed in the nuclear sphere become much more advanced, accurate, and difficult to understand? Will the next officer in Petrov’s position be more likely to trust the “smart” machine in front of him?

On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official, and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is professor of political science at the University of Pennsylvania, and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.

Topics discussed in this episode include:

  • The sophisticated military robots developed by Soviets during the Cold War
  • How technology shapes human decision-making in war
  • “Automation bias” and why having a “human in the loop” is much trickier than it sounds
  • The United States’ stance on automation with nuclear weapons
  • Why weaker countries might have more incentive to build AI into warfare
  • How the US and Russia perceive first-strike capabilities
  • “Deep fakes” and other ways AI could sow instability and provoke crisis
  • The multipolar nuclear world of US, Russia, China, India, Pakistan, and North Korea
  • The perceived obstacles to reducing nuclear arsenals

Publications discussed in this episode include:

You can listen to the podcast above and read the full transcript below. Check out our previous podcast episodes on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel: Hello, I am Ariel Conn with the Future of Life Institute. I am just getting over a minor cold and while I feel okay, my voice may still be a little off so please bear with any crackling or cracking on my end. I’m going to try to let my guests Paul Scharre and Mike Horowitz do most of the talking today. But before I pass the mic over to them, I do want to give a bit of background as to why I have them on with me today.

September 26th was Petrov Day. This year marked the 35th anniversary of the day that basically World War III didn’t happen. On September 26th in 1983, Petrov, who was part of the Soviet military, got notification from the automated early warning system he was monitoring that there was an incoming nuclear attack from the US. But Petrov thought something seemed off.

From what he knew, if the US were going to launch a surprise attack, it would be an all-out strike and not just the five weapons that the system was reporting. Without being able to confirm whether the threat was real or not, Petrov followed his gut and reported to his commanders that this was a false alarm. He later became known as “the man who saved the world” because there’s a very good chance that the incident could have escalated into a full-scale nuclear war had he not reported it as a false alarm.

Now this 35th anniversary comes at an interesting time as well because last month in August, the United Nations Convention on Conventional Weapons convened a meeting of a Group of Governmental Experts to discuss the future of lethal autonomous weapons. Meanwhile, also on September 26th, governments at the United Nations held a signing ceremony to add more signatures and ratifications to last year’s treaty, which bans nuclear weapons.

It does feel like we’re at a bit of a turning point in military and weapons history. On one hand, we’ve seen rapid advances in artificial intelligence in recent years, and the combination of AI and weaponry has been referred to as the third revolution in warfare, after gunpowder and nuclear weapons. On the other hand, despite the recent ban on nuclear weapons, the nuclear powers, which have not signed the treaty, are taking steps to modernize their nuclear arsenals.

This begs the question: what happens if artificial intelligence is added to nuclear weapons? Can we trust automated and autonomous systems to make the right decision, as Petrov did 35 years ago? To consider these questions and many others, I have Paul Scharre and Mike Horowitz with me today. Paul is the author of Army of None: Autonomous Weapons in the Future of War. He is a former Army Ranger and Pentagon policy official, currently working as Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security.

Mike Horowitz is professor of political science and the Associate Director of Perry World House at the University of Pennsylvania. He’s the author of The Diffusion of Military Power: Causes and Consequences for International Politics, and he’s an adjunct Senior Fellow at the Center for a New American Security.

Paul and Mike first, thank you so much for joining me today.

Paul: Thank you, thanks for having us.

Mike: Yeah, excited for the conversation.

Ariel: Excellent. So before we get too far into this, I was hoping you could talk a little bit about just what the current status is of artificial intelligence in weapons, and in nuclear weapons more specifically. Is AI being used in nuclear weapon systems today? In 2015, Russia announced a nuclear submarine drone called Status 6; I’m curious what the status of that is. Are other countries doing anything with AI in nuclear weapons? That’s a lot of questions, so I’ll turn that over to you guys now.

Paul: Okay, all right, let me jump in first and then Mike can jump right in and correct me. You know, I think if there’s anything that we’ve learned from science fiction from War Games to Terminator, it’s that combining AI and nuclear weapons is a bad idea. That seems to be the recurring lesson that we get from science fiction shows. Like many things, the sort of truth here is less dramatic but far more interesting actually, because there is a lot of automation that already exists in nuclear weapons and nuclear operations today and I think that is a very good starting point when we think about going forward, what has already been in place today?

The Petrov incident is a really good example of this. If the Petrov incident captures one simple point, it’s the benefit of human judgment. One of the things that Petrov talks about is that when evaluating what to do in this situation, there was a lot of extra contextual information that he could bring to bear that was outside of what the computer system itself knew. The computer system knew that there had been some flashes that the Soviet satellite early warning system had picked up, that it interpreted as missile launches, and that was it.

But when he was looking at this, he was also thinking about the fact that it’s a brand new system, they just deployed this Oko, the Soviet early warning satellite system, and it might be buggy as all technology is, as particularly Soviet technology was at the time. He knew that there could be lots of problems. But also, he was thinking about what would the Americans do, and from his perspective, he said later, we know because he did report a false alarm, he was able to say that he didn’t think it made sense for the Americans to only launch five missiles. Why would they do that?

If you were going to launch a first strike, it would be overwhelming. From his standpoint, sort of this didn’t add up. That contributed to what he said ultimately was sort of 50/50 and he went with his gut feeling that it didn’t seem right to him. Of course, when you look at this, you can ask well, what would a computer do? The answer is, whatever it was programmed to do, which is alarming in that kind of instance. But when you look at automation today, there are lots of ways that automation is used and the Petrov incident illuminates some of this.

For example, automation is used in early warning systems, both radars and satellite, infrared and other systems to identify objects of interest, label them, and then cue them to human operators. That’s what the computer automated system was doing when it told Petrov there were missile launches; that was an automated process.

We also see in the Petrov incident the importance of the human-automation interface. He talks about there being a flashing red screen, it saying “missile launch” and all of these things being, I think, important factors. We think about how this information is actually conveyed to the human, and that changes the human decision-making as part of the process. So there were partial components of automation there.

In the Soviet system, there have been components of automation in the way launch orders are conveyed, for example rockets that would be launched and then fly over the Soviet Union, now Russia, to beam down launch codes. This is, of course, contested, but it reportedly came out after the end of the Cold War that there was even some talk of, and according to some sources actual deployment of, a semi-automated Dead Hand system called Perimeter. It could be activated by the Soviet leadership in a crisis, and then, if the leadership in Moscow was taken out and did not check in to show that they were still communicating after a certain period of time, launch codes would be passed down to a bunker with a Soviet officer in it, a human who would make the final call to convey automated launch orders. So there was still a human in the loop, but it was one human instead of the Soviet leadership, who could launch a retaliatory strike if the leadership had been taken out.

Then there are certainly, when you look at some of the actual delivery vehicles, things like bombers, there’s a lot of automation involved in bombers, particularly for stealth bombers, there’s a lot of automation required just to be able to fly the aircraft. Although, the weapons release is controlled by people.

You’re in a place today where all of the weapons decision-making is controlled by people, but they may be making decisions that are based on information that’s been given to them through automated processes and filtered through automated processes. Then once humans have made these decisions, they may be conveyed and those orders passed along to other people or through other automated processes as well.

Mike: Yeah, I think that that’s a great overview, and I would add two things to give some additional context. First, in some ways the nuclear weapons enterprise is already among the most automated when it comes to the use of force, because the stakes are so high. When countries are thinking about using nuclear weapons, whether it’s the United States or Russia or other countries, it’s usually because they view an existential threat as existing. Countries have already attempted to build in significant automation and redundancy to try to make their threats more credible.

The second thing is, I think Paul is absolutely right about the Petrov incident, but the other thing it demonstrates to me, which I think we forget sometimes, is that we’re fond of talking about technological change and the way that technology can shape how militaries act and shape the nuclear weapons complex, but it’s organizations and people that make choices about how to use technology. They’re not just passive actors, and different organizations make different kinds of choices about how to integrate technology depending on their standard operating procedures, their institutional history, and their bureaucratic priorities. It’s important, I think, not to just look at something like AI in a vacuum, but to try to understand the way that different nuclear powers, say, might think about it.

Ariel: I don’t know if this is fair to ask but how might the different nuclear powers think about it?

Mike: From my perspective, I think an interesting thing you’re seeing now is the difference in how the United States has talked about autonomy in the nuclear weapons enterprise compared to some other countries. US military leaders have been very clear that they have no interest in autonomous systems, for example, armed with nuclear weapons. Of all the things one might use autonomous systems for, it’s one area where US military leaders have actually been very explicit.

I think in some ways, that’s because the United States is generally very confident in its second strike deterrent, and its ability to retaliate even if somebody else goes first. Because the United States feels very confident in its second strike capabilities, that makes the, I think, temptation of full automation a little bit lower. In some ways, the more a country fears that its nuclear arsenal could be placed at risk by a first strike, the stronger its incentives to operate faster and to operate even if humans aren’t available to make those choices. Those are the kinds of situations in which autonomy would potentially be more attractive.

In comparisons of nuclear states, it’s generally the weaker one from a nuclear weapons perspective that I think will, all other things being equal, be more inclined to use automation, because they fear the risk of being disarmed through a first strike.

Paul: This is such a key thing, which is that when you look at what is still a small number of countries that have nuclear weapons, they have very different strategic positions, different sizes of arsenals, different threats that they face, different degrees of survivability, and very different risk tolerances. Certainly within American thinking about nuclear stability, there’s a clear strain of thought about what stability means. Many countries may see this very, very differently, and you can see this even during the Cold War, where you had approximate parity in the kinds of arsenals between the US and the Soviet Union, but they still thought about stability very differently.

The semi-automated Dead Hand system, Perimeter, is a great example of this. When this came out afterwards, from a US standpoint of thinking about risk, people were just aghast, and it’s a bit terrifying to think about something that is even semi-automated, that might have just one human involved. But from the Soviet standpoint, this made an incredible amount of strategic sense. And not for the Dr. Strangelove reason that you want to tell the enemy in order to deter them, which is how I think Americans might tend to think about this, because they didn’t actually tell the Americans.

The real rationale on the Soviet side was to reduce the pressure on their leaders to make a use-or-lose decision with their arsenal. Rather than in something like a Petrov incident, where there were some indications of a launch, maybe some ambiguity about whether there was a genuine American first strike, but concern that their leadership in Moscow might be taken out, they could activate this system and trust that if there was in fact an American first strike that took out the leadership, there would still be a sufficient retaliation, instead of feeling like they had to rush to retaliate.

Countries are going to see this very differently, and that’s of course one of the challenges in thinking about stability: not to fall into the trap of mirror-imaging.

Ariel: This brings up actually two points that I have questions about. I want to get back to the stability concept in a minute but first, one of the things I’ve been reading a bit about is just this idea of perception and how one country’s perception of another country’s arsenal can impact how their own military development happens. I was curious if you could talk a little bit about how the US perceives Russia or China developing their weapons and how that impacts us and the same for those other two countries as well as other countries around the world. What impact is perception having on how we’re developing our military arsenals and especially our nuclear weapons? Especially if that perception is incorrect.

Paul: Yeah, I think the origins of the idea of nuclear stability really speak to this. The idea came out in the 1950s among American strategists when they were looking at the US nuclear arsenal in Europe, and they realized that it was vulnerable to a first strike by the Soviets: American airplanes sitting on the tarmac could be attacked by a Soviet first strike, and that might wipe out the US arsenal. Knowing this, they might in a crisis feel compelled to launch their aircraft sooner, and that might actually incentivize a use-or-lose mentality, right? Use the aircraft, launch them, versus have them wiped out.

If the Soviets knew this, then that perception alone, that the Americans might launch their aircraft if things started to get heated, might incentivize the Soviets to strike first. Schelling has a quote about them striking us to prevent us from striking them, to prevent them striking us. This sort of gunslinger dynamic, of everyone reaching for their guns to draw first because someone else might do so, is not just a technical problem; it’s also one of perception, and I think it’s baked right into this whole idea. It happens on slower time scales when you look at arms race stability and arms race dynamics, in what countries invest in, building more missiles, more bombers, because of concern about the threat from someone else. But it also happens in the more immediate sense of crisis stability: the actions that leaders might take immediately in a crisis to anticipate and prepare for what they fear others might do.

Mike: I would add on to that, that I think it depends a little bit on how accurate you think the information that countries have is. Your evaluation of a country is classically based on their capabilities and their intentions. Generally, we think that you have a decent sense of a country’s capabilities, while intentions are hard to measure. Countries assume the worst, and that’s what leads to the kind of dynamics that Paul is talking about.

I think the perception of other countries’ capabilities can matter; there’s sometimes a tendency to exaggerate the capabilities of other countries, and people get concerned about threat inflation, but I think that’s usually not the most important programmatic driver. There’s been significant research now on the correlates of nuclear weapons development, and it tends to be security threats that are generally pretty reasonable: you have neighbors or enduring rivals that actually have nuclear weapons and that you’ve been in disputes with, so you decide you want nuclear weapons, because nuclear weapons essentially function as invasion insurance, and having them makes you a lot less likely to be invaded.

And that’s a lesson the United States, by the way, has taught the world over and over for the last few decades; look at Iraq, Libya, et cetera. So I think the perception of other countries’ capabilities can be important for your actual launch posture. That’s where I think issues like speed can come in, and where automation could come in, maybe in the launch process potentially. But in general, it’s deeper issues, generally real security challenges or legitimately perceived security challenges, that tend to drive countries’ weapons development programs.

Paul: This issue of perception of intention in a crisis is just absolutely critical, because there is so much uncertainty, and of course there’s something that usually precipitates a crisis, so leaders don’t want to back down; there’s usually something at stake, other than avoiding nuclear war, that they’re fighting over. You see many aspects of this coming up during the much-analyzed Cuban Missile Crisis, where you see Kennedy and his advisors trying to ascertain what different actions the Cubans or Soviets took meant for their intentions and their willingness to go to war, but then conversely, you see a lot of concern by Kennedy’s advisors about actions that the US military takes that may not be directed by the president, that are accidents, slippages or friction in the system, and worrying that the Soviets might over-interpret these as deliberate moves.

I think right there you see a couple of components where automation and AI could potentially be useful. One is reducing some of the uncertainty and information asymmetry: if you could find ways to use the technology to get a better handle on what your adversary was doing, their capabilities, the location and disposition of their forces, and their intentions, peeling back some of the fog of war. The other is increasing command and control within your own forces. If you could tighten command and control, and have forces that were more directly connected to the national leadership, with less opportunity for freelancing on the ground, there could be some advantages in that there’d be less opportunity for misunderstanding and miscommunication.

Ariel: Okay, so again, I have multiple questions that I want to follow up with and they’re all in completely different directions. I’m going to come back to perception because I have another question about that but first, I want to touch on the issue of accidents. Especially because during the Cuban Missile Crisis, we saw an increase in close calls and accidents that could have escalated. Fortunately, they didn’t, but a lot of them seemed like they could very reasonably have escalated.

I think it’s ideal to think that we can develop technology that can help us minimize these risks, but I kind of wonder how realistic that is. Something else that you mentioned earlier with tech being buggy, it does seem as though we have a bad habit of implementing technology while it is still buggy. Can we prevent that? How do you see AI being used or misused with regards to accidents and close calls and nuclear weapons?

Mike: Let me jump in here, I would take accidents and split it into two categories. The first are cases like the Cuban Missile Crisis where what you’re really talking about is miscalculation or escalation. Essentially, a conflict that people didn’t mean to have in the first place. That’s different I think than the notion of a technical accident, like a part in a physical sense, you know a part breaks and something happens.

Both of those are potentially important, and AI interacts with both of them. If you think about challenges surrounding the robustness of algorithms, the risk of hacking, and the lack of explainability (Paul’s written a lot about this), those, I think, function not exclusively, but in many ways, on the technical accident side.

The miscalculation side, the piece of AI I actually worry about the most are not uses of AI in the nuclear context, it’s conventional deployments of AI, whether autonomous weapons or not, that speed up warfare and thus cause countries to fear that they’re going to lose faster because it’s that situation where you fear you’re going to lose faster that leads to more dangerous launch postures, more dangerous use of nuclear weapons, decision-making, pre-delegation, all of those things that we worried about in the Cold War and beyond.

I think the biggest risk from an escalation perspective, at least for my money, is actually the way that the conventional uses of AI could cause crisis instability, especially for countries that don’t feel very secure, that don’t think that their second strike capabilities are very secure.

Paul: I think that your question about accidents gets to the heart of what we mean by stability. I’m going to paraphrase from my colleague Elbridge Colby, who does a lot of work on nuclear issues and nuclear stability. What you really want in a stable situation is that war only occurs if one side truly seeks it. You don’t get an escalation to war or escalation of crises because of technical accidents, miscalculation, or misunderstanding.

There could be multiple different kinds of causes that might lead you to war, and one of those might even be perverse incentives: a deployment posture, for example, that might lead you to say, “Well, I need to strike first because of a fear that they might strike me,” and you want to avoid that kind of situation. I think there’s lots to be said for human involvement in all of these things, and I want to say right off the bat that humans bring to bear the ability to understand judgment and context that AI systems today simply do not have. At least we don’t see that in development based on the state of the technology today. Maybe it’s five years away, 50 years away, I have no idea, but we don’t see that today. I think that’s really important to say up front. Having said that, when we’re thinking about the way that these nuclear arsenals are designed in their entirety (the early warning systems, the way that data is conveyed throughout the system and presented to humans, the way that decisions are made, the way that those orders are then conveyed to launch delivery vehicles), it’s worth looking at new technologies and processes and asking, could we make it safer?

We have had a terrifying number of near misses over the years. No actual nuclear use because of accidents or miscalculation, but it’s hard to say how close we’ve been and this is I think a really contested proposition. There are some people that can look at the history of near misses and say, “Wow, we are playing Russian roulette with nuclear weapons as a civilization and we need to find a way to make this safer or disarm or find a way to step back from the brink.” Others can look at the same data set and say, “Look, the system works. Every single time, we didn’t shoot these weapons.”

I will just observe that we don’t have a lot of data points or a long history here, so I think there should be huge error bars on whatever we suggest about the future, and we have very little data at all about how people actually make decisions about false alarms in a crisis. We’ve had some instances where there have been false alarms, like the Petrov incident, and there have been a few others, but we don’t really have a good understanding of how people would respond to that in the midst of a heated crisis like the Cuban Missile Crisis.

When you think about using automation, there are ways that we might try to make this entire socio-technical architecture of responding to nuclear crises and making a decision about reacting, safer and more stable. If we could use AI systems to better understand the enemy’s decision-making or the factual nature of their delivery platforms, that’s a great thing. If you could use it to better convey correct information to humans, that’s a good thing.

Mike: Paul, I would add that if you can use AI to buy decision-makers time, if essentially the speed of processing means that humans feel like they have more time, which psychology would suggest decreases their cognitive stress, that could in theory be a relevant benefit.

Paul: That’s a really good point, and Thomas Schelling again talks about the key role that time plays here, which is a driver of potentially rash actions in a crisis. Because, you know, you might have a false alert of your adversary launching a missile at you, which has happened a couple of times: at least two instances on each side, the American and the Soviet, during the Cold War and immediately afterwards.

If you have sort of this false alarm but you have time to get more information, to call them on a hotline, to make a decision, then that takes the pressure off of making a bad decision. In essence, you want to sort of find ways to change your processes or technology to buy down the rate of false alarms and ensure that in the instance of some kind of false alarm, that you get kind of the right decision.

But you would also conversely want to increase the likelihood that if policymakers did make a rational decision to use nuclear weapons, it’s actually conveyed, because that is of course part of the essence of deterrence: knowing that if you were to use these weapons, the enemy would respond in kind, and that is what in theory deters use.

Mike: Right, what you want is no one to use nuclear weapons unless they genuinely mean to, but if they genuinely mean to, we want that to occur.

Paul: Right, because that’s what’s going to prevent the other side from doing it. There’s this paradox, which Scott Sagan refers to in his book on nuclear accidents as the “always/never dilemma”: the weapons should always be used when it’s intentional, but never used by accident or miscalculation.

Ariel: Well, I’ve got to say I’m hoping they’re never used intentionally either. I’m not a fan, personally. I want to touch on this a little bit more. You’re talking about all these ways that the technology could be developed so that it is useful and does hopefully help us make smarter decisions. Is that what you see playing out right now? Is that how you see this technology being used and developed in militaries or are there signs that it’s being developed faster and possibly used before it’s ready?

Mike: I think in the nuclear realm, countries are going to be very cautious about using algorithms, autonomous systems, whatever terminology you want to use, to make fundamental choices or decisions about use. To the extent that there’s risk in what you’re suggesting, I think those risks are probably, for my money, higher outside the nuclear enterprise, simply because that’s an area where militaries are inherently a little more cautious. If you had an accident, I think it would probably be because you had automated perhaps some element of the warning process, and your future Petrovs essentially have automation bias: they trust the algorithms too much and don’t use judgment, as Paul was suggesting, and that’s a question of training and doctrine.

For me, it goes back to what I suggested before about how technology doesn’t exist in a vacuum. The risks to me depend on training and doctrine in some ways as much as on the technology itself. But the nuclear weapons enterprise is actually an area where militaries in general will be a little more cautious than outside of the nuclear context, simply because the stakes are so high. I could be wrong though.

Paul: I don’t really worry too much that you’re going to see countries set up a process that would automate entirely the decision to use nuclear weapons. That’s just very hard to imagine. This is the most conservative area where countries will think about using this kind of technology.

Having said that, I would agree that there are lots more risks outside of the nuclear launch decision that could pertain to nuclear operations, or could be in the conventional space but have spillover to nuclear issues. Some of them could involve the use of AI in early warning systems, and then there is the automation bias risk: that information is conveyed to people in a way that doesn’t carry the nuance of what the system is actually detecting or the potential for accidents, and people over-trust the automation. There are plenty of examples of humans over-trusting automation in a variety of settings.

But some of these could be far afield, in things that are not military at all. Look at a technology like AI-generated deep fakes, and imagine a world where, in a crisis, someone releases a video or audio of a national political leader making some statement, and that further inflames the crisis and perhaps introduces uncertainty about what someone might do. That’s actually really frightening; it could be a catalyst for instability, and it could be outside of the military domain entirely. Hats off to Phil Reiner, who works on these issues in California and who has raised this one about deep fakes.

But I think that there’s a host of ways that you could see this technology raising concerns about instability that might be outside of nuclear operations.

Mike: I agree with that. I think the biggest risks here are from the ways that the use of AI outside the nuclear context could create or escalate a crisis involving one or more nuclear weapons states. It’s less AI in the nuclear context; it’s more whether it’s the speed of war, whether it’s deep fakes, whether it’s an accident from some conventional autonomous system.

Ariel: That sort of comes back to a perception question that I didn’t get a chance to ask earlier and that is, something else I read is that there’s risks that if a country’s consumer industry or the tech industry is designing AI capabilities, other countries can perceive that as automatically being used in weaponry or more specifically, nuclear weapons. Do you see that as being an issue?

Paul: If you’re in general concerned about militaries importing commercially driven technology like AI into the military space and using it, I think it’s reasonable to think that militaries are going to look for technology to get advantages. The one thing that I would say might help calm some of those fears is that the best friend of someone who’s concerned about that is the slowness of military acquisition processes, which move at a glacial pace and are actually a huge hindrance to a lot of technology adoption.

I think it’s valid to ask for any technology how its use would affect, positively or negatively, global peace and security, and if something looks particularly dangerous, to have a conversation about that. I think it’s great that there are a number of researchers in different organizations thinking about this. I think it’s great that FLI is, that you’ve raised this, and there are good people at RAND (Ed Geist and Andrew Lohn have written a report on AI and nuclear stability), and Laura Saalman and Vincent Boulanin at SIPRI work on this, funded by the Carnegie Corporation. Phil Reiner, who I mentioned a second ago, I blanked on his organization, it’s Technology for Global Security, is thinking about a lot of these challenges. But I wouldn’t leap to assume that just because something is out there, militaries are always going to adopt it. Militaries have their own strategic and bureaucratic interests at stake that are going to influence what technologies they adopt and how.

Mike: I would add to that, if the concern is that countries see US consumer and commercial advances and then presume there’s more going on than there actually is, maybe, but I think it’s more likely that countries like Russia and China and others think about AI as an area where they can generate potential advantages. These are countries that have trailed the American military for decades and have been looking for ways to potentially leap ahead or even just catch up. There are also more autocratic countries that don’t trust their people in the first place and so I think to the extent you see incentives for development in places like Russia and China, I think those incentives are less about what’s going on in the US commercial space and more about their desire to leverage AI to compete with the United States.

Ariel: Okay, so I want to shift slightly but also still continuing with some of this stuff. We talked about the slowness of the military to take on new acquisitions and transform, I think, essentially. One of the things that to me, it seems like we still sort of see and I think this is changing, I hope it’s changing, is treating a lot of military issues as though we’re still in the Cold War. When I say I’ve been reading stuff, a lot of what I’ve been reading has been coming from the RAND report on AI and nuclear weapons. And they talk a lot about bipolarism versus multipolarism.

If I understand this correctly, bipolarism is a bit more like what we saw with the Cold War where you have the US and allies versus Russia and whoever. Basically, you have that sort of axis between those two powers. Whereas today, we’re seeing more multipolarism where you have Russia and the US and China and then there’s also things happening with India and Pakistan. North Korea has been putting itself on the map with nuclear weapons.

I was wondering if you can talk a bit about how you see that impacting how we continue to develop nuclear weapons, how that changes strategy and what role AI can play, and correct me if I’m wrong in my definitions of multipolarism and bipolarism.

Mike: Sure. I mean, when you talk about a bipolar nuclear situation during the Cold War, essentially what that reflects is that the United States and the then-Soviet Union had the only two nuclear arsenals that mattered: either the United States or the Soviet Union could essentially destroy any other country in the world, even after absorbing a hit from that country’s nuclear arsenal. Whereas since the end of the Cold War, you’ve had several other countries, including China, as well as India, Pakistan, and to some extent now North Korea, who have not just developed nuclear arsenals but developed more sophisticated nuclear arsenals.

That’s part of the ongoing debate in the United States, to the extent it’s even debated: whether the United States is now vulnerable to China’s nuclear arsenal, meaning the United States could no longer launch a first strike against China. In general, you’ve ended up in a more multipolar nuclear world, in part because I think the United States and Russia, for their own reasons, spent a few decades not really investing in their underlying nuclear weapons complex, and I think the fear of a developing multipolar nuclear structure is one reason why the United States, under the Obama administration and continuing in the Trump administration, has ramped up its efforts at nuclear modernization.

I think AI could play in here in some of the ways that we’ve talked about, but in some ways AI is not the star of the show. The star of the show remains the desire by countries to have secure retaliatory capabilities and, on the part of the United States, to have the biggest advantage possible when it comes to the sophistication of its nuclear arsenal. I don’t know, what do you think, Paul?

Paul: I think the way that the international system and its polarity, if you will, impacts this issue is mostly that cooperation gets much harder when the number of actors that need to cooperate increases, when the “n” goes from 2 to 6 or 10 or more. AI is a relatively diffuse technology; while there are only a handful of actors internationally at the leading edge, this technology proliferates fairly rapidly, and so will be widely available to many different actors to use.

To the extent that there are some types of applications of AI that might be seen as problematic in the nuclear context, either in nuclear operations or related or incidental to them, it’s much harder to try to control that when you have to get more people on board to agree. For example, I’ll make this up hypothetically: let’s say that there are only two global actors who could make high-resolution deep fake videos. You might say, “Listen, let’s agree not to do this in a crisis, or let’s agree not to do this for manipulative purposes to try to stoke a crisis.” When anybody could do it on a laptop, then forget about it, right? That’s a world we’ve got to live with.

You certainly see this historically when you look at different arms control regimes. There was actually a flurry of arms control during the Cold War, both bilateral agreements between the US and USSR and also multilateral ones that those two countries led, because you had a bipolar system. You saw attempts earlier in the 20th century to do arms control that collapsed because of some of these dynamics.

During the 1920s, the naval treaties governing the number and tonnage of battleships that countries built collapsed because there was one defector, initially Japan, who thought they had gotten a raw deal in the treaty, and then others followed suit. We’ve seen this since the end of the Cold War with the end of the Anti-Ballistic Missile (ABM) Treaty, and now with the degradation of the INF Treaty, with Russia cheating on it and the treaty being under threat. The concern is that both the United States and Russia were reacting to what other countries were doing: in the case of the ABM Treaty, the US was concerned about ballistic missile threats from North Korea and Iran, and withdrew entirely from the treaty to be able to deploy limited missile defense systems, and then Russia was concerned that those were either secretly aimed at them or might have the effect of reducing their posture. That’s sort of one unraveling.

In the case of the INF Treaty, Russia is looking at what China, which is not a signatory to INF, is building, and is now building missiles that violate the treaty itself. That's a much harder dynamic when you have multiple countries at play, each having to respond to security threats that may be diverse and asymmetric and come from different actors.

Ariel: You've touched on this a bit already, but especially given what you were just talking about, with various countries involved and how that makes things more challenging, what specifically do you worry about when you think about destabilization? What does that look like?

Mike: I would say destabilization for whom is the operative question. There's been a lot of empirical research now suggesting that the United States never really fully bought into mutually assured destruction; the United States gave lip service to the idea while still pursuing avenues for nuclear superiority even during the Cold War. In some ways, a United States that felt its nuclear deterrent was inadequate, especially if it perceived challenges from multiple different actors, would probably be a United States that invested a lot more in capabilities one might view as destabilizing.

But I would tend to think about this in the context of individual pairs of states or small groups of states: essentially, China worries about America's nuclear arsenal, India worries about China's nuclear arsenal, Pakistan worries about India's nuclear arsenal, and all of them would be terribly offended that I just said that. These relationships are complicated, and in some ways what generates instability is a combination of deteriorating political relations and a decreased feeling of security as the technological sophistication of potential adversaries' arsenals grows.

Paul: I think I’m less concerned about countries improving their arsenals or military forces over time to try to gain an edge on adversaries. I think that’s sort of a normal process that militaries and countries do. I don’t think it’s particularly problematic to be honest with you, unless you get to a place where the amount of expenditure is so outrageous that it creates a strain on the economy or that you see them pursuing some race for technology that once they got there, there’s sort of like a winner-take-all mentality, right, of, “Oh, and then I need to use it.” Whoever gets to nuclear weapons first, then uses nuclear weapons and then gains an upper hand.

That creates incentives for launching a preventive war once you achieve the technology, which I think is going to be very problematic. Otherwise, upgrading or improving an arsenal is, I think, a normal kind of behavior. I'm more concerned about how you either use technology beneficially or avoid certain kinds of applications of technology that might create risks of accidents and miscalculation in a crisis.

For example, as we’re seeing countries acquire more drones and deploy them in military settings, I would love to see an international norm against putting nuclear weapons on a drone, on an uninhabited vehicle. I think that it is more problematic from a technical risk standpoint, and a technical accident standpoint, than certainly using them on an aircraft that has a human on board or on a missile, which doesn’t have a person on board but is a one-way vehicle. It wouldn’t be sent on patrol.

While I think it’s highly unlikely that, say, the United States would do this, in fact, they’re not even making their next generation B-21 Bomber uninhabited-

Mike: Right, the US has actively moved to not do this, basically.

Paul: Right, US Air Force generals have spoken out repeatedly saying they want no part of such a thing. But we haven't seen the US voice this concern publicly in any formal way, and I actually think it could be beneficial to say it more concretely, for example in a speech by the Secretary of Defense, which might signal to other countries, "Hey, we actually think this is a dangerous thing." I could imagine other countries making a different calculation, or seeing more capability advantages in using drones in this fashion, but I think that could be dangerous and harmful. That's just one example.

Automation bias is something I'm actually really deeply concerned about. As we use AI tools to gain information, and as the way these tools function becomes more complicated and more opaque to humans, you could run into a situation where people get a false alarm but have begun to over-trust the automation. I think that's a huge risk, in part because you might not see it coming: people would say, "Oh, humans are in the loop. Humans are in charge, it's no problem." But in fact we're conveying information to people in a way that leads them to surrender judgment to the machines, even if the automation is only used in information collection and has nothing to do with nuclear decision-making.

Mike: I think that those are both right, though I think I may be skeptical in some ways about our ability to generate norms around not putting nuclear weapons on drones.

Paul: I knew you were going to say that.

Mike: Not because I think it's a good idea. It's clearly a bad idea, but the country it's the worst idea for is the United States.

Paul: Right.

Mike: If a North Korea, or an India, or a China thinks that they need that to generate stability and that makes them feel more secure to have that option, I think it will be hard to talk them out of it if their alternative would be say, land-based silos that they think would be more vulnerable to a first strike.

Paul: Well, I think it depends on the country, right? I mean countries are sensitive at different levels to some of these perceptions of global norms of responsible behavior. Like certainly North Korea is not going to care. You might see a country like India being more concerned about sort of what is seen as appropriate responsible behavior for a great power. I don’t know. It would depend upon sort of how this was conveyed.

Mike: That’s totally fair.

Ariel: Man, I have to say, all of this is not making it clear to me why nuclear weapons are that beneficial in the first place. We don’t have a ton of time so I don’t know that we need to get into that but a lot of these threats seem obviously avoidable if we don’t have the nukes to begin with.

Paul: Let’s just respond to that briefly, so I think there’s two schools of thought here in terms of why nukes are valuable. One is that nuclear weapons reduce the risk of conventional war and so you’re going to get less state-on-state warfare, that if you had a world with no nuclear weapons at all, obviously the risk of nuclear armageddon would go to zero, which would be great. That’s not a good risk for us to be running.

Mike: Now the world is safe for major conventional war.

Paul: Right, but then you'd have more conventional war, like we saw in World War I and World War II, and that led to tremendous devastation. So that's one school of thought. There's another one that basically says the only thing nuclear weapons are good for is deterring others from using nuclear weapons. That's what former Secretary of Defense Robert McNamara has said, and he's certainly by no means a radical leftist. There's a strong school of thought among former defense and security professionals that getting to global zero would be good. But even if people agreed that's definitely where we want to go, and that it's worth a trade-off of greater conventional war to take away the threat of armageddon, how you get there in a safe way is certainly not at all clear.

Mike: The challenge is that the lower the numbers go, the more small numbers matter (we talked before about how the United States and Russia have had the most significant nuclear arsenals, both in numbers and sophistication), and so the more the arsenal of every nuclear power becomes important. And because countries don't trust each other, that could increase the risk that somebody essentially guns to be number one as you get closer to zero.

Paul: Right.

Ariel: I guess one of the things that isn't obvious to me: even if we're not aiming for zero, let's say we're aiming to decrease the number of nuclear weapons globally to the hundreds, rather than the roughly 15,000 we're at now. I worry that a lot of the advancing technology we're seeing with AI and automation (though maybe this would be happening anyway) seems to be driving the push for modernization, and so we're seeing modernization happening rather than a decrease in weapons.

Mike: I think you're right to point out the drive for modernization as a trend. Part of it is simply the age of the arsenals and their components for some countries, including the United States. You have components designed to have a lifespan of, say, 30 years that have been used for 60 years, and the people who built some of those components in the first place have now mostly passed away. It's even hard to build some of them again.

I think it’s totally fair to say that emerging technologies including AI could play a role in shaping modernization programs. Part of the incentive for it I think has simply to do with a desire for countries, including but not limited to the United States, to feel like their arsenals are reliable, which gets back to perception, what you raised before, though that’s self-perception in some ways more than anything else.

Paul: I think Mike's right that reliability is what's primarily motivating modernization. It's a concern that these things are aging and might not work. If you're in a situation where it's unclear whether they would work, that could actually reduce deterrence and create incentives for others to attack you, and so you want your nuclear arsenal to be reliable.

There's probably a component of that too, that as people modernize they're trying to seek advantage over others. But it's worth taking a step back and looking at where we are today, with this legacy of the Cold War and the nuclear arsenals that are in place: how confident are we that mutual deterrence won't lead to nuclear war in the future? I'm not super confident. I'm in the camp that says the history of near-miss accidents is pretty terrifying, and there's probably a lot of luck at play.

From my perspective, as we think about going forward, there's certainly an argument for "let it all go to rust," and if you could get countries to do that collectively, all of them, maybe there'd be big advantages there. If that's not possible, then as countries modernize their arsenals for the sake of reliability, they should take a step back and think about how to redesign these systems to be more stable, to increase deterrence, and to reduce the risk of false alarms and accidents overall, sort of "soup to nuts" when you're looking at the architecture.

I do worry that that's not a major feature when countries are looking at modernization. They're thinking about increasing the reliability of their systems, the "always" component of the "always/never dilemma," and about getting an advantage over others, but there may not be enough thought going into the "never" component: how do we ensure that we continue to buy down the risk of accidents or miscalculation?

Ariel: I guess the other thing I would add that isn't obvious is: if we're modernizing our arsenals so that they're better, why doesn't that also mean smaller? Because we don't need 15,000 nuclear weapons.

Mike: I think there are actually people out there that view effective modernization as something that could enable reductions. Some of that depends on politics and depends on other international relations kinds of issues, but I certainly think it’s plausible that the end result of modernization could make countries feel more confident in nuclear reductions, all other things equal.

Paul: The US and Russia have certainly been working slowly to reduce their arsenals through a number of treaties. There was a big push in the Obama Administration to look for ways to continue to do so, but countries are going to want these to be mutual reductions, right? Not unilateral.

At a certain level, as the US and Russian arsenals go down, you're going to get tied into what China's doing and the size of its arsenal becoming relevant, and you're also going to get tied into other strategic concerns for some of these countries when it comes to other technologies, like space-based weapons or anti-space weapons or hypersonic weapons. The negotiations become more complicated.

That doesn't mean they're not valuable or worth doing, because while stability should be the goal, having fewer weapons overall is helpful in the sense that if there is, God forbid, some kind of nuclear exchange, there's just less destructive capability overall.

Ariel: Okay, and I’m going to end it on that note because we are going a little bit long here. There are quite a few more questions that I wanted to ask. I don’t even think we got into actually defining what AI on nuclear weapons looks like, so I really appreciate you guys joining me today and answering the questions that we were able to get to.

Paul: Thank you.

Mike: Thanks a lot. Happy to do it and happy to come back anytime.

Paul: Yeah, thanks for having us. We really appreciate it.

$50,000 Award to Stanislav Petrov for helping avert WWIII – but US denies visa


To celebrate that today is not the 35th anniversary of World War III, Stanislav Petrov, the man who helped avert an all-out nuclear exchange between Russia and the U.S. on September 26, 1983, was honored with the $50,000 Future of Life Award at a ceremony at the Museum of Mathematics in New York.

Former United Nations Secretary General Ban Ki-Moon said: “It is hard to imagine anything more devastating for humanity than all-out nuclear war between Russia and the United States. Yet this might have occurred by accident on September 26 1983, were it not for the wise decisions of Stanislav Yevgrafovich Petrov. For this, he deserves humanity’s profound gratitude. Let us resolve to work together to realize a world free from fear of nuclear weapons, remembering the courageous judgement of Stanislav Petrov.”

Stanislav Petrov’s daughter Elena holds the 2018 Future of Life Award flanked by her husband Victor. From left: Ariel Conn (FLI), Lucas Perry (FLI), Hannah Fry, Victor, Elena, Steven Mao (exec. producer of the Petrov film “The Man Who Saved the World”), Max Tegmark (FLI)

Although the U.N. General Assembly, just blocks away, heard politicians highlight the nuclear threat from North Korea’s small nuclear arsenal, none mentioned the greater threat from the many thousands of nuclear weapons in the United States and Russian arsenals that have nearly been unleashed by mistake dozens of times in the past in a seemingly never-ending series of mishaps and misunderstandings.

One of the closest calls occurred thirty-five years ago, on September 26, 1983, when Stanislav Petrov chose to ignore the Soviet early-warning detection system that had erroneously indicated five incoming American nuclear missiles. With his decision to ignore algorithms and instead follow his gut instinct, Petrov helped prevent an all-out US-Russian nuclear war, as detailed in the documentary film “The Man Who Saved the World”, which will be released digitally next week. Since Petrov passed away last year, the award was collected by his daughter Elena. Meanwhile, Petrov’s son Dmitry missed his flight to New York because the U.S. embassy delayed his visa. “That a guy can’t get a visa to visit the city his dad saved from nuclear annihilation is emblematic of how frosty US-Russian relations have gotten, which increases the risk of accidental nuclear war”, said MIT Professor Max Tegmark when presenting the award. Arguably the only recent reduction in the risk of accidental nuclear war came when Donald Trump held a summit with Vladimir Putin in Helsinki earlier this year, which was, ironically, met with widespread criticism.

In Russia, soldiers often didn't discuss their wartime actions out of fear that it might displease their government, and so Elena first heard about her father's heroic actions in 1998 – 15 years after the event occurred. Even then, she and her brother only learned of what their father had done when a German journalist reached out to the family for an article he was working on. It's unclear if Petrov's wife, who died in 1997, ever knew of her husband's heroism. Until his death, Petrov maintained a humble outlook on the event that made him famous. "I was just doing my job," he'd say.

But most would agree that he went above and beyond his job duties that September day in 1983. The alert of five incoming nuclear missiles came at a time of high tension between the superpowers, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan's anti-Soviet rhetoric. Earlier that month the Soviet Union had shot down a Korean Airlines passenger plane that strayed into its airspace, killing all 269 people aboard, and Petrov had to consider this context when he received the missile notifications. He had only minutes to decide whether or not the satellite data were a false alarm. Since the satellite was found to be operating properly, following procedures would have led him to report an incoming attack. Going partly on gut instinct and believing the United States was unlikely to fire only five missiles, he told his commanders that it was a false alarm before he knew that to be true. Later investigations revealed that reflections of the Sun off of cloud tops had fooled the satellite into thinking it was detecting missile launches.

Last year's Nobel Peace Prize Laureate, Beatrice Fihn, who helped establish the recent United Nations treaty banning nuclear weapons, said, "Stanislav Petrov was faced with a choice that no person should have to make, and at that moment he chose the human race — to save all of us. No one person and no one country should have that type of control over all our lives, and all future lives to come. 35 years from that day when Stanislav Petrov chose us over nuclear weapons, nine states still hold the world hostage with 15,000 nuclear weapons. We cannot continue relying on luck and heroes to safeguard humanity. The Treaty on the Prohibition of Nuclear Weapons provides an opportunity for all of us and our leaders to choose the human race over nuclear weapons by banning them and eliminating them once and for all. The choice is the end of us or the end of nuclear weapons. We honor Stanislav Petrov by choosing the latter."

University College London Mathematics Professor Hannah Fry, author of the new book “Hello World: Being Human in the Age of Algorithms”, participated in the ceremony and pointed out that as ever more human decisions get replaced by automated algorithms, it is sometimes crucial to keep a human in the loop – as in Petrov’s case.

The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. It is given by the Future of Life Institute (FLI), a non-profit also known for supporting AI safety research with Elon Musk and others. "Although most people never learn about Petrov in school, they might not have been alive were it not for him", said FLI co-founder Anthony Aguirre. Last year's award was given to Vasili Arkhipov, who single-handedly prevented a nuclear attack on the US during the Cuban Missile Crisis. FLI is currently accepting nominations for next year's award.

Stanislav Petrov around the time he helped avert WWIII

AI Alignment Podcast: Moral Uncertainty and the Path to AI Alignment with William MacAskill

How are we to make progress on AI alignment given moral uncertainty?  What are the ideal ways of resolving conflicting value systems and views of morality among persons? How ought we to go about AI alignment given that we are unsure about our normative and metaethical theories? How should preferences be aggregated and persons idealized in the context of our uncertainty?

Moral Uncertainty and the Path to AI Alignment with William MacAskill is the fifth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

In this podcast, Lucas spoke with William MacAskill. Will is a professor of philosophy at the University of Oxford and is a co-founder of the Center for Effective Altruism, Giving What We Can, and 80,000 Hours. Will helped to create the effective altruism movement and his writing is mainly focused on issues of normative and decision theoretic uncertainty, as well as general issues in ethics.

Topics discussed in this episode include:

  • Will’s current normative and metaethical credences
  • The value of moral information and moral philosophy
  • A taxonomy of the AI alignment problem
  • How we ought to practice AI alignment given moral uncertainty
  • Moral uncertainty in preference aggregation
  • Moral uncertainty in deciding where we ought to be going as a society
  • Idealizing persons and their preferences
  • The most neglected portion of AI alignment
In this interview we discuss ideas contained in the work of William MacAskill. You can learn more about Will’s work here, and follow him on social media here. You can find Gordon Worley’s post here and Rob Wiblin’s previous podcast with Will here.  You can hear more in the podcast above or read the transcript below.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast series at the Future of Life Institute. I’m Lucas Perry, and today we’ll be speaking with William MacAskill on moral uncertainty and its place in AI alignment. If you’ve been enjoying this series and finding it interesting or valuable, it’s a big help if you can share it on social media and follow us on your preferred listening platform.

Will is a professor of philosophy at the University of Oxford and is a co-founder of the Center for Effective Altruism, Giving What We Can, and 80,000 Hours. Will helped to create the effective altruism movement and his writing is mainly focused on issues of normative and decision theoretic uncertainty, as well as general issues in ethics. And so, without further ado, I give you William MacAskill.

Yeah, Will, thanks so much for coming on the podcast. It’s really great to have you here.

Will: Thanks for having me on.

Lucas: So, I guess we can start off. You can tell us a little bit about the work that you’ve been up to recently in terms of your work in the space of metaethics and moral uncertainty just over the past few years and how that’s been evolving.

Will: Great. My PhD topic was on moral uncertainty, and I’m just putting the finishing touches on a book on this topic. The idea here is to appreciate the fact that we very often are just unsure about what we ought, morally speaking, to do. It’s also plausible that we ought to be unsure about what we ought morally to do. Ethics is a really hard subject, there’s tons of disagreement, it would be overconfident to think, “Oh, I’ve definitely figured out the correct moral view.” So my work focuses on not really the question of how unsure we should be, but instead what should we do given that we’re uncertain?

In particular, I look at the issue of whether we can apply the same sort of reasoning that we apply to uncertainty about matters of fact to moral uncertainty. Can we use what is known as "expected utility theory", which is very widely accepted as at least approximately correct for empirical uncertainty, in the same way in the case of moral uncertainty?
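To make that idea concrete, here is a minimal illustrative sketch (not from the interview) of what it looks like to port expected utility reasoning over to moral uncertainty: assign a credence to each moral theory, score each option's choiceworthiness under each theory, and pick the option with the highest credence-weighted sum. The theory names, credences, and scores below are invented for the example, and the sketch simply assumes the theories' scales are comparable, which is exactly the intertheoretic-comparison problem discussed later in the episode.

```python
# Illustrative sketch of "maximize expected choiceworthiness" under moral uncertainty.
# Credences and choiceworthiness scores are made up, and the theories' scales are
# assumed to be comparable (the hard intertheoretic-comparison problem is ignored).

credences = {"utilitarianism": 0.5, "kantianism": 0.3, "virtue_ethics": 0.2}

# choiceworthiness[option][theory] = how choiceworthy that option is on that theory
choiceworthiness = {
    "donate_the_money": {"utilitarianism": 10, "kantianism": 5, "virtue_ethics": 7},
    "keep_the_money":   {"utilitarianism": 1,  "kantianism": 2, "virtue_ethics": 1},
}

def expected_choiceworthiness(option):
    """Credence-weighted sum of an option's choiceworthiness across moral theories."""
    return sum(credences[theory] * score
               for theory, score in choiceworthiness[option].items())

for option in choiceworthiness:
    print(option, expected_choiceworthiness(option))

best = max(choiceworthiness, key=expected_choiceworthiness)
print("Option with highest expected choiceworthiness:", best)
```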

Lucas: Right. And so coming on here, you also have a book that you’ve been working on on moral uncertainty that is unpublished. Have you just been expanding this exploration in that book, diving deeper into that?

Will: That's right. There's actually been very little written on the topic of moral uncertainty, at least in modern times, at least relative to its importance. I would think of this as a discipline that should be studied as much as consequentialism or contractualism or Kantianism is studied, but in modern times only one book has been written on the topic, and that was published 18 years ago now. What we want is for this to be, firstly, a kind of definitive introduction to the topic. It's co-authored with me as lead author, along with Toby Ord and Krister Bykvist, and it lays out what we see as the most promising path forward in addressing some of the challenges that face an account of decision-making under moral uncertainty, some of the implications of taking moral uncertainty seriously, and also some of the unanswered questions.

Lucas: Awesome. So I guess, just moving forward here, you have a podcast that you already did with Rob Wiblin of 80,000 Hours. So we can avoid covering a lot of the basics here about your views on using expected utility calculus in moral reasoning and moral uncertainty in order to decide what one ought to do when one is not sure what one ought to do. People can go ahead and listen to that podcast, which I'll provide a link to in the description.

It would also be good to get a general sense of where your metaethical partialities generally lie right now. What sort of metaethical positions do you tend to give the most credence to?

Will: Okay, well that’s a very well put question ’cause, as with all things, I think it’s better to talk about degrees of belief rather than absolute belief. So normally if you ask a philosopher this question, we’ll say, “I’m a nihilist,” or “I’m a moral realist,” or something, so I think it’s better to split your credences. So I think I’m about 50/50 between nihilism or error theory and something that’s non-nihilistic.

Where by nihilism or error theory, I just mean the view on which any positive moral, normative, or evaluative statement is false. That includes "you ought to maximize happiness," or "if you want a lot of money, you ought to become a banker," or "pain is bad." On this view, all of those things are false; all positive normative or evaluative claims are false. So it's a very radical view. And we can talk more about that, if you'd like.

In terms of the rest of my credence, the view that I’m kind of most sympathetic towards in the sense of the one that occupies most of my mental attention is a relatively robust form of moral realism. It’s not clear whether it should be called kind of naturalist moral realism or non-naturalist moral realism, but the important aspect of it is just that goodness and badness are kind of these fundamental moral properties and are properties of experience.

The things that are of value are things that supervene on conscious states, in particular good states or bad states, and the way we know about them is just by direct experience with them. Just being acquainted with a state like pain gives us a reason for thinking we ought to have less of it in the world. So that's my favored view, in the sense that it's the one I'd be most likely to defend in the seminar room.

And then I give somewhat less credence to a couple of views. One is a view called "subjectivism," which is the idea that what you ought to do is determined in some sense by what you want to do. The simplest version would just be that when I say, "I ought to do X," that just means I want to do X in some way. A more sophisticated version would be ideal subjectivism, where when I say I ought to do X, it means some very idealized version of myself would want myself to want to do X, perhaps if I had unlimited knowledge and much greater computational power and so on. I'm a little less sympathetic to that than many people I know; we'll go into that.

And then a final view that I'm also less sympathetic towards is non-cognitivism, the idea that our moral statements aren't even attempting to express propositions. So when I say, "Murder is wrong," what I'm doing is just expressing some emotion of mine, like, "Yuk. Murder. Ugh," in the same way that when I said that just now, it wasn't expressing any proposition, just some sort of pro or negative attitude. And again, I don't find that terribly plausible, for reasons we can go into.

Lucas: Right, so those first two views are cognitivist views, which means that on the semantic level you take people to be making true or false statements when they make moral claims. And the error theory and your moral realism are both also metaphysical views, which I think is probably what we'll mostly be interested in here in terms of the AI alignment problem.

There are other issues in metaethics, for example having to do with semantics, as you just discussed; you give some credence to non-cognitivism. But there are also justification views, issues in moral epistemology: how one can know about moral facts, and why one ought to follow them if there are such facts. Where do you fall in that camp?

Will: Well, I think all of those views are quite tightly tied together. What sort of moral epistemology you have depends very closely, I think, on what sort of metaethical view you have, and I actually think it's often intimately related as well to what sort of view in normative ethics you have. So my preferred philosophical worldview, the one I'd defend in a seminar room, is classical utilitarian in its normative view: the only thing that matters is positive or negative mental states.

In terms of its moral epistemology, the way we access what is of value is just by experiencing it, in just the same way we access conscious states. There are also some things you can't get merely from experience, for example: why is it that we should maximize the sum of good experiences rather than the product, or something? That's a view you've got to arrive at by reasoning rather than purely from experience.

Part of my epistemology does appeal to whatever this spooky ability we have to reason about abstract affairs is, but it's the same sort of faculty that is used when we think about mathematics or set theory or other areas of philosophy. If, however, I had some different view, supposing I were a subjectivist, well then moral epistemology looks very different. You're just reflecting on your own values, maybe looking at what you would actually do in different circumstances and so on, reflecting on your own preferences, and that's the right way to come to the correct moral views.

There's also another metaethical view called "constructivism" that I'm definitely not the best person to talk about. It's not really a realist view; on this view we just have a bunch of beliefs and intuitions, and the correct moral view is just the best systematization of those beliefs and intuitions, in the same way you might think linguistics is a science but is fundamentally based on what our linguistic intuitions are, just a kind of systematization of them.

On that view, then, moral epistemology would be about reflecting on your own moral intuitions. You've got all of this data, which is the way things seem to you, morally speaking, and then you're just doing the systematization. So I feel like the question of moral epistemology can't be answered in a vacuum; you've got to think about your metaethical view, the metaphysics of ethics, at the same time.

Lucas: I think I’m pretty interested in here, and also just poking a little bit more into that sort of 50% credence you give to your moral realist view, which is super interesting because it’s a view that people tend not to have, I guess, in the AI computer science rationality space, EA space. People tend to, I guess, have a lot of moral anti-realists in this space.

In my last podcast I spoke with David Pearce, and he also seemed to have a view like this, so I'm wondering if you can unpack yours a little bit. He believed that suffering and pleasure disclose the in-built pleasure/pain axis of the universe. You can think of minds as objective features of the world, because they in fact are objective features of the world, and the phenomenology and experience of each person is objective in the same way that someone could objectively be experiencing redness, and in the same sense they could be objectively experiencing pain.

It seems to me, and I don't fully understand the view, that the claim is that there is some sort of in-built quality or property of the hedonic qualia of suffering or pleasure that discloses their in-built value.

Will: Yeah.

Lucas: Could you unpack it a little bit more about the metaphysics of that and what that even means?

Will: It sounds like David Pearce and I have quite similar views. I think relying heavily on a very close analogy with consciousness is going to help. Imagine you're a kind of robot scientist: you don't have any conscious experiences, but you're doing all this fancy science and so on, and then you write out the book of the world, and I'm like, "Hey, there's this thing you missed out. It's conscious experience." And you, the robot scientist, would say, "Wow, that's just insane. You're saying that some bits of matter have this first-person subjective feel to them? Why on earth would we ever believe that? That's just so out of whack with the naturalistic understanding of the world." And it's true. It just doesn't make any sense given what we know now. It's a very strange phenomenon to exist in the world.

And so one of the arguments that motivates error theory is the idea that if values were to exist, they would just be so weird, what Mackie calls "queer," that by a principle of Occam's razor, of not adding strange things to our ontology, we should assume they don't exist.

But that argument would work in the same way against conscious experience, and the best response we've got is to say, no, I know I'm conscious, and I can just tell by introspecting. I think we can run the same sort of argument when it comes to a property of consciousness as well, namely the goodness or badness of certain conscious experiences.

So now I just want you to go totally a-theoretic. Imagine you've not thought about philosophy at all, or even science at all, and I were to rip off one of your fingernails or something. And then I ask, "Is that experience bad?" You would say yes.

Lucas: Yeah, it’s bad.

Will: And I would ask, how confident are you? You're perhaps more confident that this pain is bad than that you even have hands. That's at least how it seems to be for me. So then it seems like we've got this thing that we're actually incredibly confident of, which is the badness of pain, or at least the badness of pain for me, and that's what initially gives the case for thinking, okay, well, that's at least one objective moral fact: that pain is bad, or at least that pain is bad for me.

Lucas: Right, so here's the step where I think people will tend to get lost. I thought the part about Occam's razor was very interesting. I think most people are anti-realists because they use Occam's razor there and think, what the hell would a value even be anyway, in the third-person objective sense? That just seems really queer, as you put it. So I think people get lost at the step where the first-person experience seems to simply have a property of badness to it.

I don't know what that would mean if one has a naturalistic, reductionist view of the world. There seems to be just entropy, noise, and quarks, and maybe qualia as well. It's not clear to me how we should think about properties of qualia, and whether one can derive normative claims from them, that is, go from "is" statements about the properties of qualia to "ought" statements.

Will: One thing I want to be very clear on is that on this view we really have no idea. We are currently completely in the dark about how matter and forces and energy could result in goodness or badness, something that ought to be promoted. But that's also true of conscious experience. We have no idea how on earth matter could result in conscious experience. At the same time, it would be a mistake to start denying conscious experience.

And then we can say, okay, we don't really know what's going on, but we accept that there's conscious experience. And I think if you were again just to completely pre-theoretically start categorizing the different conscious experiences we have, we'd say that some are red and some are blue, some are more intense, some are dimmer than others; you'd maybe classify them into sights and sounds and other sorts of experiences.

I think a very natural classification would also be the ones that are good and the ones that are bad. And when we cash that out further, I don't think the best explanation is that when we say "this is good" or "this is bad," it just means what we want or what we don't want; instead, it's what we think we have reason to want or reason not to want. The experience seems to give us evidence for those sorts of claims.

Lucas: I guess my concern here is that I feel some skepticism about whether words like "good" and "bad" or "valuable" and "disvaluable" disclose some sort of intrinsic property of the qualia. I'm also not sure what the claim is about the nature and kinds of properties that qualia can have. I worry that goodness and badness might be some sort of evolutionary fiction that enhances our fitness but doesn't actually disclose some intrinsic metaphysical quality or property of the experience.

Will: One thing I'll say is, again, remember that I've got this 50% credence in error theory. So all of these worries, that maybe this is just some evolutionary fiction, that things just seem bad but aren't actually, and so on, I actually think those are good arguments, and they should give us some degree of confidence in the idea that actually nothing matters at all.

But underlying a lot of my views is this more general argument: if you're unsure between two views, one on which nothing matters at all and we've got no reasons for action, and another on which we do have some reasons for action, then you can just ignore the one that says you've got no reasons for action, because you're not going to do badly by its lights no matter what you do. If I were to go around shooting everybody, that wouldn't be bad or wrong on nihilism.

So if there are arguments, such as an evolutionary argument, that push us in the direction of error theory, in a sense we can put them to the side. What we ought to do is say, yes, we take that really seriously, it gives us a high credence in error theory; but then ask, once those arguments have had their force, which of the remaining views are most plausible.

So with the evolutionary worry, I just say, yes, but suppose that there actually are such properties. Presumably conscious experiences themselves are useful in some evolutionary way that, again, we don't really understand. Presumably good and bad experiences are also useful in some evolutionary way that we don't fully understand, perhaps because they have a tendency to motivate beings like us, and that in fact seems to be a key aspect of making a goodness or badness claim: it's at least somehow tied up with the idea of motivation.

And when I talk about ascribing a property to a conscious experience, I just mean whatever it is that we mean when we say that this experience is red-seeming or this experience is blue-seeming (there are, again, open philosophical questions about what we even mean by properties); in the same way, this experience is bad-seeming, this is good-seeming.

Before I got into thinking about philosophy and naturalism and so on, would I have thought those things are on a par? I think I would have. So it's at least a pre-theoretically justified view to think, yeah, there just is this axiological property of my experience.

Lucas: This has made me much more optimistic. I think after my last podcast I was feeling quite depressed and nihilistic, and hearing you give this sort of non-naturalistic or naturalistic moral realist account is cheering me up a bit about the prospects of AI alignment and value in the world.

Will: I mean, I think you shouldn't get too optimistic. I'm almost certainly wrong-

Lucas: Yeah.

Will: … even though it sort of is my favorite view. But take any philosopher: what's the chance that they've got the right views? Very low, probably.

Lucas: Right, right. I also think I need to be careful here: human beings have this psychological bias where we give a special metaphysical status, and kind of meaning and motivation, to things that have some objective standing. I guess there's also some sort of motivation I need to be mindful of that seeks to make value objective, or more meaningful and foundational in the universe.

Will: Yeah. The thing that I think should make you feel optimistic, or at least motivated, is this argument that if nothing matters, it doesn't matter that nothing matters. It just really ought not to affect what you do. You may as well act as if things do matter, and in fact we can have this project of trying to figure out whether things matter. That could be an instrumental goal, a kind of purpose for life: to get to a place where we really can figure out if it has any meaning. I think that sort of argument can at least give one grounds for getting out of bed in the morning.

Lucas: Right. I think there's this philosophy paper that I saw, but didn't read, that was something like "nothing matters, but it Matters," with one lowercase M and then a capital M, you know.

Will: Oh, interesting.

Lucas: Yeah.

Will: It sounds a bit like 4:20 ethics.

Lucas: Yeah, cool.

Moving on toward AI alignment: before we dive in, there's something that I think would also be interesting to hear you speak a little bit more about. What even is the value of moral information and moral philosophy, generally? Is this all just a bunch of BS, or how can it be interesting and useful in our lives, and in science and technology?

Will: Okay, terrific. I mean, and this is something I write about in a paper I’m working on now and also in the book, as well.

So, yeah, I think the stereotype of the philosopher engaged in intellectual masturbation, not doing really much for the world at all, is quite a prevalent stereotype. I’ll not comment on whether that’s true for certain areas of philosophy. I think it’s definitely not true for certain areas within ethics. What is true is that philosophy is very hard, ethics is very hard. Most of the time when we’re trying to do this, we make very little progress.

If you look at the long-run history of thought in ethics and political philosophy, the influence is absolutely huge. Even just take Aristotle, Locke, Hobbes, Mill, and Marx. The influence of political philosophy and moral philosophy there, it shaped thousands of years of human history. Certainly not always for the better, sometimes for the worse, as well. So, ensuring that we get some of these ideas correct is just absolutely crucial.

Similarly, even in more recent times … Obviously not as influential as these other people, but also it’s been much less time so we can’t predict into the future, but if you consider Peter Singer as well, his ideas about the fact that we may have very strong obligations to benefit those who are distant strangers to us, or that we should treat animal welfare just on a par with human welfare, at least on some understanding of those ideas, that really has changed the beliefs and actions of, I think, probably tens of thousands of people, and often in really quite dramatic ways.

And then when we think about well, should we be doing more of this, is it merely that we’re influencing things randomly, or are we making things better or worse? Well, if we just look to the history of moral thought, we see that most people in most times have believed really atrocious things. Really morally abominable things. Endorsement of slavery, distinctions between races, subjugation of women, huge discrimination against non-heterosexual people, and, in part at least, it’s been ethical reflection that’s allowed us to break down some of those moral prejudices. And so we should presume that we have very similar moral prejudices now. We’ve made a little bit of progress, but do we have the one true theory of ethics now? I certainly think it’s very unlikely. And so we need to think more if we want to get to the actual ethical truth, if we don’t wanna be living out moral catastrophes in the same way as we would if we kept slaves, for example.

Lucas: Right, I think we do want to do that, but I think that a bit later in the podcast we’ll get into whether or not that’s even possible, given economic, political, and militaristic forces acting upon the AI alignment problem and the issues with coordination and race to AGI.

Just to start to get into the AI alignment problem, I want to offer a little bit of context. It is implicit in the AI alignment problem, or value alignment problem, that AI needs to be aligned to some sort of ethic or set of ethics, whether that's preferences, values, emotional dispositions, or whatever you believe them to be. And it seems that, in terms of moral philosophy, there are really two general methods by which to arrive at an ethic. One is through reason, and the other is through observing human behavior or artifacts, like books, movies, and stories, or other things that we produce, in order to infer and discover the observed preferences and ethics of people in the world.

The latter set of alignment methodologies is empirical and involves the agent interrogating and exploring the world in order to understand what humans care about and value, as if values and ethics were simply a physical by-product of the world and of evolution. The former is where ethics is arrived at through reason alone, and involves the AI or AGI potentially going about ethics as a philosopher would, engaging in moral reasoning about metaethics in order to determine what is correct. From the point of view of ethics, there is potentially only what humans empirically do believe, and then there is what we may or may not be able to arrive at through reason alone.

So it seems that one or both of these methodologies can be used when aligning an AI system. The distinction here is simply between preference aggregation or empirical value-learning approaches on the one hand, and methods of instantiating machine ethics, reasoning, or decision-making in AI systems so they become agents of morality on the other.

What I really want to get into with you now is how metaethical uncertainty influences our decision over the methodology of value alignment: whether we should prefer an empirical preference-learning or aggregation approach, or one which involves imbuing moral epistemology and ethical metacognition and reasoning into machine systems so they can discover what we ought to do. And then how moral uncertainty, and metaethical uncertainty in particular, operates within both of these spaces once you're committed to one or both of these views. Then we can get into issues in intertheoretic comparisons and how they arise here at many levels, the ideal way we should proceed if we could do things perfectly, and again, what is actually likely to happen given race dynamics and political, economic, and militaristic forces.

Will: Okay, that sounds terrific. I mean, there's a lot to cover there.

It might be worth my laying out a couple of distinctions I think are relevant, and my overall view on this. Within what broadly gets called the alignment problem, I'd like to distinguish between what I'd call the control problem, the human values alignment problem, and the actual alignment problem.

The control problem is just: can you get this AI to do what you want it to do, where that's relatively narrowly construed? I want it to clean up my room, I don't want it to put my cat in the bin; that's the control problem. I think describing that as a technical problem is broadly correct.

Second is then what gets called aligning AI with human values. For that, it might be the case that just having the AI pay attention to what humans actually do and infer the preferences revealed on that basis is a promising approach, and so on. And that, I think, will become increasingly important as AI becomes a larger and larger part of the economy.

This is kind of already what we do when we vote for politicians who represent at least large chunks of the electorate, and when governments hire economists who undertake willingness-to-pay surveys and so on to work out what people want, on average. I do think this is maybe more normatively loaded than people often think, but at least you can understand it this way: the control problem is that I have some relatively simple goal, I want this system to clean my room, and the question is how I ensure it actually does that without making mistakes I wasn't intending. This second problem is broader: you've got a whole society and you've got to aggregate its preferences for what kind of society it wants, and so on.

But I think, importantly, there's this third thing, which I called a minute ago the actual alignment problem, so let's run with that. It's just working out what's actually right, what's actually wrong, and what we ought to be doing. I do have a worry that because many people in the wider world, when they start thinking philosophically, start endorsing some relatively simple subjectivist or relativist views, they might think that answering the question of what humans want, or what people want, is just the same as answering what we ought to do. For a kind of reductio of that view, just go back a few hundred years, where the question would have been the white man's alignment problem: "Well, what do we, society, want?", where "society" means white men.

Lucas: Uh oh.

Will: What do we want them to do? So unless you've got such a relativist view that you think that maybe would have been correct back then, that's why I want to distinguish this range of problems. And I know that you're most interested in that third thing, I think. Is that right?

Lucas: Yeah, so I think I’m pretty interested in the second and the third thing, and I just wanna unpack a little bit of your distinction between the first and the second. So, the first was what you called the control problem, and you called the second just the plurality of human values and preferences and the issue of aligning to that in the broader context of the world.

It's unclear to me how I get the AI to put a strawberry on the plate, or to clean up my room and not kill my cat, without the second thing having been done.

There is a sense, at a very low level, where you're working on technical AI alignment: the MIRI approach with agent foundations, work on constraining optimization, corrigibility, docility, robustness, security, the concrete problems in AI safety, and all of those sorts of things that people work on. But it's unclear to me where that sort of work is limited to the control problem, and where it begins requiring the system to learn my preferences through interacting with me, thereby already participating in the second case and in AI alignment more generally, rather than being a low-level controlled system.

Will: Yeah, and I should say that on this side of things I'm definitely not an expert, not really the person to be talking to, but I think you're right. There's going to be some big gray area or transition between systems. So there's one that might be cleaning my room, or let's just say it's playing some sort of game; unfortunately I forget the example … it was in a blog post from OpenAI, an example of the alignment problem in the wild, or something. But it's just a very simple example of an AI playing a game, and you say, "Well, get as many points as possible." What you really want it to do is win a certain race, but what it ends up doing is driving a boat round and round in circles, because that's the way to maximize the number of points.

Lucas: Reward hacking.

Will: Reward hacking, exactly. That would be a kind of failure of the control problem, the first problem in our sense. And then I believe there are going to be gray areas, where perhaps it's a certain sort of AI system whose whole point is just to implement what I want. That might be very contextually determined, might depend on what my mood is that day. That might be a much, much harder problem, and will involve studying what I actually do and so on.

We could go into the question of whether you can solve the problem of cleaning a room without killing my cat, and whether that is possible without solving much broader questions, but maybe that's not the most fruitful avenue of discussion.

Lucas: So, let’s put aside this first case which involves the control problem, we’ll call it, and let’s focus on the second and the third, where again the second is defined as sort of the issue of the plurality of human values and preferences which can be observed, and then the third you described as us determining what we ought to do and tackling sort of the metaethics.

Will: Yeah, just tackling the fundamental question of, "Where ought we to be headed as a society?" One extra thing to add is that that's just a general question for society to be answering. And if there are fast, or even medium-speed, developments in AI, perhaps suddenly we've got to start answering that question, or thinking about it even harder and in a cleaner way than we have before. But even if AI were to take a thousand years, we'd still need to answer it, because it's just fundamentally the question of where we ought to be heading as a society.

Lucas: Right, and so going back a little bit to the taxonomy I developed earlier, it seems like your second-case scenario comes down to metaethical questions, which sit behind and influence the empirical issues with preference aggregation and there being a plurality of values. And the third case would be what would be arrived at through reason, and, I guess, the reason of many different people.

Will: Again, it’s gonna involve questions of metaethics as well where, again, on my theory that metaethics … It would actually just involve interacting with conscious experiences. And that’s a critical aspect of coming to understand what’s morally correct.

Lucas: Okay, so let’s go into the second one first and then let’s go into the third one. And while we do that, it would be great if we could be mindful of problems in intertheoretic comparison and how they arise as we go through both. Does that sound good?

Will: Yeah, that sounds great.

Lucas: So, would you like to just sort of unpack, starting with the second view, the metaethics behind that, issues in how moral realism versus moral anti-realism will affect how the second scenario plays out, and other sorts of crucial considerations in metaethics that will affect the second scenario?

Will: Yeah, so for the second scenario, which again, to be clear, is the aggregating of the variety of human preferences across a variety of contexts and so on, is that right?

Lucas: Right, so that the agent can be fully autonomous and realized in the world as sort of an embodiment of human values and preferences, however construed.

Will: Yeah, okay, so here I do think all the metaethics questions are gonna play a much bigger role in the third question. So again, it's funny, it's very similar to the question of what mainstream economists often think they're doing when it comes to cost-benefit analysis. Let's just even start in the individual case. Even there, it's not a purely descriptive enterprise, where, again, let's not even talk about AI. You're just looking out for me. You and I are friends, and you want to do me a favor in some way. How do you make a decision about how to do me that favor, how to benefit me in some way? Well, you could just look at the things I do and then infer on the basis of that what my utility function is. So perhaps every morning I go and I rob a convenience store and then I buy some heroin and then I shoot up and-

Lucas: Damn, Will!

Will: That’s my day. Yes, it’s a confession. Yeah, you’re the first to hear it.

Lucas: It’s crazy, in Oxford huh?

Will: Yeah, Oxford University is wild.

You see that behavior on my part and you might therefore conclude, "Wow, well what Will really likes is heroin. I'm gonna do him a favor and buy him some heroin." Now, that seems commonsensically pretty ridiculous, at least assuming I'm demonstrating all sorts of behavior that looks like it's very bad for me, that looks like a compulsion and so on. So instead, what we're really doing is not merely maximizing the utility function given by my revealed preferences; we have some deeper idea of what's good for me or what's bad for me.

Perhaps that comes down to just what I would want to want, or what I would want myself to want to want. Perhaps you can do it in terms of what are called second-order or third-order preferences: what idealized Will would want. That is not totally clear. Firstly, it's really hard to know what idealized Will would want. You're gonna have to start doing at least a little bit of philosophy there. Because I tend to favor hedonism, I think that an idealized version of my friend would want the best possible experiences. That might be very different from what they think an idealized version of themselves would want, because perhaps they have some objective list account of well-being, and they think that what they would also want is knowledge for its own sake and appreciating beauty for its own sake and so on.

So, even there I think you're gonna get into pretty tricky questions about what is good or bad for someone. And then after that you've got the question of preference aggregation, which is also really hard, both in theory and in practice. Do you just take strengths of preferences across absolutely everybody and then add them up? Well, firstly you might worry that you can't actually make these comparisons of strengths of preferences between people. Certainly if you're just looking at people's revealed preferences, it's really opaque how you would say, if I prefer coffee to tea and you vice versa, who has the stronger preference. But perhaps we could look at behavioral facts to try and at least anchor that. And it's still then non-obvious that what we ought to do, when we're looking at everybody's preferences, is just maximize the sum, rather than perhaps give some extra weighting to people who are more badly off, giving more priority to their interests. So those are the kind of theoretical issues.
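As a rough illustration of that last point, here is a minimal sketch (the welfare numbers are made up, and the square-root weighting is just one conventional choice) contrasting a straight sum of welfare with a prioritarian aggregation that gives extra weight to the worse off:

```python
# A minimal sketch (hypothetical welfare numbers) contrasting a straight sum
# of welfare with a prioritarian aggregation that weights the worse off more.
import math

welfare = {"Ann": 9.0, "Bo": 1.0, "Cy": 5.0}  # made-up welfare levels

def utilitarian_total(levels):
    """Plain sum: a unit of welfare counts the same no matter who gets it."""
    return sum(levels.values())

def prioritarian_total(levels):
    """Concave transform (square root) so gains to the badly off count more."""
    return sum(math.sqrt(v) for v in levels.values())

# Compare giving one extra unit of welfare to Bo (worst off) versus Ann (best off).
for recipient in ("Bo", "Ann"):
    bumped = dict(welfare, **{recipient: welfare[recipient] + 1.0})
    print(recipient,
          round(utilitarian_total(bumped) - utilitarian_total(welfare), 3),
          round(prioritarian_total(bumped) - prioritarian_total(welfare), 3))
# The utilitarian gain is 1.0 either way; the prioritarian gain is larger
# when the extra unit goes to the worse-off person (about 0.41 vs 0.16).
```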

And then secondly, there are just practical issues of implementing that, where you actually need to ensure that people aren't faking their preferences. There's a well-known literature in voting theory that says that basically any aggregation system you have, any voting system, is going to be manipulable in some way: you're gonna be able to get a better result for yourself, at least in some circumstances, by misrepresenting what you really want.
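That manipulability point can be made concrete with a small, purely hypothetical example using the Borda count (the preference profile is invented; the general result in the literature Will mentions is the Gibbard–Satterthwaite theorem):

```python
# A purely hypothetical illustration of strategic voting under the Borda count:
# one voter gets a better outcome (by their own sincere ranking) by misreporting.

def borda(ballots):
    """Each ballot lists candidates best-first; with k candidates, first place
    earns k-1 points, second k-2, and so on. Returns (winner, scores)."""
    scores = {}
    for ballot in ballots:
        k = len(ballot)
        for position, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (k - 1 - position)
    return max(scores, key=scores.get), scores

sincere = [
    ["A", "B", "C", "D"],   # voter 1's true ranking: A is their favorite
    ["B", "A", "C", "D"],
    ["C", "B", "A", "D"],
]
print(borda(sincere))       # B wins with 7 points under sincere voting

manipulated = [
    ["A", "D", "C", "B"],   # voter 1 "buries" B while keeping A on top
    ["B", "A", "C", "D"],
    ["C", "B", "A", "D"],
]
print(borda(manipulated))   # now A wins with 6 points, which voter 1 prefers to B
```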

Again, these are kind of issues that our society already faces, but they’re gonna bite even harder when we’re thinking about delegating to artificial agents.

Lucas: There’s two levels to this that you’re sort of elucidating. The first is that you can think of the AGI as being something which can do favors for everybody in humanity, so there are issues empirically and philosophically and in terms of understanding other agents about what sort of preferences should that AGI be maximizing for each individual, say being constrained by what is legal and what is generally converged upon as being good or right. And then there’s issues with preference aggregation which come up more given that we live in a resource-limited universe and world, where not all preferences can coexist and there has to be some sort of potential cancellation between different views.

And so, in terms of this higher level of preference aggregation … I wanna step back here to metaethics and difficulties of intertheoretic comparison. It would seem that your moral realist view would affect how the weighting would potentially be done. Because it seemed like before you were alluding to the fact that if your moral realist view were true, then the way in which we could determine what we ought to do, or what is good and true about morality, would be through exploring the space of all possible experiences, right, so we can discover moral facts about experiences.

Will: Mm-hmm (affirmative).

Lucas: And then in terms of preference aggregation, there would be people who would be right or wrong about what is good for them or the world.

Will: Yeah, I guess this is again why I wanna distinguish between these two types of value alignment problem, where on the second type, which is just kind of "What does society want?", societal preference aggregation, I wasn't thinking of it as there being right or wrong preferences.

In just the same way as there's the question of "I want the system to do X" but also "Do I want that?" or "How do you know that I want that?", there's a question of "How do you know what society wants?" That's a question in its own right, and it's separate from that third alignment issue I was raising, which then starts to bake in, well, if people have various moral preferences, views about how the world ought to be, some are right and some are wrong. And you shouldn't just take some aggregation over all those different views, because ideally you should give no weight to the ones that are wrong, and if any are true, they get all the weight. It's not really about preference aggregation in that way.

Though, if you think about it as everyone making a certain sort of guess at the moral truth, then you could think of that as a kind of judgment aggregation problem. So it might be like data or input for your moral reasoning.

Lucas: I think I was just sort of conceptually slicing this a tiny bit different from you. But that’s okay.

So, staying on this second view, it seems like there are obviously going to be a lot of empirical issues, and issues in understanding persons and idealized versions of themselves. Before we get into intertheoretic comparison issues here, what is your view on coherent extrapolated volition, sort of, being the answer to this second part?

Will: I don’t really know that much about it. From what I do know, it always seemed under-defined. As I understand it, the key idea is just, you take everyone’s idealized preferences in some sense, and then I think what you do is just take a sum of what everyone’s preference is. I’m personally quite in favor of the summation strategy. I think we can make interpersonal comparisons of strengths of preferences, and I think summing people’s preferences is the right approach.

We can use certain kinds of arguments that also have application in moral philosophy, like the idea of "If you didn't know who you were going to be in society, how would you want to structure things?" And if you're a rational, self-interested agent maximizing expected utility, then you'll adopt the utilitarian aggregation function, so you'll maximize the sum of preference strengths.

But then, if we’re doing this idealized preference thing, all the devil’s going to be in the details of, “Well how are you doing this idealization?” Because, given my preferences for example, for what they are … I mean my preferences are absolutely … Certainly they’re incomplete, they’re almost certainly cyclical, who knows? Maybe there’s even some preferences I have that are areflexive of things, as well. Probably contradictory, as well, so there’s questions about what does it mean to idealize, and that’s going to be a very difficult question, and where a lot of the work is, I think.

Lucas: So I guess, just two things here. What is the timeline and the actual real-world working relationship between the second problem that you've identified and the third problem that you've identified, and what is the role and work that preferences are doing here, for you, within the context of AI alignment, given that you're sort of partial to a form of hedonistic consequentialism?

Will: Okay, terrific, ’cause this is kind of important framing.

In terms of answering this alignment problem, the deep one of just where society ought to be going, I think the key thing is to punt on it. The key thing is to get us to a position where we can think about and reflect on this question, and really for a very long time, so I call this the long reflection. Perhaps it's a period of a million years or something. We've got a lot of time on our hands; time really isn't the scarce commodity here. So there are various stages to get into that state.

The first is to reduce extinction risks down basically to zero, to put us in a position of kind of existential security. The second then is to start developing a society where we can reflect as much as possible and keep as many options open as possible.

Something that wouldn’t be keeping a lot of options open would be, say we’ve solved what I call the control problem, we’ve got these kind of lapdog AIs that are running the economy for us, and we just say, “Well, these are so smart, what we’re gonna do is just tell it, ‘Figure out what’s right and then do that.'” That would really not be keeping our options open. Even though I’m sympathetic to moral realism and so on, I think that would be quite a reckless thing to do.

Instead, what we want to have is something kind of like this … We've gotten to this position of real security. Maybe also along the way, we've fixed the various particularly bad problems of the present, poverty and so on, and now what we want to do is just keep our options open as much as possible and then gradually work on improving our moral understanding, perhaps supplemented by AI systems …

I think there’s tons of work that I’d love to see developing how this would actually work, but I think the best approach would be to get the artificially intelligent agents to be just doing moral philosophy, giving us arguments, perhaps creating new moral experiences that it thinks can be informative and so on, but letting the actual decision making or judgments about what is right and wrong be left up to us. Or at least have some kind of gradiated thing where we gradually transition the decision making more and more from human agents to artificial agents, and maybe that’s over a very long time period.

What I kind of think of as the control problem and that second-level alignment problem, those are issues you face when you're just addressing the question of, "Okay. Well, we're now gonna have an AI-run economy," but you're not yet needing to address the question of what's actually right or wrong. And then my main thing there is just that we should get ourselves into a position where we can take as long as we need to answer that question and have as many options open as possible.

Lucas: I guess here given moral uncertainty and other issues, we would also want to factor in issues with astronomical waste into how long we should wait?

Will: Yeah. That’s definitely informing my view, where it’s at least plausible that morality has an aggregative component, and if so, then the sheer vastness of the future may, because we’ve got half a billion to a billion years left on Earth, a hundred trillion years before the starts burn out, and then … I always forget these numbers, but I think like a hundred billion stars in the Milky Way, ten trillion galaxies.

With just vast resources at our disposal, the future could be astronomically good. It could also be astronomically bad. What we want to ensure is that we get to the good outcome, and given the time scales involved, even what seems like an incredibly long delay, like a million years, is actually very little time indeed.

Lucas: In half a second I want to jump into whether or not this is actually likely to happen, given race dynamics and that human beings are kind of crazy. The sort of timeline here is that we're solving the technical control problem on the way up to AGI and what might be superintelligence, and then we are also sort of idealizing everyone's values and lives in a way such that they have more information, they can think more, they have more free time, and they become idealized versions of themselves, given constraints from values canceling each other out and things that we might end up deeming impermissible.

After that is where this period of long reflection takes place, and the dynamics and mechanics of that seem like open questions. It seems that first comes computer science and global governance and coordination and strategy issues, and then comes a long time of philosophy.

Will: Yeah, then comes the million years of philosophy, so I guess it's not very surprising that a philosopher would suggest this. The dynamics of the setup are an interesting question, and a super important one.

One thing you could do is just say, "Well, we've got ten billion people alive today, let's say. We're gonna divide the universe into ten billion parts, so maybe that's a thousand galaxies each or something." And then you can trade after that point. I think that would get a pretty good outcome. There are questions of whether you can enforce it or not into the future; there are some arguments that you can. But maybe that's not the optimal process, because especially if you think, "Wow! Maybe there's actually some answer, something that is correct," well, maybe a lot of people miss that.

I actually think if we did that, and if there is some correct moral view, then I would hope that incredibly well-informed people, and perhaps intellectually augmented people and so on, who have this vast amount of time to reflect would converge on that answer, and if they didn't, then that would make me more suspicious of the idea that maybe there is a real fact of the matter. But it's still early days; we'd really want to think a lot about what goes into the setup of that kind of long reflection.

Lucas: Given this account that you've just given of how this should play out in the long term, or what it might look like, what do you think is the actual probability that this will happen, given the way the world actually is today and the game-theoretic forces at work?

Will: I think I’m going to be very hard pressed to give a probability. I don’t think I know even what my subjective credence is. But speaking qualitatively, I’d think it would be very unlikely that this is how it would play out.

Again, I’m like Brian and Dave in that I think if you look at just history, I do think moral forces have some influence. I wouldn’t say they’re the largest influence. I think probably randomness explains a huge amount of history, especially when you think about how certain events are just very determined by actions of individuals. Economic forces and technological forces, environmental changes are also huge as well. It is hard to think at least that it’s going to be likely that such a well orchestrated dynamic would occur. But I do think it’s possible and I think we can increase the chance of that happening by the careful actions that where people like FLI are doing at the moment.

Lucas: That seems like the ideal scenario, absolutely, but I'm also worried that people don't like to listen to moral philosophers, or that potentially selfish government forces and things like that will end up taking over and controlling things, which would be kind of sad for the cosmic endowment.

Will: That’s exactly right. I think my chances … If there was some hard takeoff and sudden leap to artificial general intelligence, which I think is relatively unlikely, but again is possible, I think that’s probably the most scary ’cause it means that a huge amount of power is suddenly in the hands of a very small number of people potentially. You could end up with the very long run future of humanity being determined by the idiosyncratic preferences of just a small number of people, so it would be very dependent whether those people’s preferences are good or bad, with a kind of slow takeoff, so where there’s many decades in terms of development of AGI and it gradually getting incorporated into the economy.

I think there’s somewhat more hope there. Society will be a lot more prepared. It’s less likely that something very bad will happen. But my default presumption when we’re talking about multiple nations, billions of people doing something that’s very carefully coordinated is not going to happen. We have managed to do things that have involved international cooperation and amazing levels of operational expertise and coordination in the past. I think the eradication of smallpox is perhaps a good example of that. But it’s something that we don’t see very often, at least not now.

Lucas: It looks like we need to create a Peter Singer of AI safety, or some other philosopher who has had a tremendous impact on politics and society, to spread this sort of vision throughout the world so that it's more likely to be realized. Is that potentially most likely?

Will: Yeah. I think if a wide number of political leaders, even just the political leaders of the US, China, and Russia, were all on board with global coordination on the issue of AI, or again, whatever other transformative technology might really upend things in the 21st century, and were on board with how important it is that we get to this kind of period of long reflection where we can really figure out where we're going, then that alone would be very promising.

Then the question of just how promising that is, I think, depends a lot on the robustness of … Even if you're a moral realist, there's a question of how likely you think it is that people will get the correct moral view. It could be the case that it's just this kind of strong attractor, where even if you've got nothing as clean-cut as the long reflection I was describing, and instead some really messy thing, perhaps various wars, and it looks like feudal society or something, and anyone would say that civilization looks pretty chaotic, maybe it's the case that even given that, just given enough time and enough reasoning power, people will still converge on the same moral view.

I’m probably not as optimistic as that, but it’s at least a view that you could hold.

Lucas: In terms of the different factors that are going into the AI alignment problem and the different levels you’ve identified, first, second, and third, which side do you think is lacking the most resources and attention right now? Are you most worried about the control problem, that first level? Or are you more worried about potential global coordination and governance stuff at the potential second level or moral philosophy stuff at the third?

Will: Again, flagging … I’m sure I’m biased on this, but I’m currently by far the most worried on the third level. That’s for a couple of reasons. One is I just think the vast majority of the world are simple subjectivists or relativists, so the idea that we ought to be engaging in real moral thinking about how we use society, where we go with society, how we use our cosmic endowment as you put it, my strong default is that that question just never even really gets phrased.

Lucas: You don’t think most people are theological moral realists?

Will: Yeah. I guess it’s true that I’m just thinking about-

Lucas: Our bubble?

Will: My bubble, yeah. Well-educated Westerners. Most people in the world, at least, would say they're theological moral realists. One thought is just that … I think my default is that some sort of relativism will hold sway and people will just not really pay enough attention to think about what they ought to do. A second relevant thought is just that I think the best possible universe is plausibly really, really good, like astronomically better than alternative extremely good universes.

Lucas: Absolutely.

Will: It’s also the case that if you’re … Even like slight small differences in moral view might lead you to optimize for extremely different things. Even just a toy example of preference utilitarianism vs hedonistic utilitarianism, what you might think of as two very similar views, I think in the actual world there’s not that much difference between them, because we just kind of know what makes people better off, at least approximately, improves their conscious experiences, it also is generally what they want, but when you’re kind of technologically unconstrained, it’s plausible to me that the optimal configuration of things will look really quite different between those two views. I guess I kind of think the default is that we get it very badly wrong and it will require really sustained work in order to ensure we get it right … If it’s the case that there is a right answer.

Lucas: Is there anything with regards to issues in intertheoretic comparisons, or anything like that at any one of the three levels which we’ve discussed today that you feel we haven’t sufficiently covered or something that you would just like to talk about?

Will: Yeah. I know that one of your listeners was asking whether I thought they were solvable even in principle, by some superintelligence, and I think they are. I think they are if other issues in moral philosophy are solvable. I think that’s particularly hard, but I think ethics in general is very hard.

I also think it is the case that whatever output we have at the end of this kind of long deliberation, again it’s unlikely we’ll get to credence 1 in a particular view, so we’ll have some distribution over different views, and we’ll want to take that into account. Maybe that means we do some kind of compromise action.

Maybe that means we just distribute our resources in proportion with our credence in different moral views. That’s again one of these really hard questions that we’ll want if at all possible to punt on and leave to people who can think about this in much more depth.
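As a toy illustration of that last proposal (the views and credences below are made up purely for the example), allocating resources in proportion to credence is straightforward arithmetic:

```python
# Credence-proportional resource allocation; the numbers are purely illustrative.
credences = {
    "hedonistic utilitarianism": 0.5,
    "preference utilitarianism": 0.25,
    "an objective-list view": 0.25,
}
total_resources = 1000.0  # in whatever units are being divided up

allocation = {view: share * total_resources for view, share in credences.items()}
print(allocation)
# {'hedonistic utilitarianism': 500.0, 'preference utilitarianism': 250.0,
#  'an objective-list view': 250.0}
```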

Then in terms of aggregating societal preferences, that's more like the problem of interpersonal comparisons of preference strength, which is kind of formally isomorphic but is at least a distinct issue.

Lucas: The second and third levels are where the intertheoretic problems are really going to arise, and at that second level, where the AGI is potentially working to idealize our values, I think there is again the open question of whether, in the real world, there will be moral philosophers at the table, in politics, or among whoever has control over the AGI at that point, to work on and think more deeply about intertheoretic comparisons of value at that level and timescale. So I'm just thinking a little bit more about what we ought to do, or what we should do realistically, given the likely outcomes as to whether or not this sort of thinking will be at the table.

Will: My default is just that the crucial thing is to ensure that this thinking is more likely to be at the table. I think it is important to think about, "Well, what ought we to do then?" if we think it's very likely that things go badly wrong. Maybe it's not the case that we should just be aiming to push for the optimal thing, but for some kind of second-best strategy.

I think at the moment we should just be trying to push for the optimal thing. In particular, that's in part because of my view that an optimal universe is just so much better than even an extremely good one, so I just kind of think we ought to be really trying to maximize the chance that we can figure out what the right thing is and then implement it. But it would be interesting to think about it more.

Lucas: For sure. I guess just wrapping up here, did you ever have the chance to look at those two LessWrong posts by Worley?

Will: Yeah, I did.

Lucas: Did you have any thoughts or comments on them? If people are interested you can find links in the description.

Will: I read the posts, and I was very sympathetic in general to what he was thinking through. In particular the principle of philosophical conservatism. Hopefully I’ve shown that I’m very sympathetic to that, so trying to think “What are the minimal assumptions? Would this system be safe? Would this path make sense on a very, very wide array of different philosophical views?” I think the approach I’ve suggested, which is keeping our options open as much as possible and punting on the really hard questions, does satisfy that.

I think one of his posts was talking about “Should we assume moral realism or assume moral antirealism?” It seems like there our views differed a little bit, where I’m more worried that everyone’s going to assume some sort of subjectivism and relativism, and that there might be some moral truth out there that we’re missing and we never think to find it, because we decide that what we’re interested in is maximizing X, so we program agents to build X and then just go ahead with it, whereas actually the thing that we ought to have been optimizing for is Y. But broadly speaking, I think this question of trying to be as ecumenical as possible philosophically speaking makes a lot of sense.

Lucas: Wonderful. Well, it’s really been a joy speaking, Will. Always a pleasure. Is there anything that you’d like to wrap up on, anywhere people can follow you or check you out on social media or anywhere else?

Will: Yeah. You can follow me on Twitter @WillMacAskill, and if you want to read more of my work, you can find me at williammacaskill.com.

Lucas: To be continued. Thanks again, Will. It’s really been wonderful.

Will: Thanks so much, Lucas.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

Making AI Safe in an Unpredictable World: An Interview with Thomas G. Dietterich

Our AI systems work remarkably well in closed worlds. That’s because these environments contain a set number of variables, making the worlds perfectly known and perfectly predictable. In these micro environments, machines only encounter objects that are familiar to them. As a result, they always know how they should act and respond. Unfortunately, these same systems quickly become confused when they are deployed in the real world, as many objects aren’t familiar to them. This is a bit of a problem because, when an AI system becomes confused, the results can be deadly.

Consider, for example, a self-driving car that encounters a novel object. Should it speed up, or should it slow down? Or consider an autonomous weapon system that sees an anomaly. Should it attack, or should it power down? Each of these examples involves life-and-death decisions, and they reveal why, if we are to deploy advanced AI systems in real-world environments, we must be confident that they will behave correctly when they encounter unfamiliar objects.

Thomas G. Dietterich, Emeritus Professor of Computer Science at Oregon State University, explains that solving this identification problem begins with ensuring that our AI systems aren’t too confident — that they recognize when they encounter a foreign object and don’t misidentify it as something that they are acquainted with. To achieve this, Dietterich asserts that we must move away from (or, at least, greatly modify) the discriminative training methods that currently dominate AI research.

However, to do that, we must first address the “open category problem.”

 

Understanding the Open Category Problem

When driving down the road, we can encounter a near infinite number of anomalies. Perhaps a violent storm will arise, and hail will start to fall. Perhaps our vision will become impeded by smoke or excessive fog. Although these encounters may be unexpected, the human brain is able to easily analyze new information and decide on the appropriate course of action — we will recognize a newspaper drifting across the road and, instead of abruptly slamming on the brakes, continue on our way.

Because of the way that they are programmed, our computer systems aren’t able to do the same.

“The way we use machine learning to create AI systems and software these days generally uses something called ‘discriminative training,’” Dietterich explains, “which implicitly assumes that the world consists of only, say, a thousand different kinds of objects.” This means that, if a machine encounters a novel object, it will assume that it must be one of the thousand things that it was trained on. As a result, such systems misclassify all foreign objects.
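As a rough sketch of why this happens (the class names and scores below are hypothetical, and this is a generic softmax classifier rather than Dietterich's code), the argmax of a softmax over a fixed label set is always one of the known classes, no matter how unfamiliar the input:

```python
# A generic softmax classifier over a fixed label set always returns one of
# the known classes, even for an input unlike anything it was trained on.
import numpy as np

KNOWN_CLASSES = ["car", "pedestrian", "cyclist"]  # hypothetical label set

def softmax(logits):
    exps = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exps / exps.sum()

def classify(logits):
    probs = softmax(logits)
    return KNOWN_CLASSES[int(np.argmax(probs))], probs

# Logits for a truly novel object: all scores are low and similar, yet the
# classifier still picks one of the three known labels.
print(classify(np.array([0.10, 0.20, 0.05])))
```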

This is the “open category problem” that Dietterich and his team are attempting to solve. Specifically, they are trying to ensure that our machines don’t assume that they have encountered every possible object, but are, instead, able to reliably detect — and ultimately respond to — new categories of alien objects.

Dietterich notes that, from a practical standpoint, this means creating an anomaly detection algorithm that assigns an anomaly score to each object detected by the AI system. That score must be compared against a set threshold and, if the anomaly score exceeds the threshold, the system will need to raise an alarm. Dietterich states that, in response to this alarm, the AI system should take a pre-determined safety action. For example, a self-driving car that detects an anomaly might slow down and pull off to the side of the road.
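In outline, that alarm logic might look like the following sketch (the threshold value, function names, and the pull-over action are placeholders for illustration, not the team's implementation):

```python
# Sketch of the alarm logic: score each detection, compare to a threshold,
# and fall back to a pre-determined safety action when the score is too high.
ANOMALY_THRESHOLD = 0.8  # placeholder; the real method chooses this carefully

def handle_detection(obj, anomaly_score, classifier, safety_action):
    if anomaly_score > ANOMALY_THRESHOLD:
        # Alarm: the object looks unlike anything in the training data.
        return safety_action(obj)
    return classifier(obj)

def pull_over(obj):
    """Example safety action for a self-driving car."""
    return "slow down and pull off to the side of the road"
```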

 

Creating a Theoretical Guarantee of Safety

There are two challenges to making this method work. First, Dietterich asserts that we need good anomaly detection algorithms. Previously, in order to determine what algorithms work well, the team compared the performance of eight state-of-the-art anomaly detection algorithms on a large collection of benchmark problems.

The second challenge is to set the alarm threshold so that the AI system is guaranteed to detect a desired fraction of the alien objects, such as 99%. Dietterich says that formulating a reliable setting for this threshold is one of the most challenging research problems because there are, potentially, infinite kinds of alien objects. “The problem is that we can’t have labeled training data for all of the aliens. If we had such data, we would simply train the discriminative classifier on that labeled data,” Dietterich says.

To circumvent this labeling issue, the team assumes that the discriminative classifier has access to a representative sample of “query objects” that reflect the larger statistical population. Such a sample could, for example, be obtained by collecting data from cars driving on highways around the world. This sample will include some fraction of unknown objects, and the remaining objects belong to known object categories.

Notably, the data in the sample is not labeled. Instead, the AI system is given an estimate of the fraction of aliens in the sample. And by combining the information in the sample with the labeled training data that was employed to train the discriminative classifier, the team’s new algorithm can choose a good alarm threshold. If the estimated fraction of aliens is known to be an over-estimate of the true fraction, then the chosen threshold is guaranteed to detect the target percentage of aliens (i.e. 99%).
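The following is a simplified sketch of that general idea, not the exact ICML 2018 algorithm (which adds finite-sample corrections so the guarantee holds with high probability). Because the mixture's score distribution is a weighted combination of the known-object and alien distributions, an estimate of the alien fraction lets us back out an approximate alien score distribution and place the alarm threshold at the quantile that flags the desired fraction of aliens:

```python
# Simplified sketch only -- not the paper's algorithm. The mixture satisfies
# F_mixture(t) = (1 - alpha) * F_known(t) + alpha * F_alien(t), so from scores
# on labeled known objects, scores on the unlabeled mixture, and an estimate
# of the alien fraction alpha, we can estimate F_alien and pick a threshold.
import numpy as np

def choose_threshold(known_scores, mixture_scores, alpha, target_recall=0.99):
    """Return the largest threshold t such that an estimated `target_recall`
    fraction of aliens scores above t (higher score = more anomalous)."""
    candidates = np.sort(np.unique(mixture_scores))
    valid = []
    for t in candidates:
        f_mix = np.mean(mixture_scores <= t)    # empirical mixture CDF at t
        f_known = np.mean(known_scores <= t)    # empirical known-object CDF at t
        f_alien = (f_mix - (1 - alpha) * f_known) / alpha  # estimated alien CDF
        if f_alien <= 1 - target_recall:
            valid.append(t)
    # The largest valid threshold keeps false alarms on known objects lowest.
    return max(valid) if valid else candidates[0]

# Purely synthetic demo: aliens tend to receive higher anomaly scores.
rng = np.random.default_rng(0)
known = rng.normal(0.0, 1.0, 5000)
mixture = np.concatenate([rng.normal(0.0, 1.0, 4500), rng.normal(3.0, 1.0, 500)])
threshold = choose_threshold(known, mixture, alpha=0.1)
held_out_aliens = rng.normal(3.0, 1.0, 10000)
print(threshold, np.mean(held_out_aliens > threshold))  # most aliens exceed it
```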

Ultimately, the team's approach is the first method that can give a theoretical guarantee of safety for detecting alien objects, and a paper reporting the results was presented at ICML 2018. "We are able to guarantee, with high probability, that we can find 99% of all of these new objects," Dietterich says.

In the next stage of their research, Dietterich and his team plan to begin testing their algorithm in a more complex setting. Thus far, they've been looking primarily at classification, where the system looks at an image and classifies it. Next, they plan to move to controlling an agent, like a robot or a self-driving car. "At each point in time, in order to decide what action to choose, our system will do a 'look ahead search' based on a learned model of the behavior of the agent and its environment. If the look ahead arrives at a state that is rated as 'alien' by our method, then this indicates that the agent is about to enter a part of the state space where it is not competent to choose correct actions," Dietterich says. In response, as previously mentioned, the agent should execute a series of safety actions and request human assistance.
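As a rough sketch of how that look-ahead check could be wired together (every interface here, the learned model, anomaly scorer, value estimate, and safety policy, is a hypothetical placeholder rather than the team's actual code):

```python
# Sketch of look-ahead planning with an alien-state check: simulate candidate
# actions with a learned model, and switch to the safety policy if any
# predicted state looks alien (i.e., scores above the anomaly threshold).

def choose_action(state, candidate_actions, model, anomaly_score, threshold,
                  value_estimate, safety_policy, horizon=5):
    best_action, best_value = None, float("-inf")
    for action in candidate_actions:
        simulated = state
        for _ in range(horizon):
            # For simplicity this rollout just repeats the same action.
            simulated = model.predict(simulated, action)
            if anomaly_score(simulated) > threshold:
                # The rollout enters a region the agent is not competent in:
                # abandon normal planning, execute the safety behavior, and
                # (in a fuller version) request human assistance.
                return safety_policy(state)
        value = value_estimate(simulated)
        if value > best_value:
            best_action, best_value = action, value
    return best_action
```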

But what does this safety action actually consist of?

 

Responding to Aliens

Dietterich notes that, once something is identified as an anomaly and the alarm is sounded, the nature of this fallback system will depend on the machine in question, such as whether the AI system is in a self-driving car or an autonomous weapon.

To explain how these secondary systems operate, Dietterich turns to self-driving cars. "In the Google car, if the computers lose power, then there's a backup system that automatically slows the car down and pulls it over to the side of the road." However, Dietterich clarifies that stopping isn't always the best course of action. One may assume that a car should come to a halt if an unidentified object crosses its path; however, if the unidentified object happens to be a blanket of snow on a particularly icy day, hitting the brakes gets more complicated. The system would need to factor in the icy roads, any cars that may be driving behind, and whether these cars can brake in time to avoid a rear-end collision.

But if we can’t predict every eventuality, how can we expect to program an AI system so that it behaves correctly and in a way that is safe?

Unfortunately, there’s no easy answer; however, Dietterich clarifies that there are some general best practices; “There’s no universal solution to the safety problem, but obviously there are some actions that are safer than others. Generally speaking, removing energy from the system is a good idea,” he says. Ultimately, Dietterich asserts that all the work related to programming safe AI really boils down to determining how we want our machines to behave under specific scenarios, and he argues that we need to rearticulate how we characterize this problem, and focus on accounting for all the factors, if we are to develop a sound approach.

Dietterich notes that "when we look at these problems, they tend to get lumped under a classification of 'ethical decision making,' but what they really are is problems that are incredibly complex. They depend tremendously on the context in which they are operating, the human beings, the other innovations, the other automated systems, and so on. The challenge is correctly describing how we want the system to behave and then ensuring that our implementations actually comply with those requirements." And he concludes, "the big risk in the future of AI is the same as the big risk in any software system, which is that we build the wrong system, and so it does the wrong thing. Arthur C. Clarke in 2001: A Space Odyssey had it exactly right. The HAL 9000 didn't 'go rogue;' it was just doing what it had been programmed to do."

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

European Parliament Passes Resolution Supporting a Ban on Killer Robots


The European Parliament passed a resolution on September 12, 2018 calling for an international ban on lethal autonomous weapons systems (LAWS). The resolution was adopted with 82% of the members voting in favor of it.

Among other things, the resolution calls on its Member States and the European Council “to develop and adopt, as a matter of urgency … a common position on lethal autonomous weapon systems that ensures meaningful human control over the critical functions of weapon systems, including during deployment.”

The resolution also urges Member States and the European Council “to work towards the start of international negotiations on a legally binding instrument prohibiting lethal autonomous weapons systems.”

This call for urgency comes shortly after recent United Nations talks where countries were unable to reach a consensus about whether or not to consider a ban on LAWS. Many hope that statements such as this from leading government bodies could help sway the handful of countries still holding out against banning LAWS.

Daan Kayser of PAX, one of the NGO members of the Campaign to Stop Killer Robots, said, “The voice of the European parliament is important in the international debate. At the UN talks in Geneva this past August it was clear that most European countries see the need for concrete measures. A European parliament resolution will add to the momentum toward the next step.”

The countries that took the strongest stances against a LAWS ban at the recent UN meeting were the United States, Russia, South Korea, and Israel.

 

Scientists’ Voices Are Heard

Also mentioned in the resolution were the many open letters signed by AI researchers and scientists from around the world, who are calling on the UN to negotiate a ban on LAWS.

Two sections of the resolution stated:

“having regard to the open letter of July 2015 signed by over 3,000 artificial intelligence and robotics researchers and that of 21 August 2017 signed by 116 founders of leading robotics and artificial intelligence companies warning about lethal autonomous weapon systems, and the letter by 240 tech organisations and 3,089 individuals pledging never to develop, produce or use lethal autonomous weapon systems,” and

“whereas in August 2017, 116 founders of leading international robotics and artificial intelligence companies sent an open letter to the UN calling on governments to ‘prevent an arms race in these weapons’ and ‘to avoid the destabilising effects of these technologies.’”

Toby Walsh, a prominent AI researcher who helped create the letters, said, “It’s great to see politicians listening to scientists and engineers. Starting in 2015, we’ve been speaking loudly about the risks posed by lethal autonomous weapons. The European Parliament has joined the calls for regulation. The challenge now is for the United Nations to respond. We have several years of talks at the UN without much to show. We cannot let a few nations hold the world hostage, to start an arms race with technologies that will destabilize the current delicate world order and that many find repugnant.”

FLI August 2018 Newsletter