Podcast: Governing Biotechnology, From Avian Flu to Genetically-Modified Babies with Catherine Rhodes

A Chinese researcher recently made international news with claims that he had created the first gene-edited human babies using CRISPR. In doing so, he violated international ethics standards, and he appears to have acted without his funders or his university knowing. But this is only the latest example of biological research triggering ethical concerns. Gain-of-function research a few years ago, which made avian flu transmissible between mammals, also sparked controversy when scientists tried to publish their work. And there has been extensive debate globally about the ethics of human cloning.

As biotechnology and other emerging technologies become more powerful, the dual-use nature of research — that is, research that can have both beneficial and risky outcomes — is increasingly important to address. How can scientists and policymakers work together to ensure regulations and governance of technological development will enable researchers to do good with their work, while decreasing the threats?

On this month’s podcast, Ariel spoke with Catherine Rhodes about these issues and more. Catherine is a senior research associate and deputy director of the Centre for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance. She has particular expertise in the international governance of biotechnology, including biosecurity and broader risk management issues.

Topics discussed in this episode include:

  • Gain-of-function research, the H5N1 virus (avian flu), and the risks of publishing dangerous information
  • The roles of scientists, policymakers, and the public to ensure that technology is developed safely and ethically
  • The controversial Chinese researcher who claims to have used CRISPR to edit the genome of twins
  • How scientists can anticipate whether the results of their research could be misused by someone else
  • The extent to which risk stems from technology, and the extent to which it stems from how we govern it

You can listen to this podcast above, or read the full transcript below. And feel free to check out our previous podcast episodes on SoundCloud, iTunes, Google Play and Stitcher.

 

Ariel: Hello. I’m Ariel Conn with the Future of Life Institute. Now I’ve been planning to do something about biotechnology this month anyway, since it would go along so nicely with the new resource we just released, which highlights the benefits and risks of biotech. I was very pleased when Catherine Rhodes agreed to be on the show. Catherine is a senior research associate and deputy director of the Centre for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance, or a lack of it.

But she has particular expertise in international governance of biotechnology, including biosecurity and broader risk management issues. The timing of Catherine as a guest is also especially fitting given that just this week the science world was shocked to learn that a researcher out of China is claiming to have created the world’s first genetically edited babies.

Now neither she nor I have had much of a chance to look at this case too deeply but I think it provides a very nice jumping-off point to consider regulations, ethics, and risks, as they pertain to biology and all emerging sciences. So Catherine, thank you so much for being here.

Catherine: Thank you.

Ariel: I also want to add that we did have another guest scheduled to join us today who is unfortunately ill, and unable to participate, so Catherine, I am doubly grateful to you for being here today.

Before we get too far into any discussions, I was hoping to just go over some basics to make sure we’re all on the same page. In my readings of your work, you talk a lot about biorisk and biosecurity, and I was hoping you could just quickly define what both of those words mean.

Catherine: Yes, in terms of thinking about both biological risk and biological security, I think about the objects that we’re trying to protect. It’s about the protection of human, animal, and plant life and health, in particular. Some of that extends to protection of the environment. The risks are the risks to those objects and security is securing and protecting those.

Ariel: Okay. I’d like to start this discussion where we’ll talk about ethics and policy, looking first at the example of the gain-of-function experiments that caused another stir in the science community a few years ago. That was research which was done, I believe, on the H5N1 virus, also known as the avian flu, and I believe it made the virus more virulent. First, can you just explain what gain-of-function means? And then I was hoping you could talk a bit about what that research was, and what the scientific community’s reaction to it was.

Catherine: Gain-of-function’s actually quite a controversial term to have selected to describe this work, because a lot of what biologists do is work that would add a function to the organism that they’re working on, without that actually posing any security risk. In this context, it was a gain of a function that would make it perhaps more desirable for use as a biological weapon.

In this case, it was things like an increase in its ability to transmit between mammals. In particular, they were getting it to be transmissible between ferrets in a laboratory, and ferrets are a model for transmission between humans.

Ariel: You actually bring up an interesting point that I hadn’t thought about. To what extent does our choice of terminology affect how we perceive the ethics of some of these projects?

Catherine: I think in this case it was more that the use of that term, which came more from the security and policy community side, made the conversation with scientists more difficult, as it was felt this was mislabeling their research, affecting research that shouldn’t really come into this kind of conversation about security. So I think that was where it maybe caused some difficulties.

But I think there also needs to be understanding in the other direction as well: it’s not necessarily the case that all policymakers are going to have that level of detail about what they mean when they’re talking about science.

Ariel: Right. What was the reaction then that we saw from the scientific community and the policymakers when this research was published?

Catherine: There was firstly a stage of debate about whether those papers should be published or not. There was some guidance given by what’s called the National Science Advisory Board for Biosecurity in the US, that those papers should not be published in full. So, actually, the first part of the debate was about that stage of ‘should you publish this sort of research where it might have a high risk of misuse?’

That was something that the security community had been discussing for at least a decade, that there were certain experiments where they felt that they would meet a threshold of risk, where they shouldn’t be openly published or shouldn’t be published with their methodological details in full. I think for the policy and security community, it was expected that these cases would arise, but this hadn’t perhaps been communicated to the scientific community particularly well, and so I think it came as a shock to some of those researchers, particularly because the research had been approved initially, so they were able to conduct the research, but suddenly they would find that they can’t publish the research that they’ve done. I think that was where this initial point of contention came about.

It then became a broader issue. More generally, how do we handle these sorts of cases? Are there times when we should restrict publication? Or is open publication actually going to be a better way of protecting ourselves, because we’ll all know about the risks as well?

Ariel: Like you said, these scientists had gotten permission to pursue this research, so it’s not like it was obviously questionable, or like they had any reason to think it was too questionable to begin with. And yet, I guess there is that issue of how scientists can think about some of these questions more long term, and maybe recognize in advance that the public or policymakers might find their research concerning. Is that something that scientists should be trying to do more of?

Catherine: Yes, and I think that’s part of this point about the communication between the scientific and policy communities, so that these things don’t come as a surprise or a shock. Yes, I think there was something in this. If we’re allowed to do the research, should we not have had more conversation at the earlier stages? I think in general I would say that’s where we need to get to, because if you’re trying to intervene at the stage of publication, it’s probably already too late to really contain the risk of publication, because for example, if you’ve submitted a journal article online, that information’s already out there.

So yes, trying to take it further back in the process, so that these things are considered at the beginning stages of designing research projects, is important. That has been pushed forward by funders, so there are now some clauses along the lines of ‘have you reviewed the potential consequences of your research?’ That is one way of triggering that thinking. But I think there’s been a broader question further back about education and awareness.

It’s all right if you’re being asked that question, but do you actually have information that helps you know what would be a security risk? And what elements might you be looking for in your work? So there’s this question more generally of how we build awareness amongst the scientific community that these issues might arise, and train them to be able to spot some of the security concerns that may be there.

Ariel: Are we taking steps in that direction to try to help educate both budding scientists and also researchers who have been in the field for a while?

Catherine: Yes, there have been quite a lot of efforts in that area, again probably over the last decade or so, done by academic groups and civil society. It’s something that states parties to the Biological Weapons Convention have been encouraging, in terms of education and awareness raising, and also the World Health Organization, which has a document on responsible life sciences research that also encourages education and awareness-raising efforts.

I think that those have further to go, and I think some of the barriers to those being taken up are the familiar things: it’s very hard to find space in a scientific curriculum to have that teaching, and more resources are needed in terms of where the materials are that you would go to. That is being built up.

I think we’ve also been talking about scientific curricula at maybe the undergraduate and postgraduate level, but how do you extend this throughout scientific careers as well? There needs to be a way of reaching scientists at all levels.

Ariel: We’re talking a lot about the scientists right now, but in your writings, you mention that there are three groups who have responsibility for ensuring that science is safe and ethical. Those are one, obviously the scientists, but then also you mention policymakers, and you mention the public and society. I was hoping you could talk a little bit about how you see the roles for each of those three groups playing out.

Catherine: I think these sorts of issues, they’re never going to be just the responsibility of one group, because there are interactions going on. Some of those interactions are important in terms of maybe incentives. So we talked about publication. Publication is of such importance within the scientific community and within their incentive structures. It’s so important to publish, that again, trying to intervene just at that stage, and suddenly saying, “No, you can’t publish your research” is always going to be a big problem.

It’s to do with the norms and the practices of science, but some of that, again, comes from the outside. One way of thinking about it is to ask whether there are ways we can reshape those sorts of structures to be more useful. I think we need clear signals from policymakers as well, about when to take threats seriously or not. If we’re not hearing from policymakers that there are significant security concerns around some forms of research, then why should we expect the scientists to be aware of them?

Yes, policy also has control and governance mechanisms within it, so it can be very useful. In terms of deciding what research can be done, that’s often done by funders and government bodies, and not by the research community themselves. Then there’s trying to think more broadly about how to bring in the public dimension. I think what I mean there is that it’s about all of us being aware of this. It shouldn’t be isolating one particular community and saying, “Well, if things go wrong, it was you.”

Socially, we’ve got decisions to make about how we feel about certain risks and benefits and how we want to manage them. In the gain-of-function case, the research that was done had the potential for real benefits for understanding avian influenza, which could produce a human pandemic, and therefore there could be great public health benefits associated with some of this research that also poses great risks.

Again, when we’re dealing with something that for society, could bring both risks and benefits, society should play a role in deciding what balance it wants to achieve.

Ariel: I guess I want to touch on this idea of how we can make sure that policymakers and the public – this comes down to a three-way communication. I guess my question is, how do we get scientists more involved in policy, so that policymakers are informed and there is more of that communication? I guess maybe part of the reason I’m fumbling over this question is it’s not clear to me how much responsibility we should be putting specifically on scientists for this, versus how much responsibility goes to the other groups.

Catherine: About scientists becoming more involved in policy: that’s another part of thinking about the relationship between science and policy, and science and society. We’ve got an expectation that part of what policymakers will consider is how to have regulation and governance that’s appropriate to scientific practice and to advances in science and emerging technologies, and for that they need information from the scientific community about those things. There’s a responsibility of policymakers to seek some of that information, but also for scientists to be willing to engage in the other direction.

I think that’s the main answer to how they could be more informed, and what other ways there could be more communication. I think some of the useful ways that’s done at the moment is by having, say, meetings where there might be a horizon-scanning element, so that scientists can have input on where we might see advances going. And if you also have in the participation policymakers, and maybe people who know more about things like technology transfer, startups, and investment, they can see what’s going on in terms of where the money’s going. Bringing those groups together to look at where the future might be going is quite a good way of capturing some of those advances.

And it helps inform the whole group, so I think those sorts of processes are good, and there are some examples of those, and there are some examples where the international science academies come together to do some of that sort of work as well, so that they would provide information and reports that can go forward to international policy processes. They do that for meetings at the Biological Weapons Convention, for example.

Ariel: Okay, so I want to come back to this broadly in a little bit, but first I want to touch on biologists and ethics and regulation a little more generally. Because I guess I keep thinking of the famous Asilomar meeting, from I think the mid-’70s, in which biologists got together, recognized some of the risks in their field, and chose to pause the work that they were doing, because there were ethical issues. I tend to credit them with being more ethically aware than a lot of other scientific fields.

But it sounds like maybe that’s not the case. Was that just a special example in which scientists were unusually proactive? I guess, should we be worried about scientists and biosecurity, or is it just a few bad apples like we saw with this recent Chinese researcher?

Catherine: I think in terms of ethical awareness, it’s not that I don’t think biologists are ethically aware, but it is that there can be a lot of different things coming onto their agendas in that, and again, those can be pushed out by other practices within your daily work. So, I think for example, one of the things in biology, often it’s quite close to medicine, and there’s been a lot over the last few decades about how we treat humans and animals in research.

There’s ethics and biomedical ethics, there’s practices to do with consent and participation of human subjects, that people are aware of. It’s just that sometimes you’ve got such an overload of all these different issues you’re supposed to be aware of and responding to, so sustainable development and environmental protection is another one, that I think it’s going to be the case that often things will fall off the agenda or knowing which you should prioritize perhaps can be difficult.

I do think there’s this lack of awareness of the past history of biological warfare programs, and the fact that scientists have always been involved with them, and then, looking forward, of how much easier it may be, because of trends in technology, for more actors to have access to such technologies, and the implications that might have.

I think that picks up on what you were saying about, are we just concerned about the bad apples? Are there some rogue people out there that we should be worried about? I think there’s two parts to that, because there may be some things that are more obvious, where you can spot, “Yeah, that person’s really up to something they shouldn’t be.” I think there are probably mechanisms where people do tend to be aware of what’s going on in their laboratories.

Although, as you mentioned, in the recent Chinese case of potentially CRISPR gene-edited babies, it seems clear that people within that person’s laboratory didn’t know what was going on, the funders didn’t know what was going on, the government didn’t know what was going on. So yes, there will be some cases where someone is very obviously doing something bad.

I think that’s probably an easier thing to handle and to conceptualize. But we’re now getting these questions about scientific work and research that has clear benefits, and you’re doing it for those beneficial purposes, but how do you work out whether the results of that could be misused by someone else? How do you frame whether you have any responsibility for how someone else would use it when they may well not be anywhere near you in a laboratory? They may be very remote, you probably have no contact with them at all, so how can you judge and assess how your work may be misused, and then try and make some decision about how you should proceed with it? I think that’s a more complex issue.

That does probably, as you say, speak to ‘are there things in scientific cultures, working practices, that might assist with dealing with that? Or might make it problematic?’ Again, I think I’ve picked up a few times, but there’s a lot going on in terms of the sorts of incentive structures that scientists are working in, which do more broadly meet up with global economic incentives. Again, not knowing the full details of the recent Chinese CRISPR case, there can often be almost racing dynamics between countries to have done some of this research and to be ahead in it.

I think that did happen with the gain-of-function experiments, so that when the US had a moratorium on doing them, China ramped up its experiments in the same area. There are all these kinds of incentive structures going on as well, and I think those do affect wider scientific and societal practices.

Ariel: Okay. Quickly touching on some of what you were talking about, in terms of researchers who are doing things right, in most cases I think what happens is this case of dual use, where the research could go either way. I think I’m going to give scientists the benefit of the doubt and say most of them are actually trying to do good with their research. That doesn’t mean that someone else can’t come along later and then do something bad with it.

This is I think especially a threat with biosecurity, and so I guess, I don’t know that I have a specific question that you haven’t really gotten into already, but I am curious if you have ideas for how scientists can deal with the dual use nature of their research. Maybe to what extent does more open communication help them deal with it, or is open communication possibly bad?

Catherine: Yes, I think it’s possibly good and possibly bad. I think, again, it’s a difficult question without putting their practice into context. Again, it shouldn’t be that just the scientist has to think through these issues of dual use and whether it can be misused, if there’s not really any new information coming out about how serious a threat this might be. Do we know that this is being pursued by any terrorist group? Do we know why that might be of particular concern?

I think another interesting thing is that you might get combinations of technology that have developed in different areas, so you might get someone who does something that helps with the dispersal of an agent, that’s entirely disconnected from someone who might be working on an agent, that would be useful to disperse. Knowing about the context of what else is going on in technological development, and not just within your own work is also important.

Ariel: Just to clarify, what are you referring to when you say agent here?

Catherine: In this case, again, thinking of biology, that might be a microorganism. If you were to be developing a biological weapon, you don’t just need to have a nasty pathogen. You would need some way of dispersing and disseminating it for it to be weaponized. Those components may, for beneficial reasons, be under development in very different places. How would scientists be able to predict where those might combine and come together, and create a bigger risk than just their own work?

Ariel: Okay. And then I really want to ask you about the idea of the races, but I don’t have a specific question to be honest. It’s a concerning idea, and it’s something that we look at in artificial intelligence, and it’s clearly a problem with nuclear weapons. I guess, what are the concerns we have when we look at these kinds of races in biology?

Catherine: It may not even necessarily be specific to races in biology, but it is this thing, and again, not even thinking of military uses of technology, of how we have very strong drivers for economic growth, and how technology advances will be really important to innovation and economic growth.

So, I think this does provide a real barrier to collective state action against some of these threats, because if a country can see an advantage of not regulating an area of technology as strongly, then they’ve got a very strong incentive to go for that. It’s working out how you might maybe overcome some of those economic incentives, and try and slow down some of the development of technology, or application of technology perhaps, to a pace where we can actually start doing these things like working out what’s going on, what the risks might be, how we might manage those risks.

But that is a hugely controversial kind of thing to put forward, because the idea of slowing down technology, which is clearly going to bring us these great benefits and is linked to progress and economic growth, is a difficult sell to many states.

Ariel: Yeah, that makes sense. I think I want to turn back to the Chinese case very quickly. I think this is an example of what a lot of people fear, in that you have this scientist who isn’t being open with the university that he’s working with, and isn’t being open with his government about the work he’s doing. It sounds like even the people who were working for him in the lab, and possibly even the parents of the babies involved, may not have been fully aware of what he was doing.

We don’t have all the information, but at the moment, at least what little we have sounds like an example of a scientist gone rogue. How do we deal with that? What policies are in place? What policies should we be considering?

Catherine: I think I share where the concerns in this are coming from, because it looks like there’s multiple failures of the types of layers of systems that should have maybe been able to pick this up and stop it, so yes, we would usually expect that a funder of the research, or the institution the person’s working in, the government through regulation, the colleagues of a scientist would be able to pick up on what’s happening, have some ability to intervene, and that doesn’t seem to have happened.

Knowing that these multiple things can all fall down is worrying. I think actually an interesting thing about how we deal with this that there seems to be a very strong reaction from the scientific community working around those areas of gene editing, to all come together and collectively say, “This was the wrong thing to do, this was irresponsible, this is unethical. You shouldn’t have done this without communicating more openly about what you were doing, what you were thinking of doing.”

I think it’s really interesting to see that community pushback. In cases like that, where scientists are working in similar areas, I’d be really put off by that, thinking, “Okay, I should stay in line with what the community expects me to do.” I think that is important.

Where it also is going to kick in from the more top-down regulatory side as well, so whether China will now get some new regulation in place, do some more checks down through the institutional levels, I don’t know. Likewise, I don’t know whether internationally it will bring a further push for coordination on how we want to regulate those experiments.

Ariel: I guess this also brings up the question of international standards. It does look like we’re getting very broad international agreement that this research shouldn’t have happened. But how do we deal with cases where maybe most countries are opposed to some type of research and another country says, “No, we think it could be possibly ethical so we’re going to allow it?”

Catherine: I think this is, again, the challenging situation. It’s interesting to me; this picks up on the debates, I think maybe 15-20 years ago, about human cloning internationally, and whether there should be a ban on human cloning. There was a declaration made, the UN declaration against human cloning, but it fell down in terms of actually being more than a declaration, having something stronger in terms of an international law on this, because basically in that case it was down to the differences between states’ views of the status of the embryo.

Regulating human reproductive research at the international level is very difficult because of some of those issues where, like you say, there can be quite significant differences in the ethical approaches taken by different countries. Again, in this case, I think what’s been interesting is: okay, if we’re going to come across a difficulty in getting an agreement between states at the governmental level, are there things that the scientific community or other groups can do to make sure those debates are happening, and that some common ground is being found on how we should pursue research in these areas, and when we should decide it’s maybe safe enough to go down some of these lines?

I think another point about this case in China was that it’s just not known whether it’s safe to be doing gene editing on humans yet. That’s actually one of the reasons why people shouldn’t be doing it regardless. I hope that gets some way to the answer. I think it is very problematic that we often will find that we can’t get broad international agreement on things, even when there seems to be some level of consensus.

Ariel: We’ve been talking a lot about all of these issues from the perspective of biological sciences, but I want to step back and also look at some of these questions more broadly. There’s two sides that I want to look at. One is just this question of how do we enable scientists to basically get into policy more? I mean, how can we help scientists understand how policymaking works and help them recognize that their voices in policy can actually be helpful? Or, do you think that we are already at a good level there?

Catherine: I would say we’re certainly not at an ideal level yet of science in policy. It does vary across different areas, of course. The thing that comes to mind is climate change, for example, with the intergovernmental panel doing their reports every few years. There’s a good, collaborative, international evidence base and a good science policy process in that area.

But in other areas there’s a big deficit I would say. I’m most familiar with that internationally, but I think some of this scales down to the national level as well. Part of it is going in the other direction almost. When I spoke earlier about needs perhaps for education and awareness raising among scientists about some of these issues around how their research may be used, I think there’s also a need for people in policy to become more informed about science.

That is important. I’m trying to think what are the ways maybe scientists can do that? I think there’s some attempts, so when there’s international negotiations going on, to have … I think I’ve heard them described as mini universities, so maybe a week’s worth of quick updates on where the science is at before a negotiation goes on that’s relevant to that science.

I think one of the key things to say is that there are ways for scientists and the scientific community to have influence both on how policy develops and how it’s implemented, and a lot of this will go through intermediary bodies. In particular, the professional associations and academies that represent scientific communities. They will know, for example, thinking in the UK context, but I think this is similar in the US, there may be a consultation by parliament on how should we address a particular issue?

There was one in the UK a couple of years ago, how should we be regulating genetically modified insects? If a consultation like that’s going on and they’re asking for advice and evidence, there’s often ways of channeling that through academies. They can present statements that represent broader scientific consensus within their communities and input that.

The reason for mentioning them as intermediaries, again, it’s a lot of a burden to put on individual scientists to say, “You should all be getting involved in policy and informing policy. Another part of what you should be doing as part of your role,” but yes, realizing that you can do that as a collective, rather than it just having to be an individual thing I think is valuable.

Ariel: Yeah, there is the issue of, “Hey, in your free time, can you also be doing this?” It’s not like scientists have lots of free time. But one of the impressions I get is that scientists are sometimes a little concerned about getting involved with policymaking because they fear overregulation, and that it could harm their research and the good that they’re trying to do with their research. Is this fear justified? Are scientists hampered by policies? Are they helped by policies?

Catherine: Yeah, so it’s both. It’s important to know that the mechanisms of policy can play facilitative roles, they can promote science, as well as setting constraints and limits on it. Again, most governments are recognizing that the life sciences and biology and artificial intelligence and other emerging technologies are going to be really key for their economic growth.

They are doing things to facilitate and support that, and fund it, so it isn’t only about the constraints. However, I guess for a lot of scientists, the way you come across regulation, you’re coming across the bits that are the constraints on your work, or there are things that make you fill in a lot of forms, so it can just be perceived as something that’s burdensome.

But I would also say that certainly something I’ve noticed in recent years is that we shouldn’t think that scientists and technology communities aren’t sometimes asking for areas to be regulated, asking for some guidance on how they should be managing risks. Switching back to a biology example, but with gene drive technologies, the communities working on those have been quite proactive in asking for some forms of, “How do we govern the risks? How should we be assessing things?” Saying, “These don’t quite fit with the current regulatory arrangements, we’d like some further guidance on what we should be doing.”

I can understand that there might be this fear about regulation, but something you said, that this could be the source of the reluctance to engage with policy, is worth addressing: if you’re not engaging with policy, it’s more likely that the regulation is going to work in ways that, not intentionally, end up restricting scientific practice. I think that’s really important as well, that maybe the regulation is created in a very well-intended way, and it just doesn’t match up with scientific practice.

I think at the moment, internationally this is becoming a discussion around how we might handle the digital nature of biology now, when most regulation is to do with materials. But if we’re going to start regulating the digital versions of biology, so gene sequencing information, that sort of thing, then we need to have a good understanding of what the flows of information are, in which ways they have value within the scientific community, whether it’s fundamentally important to have some of that information open, and we should be very wary of new rules that might enclose it.

I think that’s something again, if you’re not engaging with the processes of regulation and policymaking, things are more likely to go wrong.

Ariel: Okay. We’ve been looking a lot at how scientists deal with the risks of their research, how policymakers can help scientists deal with the risks of their research, et cetera, but it’s all about the risks coming from the research and from the technology, and from the advances. Something that you brought up in a separate conversation before the podcast is: to what extent does risk stem from technology, and to what extent can it stem from how we govern it? I was hoping we could end with that question.

Catherine: That’s a really interesting question to me, and I’m trying to work that out in my own research. One of the interesting and perhaps obvious things to say is it’s never down to the technology. It’s down to how we develop it, use it, implement it. The human is always playing a big role in this anyway.

But yes, I think a lot of the time governance mechanisms are perhaps lagging behind the development of science and technology, and I think some of the risk is coming from the fact that we may just not be governing something properly. I think this comes down to things we’ve been mentioning earlier. We need collectively both in policy, in the science communities, technology communities, and society, just to be able to get a better grasp on what is happening in the directions of emerging technologies that could have both these very beneficial and very destructive potentials, and what is it we might need to do in terms of really rethinking how we govern these things?

Yeah, I don’t have any answer for where the sources of risk are coming from, but I think it’s an interesting place to look, is that intersection between the technology development, and the development of regulation and governance.

Ariel: All right, well yeah, I agree. I think that is a really great question to end on, for the audience to start considering as well. Catherine, thank you so much for joining us today. This has been a really interesting conversation.

Catherine: Thank you.

Ariel: As always, if you’ve been enjoying the show, please take a moment to like it, share it, and follow us on your preferred podcast platform.

[end of recorded material]

Benefits & Risks of Biotechnology

“This is a whole new era where we’re moving beyond little edits on single genes to being able to write whatever we want throughout the genome.”

-George Church, Professor of Genetics at Harvard Medical School

What is biotechnology?

How are scientists putting nature’s machinery to use for the good of humanity, and how could things go wrong?

Biotechnology is nearly as old as humanity itself. The food you eat and the pets you love? You can thank our distant ancestors for kickstarting the agricultural revolution, using artificial selection for crops, livestock, and other domesticated species. When Edward Jenner invented vaccines and when Alexander Fleming discovered antibiotics, they were harnessing the power of biotechnology. And, of course, modern civilization would hardly be imaginable without the fermentation processes that gave us beer, wine, and cheese!

When he coined the term in 1919, the agriculturalist Karl Ereky described ‘biotechnology’ as “all lines of work by which products are produced from raw materials with the aid of living things.” In modern biotechnology, researchers modify DNA and proteins to shape the capabilities of living cells, plants, and animals into something useful for humans. Biotechnologists do this by sequencing, or reading, the DNA found in nature, and then manipulating it in a test tube – or, more recently, inside of living cells.

In fact, the most exciting biotechnology advances of recent times are occurring at the microscopic level (and smaller!) within the membranes of cells. After decades of basic research into decoding the chemical and genetic makeup of cells, biologists in the mid-20th century launched what would become a multi-decade flurry of research and breakthroughs. Their work has brought us the powerful cellular tools at biotechnologists’ disposal today. In the coming decades, scientists will use the tools of biotechnology to manipulate cells with increasing control, from precision editing of DNA to synthesizing entire genomes from their basic chemical building blocks. These cells could go on to become bomb-sniffing plants, miracle cancer drugs, or ‘de-extincted’ wooly mammoths. And biotechnology may be a crucial ally in the fight against climate change.

But rewriting the blueprints of life carries an enormous risk. To begin with, the same technology being used to extend our lives could instead be used to end them. While researchers might see the engineering of a supercharged flu virus as a perfectly reasonable way to better understand and thus fight the flu, the public might see the drawbacks as equally obvious: the virus could escape, or someone could weaponize the research. And the advanced genetic tools that some are considering for mosquito control could have unforeseen effects, possibly leading to environmental damage. The most sophisticated biotechnology may be no match for Murphy’s Law.

While the risks of biotechnology have been fretted over for decades, the increasing pace of progress – from low cost DNA sequencing to rapid gene synthesis to precision genome editing – suggests biotechnology is entering a new realm of maturity regarding both beneficial applications and more worrisome risks. Adding to concerns, DIY scientists are increasingly taking biotech tools outside of the lab. For now, many of the benefits of biotechnology are concrete while many of the risks remain hypotheticals, but it is better to be proactive and cognizant of the risks than to wait for something to go wrong first and then attempt to address the damage.

How does biotechnology help us?

Satellite images make clear the massive changes that mankind has made to the surface of the Earth: cleared forests, massive dams and reservoirs, millions of miles of roads. If we could take satellite-type images of the microscopic world, the impact of biotechnology would be no less obvious. The majority of the food we eat comes from engineered plants, which are modified – either via modern technology or by more traditional artificial selection – to grow without pesticides, to require fewer nutrients, or to withstand the rapidly changing climate. Manufacturers have substituted petroleum-based ingredients with biomaterials in many consumer goods, such as plastics, cosmetics, and fuels. Your laundry detergent? It almost certainly contains biotechnology. So do nearly all of your cotton clothes.

But perhaps the biggest application of biotechnology is in human health. Biotechnology is present in our lives before we’re even born, from fertility assistance to prenatal screening to the home pregnancy test. It follows us through childhood, with immunizations and antibiotics, both of which have drastically improved life expectancy. Biotechnology is behind blockbuster drugs for treating cancer and heart disease, and it’s being deployed in cutting-edge research to cure Alzheimer’s and reverse aging. The scientists behind the technology called CRISPR/Cas9 believe it may be the key to safely editing DNA for curing genetic disease. And one company is betting that organ transplant waiting lists can be eliminated by growing human organs in chimeric pigs.

What are the risks of biotechnology?

Along with excitement, the rapid progress of research has also raised questions about the consequences of biotechnology advances. Biotechnology may carry more risk than other scientific fields: microbes are tiny and difficult to detect, but the dangers are potentially vast. Further, engineered cells could divide on their own and spread in the wild, with the possibility of far-reaching consequences. Biotechnology could most likely prove harmful either through the unintended consequences of benevolent research or from the purposeful manipulation of biology to cause harm. One could also imagine messy controversies, in which one group engages in an application for biotechnology that others consider dangerous or unethical.

 

1. Unintended Consequences

Sugarcane farmers in Australia in the 1930s had a problem: cane beetles were destroying their crop. So, they reasoned that importing a natural predator, the cane toad, could be a natural form of pest control. What could go wrong? Well, the toads became a major nuisance themselves, spreading across the continent and eating the local fauna (except for, ironically, the cane beetle).

While modern biotechnology solutions to society’s problems seem much more sophisticated than airdropping amphibians into Australia, this story should serve as a cautionary tale. To avoid blundering into disaster, the errors of the past should be acknowledged.

  • In 2014, the Centers for Disease Control and Prevention came under scrutiny after repeated errors led to scientists being exposed to Ebola, anthrax, and the flu. And a professor in the Netherlands came under fire in 2011 when his lab engineered a deadly, airborne version of the flu virus, mentioned above, and attempted to publish the details. These and other labs study viruses or toxins to better understand the threats they pose and to try to find cures, but their work could set off a public health emergency if a deadly material is released or mishandled as a result of human error.
  • Mosquitoes are carriers of disease – including harmful and even deadly pathogens like Zika, malaria, and dengue – and they seem to play no productive role in the ecosystem. But civilians and lawmakers are raising concerns about a mosquito control strategy that would genetically alter and destroy disease-carrying species of mosquitoes. Known as a ‘gene drive,’ the technology is designed to spread a gene quickly through a population by sexual reproduction; a toy simulation of that dynamic appears after this list. For example, to control mosquitoes, scientists could release males into the wild that have been modified to produce only sterile offspring. Scientists who work on gene drives have performed risk assessments and equipped their designs with safeguards to make the trials as safe as possible. But, since a man-made gene drive has never been tested in the wild, it’s impossible to know for certain the impact that a mosquito extinction could have on the environment. Additionally, there is a small possibility that the gene drive could mutate once released in the wild, spreading genes that researchers never planned for. Even armed with strategies to reverse a rogue gene drive, scientists may find gene drives difficult to control once they spread outside the lab.
  • When scientists went digging for clues in the DNA of people who are apparently immune to HIV, they found that the resistant individuals had mutated a protein that serves as the landing pad for HIV on the surface of blood cells. Because these patients were apparently healthy in the absence of the protein, researchers reasoned that deleting its gene in the cells of infected or at-risk patients could be a permanent cure for HIV and AIDS. With the arrival of the new tool, a set of ‘DNA scissors’ called CRISPR/Cas9, that holds the promise of simple gene surgery for HIV, cancer, and many other genetic diseases, the scientific world started to imagine nearly infinite possibilities. But trials of CRISPR/Cas9 in human cells have produced troubling results, with mutations showing up in parts of the genome that shouldn’t have been targeted for DNA changes. While a bad haircut might be embarrassing, the wrong cut by CRISPR/Cas9 could be much more serious, making you sicker instead of healthier. And if those edits were made to embryos, instead of fully formed adult cells, then the mutations could permanently enter the gene pool, meaning they will be passed on to all future generations. So far, prominent scientists and prestigious journals are calling for a moratorium on gene editing in viable embryos until the risks, ethics, and social implications are better understood.
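
To see why a gene drive spreads so much faster than an ordinary gene, a little arithmetic helps. The Python sketch below is a deliberately minimal toy model: it assumes random mating, no fitness cost, and a single 'conversion efficiency' number, none of which captures how a real mosquito population behaves. It only illustrates the core dynamic: an allele that copies itself into heterozygotes can go from 1% of a population to near-fixation in about a dozen generations, while an ordinary allele introduced at 1% simply stays there.

    # Toy model: allele frequency over generations, with and without a gene drive.
    # Assumes random mating and no fitness cost; real gene drives also involve
    # resistance alleles, ecology, and many other factors ignored here.

    def next_frequency(p, conversion=0.0):
        """Return the allele frequency in the next generation.

        With conversion = 0 (ordinary Mendelian inheritance), heterozygotes pass
        the allele on half the time, so its frequency stays put. With conversion
        near 1 (a gene drive copying itself onto the partner chromosome),
        heterozygotes pass it on almost every time, so it sweeps through.
        """
        homozygotes = p * p                    # both chromosomes carry the allele
        heterozygotes = 2 * p * (1 - p)        # only one chromosome carries it
        return homozygotes + heterozygotes * (1 + conversion) / 2

    for label, conversion in [("Mendelian", 0.0), ("gene drive", 0.95)]:
        p = 0.01                               # allele released at 1% of the population
        trajectory = []
        for _ in range(12):
            trajectory.append(round(p, 3))
            p = next_frequency(p, conversion)
        print(f"{label:>10}: {trajectory}")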

 

2. Weaponizing Biology

The world recently witnessed the devastating effects of disease outbreaks, in the form of Ebola and the Zika virus – but those were natural in origin. The malicious use of biotechnology could mean that future outbreaks are started on purpose. Whether the perpetrator is a state actor or a terrorist group, the development and release of a bioweapon, such as a poison or infectious disease, would be hard to detect and even harder to stop. Unlike a bullet or a bomb, deadly cells could continue to spread long after being deployed. The US government takes this threat very seriously, and the threat of bioweapons to the environment should not be taken lightly either.

Developed nations, and even impoverished ones, have the resources and know-how to produce bioweapons. For example, North Korea is rumored to have assembled an arsenal containing “anthrax, botulism, hemorrhagic fever, plague, smallpox, typhoid, and yellow fever,” ready in case of attack. It’s not unreasonable to assume that terrorists or other groups are trying to get their hands on bioweapons as well. Indeed, numerous instances of chemical or biological weapon use have been recorded, including the anthrax attacks shortly after 9/11, which left five people dead after the deadly spores were sent through the mail. And new gene editing technologies are increasing the odds that a hypothetical bioweapon targeted at a certain ethnicity, or even a single individual like a world leader, could one day become a reality.

While attacks using traditional weapons may require much less expertise, the dangers of bioweapons should not be ignored. It might seem impossible to make bioweapons without plenty of expensive materials and scientific knowledge, but recent advances in biotechnology may make it even easier for bioweapons to be produced outside of a specialized research lab. The cost to chemically manufacture strands of DNA is falling rapidly, meaning it may one day be affordable to ‘print’ deadly proteins or cells at home. And the openness of science publishing, which has been crucial to our rapid research advances, also means that anyone can freely Google the chemical details of deadly neurotoxins. In fact, the most controversial aspect of the supercharged influenza case was not that the experiments had been carried out, but that the researchers wanted to openly share the details.

On a more hopeful note, scientific advances may allow researchers to find solutions to biotechnology threats as quickly as they arise. Recombinant DNA and biotechnology tools have enabled the rapid invention of new vaccines which could protect against new outbreaks, natural or man-made. For example, less than 5 months after the World Health Organization declared Zika virus a public health emergency, researchers got approval to enroll patients in trials for a DNA vaccine.

The ethics of biotechnology

Biotechnology doesn’t have to be deadly, or even dangerous, to fundamentally change our lives. While humans have been altering genes of plants and animals for millennia — first through selective breeding and more recently with molecular tools and chimeras — we are only just beginning to make changes to our own genomes (amid great controversy).

Cutting-edge tools like CRISPR/Cas9 and DNA synthesis raise important ethical questions that are increasingly urgent to answer. Some question whether altering human genes means “playing God,” and if so, whether we should do that at all. For instance, if gene therapy in humans is acceptable to cure disease, where do you draw the line? Among disease-associated gene mutations, some come with virtual certainty of premature death, while others put you at higher risk for something like Alzheimer’s, but don’t guarantee you’ll get the disease. Many others lie somewhere in between. How do we determine a hard limit for which gene surgery to undertake, and under what circumstances, especially given that the surgery itself comes with the risk of causing genetic damage? Scholars and policymakers have wrestled with these questions for many years, and there is some guidance in documents such as the United Nations’ Universal Declaration on the Human Genome and Human Rights.

And what about ways that biotechnology may contribute to inequality in society? Early work in gene surgery will no doubt be expensive – for example, Novartis plans to charge $475,000 for a one-time treatment of their recently approved cancer therapy, a drug which, in trials, has rescued patients facing certain death. Will today’s income inequality, combined with biotechnology tools and talk of ‘designer babies’, lead to tomorrow’s permanent underclass of people who couldn’t afford genetic enhancement?

Advances in biotechnology are escalating the debate, from questions about altering life to creating it from scratch. For example, a recently announced initiative called GP-Write has the goal of synthesizing an entire human genome from chemical building blocks within the next 10 years. The project organizers have many applications in mind, from bringing back wooly mammoths to growing human organs in pigs. But, as critics pointed out, the technology could make it possible to produce children with no biological parents, or to recreate the genome of another human, like making cellular replicas of Einstein. “To create a human genome from scratch would be an enormous moral gesture,” write two bioethicists regarding the GP-Write project. In response, the organizers of GP-Write insist that they welcome a vigorous ethical debate, and have no intention of turning synthetic cells into living humans. But this doesn’t guarantee that rapidly advancing technology won’t be applied in the future in ways we can’t yet predict.

What are the tools of biotechnology?

 

1. DNA Sequencing

It’s nearly impossible to imagine modern biotechnology without DNA sequencing. Since virtually all of biology centers around the instructions contained in DNA, biotechnologists who hope to modify the properties of cells, plants, and animals must speak the same molecular language. DNA is made up of four building blocks, or bases, and DNA sequencing is the process of determining the order of those bases in a strand of DNA. Since the publication of the complete human genome in 2003, the cost of DNA sequencing has dropped dramatically, making it a simple and widespread research tool.
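
To make the idea of 'reading' DNA a little more concrete, here is a minimal sketch in Python (the sequence is invented, not real data) showing the kind of elementary bookkeeping that sits underneath real sequence analysis: tallying the four bases, computing GC content, and taking a reverse complement.

    # Minimal illustration of working with a DNA sequence that has been "read."
    # The sequence below is invented; real sequencing runs produce files with
    # millions of such strings, along with per-base quality scores.
    from collections import Counter

    read = "ATGCGCGATTACAGGCTTAACGT"

    counts = Counter(read)                                  # tally of A, T, G, C
    gc_content = (counts["G"] + counts["C"]) / len(read)    # fraction of G or C bases

    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    reverse_complement = "".join(complement[base] for base in reversed(read))

    print("base counts:", dict(counts))
    print(f"GC content: {gc_content:.0%}")
    print("reverse complement:", reverse_complement)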

Benefits: Sonia Vallabh had just graduated from law school when her mother died from a rare and fatal genetic disease. DNA sequencing showed that Sonia carried the fatal mutation as well. But far from resigning herself to her fate, Sonia and her husband Eric decided to fight back, and today they are graduate students at Harvard, racing to find a cure. DNA sequencing has also allowed Sonia to become pregnant, since doctors could test her eggs for ones that don’t have the mutation. While most people’s genetic blueprints don’t contain deadly mysteries, our health is increasingly supported by the medical breakthroughs that DNA sequencing has enabled. For example, researchers were able to track the 2014 Ebola epidemic in real time using DNA sequencing. And pharmaceutical companies are designing new anti-cancer drugs targeted to people with a specific DNA mutation. Entire new fields, such as personalized medicine, owe their existence to DNA sequencing technology.

Risks: Simply reading DNA is not harmful, but it is foundational for all of modern biotechnology. As the saying goes, knowledge is power, and the misuse of DNA information could have dire consequences. While DNA sequencing alone cannot make bioweapons, it’s hard to imagine waging biological warfare without being able to analyze the genes of infectious or deadly cells or viruses. And although one’s own DNA information has traditionally been considered personal and private, containing information about your ancestors, family, and medical conditions,  governments and corporations increasingly include a person’s DNA signature in the information they collect. Some warn that such databases could be used to track people or discriminate on the basis of private medical records – a dystopian vision of the future familiar to anyone who’s seen the movie GATTACA. Even supplying patients with their own genetic information has come under scrutiny, if it’s done without proper context, as evidenced by the dispute between the FDA and the direct-to-consumer genetic testing service 23andMe. Finally, DNA testing opens the door to sticky ethical questions, such as whether to carry to term a pregnancy after the fetus is found to have a genetic mutation.

 

2. Recombinant DNA

The modern field of biotechnology was born when scientists first manipulated – or ‘recombined’ –  DNA in a test tube, and today almost all aspects of society are impacted by so-called ‘rDNA’. Recombinant DNA tools allow researchers to choose a protein they think may be important for health or industry, and then remove that protein from its original context. Once removed, the protein can be studied in a species that’s simple to manipulate, such as E. coli bacteria. This lets researchers reproduce it in vast quantities, engineer it for improved properties, and/or transplant it into a new species. Modern biomedical research, many best-selling drugs, most of the clothes you wear, and many of the foods you eat rely on rDNA biotechnology.

Benefits: Simply put, our world has been reshaped by rDNA. Modern medical advances are unimaginable without the ability to study cells and proteins with rDNA and the tools used to make it, such as PCR, which helps researchers ‘copy and paste’ DNA in a test tube. An increasing number of vaccines and drugs are the direct products of rDNA. For example, nearly all insulin used in treating diabetes today is produced recombinantly. Additionally, cheese lovers may be interested to know that rDNA provides ingredients for a majority of hard cheeses produced in the West. Many important crops have been genetically modified to produce higher yields, withstand environmental stress, or grow without pesticides. Facing the unprecedented threats of climate change, many researchers believe rDNA and GMOs will be crucial in humanity’s efforts to adapt to rapid environmental changes.

Risks: The inventors of rDNA themselves warned the public and their colleagues about the dangers of this technology. For example, they feared that rDNA derived from drug-resistant bacteria could escape from the lab, threatening the public with infectious superbugs. And recombinant viruses, useful for introducing genes into cells in a petri dish, might instead infect the human researchers. Some of the initial fears were allayed when scientists realized that genetic modification is much trickier than initially thought, and once the realistic threats were identified – like recombinant viruses or the handling of deadly toxins –  safety and regulatory measures were put in place. Still, there are concerns that rogue scientists or bioterrorists could produce weapons with rDNA. For instance, it took researchers just 3 years to make poliovirus from scratch in 2006, and today the same could be accomplished in a matter of weeks. Recent flu epidemics have killed over 200,000, and the malicious release of an engineered virus could be much deadlier – especially if preventative measures, such as vaccine stockpiles, are not in place.

3. DNA Synthesis

Synthesizing DNA has the advantage of offering total researcher control over the final product. With many of the mysteries of DNA still unsolved, some scientists believe the only way to truly understand the genome is to make one from its basic building blocks. Building DNA from scratch has traditionally been too expensive and inefficient to be very practical, but in 2010, researchers did just that, completely synthesizing the genome of a bacterium and injecting it into a living cell. Since then, scientists have made bigger and bigger genomes, and recently, the GP-Write project launched with the intention of tackling perhaps the ultimate goal: chemically fabricating an entire human genome. Meeting this goal – and within a 10-year timeline – will require new technology and an explosion in manufacturing capacity. But the project’s success could signal the impact of synthetic DNA on the future of biotechnology.

Benefits: Plummeting costs and technical advances have made the goal of total genome synthesis seem much more immediate. Scientists hope these advances, and the insights they enable, will ultimately make it easier to make custom cells to serve as medicines or even bomb-sniffing plants. Fantastical applications of DNA synthesis include human cells that are immune to all viruses or DNA-based data storage. Prof. George Church of Harvard has proposed using DNA synthesis technology to ‘de-extinct’ the passenger pigeon, wooly mammoth, or even Neanderthals. One company hopes to edit pig cells using DNA synthesis technology so that their organs can be transplanted into humans. And DNA is an efficient option for storing data, as researchers recently demonstrated when they stored a movie file in the genome of a cell.

Risks: DNA synthesis has sparked significant controversy and ethical concerns. For example, when the GP-Write project was announced, some criticized the organizers for the troubling possibilities that synthesizing genomes could evoke, likening it to playing God. Would it be ethical, for instance, to synthesize Einstein’s genome and transplant it into cells? The technology to do so does not yet exist, and GP-Write leaders have backed away from making human genomes in living cells, but some are still demanding that the ethical debate happen well in advance of the technology’s arrival. Additionally, cheap DNA synthesis could one day democratize the ability to make bioweapons or other nuisances, as one virologist demonstrated when he made the horsepox virus (related to the virus that causes smallpox) with DNA he ordered over the Internet. (It should be noted, however, that the other ingredients needed to make the horsepox virus are specialized equipment and deep technical expertise.)

 

4. Genome Editing

Many diseases have a basis in our DNA, and until recently, doctors had very few tools to address the root causes. That appears to have changed with the recent discovery of a DNA editing system called CRISPR/Cas9. (A note on terminology – CRISPR is a bacterial immune system, while Cas9 is one protein component of that system, but both terms are often used to refer to the protein.) It operates in cells like a DNA scissor, opening slots in the genome where scientists can insert their own sequence. While the capability of cutting DNA wasn’t unprecedented, Cas9 dusts the competition with its effectiveness and ease of use. Even though it’s a biotech newcomer, much of the scientific community has already caught ‘CRISPR-fever,’ and biotech companies are racing to turn genome editing tools into the next blockbuster pharmaceutical.

Benefits: Genome editing may be the key to solving currently intractable genetic diseases such as cystic fibrosis, which is caused by a single genetic defect. If Cas9 can somehow be inserted into a patient’s cells, it could fix the mutations that cause such diseases, offering a permanent cure. Even diseases caused by many mutations, like cancer, or caused by a virus, like HIV/AIDS, could be treated using genome editing. Just recently, an FDA panel recommended a gene therapy for cancer, which showed dramatic responses for patients who had exhausted every other treatment. Genome editing tools are also used to make lab models of diseases, cells that store memories, and tools that can detect epidemic viruses like Zika or Ebola. And as described above, if a gene drive, which uses Cas9, is deployed effectively, we could eliminate diseases such as malaria, which kills nearly half a million people each year.

Risks: Cas9 has generated nearly as much controversy as it has excitement, because genome editing carries both safety issues and ethical risks. Cutting and repairing a cell’s DNA is not risk-free, and errors in the process could make a disease worse, not better. Genome editing in reproductive cells, such as sperm or eggs, could result in heritable genetic changes, meaning dangerous mutations could be passed down to future generations. And some warn of unethical uses of genome editing, fearing a rise of ‘designer babies’ if parents are allowed to choose their children’s traits, even though there are currently no straightforward links between one’s genes and their intelligence, appearance, etc. Similarly, a gene drive, despite possibly minimizing the spread of certain diseases, has the potential to create great harm since it is intended to kill or modify an entire species. A successful gene drive could have unintended ecological impacts, be used with malicious intent, or mutate in unexpected ways. Finally, while the capability doesn’t currently exist, it’s not out of the realm of possibility that a rogue agent could develop genetically selective bioweapons to target individuals or populations with certain genetic traits.

 


 

Genome Editing and the Future of Biowarfare: A Conversation with Dr. Piers Millett

In both 2016 and 2017, genome editing made it into the annual Worldwide Threat Assessment of the US Intelligence Community. One of biotechnology’s most promising modern developments had been deemed a danger to US national security – and then, after two years, it was dropped from the list. All of which raises the question: what, exactly, is genome editing, and what can it do?

Most simply, the phrase “genome editing” represents tools and techniques that biotechnologists use to edit the genome – that is, the DNA or RNA of plants, animals, and bacteria. Though the earliest versions of genome editing technology have existed for decades, the introduction of CRISPR in 2013 “brought major improvements to the speed, cost, accuracy, and efficiency of genome editing.”

CRISPR, or Clustered Regularly Interspaced Short Palindromic Repeats, is actually an ancient immune mechanism that bacteria use to recognize and cut the DNA of invading viruses. In the lab, researchers have discovered they can replicate this process by creating a synthetic RNA strand that matches a target DNA sequence in an organism’s genome. The RNA strand, known as a “guide RNA,” is attached to an enzyme that can cut DNA. After the guide RNA locates the targeted DNA sequence, the enzyme cuts the genome at this location. DNA can then be removed, and new DNA can be added. CRISPR has quickly become a powerful tool for editing genomes, with research taking place in a broad range of plants and animals, including humans.
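To make the targeting step concrete, here is a minimal illustrative sketch (in Python) of the pattern-matching idea described above: scanning a DNA sequence for a site that matches a chosen guide sequence and is immediately followed by the “NGG” motif (the PAM) that Cas9 requires. The function name and the toy sequences are invented for illustration; real guide design also considers the opposite strand, mismatches, and off-target effects.

```python
import re

def find_cas9_sites(genome, guide):
    """Return start positions where `guide` occurs and is followed by an NGG PAM."""
    genome = genome.upper()
    guide = guide.upper()
    sites = []
    for match in re.finditer(re.escape(guide), genome):
        pam = genome[match.end():match.end() + 3]  # the 3 bases just after the match
        if len(pam) == 3 and pam.endswith("GG"):   # "N" can be any base
            sites.append(match.start())
    return sites

# Toy example: the guide (shortened here; real guides are ~20 nucleotides)
# matches at position 6, where it is followed by the PAM "TGG".
toy_genome = "TTAACGCATCGATCGTACGTGGCATTAGC"
toy_guide = "CATCGATCGTACG"
print(find_cas9_sites(toy_genome, toy_guide))   # -> [6]
```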

A significant percentage of genome editing research focuses on eliminating genetic diseases. However, with tools like CRISPR, it also becomes possible to alter a pathogen’s DNA to make it more virulent and more contagious. Other potential uses include the creation of “‘killer mosquitos,’ plagues that wipe out staple crops, or even a virus that snips at people’s DNA.”

But does genome editing really deserve a spot among the ranks of global threats like nuclear weapons and cyber hacking? To many members of the scientific community, its inclusion felt like an overreaction. Among them was Dr. Piers Millett, a science policy and international security expert whose work focuses on biotechnology and biowarfare.

Millett wasn’t surprised that biotechnology in general made it into these reports: what he didn’t expect was for one specific tool, genome editing, to be called out. In his words: “I would personally be much more comfortable if it had been a broader sentiment to say ‘Hey, there’s a whole bunch of emerging biotechnologies that could destabilize our traditional risk equation in this space, and we need to be careful with that.’ …But calling out specifically genome editing, I still don’t fully understand any rationale behind it.”

This doesn’t mean, however, that the misuse of genome editing is not cause for concern. Even proper use of the technology often involves the genetic engineering of biological pathogens, research that could very easily be weaponized. Says Millett, “If you’re deliberately trying to create a pathogen that is deadly, spreads easily, and that we don’t have appropriate public health measures to mitigate, then that thing you create is amongst the most dangerous things on the planet.”

 

Biowarfare Before Genome Editing

[Image: A medieval depiction of the Black Plague.]

Developments such as CRISPR present new possibilities for biowarfare, but biological weapons caused concern long before the advent of gene editing. The first recorded use of biological pathogens in warfare dates back to 600 BC, when Solon, an Athenian statesman, poisoned enemy water supplies during the siege of Krissa. Many centuries later, during the 1346 AD siege of Caffa, the Mongol army catapulted plague-infested corpses into the city, which is thought to have contributed to the 14th century Black Death pandemic that wiped out up to two thirds of Europe’s population.

Though the use of biological weapons was internationally banned by the 1925 Geneva Protocol, state biowarfare programs continued and in many cases expanded during World War II and the Cold War. In 1972, as evidence of these violations mounted, 103 nations signed a treaty known as the Biological Weapons Convention (BWC). The treaty bans the creation of biological arsenals and outlaws offensive biological research, though defensive research is permissible. Each year, signatories are required to submit certain information about their biological research programs to the United Nations, and violations reported to the UN Security Council may result in an inspection.

But inspections can be vetoed by the permanent members of the Security Council, and there are no firm guidelines for enforcement. On top of this, the line that separates permissible defensive biological research from its offensive counterpart is murky and remains a subject of controversy. And though the actual numbers remain unknown, pathologist Dr. Riedel asserts that “the number of state-sponsored programs [that have engaged in offensive biological weapons research] has increased significantly during the last 30 years.”

 

Dual Use Research

So biological warfare remains a threat, and it’s one that genome editing technology could hypothetically escalate. Genome editing falls into a category of research and technology that’s known as “dual-use” – that is, it has the potential both for beneficial advances and harmful misuses. “As an enabling technology, it enables you to do things, so it is the intent of the user that determines whether that’s a positive thing or a negative thing,” Millett explains.

And ultimately, what’s considered positive or negative is a matter of perspective. “The same activity can look positive to one group of people, and negative to another. How do we decide which one is right and who gets to make that decision?” Genome editing could be used, for example, to eradicate disease-carrying mosquitoes, an application that many would consider positive. But as Millett points out, some cultures view such blatant manipulation of the ecosystem as harmful or “sacrilegious.”

Millett believes that the most effective way to deal with dual-use research is to get the researchers engaged in the discussion. “We have traditionally treated the scientific community as part of the problem,” he says. “I think we need to move to a point where the scientific community is the key to the solution, where we’re empowering them to be the ones who identify the risks, the ones who initiate the discussion about what forms this research should take.” A good scientist, he adds, is one “who’s not only doing good research, but doing research in a good way.”

 

DIY Genome Editing

But there is a growing worry that dangerous research might be undertaken by those who are not scientists at all. There are already a number of do-it-yourself (DIY) genome editing kits on the market today, and these relatively inexpensive kits allow anyone, anywhere to edit DNA using CRISPR technology. Do these kits pose a real security threat? Millett explains that risk level can be assessed based on two distinct criteria: likelihood and potential impact. Where the “greatest” risks lie will depend on the criterion.

“If you take risk as a factor of likelihood of impact, the most likely attacks will come from low-powered actors, but have a minimal impact and be based on traditional approaches, existing pathogens, and well characterized risks and threats,” Millett explains. DIY genome editors, for example, may be great in number but are likely unable to produce a biological agent capable of causing widespread harm.

“If you switch it around and say where are the most high impact threats going to come from, then I strongly believe that that [type of threat] requires a level of sophistication and technical competency and resources that are not easy to acquire at this point in time,” says Millett. “If you’re looking for advanced stuff: who could misuse genome editing? States would be my bet in the foreseeable future.”
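As a rough illustration of the framing Millett uses here – risk as a combination of likelihood and impact – the short sketch below multiplies the two for a few notional actor categories. The categories and numbers are purely illustrative assumptions, not estimates from Millett or this article; the point is only that the most likely actors and the highest-impact actors need not be the same.

```python
# Illustrative only: made-up likelihoods and impacts, not real estimates.
actors = {
    # actor category: (assumed likelihood of attempted misuse, assumed relative impact)
    "DIY hobbyist":        (0.20, 1),     # numerous, but limited capability
    "organized group":     (0.05, 20),
    "state-level program": (0.01, 1000),  # rare, but potentially catastrophic
}

for actor, (likelihood, impact) in actors.items():
    expected_harm = likelihood * impact
    print(f"{actor:20s} likelihood={likelihood:.2f} impact={impact:4d} expected harm={expected_harm:6.1f}")
```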

State Bioweapons Programs

Large-scale bioweapons programs, such as those run by states, pose a double threat: there is always the possibility of accidental release alongside the potential for malicious use. Millett believes that these threats are roughly equal, a conclusion backed by a thousand page report from Gryphon Scientific, a US defense contractor.

Historically, both accidental release and malicious use of biological agents have caused damage. In 1979, there was the accidental release of aerosolized anthrax from the Sverdlovsk [now Ekaterinburg] bioweapons production facility in the Soviet Union – a clogged air filter in the facility had been removed, but had not been replaced. Ninety-four people were affected by the incident and at least 64 died, along with a number of livestock. The Soviet secret police attempted a cover-up and it was not until years later that the administration admitted the cause of the outbreak.

More recently, Millett says, a US biodefense facility “failed to kill the anthrax that it sent out for various lab trials, and ended up sending out really nasty anthrax around the world.” Though no one was infected, a 2015 government investigation revealed that “over the course of the last decade, 86 facilities in the United States and seven other countries have received low concentrations of live [anthrax] spore samples… thought to be completely inactivated.”

These incidents pale, however, in comparison with Japan’s intentional use of biological weapons during the 1930s and 40s. There is “a published history that suggests up to 30,000 people were killed in China by the Japanese biological weapons program during the lead up to World War II. And if that data is accurate, that is orders of magnitude bigger than anything else,” Millett says.

Given the near-impossibility of controlling the spread of disease, a deliberate attack may have accidental effects far beyond what was intended. The Japanese, for example, may have meant to target only a few Chinese villages, only to unwittingly trigger an epidemic. There are reports, in fact, that thousands of Japan’s own soldiers became infected during a biological attack in 1941.

Despite the 1972 ban on biological weapons programs, Millett believes that many countries still have the capacity to produce biological weapons. As an example, he explains that the Soviets developed “a set of research and development tools that would answer the key questions and give you all the key capabilities to make biological weapons.”

The BWC only bans offensive research, and “underneath the umbrella of a defensive program,” Millett says, “you can do a whole load of research and development to figure out what you would want to weaponize if you were going to make a weapon.” Then, all a country needs to start producing those weapons is “the capacity to scale up production very, very quickly.” The Soviets, for example, built “a set of state-based commercial infrastructure to make things like vaccines.” On a day-to-day basis, they were making things the Soviet Union needed. “But they could be very radically rebooted and repurposed into production facilities for their biological weapons program,” Millett explains. This is known as a “breakout program.”

Says Millett, “I believe there are many, many countries that are well within the scope of a breakout program … so it’s not that they necessarily at this second have a fully prepared and worked-out biological weapons program that they can unleash on the world tomorrow, but they might well have all of the building blocks they need to do that in place, and a plan for how to turn their existing infrastructure towards a weapons program if they ever needed to. These components would be permissible under current international law.”

 

Biological Weapons Convention

This unsettling reality raises questions about the efficacy of the BWC – namely, what does it do well, and what doesn’t it do well? Millett, who worked for the BWC for well over a decade, has a nuanced view.

“The very fact that we have a ban on these things is brilliant,” he says. “We’re well ahead on biological weapons than many other types of weapons systems. We only got the ban on nuclear weapons – and it was only joined by some tiny number of countries – last year. Chemical weapons, only in 1995. The ban on biological weapons is hugely important. Having a space at the international level to talk about those issues is very important.” But, he adds, “we’re rapidly reaching the end of the space that I can be positive about.”

The ban on biological weapons was motivated, at least in part, by the sense that – unlike chemical weapons – they weren’t particularly useful. Traditionally, chemical and biological weapons were dealt with together. The 1925 Geneva Protocol banned both, and the original proposal for the Biological Weapons Convention, submitted by the UK in 1969, would have dealt with both. But the chemical weapons ban was ultimately dropped from the BWC, Millett says, “because that was during Vietnam, and so there were a number of chemical agents that were being used in Vietnam that weren’t going to be banned.” Once the scope of the ban had been narrowed, however, both the US and the USSR signed on.

Millett describes the resulting document as “aspirational.” He explains, “The Biological Weapons Convention is four pages long, whereas the [1995] Chemical Weapons Convention is 200 pages long, give or take.” And the difference “is about the teeth in the treaty.”

“The BWC is…a short document that’s basically a commitment by states not to make these weapons. The Chemical Weapons Convention is an international regime with an organization, with an inspection regime intended to enforce that. Under the BWC, if you are worried about another state, you’re meant to try to resolve those concerns amicably. But if you can’t do that, we move onto Article Six of the Convention, where you report it to the Security Council. The Security Council is meant to investigate it, but of course if you’re a permanent member of the Security Council, you can veto that, so that doesn’t happen.”

 

De-escalation

One easy way that states can avoid raising suspicion is to be more transparent. As Millett puts it, “If you’re not doing naughty things, then it’s on you to demonstrate that you’re not.” This doesn’t mean revealing everything to everybody. It means finding ways to show other states that they don’t need to worry.

As an example, Millett cites the heightened security culture that developed in the US after 9/11. Following the 2001 anthrax letter attacks, as well as a large investment in US biodefense programs, an initiative was started to prevent foreigners from working in those biodefense facilities. “I’m very glad they didn’t go down that path,” says Millett, “because the greatest risk, I think, was not that a foreign national would sneak in.” Rather, “the advantage of having foreign nationals in those programs was at the international level, when country Y stands up and accuses the US of having an illicit bioweapons program hidden in its biodefense program, there are three other countries that can stand up and say, ‘Well, wait a minute. Our scientists are in those facilities. We work very closely with that program, and we see no evidence of what you’re saying.’”

Historically, secrecy surrounding bioweapons programs has led other countries to begin their own research. Before World War I, the British began exploring the use of bioweapons. The Germans were aware of this. By the onset of the war, the British had abandoned the idea, but the Germans, not knowing this, began their own bioweapons program in an attempt to keep up. By World War II, Germany no longer had a bioweapons program. But the Allies believed they still did, and the U.S. bioweapons program was born of such fears.

 

What now?

Asked if he believes genome editing is a bioweapons “game changer”, Millett says no. “I see it as an enabling technology in the short to medium term, then maybe with longer-term implications [for biowarfare], but then we’re out into the far distance of what we can reasonably talk about and predict,” he says. “Certainly for now, I think its big impact is it makes it easier, faster, cheaper, and more reliable to do things that you could do using traditional approaches.”

But as biotechnology continues to evolve, so too will biowarfare. For example, it will eventually be possible for governments to alter specific genes in their own populations. “Imagine aerosolizing a lovely genome editor that knocks out a specifically nasty gene in your population,” says Millett. “It’s a passive thing. You breathe it in [and it] retroactively alters the population[’s DNA].”

A government could use such technology to knock out a gene linked to cancer or other diseases. But, Millett says, “what would happen if you came across a couple of genes that at an individual level were not going to have an impact, but at a population level were connected with something, say, like IQ?” With the help of a genome editor, a government could make their population smarter, on average, by a few IQ points.

“There’s good economic data that says that [average IQ] is … statistically important,” Millett says. “The GDP of the country will be noticeably affected if we could just get another two or three percent IQ points. There are direct national security implications of that. If, for example, Chinese citizens got smarter on average over the next couple of generations by a couple of IQ points per generation, that has national security implications for both the UK and the US.”

For now, such an endeavor remains in the realm of science fiction. But technology is evolving at a breakneck speed, and it’s more important than ever to consider the potential implications of our advancements. That said, Millett is optimistic about the future. “I think the key is the distribution of bad actors versus good actors,” he says. As long as the bad actors remain the minority, there is more reason to be excited for the future of biotechnology than there is to be afraid of it.

Dr. Piers Millett holds fellowships at the Future of Humanity Institute at the University of Oxford and at the Woodrow Wilson Center for International Policy, and works as a consultant for the World Health Organization. He also served at the United Nations as Deputy Head of the Biological Weapons Convention Implementation Support Unit.

Podcast: Martin Rees on the Prospects for Humanity: AI, Biotech, Climate Change, Overpopulation, Cryogenics, and More

How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks?

In this special podcast episode, Ariel speaks with Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity’s existential risks and the role that technology plays in determining our collective future. Martin is a cosmologist and space scientist based at the University of Cambridge. He has been director of The Institute of Astronomy and Master of Trinity College, and he was president of The Royal Society, which is the UK’s Academy of Science, from 2005 to 2010. In 2005 he was also appointed to the UK’s House of Lords.

Topics discussed in this episode include:

  • Why Martin remains a technical optimist even as he focuses on existential risks
  • The economics and ethics of climate change
  • How AI and automation will make it harder for Africa and the Middle East to economically develop
  • How high expectations for health care and quality of life also put society at risk
  • Why growing inequality could be our most underappreciated global risk
  • Martin’s view that biotechnology poses greater risk than AI
  • Earth’s carrying capacity and the dangers of overpopulation
  • Space travel and why Martin is skeptical of Elon Musk’s plan to colonize Mars
  • The ethics of artificial meat, life extension, and cryogenics
  • How intelligent life could expand into the galaxy
  • Why humans might be unable to answer fundamental questions about the universe

Books and resources discussed in this episode include:

You can listen to the podcast above and read the full transcript below. Check out our previous podcast episodes on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel: Hello, I am Ariel Conn with The Future of Life Institute. Now, our podcasts lately have dealt with artificial intelligence in some way or another, and with a few focusing on nuclear weapons, but FLI is really an organization about existential risks, and especially x-risks that are the result of human action. These cover a much broader field than just artificial intelligence.

I’m excited to be hosting a special segment of the FLI podcast with Martin Rees, who has just come out with a book that looks at the ways technology and science could impact our future both for good and bad. Martin is a cosmologist and space scientist. His research interests include galaxy formation, active galactic nuclei, black holes, gamma ray bursts, and more speculative aspects of cosmology. He’s based in Cambridge where he has been director of The Institute of Astronomy, and Master of Trinity College. He was president of The Royal Society, which is the UK’s Academy of Science, from 2005 to 2010. In 2005 he was also appointed to the UK’s House of Lords. He holds the honorary title of Astronomer Royal. He has received many international awards for his research and belongs to numerous academies, including The National Academy of Sciences, the Russian Academy, the Japan Academy, and the Pontifical Academy.

He’s on the board of The Princeton Institute for Advanced Study, and has served on many bodies connected with international collaboration and science, especially threats stemming from humanity’s ever heavier footprint on the planet and the runaway consequences of ever more powerful technologies. He’s written seven books for the general public, and his most recent book is about these threats. It’s the reason that I’ve asked him to join us today. First, Martin thank you so much for talking with me today.

Martin: Good to be in touch.

Ariel: Your new book is called On the Future: Prospects for Humanity. In his endorsement of the book Neil deGrasse Tyson says, “From climate change, to biotech, to artificial intelligence, science sits at the center of nearly all decisions that civilization confronts to assure its own survival.”

I really liked this quote, because I felt like it sums up what your book is about. Basically science and the future are too intertwined to really look at one without the other. And whether the future turns out well, or whether it turns out to be the destruction of humanity, science and technology will likely have had some role to play. First, do you agree with that sentiment? Am I accurate in that description?

Martin: No, I certainly agree, and that’s truer of this century than ever before because of the greater scientific knowledge we have, and the greater power to use it for good or ill – these tremendously advanced technologies could be misused by even a small number of people.

Ariel: You’ve written in the past about how you think we have essentially a 50/50 chance of some sort of existential risk. One of the things that I noticed about this most recent book is you talk a lot about the threats, but to me it felt still like an optimistic book. I was wondering if you could talk a little bit about, this might be jumping ahead a bit, but maybe what the overall message you’re hoping that people take away is?

Martin: Well, I describe myself as a technical optimist, but political pessimist because it is clear that we couldn’t be living such good lives today with seven and a half billion people on the planet if we didn’t have the technology which has been developed in the last 100 years, and clearly there’s a tremendous prospect of better technology in the future. But on the other hand what is depressing is the very big gap between the way the world could be, and the way the world actually is. In particular, even though we have the power to give everyone a decent life, the lot of the bottom billion people in the world is pretty miserable and could be alleviated a lot simply by the money owned by the 1,000 richest people in the world.

We have a very unjust society, and the politics is not optimizing the way technology is used for human benefit. My view is that it’s the politics which is an impediment to the best use of technology, and the reason this is important is that as time goes on we’re going to have a growing population which is ever more demanding of energy and resources, putting more pressure on the planet and its environment and its climate, but we are also going to have to deal with this if we are to allow people to survive and avoid some serious tipping points being crossed.

That’s the problem of the collective effect of us on the planet, but there’s another effect, which is that these new technologies, especially bio, cyber, and AI allow small groups of even individuals to have an effect by error or by design, which could cascade very broadly, even globally. This, I think, makes our society very brittle. We’re very interdependent, and on the other hand it’s easy for there to be a breakdown. That’s what depresses me, the gap between the way things could be, and the downsides if we collectively overreach ourselves, or if individuals cause disruption.

Ariel: You mentioned actually quite a few things that I’m hoping to touch on as we continue to talk. I’m almost inclined, before we get too far into some of the specific topics, to bring up an issue that I personally have. It’s connected to a comment that you make in the book. I think you were talking about climate change at the time, and you say that if we heard that there was 10% chance that an asteroid would strike in 2100 people would do something about it.

We wouldn’t say, “Oh, technology will be better in the future so let’s not worry about it now.” Apparently I’m very cynical, because I think that’s exactly what we would do. And I’m curious, what makes you feel more hopeful that even with something really specific like that, we would actually do something and not just constantly postpone the problem to some future generation?

Martin: Well, I agree. We might not even in that case, but the reason I gave that as a contrast to our response to climate change is that there you could imagine a really sudden catastrophe happening if the asteroid does hit, whereas the problem with climate change is really that it’s first of all, the effect is mainly going to be several decades in the future. It’s started to happen, but the really severe consequences are decades away. But also there’s an uncertainty, and it’s not a sort of sudden event we can easily visualize. It’s not at all clear therefore, how we are actually going to do something about it.

In the case of the asteroid, it would be clear what the strategy would be to try and deal with it, whereas in the case of climate there are lots of ways, and the problem is that the consequences are decades away, and they’re global. Most of the political focus obviously is on short-term worry, short-term problems, and on national or more local problems. Anything we do about climate change will have an effect which is mainly for the benefit of people in quite different parts of the world 50 years from now, and it’s hard to keep those issues up the agenda when there are so many urgent things to worry about.

I think you’re maybe right that even if there was a threat of an asteroid, there may be the same sort of torpor, and we’d fail to deal with it, but I thought that’s an example of something where it would be easier to appreciate that it would really be a disaster. In the case of the climate it’s not so obviously going to be a catastrophe that people are motivated now to start thinking about it.

Ariel: I’ve heard it go both ways that either climate change is yes, obviously going to be bad but it’s not an existential risk so therefore those of us who are worried about existential risk don’t need to worry about it, but then I’ve also heard people say, “No, this could absolutely be an existential risk if we don’t prevent runaway climate change.” I was wondering if you could talk a bit about what worries you most regarding climate.

Martin: First of all, I don’t think it is an existential risk, but it’s something we should worry about. One point I make in my book is that I think the debate, which makes it hard to have an agreed policy on climate change, stems not so much from differences about the science — although of course there are some complete deniers — but differences about ethics and economics. There’s some people of course who completely deny the science, but most people accept that CO2 is warming the planet, and most people accept there’s quite a big uncertainty, matter of fact a true uncertainty about how much warmer you get for a given increase in CO2.

But even among those who accept the IPCC projections of climate change, and the uncertainties therein, I think there’s a big debate, and the debate is really between people who apply a standard economic discount rate where you discount the future to a rate of, say 5%, and those who think we shouldn’t do it in this context. If you apply a 5% discount rate as you would if you were deciding whether it’s worth putting up an office building or something like that, then of course you don’t give any weight to what happens after about, say 2050.

As Bjorn Lomborg, the well-known environmentalist, argues, we should therefore give a lower priority to dealing with climate change than to helping the world’s poor in other more immediate ways. He is consistent given his assumptions about the discount rate. But many of us would say that in this context we should not discount the future so heavily. We should care about the life chances of a baby born today as much as we should care about the life chances of those of us who are now middle-aged and won’t be alive at the end of the century. We should also be prepared to pay an insurance premium now in order to remove or reduce the risk of the worst-case climate scenarios.

I think the debates about what to do about climate change is essentially ethics. Do we want to discriminate on grounds of date of birth and not care about the life chances of those who are now babies, or are we prepared to make some sacrifices now in order to reduce a risk which they might encounter in later life?
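To put rough numbers on the discounting point Martin makes above: a conventional 5% annual discount rate shrinks the present-day weight given to damages late this century to a few percent, whereas a near-zero rate treats a baby born today and someone now middle-aged almost equally. The baseline year and the code below are an illustrative sketch, not figures from the conversation.

```python
def present_weight(years_ahead, annual_rate):
    """Present-value weight of one unit of damage occurring `years_ahead` years from now."""
    return 1.0 / (1.0 + annual_rate) ** years_ahead

baseline = 2020  # assumed reference year for the illustration
for year in (2030, 2050, 2100):
    years_ahead = year - baseline
    print(f"{year}: weight at 5% = {present_weight(years_ahead, 0.05):.3f}, "
          f"weight at 0.1% = {present_weight(years_ahead, 0.001):.3f}")
# At 5%, damages in 2100 get a weight of about 0.02 today; at 0.1%, about 0.92.
```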

Ariel: Do you think the risks are only going to be showing up that much later? We are already seeing these really heavy storms striking. We’ve got Florence in North Carolina right now. A super typhoon just hit southern China and the Philippines. We had Maria, and I’m losing track of all the hurricanes that we’ve had. We’ve had these huge hurricanes over the last couple of years. We saw California and much of the west coast of the US in flames this year. Do you think we really need to wait that long?

Martin: I think it’s generally agreed that extreme weather is now happening more often as a consequence of climate change and the warming of the ocean, and that this will become a more serious trend, but by the end of the century of course it could be very serious indeed. And the main threat is of course to people in the disadvantaged parts of the world. If you take these recent events, it’s been far worse in the Philippines than in the United States because they’re not prepared for it. Their houses are more fragile, etc.

Ariel: I don’t suppose you have any thoughts on how we get people to care more about others? Because it does seem to be in general that sort of worrying about myself versus worrying about other people. The richer countries are the ones who are causing more of the climate change, and it’s the poorer countries who seem to be suffering more. Then of course there’s the issue of the people who are alive now versus the people in the future.

Martin: That’s right, yes. Well, I think most people do care about their children and grandchildren, and so to that extent they do care about what things will be like at the end of the century, but as you say, the extra-political problem is that the cause of the CO2 emissions is mainly what’s happened in the advanced countries, and the downside is going to be more seriously felt by those in remote parts of the world. It’s easy to overlook them, and hard to persuade people that we ought to make a sacrifice which will be mainly for their benefit.

I think incidentally that’s one of the other things that we have to ensure happens, is a narrowing of the gap between the lifestyles and the economic advantages in the advanced and the less advanced parts of the world. I think that’s going to be in everyone’s interest because if there continues to be great inequality, not only will the poorer people be more subject to threats like climate change, but I think there’s going to be massive and well-justified discontent, because unlike in the earlier generations, they’re aware of what they’re missing. They all have mobile phones, they all know what it’s like, and I think there’s going to be embitterment leading to conflict if we don’t narrow this gap, and this requires I think a sacrifice on the part of the wealthy nations to subsidize developments in these poorer countries, especially in Africa.

Ariel: That sort of ties into another question that I had for you, and that is, what do you think is the most underappreciated threat that maybe isn’t quite as obvious? You mentioned the fact that we have these people in poorer countries who are able to more easily see what they’re missing out on. Inequality is a problem in and of itself, but also just that people are more aware of the inequality seems like a threat that we might not be as aware of. Are there others that you think are underappreciated?

Martin: Yes. Just to go back, that threat is of course very serious because by the end of the century there might be 10 times as many people in Africa as in Europe, and of course they would then have every justification in migrating towards Europe with the result of huge disruption. We do have to care about those sorts of issues. I think there are all kinds of reasons apart from straight ethics why we should ensure that the less developed countries, especially in Africa, do have a chance to close the gap.

Incidentally, one thing which is a handicap for them is that they won’t have the route to prosperity followed by the so called “Asian tigers,” which were able to have high economic growth by undercutting the labor cost in the west. Now what’s happening is that with robotics it’s possible to, as it were, re-shore lots of manufacturing industry back to wealthy countries, and so Africa and the Middle East won’t have the same opportunity the far eastern countries did to catch up by undercutting the cost of production in the west.

This is another reason why it’s going to be a big challenge. That’s something which I think we don’t worry about enough, and need to worry about, because if the inequalities persist when everyone is able to move easily and knows exactly what they’re missing, then that’s a recipe for a very dangerous and disruptive world. I would say that is an underappreciated threat.

Another thing I would count as important is that we are as a society very brittle, and very unstable because of high expectations. I’d like to give you another example. Suppose there were to be a pandemic, not necessarily a genetically engineered terrorist one, but a natural one. Contrast that with what happened in the 14th century, when the Bubonic Plague, the Black Death, killed nearly half the people in certain towns and the rest went on fatalistically. If we had some sort of plague which affected even 1% of the population of the United States, there’d be complete social breakdown, because that would overwhelm the capacity of hospitals, and people, unless they are wealthy, would feel they weren’t getting their entitlement of healthcare. And if that was a matter of life and death, that’s a recipe for social breakdown. I think given the high expectations of people in the developed world, then we are far more vulnerable to the consequences of these breakdowns, and pandemics, and the failures of electricity grids, et cetera, than in the past when people were more robust and more fatalistic.

Ariel: That’s really interesting. Is it essentially because we expect to be leading these better lifestyles, just that expectation could be our downfall if something goes wrong?

Martin: That’s right. And of course, if we know that there are cures available to some disease and there’s not the hospital capacity to offer it to all the people who are afflicted with the disease, then naturally that’s a matter of life and death, and that is going to promote social breakdown. This is a new threat which is of course a downside of the fact that we can at least cure some people.

Ariel: There’s two directions that I want to go with this. I’m going to start with just transitioning now to biotechnology. I want to come back to issues of overpopulation and improving healthcare in a little bit, but first I want to touch on biotech threats.

One of the things that’s been a little bit interesting for me is that when I first started at FLI three years ago we were very concerned about biotechnology. CRISPR was really big. It had just sort of exploded onto the scene. Now, three years later I’m not hearing quite as much about the biotech threats, and I’m not sure if that’s because something has actually changed, or if it’s just because at FLI I’ve become more focused on AI and therefore stuff is happening but I’m not keeping up with it. I was wondering if you could talk a bit about what some of the risks you see today are with respect to biotech?

Martin: Well, let me say I think we should worry far more about bio threats than about AI in my opinion. I think as far as the bio threats are concerned, then there are these new techniques. CRISPR, of course, is a very benign technique if it’s used to remove a single damaging gene that gives you a particular disease, and also it’s less objectionable than traditional GM because it doesn’t cross the species barrier in the same way, but it does allow things like a gene drive where you make a species extinct by making it sterile.

That’s good if you’re wiping out a mosquito that carries a deadly virus, but there’s a risk of some effect which distorts the ecology and has a cascading consequence. There are risks of that kind, but more important I think there is a risk of the misuse of these techniques, and not just CRISPR, but for instance the gain-of-function techniques that were used in 2011 in Wisconsin and in Holland to make influenza virus both more virulent and more transmissible, things like that which can be done in a more advanced way now I’m sure.

These are clearly potentially dangerous, even if experimenters have a good motive, then the viruses might escape, and of course they are the kinds of things which could be misused. There have, of course, been lots of meetings, you have been at some, to discuss among scientists what the guidelines should be. How can we ensure responsible innovation in these technologies? These are modeled on the famous Conference in Asilomar in the 1970s when recombinant DNA was first being discussed, and the academics who worked in that area, they agreed on a sort of cautious stance, and a moratorium on some kinds of experiments.

But now they’re trying to do the same thing, and there’s a big difference. One is that these scientists are now more global. It’s not just a few people in North America and Europe. They’re global, and there is strong commercial pressures, and they’re far more widely understood. Bio-hacking is almost a student recreation. This means, in my view, that there’s a big danger, because even if we have regulations about certain things that can’t be done because they’re dangerous, enforcing those regulations globally is going to be as hopeless as it is now to enforce the drug laws, or to enforce the tax laws globally. Something which can be done will be done by someone somewhere, whatever the regulations say, and I think this is very scary. Consequences could cascade globally.

Ariel: Do you think that the threat is more likely to come from something happening accidentally, or intentionally?

Martin: I don’t know. I think it could be either. Certainly it could be something accidental from gene drive, or releasing some dangerous virus, but I think if we can imagine it happening intentionally, then we’ve got to ask what sort of people might do it? Governments don’t use biological weapons because you can’t predict how they will spread and who they’d actually kill, and that would be an inhibiting factor for any terrorist group that had well-defined aims.

But my worst nightmare is some person, and there are some, who think that there are too many human beings on the planet, and if they combine that view with the mindset of extreme animal rights people, etc, they might think it would be a good thing for Gaia, for Mother Earth, to get rid of a lot of human beings. They’re the kind of people who, with access to this technology, might have no compunction in releasing a dangerous pathogen. This is the kind of thing that worries me.

Ariel: I find that interesting because it ties into the other question that I wanted to ask you about, and that is the idea of overpopulation. I’ve read it both ways, that overpopulation is in and of itself something of an existential risk, or a catastrophic risk, because we just don’t have enough resources on the planet. You actually made an interesting point, I thought, in your book where you point out that we’ve been thinking that there aren’t enough resources for a long time, and yet we keep getting more people and we still have plenty of resources. I thought that was sort of interesting and reassuring.

But I do think at some point that does become an issue. And then at the same time we’re seeing this huge push, understandably, for improved healthcare, and expanding life spans, and trying to save as many lives as possible, and making those lives last as long as possible. How do you resolve those two sides of the issue?

Martin: It’s true, of course, as you imply, that the population has doubled in the last 50 years, and there were doomsters who in the 1960s and ’70s predicted mass starvation by now, and there hasn’t been, because food production has more than kept pace. If there are famines today, as of course there are, it’s not because of overall food shortages. It’s because of wars, or mal-distribution of money to buy the food. Up until now things have gone fairly well, but clearly there are limits to the food that can be produced on the earth.

All I would say is that we can’t really say what the carrying capacity of the earth is, because it depends so much on the lifestyle of people. As I say in the book, the world couldn’t sustainably have 2 billion people if they all lived like present day Americans, using as much energy, and burning as much fossil fuels, and eating as much beef. On the other hand you could imagine lifestyles which are very sort of austere, where the earth could carry 10, or even 20 billion people. We can’t set an upper limit, but all we can say is that given that it’s fairly clear that the population is going to rise to about 9 billion by 2050, and it may go on rising still more after that, we’ve got to ensure that the way in which the average person lives is less profligate in terms of energy and resources, otherwise there will be problems.

I think we should also do what we can to ensure that after 2050 the population turns around and goes down. The base scenario is when it goes on rising, as it may if people choose to have large families even when they have the choice. That could happen, and of course as you say, life extension is going to have an effect on society generally, but obviously on the overall population too. I think it would be more benign if the population of 9 billion in 2050 was a peak and it started going down after that.

And it’s not hopeless, because the actual number of births per year has already started going down. The reason the population is still going up is because more babies survive, and most of the people in the developing world are still young, and if they live as long as people in advanced countries do, then of course that’s going to increase the population even for a steady birth rate. That’s why, unless there’s a real disaster, we can’t avoid the population rising to about 9 billion.

But I think policies can have an effect on what happens after that. I think we do have to try to make people realize that having large numbers of children has negative externalities, as it were in economic jargon, and it is going to put extra pressure on the world and affect our environment in a detrimental way.
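A back-of-the-envelope sketch of the carrying-capacity point Martin makes above: how many people the Earth can support depends almost entirely on per-capita resource use, so no single number exists. The “budget” and footprint figures below are round, invented numbers chosen only to reproduce the qualitative contrast he describes, not data from the book or the conversation.

```python
# All numbers are illustrative assumptions in arbitrary "sustainable resource units".
PLANETARY_BUDGET = 12.0e9   # total sustainable annual resource use (assumed)

footprints = {
    "present-day American lifestyle": 8.0,   # units per person per year (assumed)
    "global average lifestyle":       2.0,
    "austere lifestyle":              0.8,
}

for lifestyle, per_capita in footprints.items():
    supportable = PLANETARY_BUDGET / per_capita
    print(f"{lifestyle:32s} -> roughly {supportable / 1e9:.1f} billion people")
```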

Ariel: As I was reading this, especially as I was reading your section about space travel, I want to ask you about your take on whether we can just start sending people to Mars or something like that to address issues of overpopulation. As I was reading your section on that, news came out that Elon Musk and SpaceX had their first passenger for a trip around the moon, which is now scheduled for 2023, and the timing was just entertaining to me, because like I said you have a section in your book about why you don’t actually agree with Elon Musk’s plan for some of this stuff.

Martin: That’s right.

Ariel: I was hoping you could talk a little bit about why you’re not as big a fan of space tourism, and what you think of humanity expanding into the rest of the solar system and universe?

Martin: Well, let me say that I think it’s a dangerous delusion to think we can solve the earth’s problems by escaping to Mars or elsewhere. Mass emigration is not feasible. There’s nowhere in the solar system which is as comfortable to live in as the top of Everest or the South Pole. The idea of mass emigration, which was promulgated by Elon Musk and Stephen Hawking, is, I think, a dangerous delusion. The world’s problems have to be solved here; dealing with climate change is a doddle compared to terraforming Mars. So I don’t think that’s true.

Now, two other things about space. The first is that the practical need for sending people into space is getting less as robots get more advanced. Everyone has seen pictures of the Curiosity Probe trundling across the surface of Mars, and maybe missing things that a geologist would notice, but future robots will be able to do much of what a human will do, and to manufacture large structures in space, et cetera, so the practical need to send people to space is going down.

On the other hand, some people may want to go simply as an adventure. It’s not really tourism, because tourism implies it’s safe and routine. It’ll be an adventure like Steve Fossett or the guy who fell supersonically from an altitude balloon. It’d be crazy people like that, and maybe this Japanese tourist is in the same style, who want to have a thrill, and I think we should cheer them on.

I think it would be good to imagine that there are a few people living on Mars, but it’s never going to be as comfortable as our Earth, and we should just cheer on people like this.

And I personally think it should be left to private money. If I was an American, I would not support the NASA space program. It’s very expensive, and it could be undercut by private companies which can afford to take higher risks than NASA could inflict on publicly funded civilians. I don’t think NASA should be doing manned space flight at all. Of course, some people would say, “Well, it’s a national aspiration, a national goal to show superpower pre-eminence by a massive space project.” That was, of course, what drove the Apollo program, and the Apollo program cost about 4% of The US federal budget. Now NASA has .6% or thereabouts. I’m old enough to remember the Apollo moon landings, and of course if you would have asked me back then, I would have expected that there might have been people on Mars within 10 or 15 years at that time.

There would have been, had the program been funded, but of course there was no motive, because the Apollo program was driven by superpower rivalry. And having beaten the Russians, it wasn’t pursued with the same intensity. It could be that the Chinese will, for prestige reasons, want to have a big national space program, and leapfrog what the Americans did by going to Mars. That could happen. Otherwise I think the only manned space flight will, and indeed should, be privately funded by adventurers prepared to go on cut price and very risky missions.

But we should cheer them on. The reason we should cheer them on is that if in fact a few of them do provide some sort of settlement on Mars, then they will be important for life’s long-term future, because whereas we are, as humans, fairly well adapted to the earth, they will be in a place, Mars, or an asteroid, or somewhere, for which they are badly adapted. Therefore they would have every incentive to use all the techniques of genetic modification, and cyber technology to adapt to this hostile environment.

A new species, perhaps quite different from humans, may emerge as progeny of those pioneers within two or three centuries. I think this is quite possible. They, of course, may download themselves to be electronic. We don’t know how it’ll happen. We all know about the possibilities of advanced intelligence in electronic form. But I think this’ll happen on Mars, or in space, and of course if we think about going further and exploring beyond our solar system, then of course that’s not really a human enterprise because of human life times being limited, but it is a goal that would be feasible if you were a near immortal electronic entity. That’s a way in which our remote descendants will perhaps penetrate beyond our solar system.

Ariel: As you’re looking towards these longer term futures, what are you hopeful that we’ll be able to achieve?

Martin: You say “we”; I think we humans will mainly want to stay on the Earth, but I think intelligent life, even if it’s not out there already in space, could spread through the galaxy as a consequence of what happens when a few people who go into space, away from the regulators, adapt themselves to that environment. Of course, one thing which is very important is to be aware of the different timescales.

Sometimes you hear people talk about humans watching the death of the sun in five billion years. That’s nonsense, because the timescale for biological evolution by Darwinian selection is about a million years, thousands of times shorter than the lifetime of the sun. But more importantly, the timescale for this new kind of intelligent design, when we can redesign humans and make new species, is a technological timescale. It could be only a century.

It would only take one, two, or three centuries before we have entities which are very different from human beings, if they are created by genetic modification or downloading to electronic entities. They won’t be normal humans. I think this will happen, and this of course will be a very important stage in the evolution of complexity in our universe, because we will go from the kind of complexity which has emerged by Darwinian selection to something quite new. This century is very special: it is a century where we might be triggering or jump-starting a new kind of technological evolution which could spread from our solar system far beyond, on a timescale very short compared to the timescales of Darwinian evolution and astronomical evolution.

Ariel: All right. In the book you spend a lot of time also talking about current physics theories and how those could evolve. You spend a little bit of time talking about multiverses. I was hoping you could talk a little bit about why you think understanding that is important for ensuring this hopefully better future?

Martin: Well, it’s only peripherally linked to it. I put that in the book because I was thinking about, what are the challenges, not just challenges of a practical kind, but intellectual challenges? One point I make is that there are some scientific challenges which we are now confronting which may be beyond human capacity to solve, because there’s no particular reason to think that the capacity of our brains is matched to understanding all aspects of reality any more than a monkey can understand quantum theory.

It’s possible that there are some fundamental aspects of nature that humans will never understand, and that they will be a challenge for post-humans. I think those challenges are perhaps more likely to be in the realm of complexity, understanding the brain for instance, than in the context of cosmology, although there are challenges in cosmology too, such as understanding the very early universe, where we may need a new theory like string theory with extra dimensions, et cetera. We need a theory like that in order to decide whether our big bang was the only one, or whether there were other big bangs and a kind of multiverse.

It’s possible that 50 years from now we will have such a theory and we’ll know the answers to those questions. But it could be that there is such a theory and it’s just too hard for anyone to actually understand and make predictions from. I think these issues are relevant to the intellectual constraints on humans.

Ariel: Is that something where you think, or hope, that more advanced artificial intelligence, or however we evolve in the future, will allow “us” to understand some of these more complex ideas?

Martin: Well, I think it’s certainly possible that machines could actually, in a sense, create entities based on physics which we can’t understand. This is perfectly possible, because obviously we know they can vastly out-compute us at the moment, so it could very well be, for instance, that there is a variant of string theory which is correct, and it’s just too difficult for any human mathematician to work out. But it could be that computers could work it out, so we get some answers.

But of course, you then come up against a more philosophical question about whether competence implies comprehension, whether a computer with superhuman capabilities is necessarily going to be self-aware and conscious, or whether it is going to be just a zombie. That’s a separate question which may not affect what it can actually do, but I think it does affect how we react to the possibility that the far future will be dominated by such things.

I remember when I wrote an article in a newspaper about these possibilities, the reaction was bimodal. Some people thought, “Isn’t it great there’ll be these even deeper intellects than human beings out there,” but others who thought these might just be zombies thought it was very sad if there was no entity which could actually appreciate the beauties and wonders of nature in the way we can. It does matter, in a sense, to our perception of this far future, if we think that these entities which may be electronic rather than organic, will be conscious and will have the kind of awareness that we have and which makes us wonder at the beauty of the environment in which we’ve emerged. I think that’s a very important question.

Ariel: I want to pull things back to the shorter term, I guess, but still considering this idea of how technology will evolve. You mentioned that you don’t think it’s a good idea to count on going to Mars as a solution to our problems on Earth, because all of our problems on Earth are still going to be easier to solve here than Mars is to populate. I think in general we have this tendency to say, “Oh, well in the future we’ll have technology that can fix whatever issue we’re dealing with now, so we don’t need to worry about it.”

I was wondering if you could sort of comment on that approach. To what extent can we say, “Well, most likely technology will have improved and can help us solve these problems,” and to what extent is that a dangerous approach to take?

Martin: Well, clearly technology has allowed us to live much better, more complex lives than we could in the past, and on the whole the net benefits outweigh the downsides, but of course there are downsides, and they stem from the fact that we have some people who are disruptive, and some people who can’t be trusted. If we had a world where everyone could trust everyone else, we could get rid of about a third of the economy I would guess, but I think the main point is that we are very vulnerable.

We have huge advances, clearly, in networking via the Internet, and computers, et cetera, and we may have the Internet of Things within a decade, but of course people worry that this opens up a new kind of even more catastrophic potential for cyber terrorism. That’s just one example, and ditto for biotech which may allow the development of pathogens which kill people of particular races, or have other effects.

There are these technologies which are developing fast, and they can be used to great benefit, but they can be misused in ways that will provide new kinds of horrors that were not available in the past. It’s by no means obvious which way things will go. Will there be a continued net benefit of technology, as I think there has been up ’til now despite nuclear weapons, et cetera, or will at some stage the downsides run ahead of the benefits?

I do worry about the latter being a possibility, particularly because of this amplification factor, the fact that it only takes a few people in order to cause disruption that could cascade globally. The world is so interconnected that we can’t really have a disaster in one region without its affecting the whole world. Jared Diamond has this book called Collapse where he discusses five collapses of particular civilizations, whereas other parts of the world were unaffected.

I think if we really had some catastrophe, it would affect the whole world. It wouldn’t just affect parts. That’s something which is a new downside. The stakes are getting higher as technology advances, and my book is really aimed to say that these developments are very exciting, but they pose new challenges, and I think particularly they pose challenges because a few dissidents can cause more trouble, and I think it’ll make the world harder to govern. It’ll make cities and countries harder to govern, and create a stronger tension between three things we want to achieve: security, privacy, and liberty. I think that’s going to be a challenge for all future governments.

Ariel: Reading your book I very much got the impression that it was essentially a call to action to address these issues that you just mentioned. I was curious: what do you hope that people will do after reading the book, or learning more about these issues in general?

Martin: Well, first of all I hope that people can be persuaded to think long term. I mentioned that religious groups, for instance, tend to think long term, and the papal encyclical in 2015 I think had a very important effect on opinion in Latin America, Africa, and East Asia in the lead-up to the Paris Climate Conference, for instance. That’s an example of where someone from outside traditional politics can have an effect.

What’s very important is that politicians will only respond to an issue if it’s prominent in the press, and prominent in their inbox, and so we’ve got to ensure that people are concerned about this. Of course, I ended the book saying, “What are the special responsibilities of scientists,” because scientists clearly have a special responsibility to ensure that their work is safe, and that the public and politicians are made aware of the implications of any discovery they make.

I think that’s important, even though they should be mindful that their expertise doesn’t extend beyond their special area. That’s a reason why scientific understanding, in a general sense, is something which really has to be universal. This is important for education, because if we want to have a proper democracy where debate about these issues rises above the level of tabloid slogans, then, given that the important issues we have to discuss involve health, energy, the environment, climate, et cetera, which all have scientific aspects, everyone has to have enough feel for those aspects to participate in a debate, and also enough feel for probabilities and statistics not to be easily bamboozled by political arguments.

I think an educated population is essential for proper democracy. Obviously that’s a platitude. But the education needs to include, to a greater extent, an understanding of the scope and limits of science and technology. I make this point at the end and hope that it will lead to a greater awareness of these issues, and of course for people in universities, we have a responsibility because we can influence the younger generation. It’s certainly the case that students and people under 30, who may be alive towards the end of the century, are more mindful of these concerns than the middle-aged and old.

It’s very important that activities like the Effective Altruism movement, 80,000 Hours, and these other movements among students should be encouraged, because they are going to be important in spreading an awareness of long-term concerns. Public opinion can be changed. We can see the change in attitudes to drunk driving and things like that, which has happened over a few decades, and I think perhaps we can develop a greater environmental sensitivity, so that it comes to be regarded as rather naff or tacky to waste energy and to be extravagant in consumption.

I’m hopeful that attitudes will change in a positive way, but I’m concerned simply because the politics is getting very difficult, because with social media, panic and rumor can spread at the speed of light, and small groups can have a global effect. This makes it very, very hard to ensure that we can keep things stable given that only a few people are needed to cause massive disruption. That’s something which is new, and I think is becoming more and more serious.

Ariel: We’ve been talking a lot about things that we should be worrying about. Do you think there are things that we are currently worrying about that we probably can just let go of, that aren’t as big of risks?

Martin: Well, I think we need to ensure responsible innovation in all new technologies. We’ve talked a lot about bio, and we are very concerned about the misuse of cyber technology. As regards AI, of course there are a whole lot of concerns to be had. I personally think that a takeover by AI would be rather slower than many of the evangelists suspect, but of course we do have to ensure that humans are not victimized by some algorithm which they can’t have explained to them.

I think there is an awareness of this, and I think that what’s being done by your colleagues at MIT has been very important in raising awareness of the need for responsible innovation and ethical application of AI, and also what your group has recognized is that the order in which things happen is very important. If some computer is developed and goes rogue, that’s bad news, whereas if we have a powerful computer which is under our control, then it may help us to deal with these other problems, the problems of the misuse of biotech, et cetera.

The order in which things happen is going to be very important, but I must say I don’t completely share these concerns about machines running away and taking over, because I think there’s a difference: in biological evolution there has been a drive in which intelligence was favored, but so was aggression. In the case of computers, they may drive towards greater intelligence, but it’s not obvious that that is going to be combined with aggression, because they are going to be evolving by intelligent design, not the survival of the fittest, which is the way that we evolved.

Ariel: What about concerns regarding AI just in terms of being mis-programmed, and AI just being extremely competent? Poor design on our part, poor intelligent design?

Martin: Well, I think in the short term obviously there are concerns about AI making decisions that affect people, and I think most of us would say that we shouldn’t be deprived of our credit rating, or put in prison on the basis of some AI algorithm which can’t be explained to us. We are entitled to have an explanation if something is done to us against our will. That is why it is worrying if too much is going to be delegated to AI.

I also think that the development of self-driving cars, and things of that kind, is going to be constrained by the fact that they become vulnerable to hacking of various kinds. I think it’ll be a long time before we will accept a driverless car on an ordinary road. Controlled environments, yes. In particular lanes on highways, yes. On an ordinary road in a traditional city, it’s not clear that we will ever accept a driverless car. I think I’m frankly less bullish than maybe some of your colleagues about the speed at which the machines will really take over and be accepted, and at which we will trust ourselves to them.

Ariel: As I mentioned at the start, and as you mentioned at the start, you are a techno-optimist; for as much as the book is about things that could go wrong, it did feel to me like it was also sort of an optimistic look at the future. What are you most optimistic about? What are you most hopeful for, looking at both the short term and the long term, however you feel like answering that?

Martin: I’m hopeful that biotech will have huge benefits for health, will perhaps extend human life spans a bit, but that’s something about which we should feel a bit ambivalent. So, I think health, and also food. If you asked me, what is one of the most benign technologies, it’s to make artificial meat, for instance. It’s clear that we can more easily feed a population of 9 billion on a vegetarian diet than on a traditional diet like Americans consume today.

To take one benign technology, I would say artificial meat is one, and more intensive farming so that we can feed people without encroaching too much on the natural part of the world. I’m optimistic about that. If we think about very long-term trends, then life extension is something which, if it happens too quickly, is obviously going to be hugely disruptive, with multi-generation families, et cetera.

Also, even though we will have the capability within a century to change human beings, I think we should constrain that on earth and just let that be done by the few crazy pioneers who go away into space. But if this does happen, then as I say in the introduction to my book, it will be a real game changer in a sense. I make the point that one thing that hasn’t changed over most of human history is human character. Evidence for this is that we can read the literature written by the Greeks and Romans more than 2,000 years ago and resonate with the people, and their characters, and their attitudes and emotions.

It’s not at all clear that on some scenarios, people 200 years from now will resonate in anything other than an algorithmic sense with the attitudes we have as humans today. That will be a fundamental, and very fast, change in the nature of humanity. The question is, can we do something to at least constrain the rate at which that happens, or at least constrain the way in which it happens? But it is going to be almost certainly possible to completely change human mentality, and maybe even human physique, over that time scale. One has only to listen to people like George Church to realize that it’s not crazy to imagine this happening.

Ariel: You mentioned in the book that there’s lots of people who are interested in cryogenics, but you also talked briefly about how there are some negative effects of cryogenics, and the burden that it puts on the future. I was wondering if you could talk really quickly about that?

Martin: There are some people, I know some, who have a medallion around their neck which is an injunction that, if they drop dead, they should be immediately frozen, their blood drained and replaced by liquid nitrogen, and that they should then be stored — there’s a company called Alcor in Arizona that does this — and allegedly revived at some stage when technology has advanced. I find it hard to take this seriously, but they say that, well, the chance may be small, but if they don’t invest this way then the chance is zero that they’ll have a resurrection.

But I actually think that even if it worked, even if the company didn’t go bust, and sincerely maintained them for centuries and they could then be revived, I still think that what they’re doing is selfish, because they’d be revived into a world that was very different. They’d be refugees from the past, and they’d therefore be imposing an obligation on the future.

We obviously feel an obligation to look after some asylum seeker or refugee, and we might feel the same if someone had been driven out of their home in the Amazonian forest for instance, and had to find a new home, but these refugees from the past, as it were, they’re imposing a burden on future generations. I’m not sure that what they’re doing is ethical. I think it’s rather selfish.

Ariel: I hadn’t thought of that aspect of it. I’m a little bit skeptical of our ability to come back.

Martin: I agree. I think the chances are almost zero. Even if they were stored, et cetera, one would like to see this technology tried on some animal first, to see if you could freeze an animal at liquid nitrogen temperatures and then revive it. I think it’s pretty crazy. Then of course, the number of people doing it is fairly small, and some of the companies doing it, there’s one in Russia for instance, are real rip-offs I think, and won’t survive. But as I say, even if these companies did keep going for a couple of centuries, or however long is necessary, it’s not clear to me that it’s doing good. I also quoted this nice statement: “What happens if we clone, and create a Neanderthal? Do we put him in a zoo or send him to Harvard?” said the professor from Stanford.

Ariel: Those are ethical considerations that I don’t see very often. We’re so focused on what we can do that sometimes we forget. “Okay, once we’ve done this, what happens next?”

I appreciate you being here today. Those were my questions. Was there anything else that you wanted to mention that we didn’t get into?

Martin: One thing we didn’t discuss, which is a serious issue, is the limits of medical treatment, because you can make extraordinary efforts to keep people alive long after they would have died naturally, and to keep alive babies that will never live a normal life, et cetera. Well, I certainly feel that that’s gone too far at both ends of life.

One should not devote so much effort to extremely premature babies, and one should allow people to die more naturally. Actually, if you asked me for predictions about the next 30 or 40 years: first, more vegetarianism; secondly, more euthanasia.

Ariel: I support both: vegetarianism, and I think euthanasia should be allowed. I think it’s a little bit barbaric that it’s not.

Martin: Yes.

I think we’ve covered quite a lot, haven’t we?

Ariel: I tried to.

Martin: I’d just like to mention that my book touches a lot of bases in a fairly short space. I hope it will be read not just by scientists. It’s not really a science book, although it emphasizes how scientific ideas are what’s going to determine how our civilization evolves. I’d also like to say that, for those of us in universities, we know students are only there for an interim period, but universities like MIT, and my University of Cambridge, have the convening power to gather people together to address these questions.

I think the value of the centers which we have in Cambridge, and you have at MIT, is that they are groups which are trying to address these very, very big issues, these threats and opportunities. The stakes are so high that if our efforts can really reduce the risk of a disaster by one part in 10,000, we’ve more than earned our keep. I’m very supportive of our Centre for the Study of Existential Risk in Cambridge, and also of the Future of Life Institute which you have at MIT.

Given the huge numbers of people who are thinking about small risks like which foods are carcinogenic, and the threats of low radiation doses, et cetera, it’s not at all inappropriate that there should be some groups who are focusing on the more extreme, albeit perhaps rather improbable threats which could affect the whole future of humanity. I think it’s very important that these groups should be encouraged and fostered, and I’m privileged to be part of them.

Ariel: All right. Again, the book is On the Future: Prospects for Humanity by Martin Rees. I do want to add, I agree with what you just said. I think this is a really nice introduction to a lot of the risks that we face. I started taking notes about the different topics that you covered, and I don’t think I got all of them, but there’s climate change, nuclear war, nuclear winter, biodiversity loss, overpopulation, synthetic biology, genome editing, bioterrorism, biological errors, artificial intelligence, cyber technology, cryogenics, and the various topics in physics, and as you mentioned the role that scientists need to play in ensuring a safe future.

I highly recommend the book as a really great introduction to the potential risks, and the hopefully much greater potential benefits, that science and technology hold for the future. Martin, thank you again for joining me today.

Martin: Thank you, Ariel, for talking to me.

[end of recorded material]

Podcast: Choosing a Career to Tackle the World’s Biggest Problems with Rob Wiblin and Brenton Mayer

If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group, 80,000 Hours, tries to answer.

To learn more, I spoke with Rob Wiblin and Brenton Mayer of 80,000 Hours. The following are highlights of the interview, but you can listen to the full podcast above or read the transcript here.

Can you give us some background about 80,000 Hours?

Rob: 80,000 Hours has been around for about six years and started when Benjamin Todd and Will MacAskill wanted to figure out how they could do as much good as possible. They started looking into things like the odds of becoming an MP in the UK or if you became a doctor, how many lives would you save. Pretty quickly, they were learning things that no one else had investigated.

They decided to start 80,000 Hours, which would conduct this research in a more systematic way and share it with people who wanted to do more good with their career.

80,000 hours is roughly the number of hours that you’d work in a full-time professional career. That’s a lot of time, so it pays off to spend quite a while thinking about what you’re going to do with that time.

On the other hand, 80,000 hours is not that long relative to the scale of the problems that the world faces. You can’t tackle everything. You’ve only got one career, so you should be judicious about what problems you try to solve and how you go about solving them.
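For a sense of where that number comes from, here is a minimal back-of-the-envelope sketch; the specific figures (40 hours a week, 50 weeks a year, 40 years) are illustrative assumptions rather than 80,000 Hours’ own definition:

```python
# Rough arithmetic behind the name "80,000 Hours".
# The specific figures (40 h/week, 50 weeks/year, 40 years) are
# illustrative assumptions, not an official definition.
hours_per_week = 40
weeks_per_year = 50
years_in_career = 40

career_hours = hours_per_week * weeks_per_year * years_in_career
print(career_hours)  # 80000
```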

How do you help people have more of an impact with their careers?

Brenton: The main thing is a career guide. We’ll talk about how to have satisfying careers, how to work on one of the world’s most important problems, how to set yourself up early so that later on you can have a really large impact.

The second part is career coaching, where we try to apply that advice to individuals.

What is earning to give?

Rob: Earning to give is the career approach where you try to make a lot of money and give it to organizations that can use it to have a really large positive impact. I know people who can make millions of dollars a year doing the thing they love and donate most of that to effective nonprofits, supporting 5, 10, 15, possibly even 20 people to do direct work in their place.

Can you talk about research you’ve been doing regarding the world’s most pressing problems?

Rob: One of the first things we realized is that if you’re trying to help people alive today, your money can go further in the developing world. We just need to scale up solutions to basic health problems and economic issues that have been resolved elsewhere.

Moving beyond that, what other groups in the world are extremely neglected? Factory farmed animals really stand out. There’s very little funding focused on improving farm animal welfare.

The next big idea was, of all the people that we could help, what fraction are alive today? We think that it’s only a small fraction. There’s every reason to think humanity could live for another 100 generations on Earth and possibly even have our descendants alive on other planets.

We worry a lot about existential risks and ways that civilization can go off track and never recover. Thinking about the long-term future of humanity is where a lot of our attention goes and where I think people can have the largest impact with their career.

Regarding artificial intelligence safety, nuclear weapons, biotechnology and climate change, can you consider different ways that people could pursue either careers or “earn to give” options for these fields?

Rob: One would be to specialize in machine learning or other technical work and use those skills to figure out how we can make artificial intelligence aligned with human interests. How do we make the AI do what we want and not things that we don’t intend?

Then there’s the policy and strategy side, trying to answer questions like how do we prevent an AI arms race? Do we want artificial intelligence running military robots? Do we want the government to be more involved in regulating artificial intelligence or less involved? You can also approach this if you have a good understanding of politics, policy, and economics. You can potentially work in government, military or think tanks.

Things like communications, marketing, organization, project management, and fundraising operations — those kinds of things can be quite hard to find skilled, reliable people for. And it can be surprisingly hard to find people who can handle media or do art and design. If you have those skills, you should seriously consider applying to whatever organizations you admire.

[For nuclear weapons] I’m interested in anything that can promote peace between the United States and Russia and China. A war between those groups or an accidental nuclear incident seems like the most likely thing to throw us back to the stone age or even pre-stone age.

I would focus on ensuring that they don’t get false alarms; trying to increase trust between the countries in general and the communication lines so that if there are false alarms, they can quickly defuse the situation.

The best opportunities [in biotech] are in early surveillance of new diseases. If there’s a new disease coming out, a new flu for example, it takes a long time to figure out what’s happened.

And when it comes to controlling new diseases, time is really of the essence. If you can pick it up within a few days or weeks, then you have a reasonable shot at quarantining the people and following up with everyone that they’ve met and containing it. Any technologies that we can invent or any policies that will allow us to identify new diseases before they’ve spread to too many people are going to help with both natural pandemics, and also any kind of synthetic biology risks, or accidental releases of diseases from biological researchers.

Brenton: A Wagner and Weitzman paper suggests that there’s about a 10% chance of warming larger than 4.8 degrees Celsius, or a 3% chance of more than 6 degrees Celsius. These are really disastrous outcomes. If you’re interested in climate change, we’re pretty excited about you working on these very bad scenarios. Sensible things to do would be improving our ability to forecast; thinking about the positive feedback loops that might be inherent in Earth’s climate; thinking about how to enhance international cooperation.
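As a rough illustration of why those tail scenarios dominate the overall picture, the sketch below combines coarse, invented probability bins (loosely anchored to the figures Brenton quotes) with an assumed convex damage index; none of these numbers come from Wagner and Weitzman:

```python
# Illustrative only: coarse warming scenarios with invented probabilities
# (loosely anchored to the ~10% chance of >4.8 C and ~3% chance of >6 C
# quoted above) and an assumed convex damage index, damage ~ T**3.
scenarios = [
    (2.0, 0.60),  # (warming in degrees C, assumed probability)
    (3.5, 0.27),
    (5.0, 0.10),
    (6.5, 0.03),
]

def damage(temp_c):
    # Assumed convex damage index; the steeper it grows, the more the tails matter.
    return temp_c ** 3

expected = sum(p * damage(t) for t, p in scenarios)
tail = sum(p * damage(t) for t, p in scenarios if t >= 5.0)
print(f"Share of expected damage from the two worst scenarios: {tail / expected:.0%}")
```

Under assumptions like these, the two worst scenarios carry only 13% of the probability but more than half of the expected damage, which is the intuition behind concentrating effort on the very bad outcomes.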

Rob: It does seem like solar power and storage of energy from solar power is going to have the biggest impact on emissions over at least the next 50 years. Anything that can speed up that transition makes a pretty big contribution.

Rob, can you explain your interest in long-term multigenerational indirect effects and what that means?

Rob: If you’re trying to help people and animals thousands of years in the future, you have to help them through a causal chain that involves changing the behavior of someone today and then that’ll help the next generation and so on.

One way to improve the long-term future of humanity is to do very broad things that improve human capabilities like reducing poverty, improving people’s health, making schools better.

But in a world where the more science and technology we develop, the more power we have to destroy civilization, it becomes less clear that broadly improving human capabilities is a great way to make the future go better. If you improve science and technology, you both improve our ability to solve problems and create new problems.

I think about what technologies can we invent that disproportionately make the world safer rather than more risky. It’s great to improve the technology to discover new diseases quickly and to produce vaccines for them quickly, but I’m less excited about generically pushing forward the life sciences because there’s a lot of potential downsides there as well.

Another way that we can robustly prepare humanity to deal with the long-term future is to have better foresight about the problems that we’re going to face. That’s a very concrete thing you can do that puts humanity in a better position to tackle problems in the future — just being able to anticipate those problems well ahead of time so that we can dedicate resources to averting those problems.

To learn more, visit 80000hours.org and subscribe to Rob’s new podcast.

The Future of Humanity Institute Releases Three Papers on Biorisks


Earlier this month, the Future of Humanity Institute (FHI) released three new papers that assess global catastrophic and existential biosecurity risks and offer a cost-benefit analysis of various approaches to dealing with these risks.

The work – done by Piers Millett, Andrew Snyder-Beattie, Sebastian Farquhar, and Owen Cotton-Barratt – looks at what the greatest risks might be, how cost-effective they are to address, and how funding agencies can approach high-risk research.

In one paper, Human Agency and Global Catastrophic Biorisks, Millett and Snyder-Beattie suggest that “the vast majority of global catastrophic biological risk (GCBR) comes from human agency rather than natural sources.” This risk could grow as future technologies allow us to further manipulate our environment and biology. The authors list many of today’s known biological risks, but they also highlight how unknown risks could easily arise in the future as technology advances. They call for a GCBR community that will provide “a space for overlapping interests between the health security communities and the global catastrophic risk communities.”

Millett and Snyder-Beattie also authored the paper, Existential Risk and Cost-Effective Biosecurity. This paper looks at the existential threat of future bioweapons to assess whether the risks are high enough to justify investing in threat-mitigation efforts. They consider a spectrum of biosecurity risks, including biocrimes, bioterrorism, and biowarfare, and they look at three models to estimate the risk of extinction from these weapons. As they state in their conclusion: “Although the probability of human extinction from bioweapons may be extremely low, the expected value of reducing the risk (even by a small amount) is still very large, since such risks jeopardize the existence of all future human lives.”
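To make the shape of that expected-value argument concrete, here is a minimal back-of-the-envelope sketch; every number in it is an illustrative assumption, not an estimate from the paper:

```python
# Back-of-the-envelope expected value of reducing extinction risk from bioweapons.
# Every number here is an illustrative assumption, not the paper's estimate.
future_lives_at_stake = 1e16           # assumed number of potential future lives
annual_extinction_probability = 1e-8   # assumed (very low) yearly risk
relative_risk_reduction = 0.01         # assumed 1% reduction from an intervention

expected_lives_saved_per_year = (
    future_lives_at_stake * annual_extinction_probability * relative_risk_reduction
)
print(f"{expected_lives_saved_per_year:,.0f} expected (statistical) lives per year")
# Even with a tiny probability, the enormous stakes keep the expected value large.
```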

The third paper is Pricing Externalities to Balance Public Risks and Benefits of Research, by Farquhar, Cotton-Barratt, and Snyder-Beattie. Here they consider how scientific funders should “evaluate research with public health risks.” The work was inspired by the controversy surrounding the “gain-of-function” experiments performed on the H5N1 flu virus. The authors propose an approach that translates an estimate of the risk into a financial price, which “can then be included in the cost of the research.” They conclude with the argument that the “approaches discussed would work by aligning the incentives for scientists and for funding bodies more closely with those of society as a whole.”
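A minimal sketch of the pricing idea, assuming a simple expected-cost model; the grant size, release probability, and damage figure below are invented for illustration and are not taken from Farquhar, Cotton-Barratt, and Snyder-Beattie:

```python
# Toy version of pricing a public-health externality into a research budget.
# All figures are invented for illustration.
direct_research_cost = 2_000_000       # assumed grant size, in dollars
probability_of_release = 1e-4          # assumed chance the work causes an outbreak
damage_if_release = 5e9                # assumed societal cost of such an outbreak

externality_price = probability_of_release * damage_if_release
risk_adjusted_cost = direct_research_cost + externality_price

print(f"Externality price:  ${externality_price:,.0f}")   # $500,000
print(f"Risk-adjusted cost: ${risk_adjusted_cost:,.0f}")  # $2,500,000
```

A funder could then weigh the expected benefits of the research against this larger, risk-adjusted cost rather than the grant size alone.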

FHI Quarterly Update (July 2017)

The following update was originally posted on the FHI website:

In the second three months of 2017, FHI has continued its work as before, exploring crucial considerations for the long-run flourishing of humanity in our four research focus areas:

  • Macrostrategy – understanding which crucial considerations shape what is at stake for the future of humanity.
  • AI safety – researching computer science techniques for building safer artificially intelligent systems.
  • AI strategy – understanding how geopolitics, governance structures, and strategic trends will affect the development of advanced artificial intelligence.
  • Biorisk – working with institutions around the world to reduce risk from especially dangerous pathogens.

We have been adapting FHI to our growing size. We’ve secured 50% more office space, which will be shared with the proposed Institute for Effective Altruism. We are developing plans to restructure to make our research management more modular and to streamline our operations team.

We have gained two staff in the last quarter. Tanya Singh is joining us as a temporary administrator, coming from a background in tech start-ups. Laura Pomarius has joined us as a Web Officer with a background in design and project management. Two of our staff will be leaving in this quarter. Kathryn Mecrow is continuing her excellent work at the Centre for Effective Altruism where she will be their Office Manager. Sebastian Farquhar will be leaving to do a DPhil at Oxford but expects to continue close collaboration. We thank them for their contributions and wish them both the best!

Key outputs you can read

A number of co-authors including FHI researchers Katja Grace and Owain Evans surveyed hundreds of researchers to understand their expectations about AI performance trajectories. They found significant uncertainty, but the aggregate subjective probability estimate suggested a 50% chance of high-level AI within 45 years. Of course, the estimates are subjective and expert surveys like this are not necessarily accurate forecasts, though they do reflect the current state of opinion. The survey was widely covered in the press.

An earlier overview of funding in the AI safety field by Sebastian Farquhar highlighted slow growth in AI strategy work. Miles Brundage’s latest piece, released via 80,000 Hours, aims to expand the pipeline of workers for AI strategy by suggesting practical paths for people interested in the area.

Anders Sandberg, Stuart Armstrong, and their co-author Milan Cirkovic published a paper outlining a potential strategy for advanced civilizations to postpone computation until the universe is much colder, thereby gaining up to a 10^30 multiplier in achievable computation. This might explain the Fermi paradox, although a future paper from FHI suggests there may be no paradox to explain.
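One way to see where a multiplier of that order could come from is Landauer’s bound, which ties the minimum energy needed to erase a bit to the background temperature; the far-future temperature used below (roughly the de Sitter horizon scale) is an assumption for illustration, not a figure quoted from the paper:

```python
import math

# Landauer's bound: erasing one bit costs at least k_B * T * ln(2) of energy,
# so a fixed energy budget buys roughly T_now / T_later times more bit
# erasures if computation is postponed until the background temperature drops.
k_B = 1.380649e-23     # Boltzmann constant, J/K
T_now = 2.7            # current cosmic background temperature, K
T_later = 2.7e-30      # assumed far-future temperature (~de Sitter horizon scale), K

energy_per_bit_now = k_B * T_now * math.log(2)
energy_per_bit_later = k_B * T_later * math.log(2)
multiplier = energy_per_bit_now / energy_per_bit_later
print(f"computation multiplier ~ 10^{math.log10(multiplier):.0f}")  # ~10^30
```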

Individual research updates

Macrostrategy and AI Strategy

Nick Bostrom has continued work on AI strategy and the foundations of macrostrategy and is investing in advising some key actors in AI policy. He gave a speech at the G30 in London and presented to CEOs of leading Chinese technology firms in addition to a number of other lectures.

Miles Brundage wrote a career guide for AI policy and strategy, published by 80,000 Hours. He ran a scenario planning workshop on uncertainty in AI futures. He began a paper on verifiable and enforceable agreements in AI safety while a review paper on deep reinforcement learning he co-authored was accepted. He spoke at Newspeak House and participated in a RAND workshop on AI and nuclear security.

Owen Cotton-Barratt organised and led a workshop to explore potential quick-to-implement responses to a hypothetical scenario where AI capabilities grow much faster than the median expected case.

Sebastian Farquhar continued work with the Finnish government on pandemic preparedness, existential risk awareness, and geoengineering. They are currently drafting a white paper in three working groups on those subjects. He is contributing to a technical report on AI and security.

Carrick Flynn began working on structuredly transparent crime detection using AI and encryption and attended EAG Boston.

Clare Lyle has joined as a research intern and has been working with Miles Brundage on AI strategy issues including a workshop report on AI and security.

Toby Ord has continued work on a book on existential risk, worked to recruit two research assistants, ran a forecasting exercise on AI timelines and continues his collaboration with DeepMind on AI safety.

Anders Sandberg is beginning preparation for a book on ‘grand futures’.  A paper by him and co-authors on the aestivation hypothesis was published in the Journal of the British Interplanetary Society. He contributed a report on the statistical distribution of great power war to a Yale workshop, spoke at a workshop on AI at the Johns Hopkins Applied Physics Lab, and at the AI For Good summit in Geneva, among many other workshop and conference contributions. Among many media appearances, he can be found in episodes 2-6 of National Geographic’s series Year Million.

AI Safety

Stuart Armstrong has made progress on a paper on oracle designs and low impact AI, a paper on value learning in collaboration with Jan Leike, and several other collaborations including those with DeepMind researchers. A paper on the aestivation hypothesis co-authored with Anders Sandberg was published.

Eric Drexler has been engaged in a technical collaboration addressing the adversarial example problem in machine learning and has been making progress toward a publication that reframes the AI safety landscape in terms of AI services, structured systems, and path-dependencies in AI research and development.

Owain Evans and his co-authors released their survey of AI researchers on their expectations of future trends in AI. It was covered in the New Scientist, MIT Technology Review, and leading newspapers and is under review for publication. Owain’s team completed a paper on using human intervention to help RL systems avoid catastrophe. Owain and his colleagues further promoted their online textbook on modelling agents.

Jan Leike and his co-authors released a paper on universal reinforcement learning, which makes fewer assumptions about its environment than most reinforcement learners. Jan is a research associate at FHI while working at DeepMind.

Girish Sastry, William Saunders, and Neal Jean have joined as interns and have been helping Owain Evans with research and engineering on the prevention of catastrophes during training of reinforcement learning agents.

Biosecurity

Piers Millett has been collaborating with Andrew Snyder-Beattie on a paper on the cost-effectiveness of interventions in biorisk, and the links between catastrophic biorisks and traditional biosecurity. Piers worked with biorisk organisations including the US National Academies of Science, the global technical synthetic biology meeting (SB7), and training for those overseeing Ebola samples among others.

Funding

FHI is currently in a healthy financial position, although we continue to accept donations. We expect to spend approximately £1.3m over the course of 2017. Including three new hires but no further growth, our current funds plus pledged income should last us until early 2020. Additional funding would likely be used to add to our research capacity in machine learning, technical AI safety and AI strategy. If you are interested in discussing ways to further support FHI, please contact Niel Bowerman.

Recruitment

Over the coming months we expect to recruit for a number of positions. At the moment, we are interested in applications for internships from talented individuals with a machine learning background to work in AI safety. We especially encourage applications from demographic groups currently under-represented at FHI.

GP-write and the Future of Biology

Imagine going to the airport, but instead of walking through – or waiting in – long and tedious security lines, you could walk through a hallway that looks like a terrarium. No lines or waiting. Just a lush, indoor garden. But these plants aren’t something you can find in your neighbor’s yard – their genes have been redesigned to act as sensors, and the plants will change color if someone walks past with explosives.

The Genome Project Write (GP-write) got off to a rocky start last year when it held a “secret” meeting that prohibited journalists. News of the event leaked, and the press quickly turned to fears of designer babies and Frankenstein-like creations. This year, organizers of the meeting learned from the 2016 debacle. Not only did they invite journalists, but they also highlighted work by researchers like June Medford, whose plant research could lead to advancements like the security garden above.

Jef Boeke, one of the lead authors of the GP-write Grand Challenge, emphasized that this project was not just about writing the human genome. “The notion that we could write a human genome is simultaneously thrilling to some and not so thrilling to others,” Boeke told the group. “We recognize that this will take a lot of discussion.”

Boeke explained that the GP-write project will happen in the cells, and the researchers involved are not trying to produce an organism. He added that this work could be used to solve problems associated with climate change and the environment, invasive species, pathogens, and food insecurity.

To learn more about why this project is important, I spoke with genetics researcher John Min about what GP-write is and what it could accomplish. Min is not directly involved with GP-write, but he works with George Church, another one of the lead authors of the project.

Min explained, “We aren’t currently capable of making DNA as long as human chromosomes – we can’t make that from scratch in the laboratory. In this case, they’ll use CRISPR to make very specific cuts in the genome of an existing cell, and either use synthesized DNA to replace whole chunks or add new functionality in.”

He added, “An area of potentially exciting research with this new project is to create a human cell immune to all known viruses. If we can create this in the lab, then we can start to consider how to apply it to people around the world. Or we can use it to build an antibody library against all known viruses. Right now, tackling such a project is completely unaffordable – the costs are just too astronomic.”

But costs aren’t the only reason GP-write is hugely ambitious. It’s also incredibly challenging science. To achieve the objectives mentioned above, scientists will synthesize, from basic chemicals, the building blocks of life. Synthesizing a genome involves slowly editing out tiny segments of genes and replacing them with the new chemical version. Then researchers study each edit to determine what, if anything, changed for the organism involved. Then they repeat this for every single known gene. It is a tedious, time-consuming process, rife with errors and failures that send scientists back to the drawing board over and over, until they finally get just one gene right. On top of that, Min explained, it’s not clear how to tell when a project transitions from editing a cell, to synthesizing it. “How many edits can you make to an organism’s genome before you can say you’ve synthesized it?” he asked.

Clyde Hutchison, working with Craig Venter, recently came closest to answering that question. He and Venter’s team published the first paper describing an attempt to synthesize a simple bacterial genome. The project involved understanding which genes were essential, which genes were inessential, and discovering that some genes are “quasi-essential.” In the process, they uncovered “149 genes with unknown biological functions, suggesting the presence of undiscovered functions that are essential for life.”

This discovery tells us two things. First, it shows just how enormous the GP-write project is. To find 149 unknown genes in simple bacteria offers just a taste of how complicated the genomes of more advanced organisms will be. Kris Saha, Assistant Professor of Biomedical Engineering at the University of Wisconsin-Madison, explained this to the Genetic Experts News Service:

“The evolutionary leap between a bacterial cell, which does not have a nucleus, and a human cell is enormous. The human genome is organized differently and is much more complex. […] We don’t entirely understand how the genome is organized inside of a typical human cell. So given the heroic effort that was needed to make a synthetic bacterial cell, a similar if not more intense effort will be required – even to make a simple mammalian or eukaryotic cell, let alone a human cell.”

Second, this discovery gives us a clue as to how much more GP-write could tell us about how biology and the human body work. If we can uncover unknown functions within DNA, how many diseases could we eliminate? Could we cure aging? Could we increase our energy levels? Could we boost our immunities? Are there risks we need to prepare for?

The best assumption for that last question is: yes.

“Safety is one of our top priorities,” said Church at the event’s press conference, which included other leaders of the project. They said they expect safeguards to be engineered into research “from the get-go,” and part of the review process would include assessments of whether research within the project could be developed to have both positive and negative outcomes, known as Dual Use Research of Concern (DURC).

The meeting included roughly 250 people from 10 countries with backgrounds in science, ethics, law, government, and more. In general, the energy at the conference was one of excitement about the possibilities that GP-write could unleash.

“This project not only changes the way the world works, but it changes the way we work in the world,” said GP-write lead author Nancy J. Kelley.

Why 2016 Was Actually a Year of Hope

Just about everyone found something to dislike about 2016, from wars to politics and celebrity deaths. But hidden within this year’s news feeds were some really exciting news stories. And some of them can even give us hope for the future.

Artificial Intelligence

Though concerns about the future of AI still loom, 2016 was a great reminder that, when harnessed for good, AI can help humanity thrive.

AI and Health

Some of the most promising and hopefully more immediate breakthroughs and announcements were related to health. Google’s DeepMind announced a new division that would focus on helping doctors improve patient care. Harvard Business Review considered what an AI-enabled hospital might look like, which would improve the hospital experience for the patient, the doctor, and even the patient’s visitors and loved ones. A breakthrough from MIT researchers could see AI used to more quickly and effectively design new drug compounds that could be applied to a range of health needs.

More specifically, Microsoft wants to cure cancer, and the company has been working with research labs and doctors around the country to use AI to improve cancer research and treatment. But Microsoft isn’t the only company that hopes to cure cancer. DeepMind Health also partnered with University College London’s hospitals to apply machine learning to diagnose and treat head and neck cancers.

AI and Society

Other researchers are turning to AI to help solve social issues. While AI has what is known as the “white guy problem” and examples of bias cropped up in many news articles, Fei-Fei Li has been working with STEM girls at Stanford to bridge the gender gap. Stanford researchers also published research that suggests artificial intelligence could help us use satellite data to combat global poverty.

It was also a big year for research on how to keep artificial intelligence safe as it continues to develop. Google and the Future of Humanity Institute made big headlines with their work to design a “kill switch” for AI. Google Brain also published a research agenda on various problems AI researchers should be studying now to help ensure safe AI for the future.

Even the White House got involved in AI this year, hosting four symposia on AI and releasing reports in October and December about the potential impact of AI and the necessary areas of research. The White House reports are especially focused on the possible impact of automation on the economy, but they also look at how the government can contribute to AI safety, especially in the near future.

AI in Action

And of course there was AlphaGo. In January, Google’s DeepMind published a paper, which announced that the company had created a program, AlphaGo, that could beat one of Europe’s top Go players. Then, in March, in front of a live audience, AlphaGo beat the reigning world champion of Go in four out of five games. These results took the AI community by surprise and indicate that artificial intelligence may be progressing more rapidly than many in the field realized.

And AI went beyond research labs this year to be applied practically and beneficially in the real world. Perhaps most hopeful was some of the news that came out about the ways AI has been used to address issues connected with pollution and climate change. For example, IBM has had increasing success with a program that can forecast pollution in China, giving residents advanced warning about days of especially bad air. Meanwhile, Google was able to reduce its power usage by using DeepMind’s AI to manipulate things like its cooling systems.

And speaking of addressing climate change…

Climate Change

With recent news from climate scientists indicating that climate change may be coming on faster and stronger than previously anticipated and with limited political action on the issue, 2016 may not have made climate activists happy. But even here, there was some hopeful news.

Among the biggest news was the ratification of the Paris Climate Agreement. But more generally, countries, communities and businesses came together on various issues of global warming, and Voice of America offers five examples of how this was a year of incredible, global progress.

But there was also news of technological advancements that could soon help us address climate issues more effectively. Scientists at Oak Ridge National Laboratory have discovered a way to convert CO2 into ethanol. A researcher from UC Berkeley has developed a method for artificial photosynthesis, which could help us more effectively harness the energy of the sun. And a multi-disciplinary team has genetically engineered bacteria that could be used to help combat global warming.

Biotechnology

Biotechnology, with fears of designer babies and manmade pandemics, is easily one of the most feared technologies. But rather than causing harm, the latest biotech advances could help to save millions of people.

CRISPR

In the course of about two years, CRISPR-Cas9 went from a new development to what could become one of the world’s greatest advances in biology. Results of studies early in the year were promising, but as the year progressed, the news just got better. CRISPR was used to successfully remove HIV from human immune cells. A team in China used CRISPR on a patient for the first time in an attempt to treat lung cancer (treatments are still ongoing), and researchers in the US have also received approval to test CRISPR cancer treatments in patients. And CRISPR was also used to partially restore sight to blind animals.

Gene Drive

Where CRISPR could have the most dramatic, life-saving effect is in gene drives. By using CRISPR to modify the genes of an invasive species, we could potentially eliminate the unwelcome plant or animal, reviving the local ecology and saving native species that may be on the brink of extinction. But perhaps most impressive is the hope that gene drive technology could be used to end mosquito- and tick-borne diseases, such as malaria, dengue, Lyme, etc. Eliminating these diseases could easily save over a million lives every year.

Other Biotech News

The year saw other biotech advances as well. Researchers at MIT addressed a major problem in synthetic biology in which engineered genetic circuits interfere with each other. Another team at MIT engineered an antimicrobial peptide that can eliminate many types of bacteria, including some of the antibiotic-resistant “superbugs.” And various groups are also using CRISPR to create new ways to fight antibiotic-resistant bacteria.

Nuclear Weapons

If ever there was a topic that does little to inspire hope, it’s nuclear weapons. Yet even here we saw some positive signs this year. The Cambridge City Council voted to divest their $1 billion pension fund from any companies connected with nuclear weapons, which earned them an official commendation from the U.S. Conference of Mayors. In fact, divestment may prove a useful tool for the general public to express their displeasure with nuclear policy, which will be good, since one cause for hope is that the growing awareness of the nuclear weapons situation will help stigmatize the new nuclear arms race.

In February, Londoners held the largest anti-nuclear rally Britain had seen in decades, and the following month MinutePhysics posted a video about nuclear weapons that’s been seen by nearly 1.3 million people. In May, scientific and religious leaders came together to call for steps to reduce nuclear risks. And all of that pales in comparison to the attention the U.S. elections brought to the risks of nuclear weapons.

As awareness of nuclear risks grows, so do our chances of instigating the change necessary to reduce those risks.

The United Nations Takes on Weapons

But if awareness alone isn’t enough, then recent actions by the United Nations may instead be a source of hope. As October came to a close, the United Nations voted to begin negotiations on a treaty that would ban nuclear weapons. While this might not have an immediate impact on nuclear weapons arsenals, the stigmatization caused by such a ban could increase pressure on countries and companies driving the new nuclear arms race.

The U.N. also announced recently that it would officially begin looking into the possibility of a ban on lethal autonomous weapons, a cause that’s been championed by Elon Musk, Steve Wozniak, Stephen Hawking and thousands of AI researchers and roboticists in an open letter.

Looking Ahead

And why limit our hope and ambition to merely one planet? This year, a group of influential scientists led by Yuri Milner announced Breakthrough Starshot, a plan to send tiny, light-propelled probes to Alpha Centauri, our nearest star system. Elon Musk later announced his plans to colonize Mars. And an MIT scientist wants to make these trips possible for humans by using CRISPR to reengineer our own genes to keep us safe in space.

Yet for all of these exciting events and breakthroughs, perhaps what’s most inspiring and hopeful is that this represents only a tiny sampling of all of the amazing stories that made the news this year. If trends like these keep up, there’s plenty to look forward to in 2017.

Podcast: FLI 2016 – A Year In Review

For FLI, 2016 was a great year, full of our own success, but also great achievements from so many of the organizations we work with. Max, Meia, Anthony, Victoria, Richard, Lucas, David, and Ariel discuss what they were most excited to see in 2016 and what they’re looking forward to in 2017.

AGUIRRE: I’m Anthony Aguirre. I am a professor of physics at UC Santa Cruz, and I’m one of the founders of the Future of Life Institute.

STANLEY: I’m David Stanley, and I’m currently working with FLI as a Project Coordinator/Volunteer Coordinator.

PERRY: My name is Lucas Perry, and I’m a Project Coordinator with the Future of Life Institute.

TEGMARK: I’m Max Tegmark, and I have the fortune to be the President of the Future of Life Institute.

CHITA-TEGMARK: I’m Meia Chita-Tegmark, and I am a co-founder of the Future of Life Institute.

MALLAH: Hi, I’m Richard Mallah. I’m the Director of AI Projects at the Future of Life Institute.

KRAKOVNA: Hi everyone, I am Victoria Krakovna, and I am one of the co-founders of FLI. I’ve recently taken up a position at Google DeepMind working on AI safety.

CONN: And I’m Ariel Conn, the Director of Media and Communications for FLI. 2016 has certainly had its ups and downs, and so at FLI, we count ourselves especially lucky to have had such a successful year. We’ve continued to progress with the field of AI safety research, we’ve made incredible headway with our nuclear weapons efforts, and we’ve worked closely with many amazing groups and individuals. On that last note, much of what we’ve been most excited about throughout 2016 is the great work these other groups in our fields have also accomplished.

Over the last couple of weeks, I’ve sat down with our founders and core team to rehash their highlights from 2016 and also to learn what they’re all most looking forward to as we move into 2017.

To start things off, Max gave a summary of the work that FLI does and why 2016 was such a success.

TEGMARK: What I was most excited by in 2016 was the overall sense that people are taking seriously this idea – that we really need to win this race between the growing power of our technology and the wisdom with which we manage it. Every single way in which 2016 is better than the Stone Age is because of technology, and I’m optimistic that we can create a fantastic future with tech as long as we win this race. But in the past, the way we’ve kept one step ahead is always by learning from mistakes. We invented fire, messed up a bunch of times, and then invented the fire extinguisher. We at the Future of Life Institute feel that that strategy of learning from mistakes is a terrible idea for more powerful tech, like nuclear weapons, artificial intelligence, and things that can really alter the climate of our globe.

Now, in 2016 we saw multiple examples of people trying to plan ahead and to avoid problems with technology instead of just stumbling into them. In April, we had world leaders getting together and signing the Paris Climate Accords. In November, the United Nations General Assembly voted to start negotiations about nuclear weapons next year. The question is whether they should actually ultimately be phased out; whether the nations that don’t have nukes should work towards stigmatizing building more of them – with the idea that 14,000 is way more than anyone needs for deterrence. And – just the other day – the United Nations also decided to start negotiations on the possibility of banning lethal autonomous weapons, which is another arms race that could be very, very destabilizing. And if we keep this positive momentum, I think there’s really good hope that all of these technologies will end up having mainly beneficial uses.

Today, we think of our biologist friends as mainly responsible for the fact that we live longer and healthier lives, and not as those guys who make the bioweapons. We think of chemists as providing us with better materials and new ways of making medicines, not as the people who built chemical weapons and are all responsible for global warming. We think of AI scientists as – I hope, when we look back on them in the future – as people who helped make the world better, rather than the ones who just brought on the AI arms race. And it’s very encouraging to me that as much as people in general – but also the scientists in all these fields – are really stepping up and saying, “Hey, we’re not just going to invent this technology, and then let it be misused. We’re going to take responsibility for making sure that the technology is used beneficially.”

CONN: And beneficial AI is what FLI is primarily known for. So what did the other members have to say about AI safety in 2016? We’ll hear from Anthony first.

AGUIRRE: I would say that what has been great to see over the last year or so is the AI safety and beneficiality research field really growing into an actual research field. When we ran our first conference a couple of years ago, they were these tiny communities who had been thinking about the impact of artificial intelligence in the future and in the long-term future. They weren’t really talking to each other; they weren’t really doing much actual research – there wasn’t funding for it. So, to see in the last few years that transform into something where it takes a massive effort to keep track of all the stuff that’s being done in this space now. All the papers that are coming out, the research groups – you sort of used to be able to just find them all, easily identified. Now, there’s this huge worldwide effort and long lists, and it’s difficult to keep track of. And that’s an awesome problem to have.

As someone who’s not in the field, but sort of watching the dynamics of the research community, that’s what’s been so great to see. A research community that wasn’t there before really has started, and I think in the past year we’re seeing the actual results of that research start to come in. You know, it’s still early days. But it’s starting to come in, and we’re starting to see papers that have been basically created using these research talents and the funding that’s come through the Future of Life Institute. It’s been super gratifying. And seeing that it’s a fairly large amount of money – but fairly small compared to the total amount of research funding in artificial intelligence or other fields – but because it was so funding-starved and talent-starved before, it’s just made an enormous impact. And that’s been nice to see.

CONN: Not surprisingly, Richard was equally excited to see AI safety becoming a field of ever-increasing interest for many AI groups.

MALLAH: I’m most excited by the continued mainstreaming of AI safety research. There are more and more publications coming out by places like DeepMind and Google Brain that have really lent additional credibility to the space, as well as a continued uptake of more and more professors, and postdocs, and grad students from a wide variety of universities entering this space. And, of course, OpenAI has come out with a number of useful papers and resources.

I’m also excited that governments have really realized that this is an important issue. So, while the White House reports have come out recently focusing more on near-term AI safety research, they did note that longer-term concerns like superintelligence are not necessarily unreasonable for later this century. And that they do support – right now – funding safety work that can scale toward the future, which is really exciting. We really need more funding coming into the community for that type of research. Likewise, other governments – like the U.K. and Japan, Germany – have all made very positive statements about AI safety in one form or another. And other governments around the world.

CONN: In addition to seeing so many other groups get involved in AI safety, Victoria was also pleased to see FLI taking part in so many large AI conferences.

KRAKOVNA: I think I’ve been pretty excited to see us involved in these AI safety workshops at major conferences. So on the one hand, our conference in Puerto Rico that we organized ourselves was very influential and helped to kick-start making AI safety more mainstream in the AI community. On the other hand, it felt really good in 2016 to complement that with having events that are actually part of major conferences that were co-organized by a lot of mainstream AI researchers. I think that really was an integral part of the mainstreaming of the field. For example, I was really excited about the Reliable Machine Learning workshop at ICML that we helped to make happen. I think that was something that was quite positively received at the conference, and there was a lot of good AI safety material there.

CONN: And of course, Victoria was also pretty excited about some of the papers that were published this year connected to AI safety, many of which received at least partial funding from FLI.

KRAKOVNA: There were several excellent papers in AI safety this year, addressing core problems in safety for machine learning systems. For example, there was a paper from Stuart Russell’s lab published at NIPS, on cooperative IRL. This is about teaching AI what humans want – how to train an RL algorithm to learn the right reward function that reflects what humans want it to do. DeepMind and FHI published a paper at UAI on safely interruptible agents, that formalizes what it means for an RL agent not to have incentives to avoid shutdown. MIRI made an impressive breakthrough with their paper on logical inductors. I’m super excited about all these great papers coming out, and that our grant program contributed to these results.

CONN: For Meia, the excitement about AI safety went beyond just the technical aspects of artificial intelligence.

CHITA-TEGMARK: I am very excited about the dialogue that FLI has catalyzed – and also engaged in – throughout 2016, and especially regarding the impact of technology on society. My training is in psychology; I’m a psychologist. So I’m very interested in the human aspect of technology development. I’m very excited about questions like, how are new technologies changing us? How ready are we to embrace new technologies? Or how our psychological biases may be clouding our judgement about what we’re creating and the technologies that we’re putting out there. Are these technologies beneficial for our psychological well-being, or are they not?

So it has been extremely interesting for me to see that these questions are being asked more and more, especially by artificial intelligence developers and also researchers. I think it’s so exciting to be creating technologies that really force us to grapple with some of the most fundamental aspects, I would say, of our own psychological makeup. For example, our ethical values, our sense of purpose, our well-being, maybe our biases and shortsightedness and shortcomings as biological human beings. So I’m definitely very excited about how the conversation regarding technology – and especially artificial intelligence – has evolved over the last year. I like the way it has expanded to capture this human element, which I find so important. But I’m also so happy to feel that FLI has been an important contributor to this conversation.

CONN: Meanwhile, as Max described earlier, FLI has also gotten much more involved in decreasing the risk of nuclear weapons, and Lucas helped spearhead one of our greatest accomplishments of the year.

PERRY: One of the things that I was most excited about was our success with our divestment campaign. After a few months, we had great success in our own local Boston area with helping the City of Cambridge to divest its $1 billion portfolio from nuclear weapon producing companies. And we see this as a really big and important victory within our campaign to help institutions, persons, and universities to divest from nuclear weapons producing companies.

CONN: And in order to truly be effective we need to reach an international audience, which is something Dave has been happy to see grow this year.

STANLEY: I’m mainly excited about – at least, in my work – the increasing involvement and response we’ve had from the international community in terms of reaching out about these issues. I think it’s pretty important that we engage the international community more, and not just academics. Because these issues – things like nuclear weapons and the increasing capabilities of artificial intelligence – really will affect everybody. And they seem to be really underrepresented in mainstream media coverage as well.

So far, we’ve had pretty good responses just in terms of volunteers from many different countries around the world being interested in getting involved to help raise awareness in their respective communities, either through helping develop apps for us, or translation, or promoting just through social media these ideas in their little communities.

CONN: Many FLI members also participated in both local and global events and projects, as we’re about to hear from Victoria, Richard, Lucas, and Meia.

KRAKOVNA: The EAGX Oxford Conference was a fairly large conference. It was very well organized, and we had a panel there with Demis Hassabis, Nate Soares from MIRI, Murray Shanahan from Imperial, Toby Ord from FHI, and myself. I feel like overall, that conference did a good job of, for example, connecting the local EA community with the people at DeepMind, who are really thinking about AI safety concerns like Demis and also Sean Legassick, who also gave a talk about the ethics and impacts side of things. So I feel like that conference overall did a good job of connecting people who are thinking about these sorts of issues, which I think is always a great thing.  

MALLAH: I was involved in this endeavor with IEEE regarding autonomy and ethics in autonomous systems, sort of representing FLI’s positions on things like autonomous weapons and long-term AI safety. One thing that came out this year – just a few days ago, actually, due to this work from IEEE – is that the UN actually took the report pretty seriously, and it may have influenced their decision to take up the issue of autonomous weapons formally next year. That’s kind of heartening.

PERRY: A few different things that I really enjoyed doing were giving a few different talks at Duke and Boston College, and a local effective altruism conference. I’m also really excited about all the progress we’re making on our nuclear divestment application. So this is an application that will allow anyone to search their mutual fund and see whether or not their mutual funds have direct or indirect holdings in nuclear weapons-producing companies.

CHITA-TEGMARK:  So, a wonderful moment for me was at the conference organized by Yann LeCun in New York at NYU, when Daniel Kahneman, one of my thinker-heroes, asked a very important question that really left the whole audience in silence. He asked, “Does this make you happy? Would AI make you happy? Would the development of a human-level artificial intelligence make you happy?” I think that was one of the defining moments, and I was very happy to participate in this conference.

Later on, David Chalmers, another one of my thinker-heroes – this time, not the psychologist but the philosopher – organized another conference, again at NYU, trying to bring philosophers into this very important conversation about the development of artificial intelligence. And again, I felt there too, that FLI was able to contribute and bring in this perspective of the social sciences on this issue.

CONN: Now, with 2016 coming to an end, it’s time to turn our sights to 2017, and FLI is excited for this new year to be even more productive and beneficial.

TEGMARK: We at the Future of Life Institute are planning to focus primarily on artificial intelligence, and on reducing the risk of accidental nuclear war in various ways. We’re kicking off by having an international conference on artificial intelligence, and then we want to continue throughout the year providing really high-quality and easily accessible information on all these key topics, to help inform on what happens with climate change, with nuclear weapons, with lethal autonomous weapons, and so on.

And looking ahead here, I think it’s important right now – especially since a lot of people are very stressed out about the political situation in the world, about terrorism, and so on – to not ignore the positive trends and the glimmers of hope we can see as well.

CONN: As optimistic as FLI members are about 2017, we’re all also especially hopeful and curious to see what will happen with continued AI safety research.

AGUIRRE: I would say I’m looking forward to seeing in the next year more of the research that comes out, and really sort of delving into it myself, and understanding how the field of artificial intelligence and artificial intelligence safety is developing. And I’m very interested in this from the forecast and prediction standpoint.

I’m interested in trying to draw some of the AI community into really understanding how artificial intelligence is unfolding – in the short term and the medium term – as a way to understand, how long do we have? Is it, you know, if it’s really infinity, then let’s not worry about that so much, and spend a little bit more on nuclear weapons and global warming and biotech, because those are definitely happening. If human-level AI were 8 years away… honestly, I think we should be freaking out right now. And most people don’t believe that, I think most people are in the middle it seems, of thirty years or fifty years or something, which feels kind of comfortable. Although it’s not that long, really, on the big scheme of things. But I think it’s quite important to know now, which is it? How fast are these things, how long do we really have to think about all of the issues that FLI has been thinking about in AI? How long do we have before most jobs in industry and manufacturing are replaceable by a robot being slotted in for a human? That may be 5 years, it may be fifteen… It’s probably not fifty years at all. And having a good forecast on those good short-term questions I think also tells us what sort of things we have to be thinking about now.

And I’m interested in seeing how this massive AI safety community that’s started develops. It’s amazing to see centers kind of popping up like mushrooms after a rain all over and thinking about artificial intelligence safety. This partnership on AI between Google and Facebook and a number of other large companies getting started. So to see how those different individual centers will develop and how they interact with each other. Is there an overall consensus on where things should go? Or is it a bunch of different organizations doing their own thing? Where will governments come in on all of this? I think it will be interesting times. So I look forward to seeing what happens, and I will reserve judgement in terms of my optimism.

KRAKOVNA: I’m really looking forward to AI safety becoming even more mainstream, and even more of the really good researchers in AI giving it serious thought. Something that happened in the past year that I was really excited about, that I think is also pointing in this direction, is the research agenda that came out of Google Brain called “Concrete Problems in AI Safety.” And I think I’m looking forward to more things like that happening, where AI safety becomes sufficiently mainstream that people who are working in AI just feel inspired to do things like that and just think from their own perspectives: what are the important problems to solve in AI safety? And work on them.

I’m a believer in the portfolio approach with regards to AI safety research, where I think we need a lot of different research teams approaching the problems from different angles and making different assumptions, and hopefully some of them will make the right assumption. I think we are really moving in the direction in terms of more people working on these problems, and coming up with different ideas. And I look forward to seeing more of that in 2017. I think FLI can also help continue to make this happen.

MALLAH: So, we’re in the process of fostering additional collaboration among people in the AI safety space. And we will have more announcements about this early next year. We’re also working on resources to help people better visualize and better understand the space of AI safety work, and the opportunities there and the work that has been done. Because it’s actually quite a lot.

I’m also pretty excited about fostering continued theoretical work and practical work in making AI more robust and beneficial. The work in value alignment, for instance, is not something we see supported in mainstream AI research. And this is something that is pretty crucial to the way that advanced AIs will need to function. It won’t be very explicit instructions to them; they’ll have to be making decisions based on what they think is right. And what is right? It’s something that… or even structuring the way to think about what is right requires some more research.

STANLEY: We’ve had pretty good success at FLI in the past few years helping to legitimize the field of AI safety. And I think it’s going to be important because AI is playing a large role in industry and there’s a lot of companies working on this, and not just in the US. So I think increasing international awareness about AI safety is going to be really important.

CHITA-TEGMARK: I believe that the AI community has raised some very important questions in 2016 regarding the impact of AI on society. I feel like 2017 should be the year to make progress on these questions, and actually research them and have some answers to them. For this, I think we need more social scientists – among people from other disciplines – to join this effort of really systematically investigating what would be the optimal impact of AI on people. I hope that in 2017 we will have more research initiatives and that we will attempt to systematically study other burning questions regarding the impact of AI on society. Some examples are: how can we ensure the psychological well-being of people while AI creates lots of displacement on the job market, as many people predict? How do we optimize engagement with technology, and withdrawal from it also? Will some people be left behind, like the elderly or the economically disadvantaged? How will this affect them, and how will this affect society at large?

What about withdrawal from technology? What about satisfying our need for privacy? Will we be able to do that, or is the price of having more and more customized technologies and more and more personalization of the technologies we engage with… will that mean that we will have no privacy anymore, or that our expectations of privacy will be very seriously violated? I think these are some very important questions that I would love to get some answers to. And my wish, and also my resolution, for 2017 is to see more progress on these questions, and to hopefully also be part of this work and answering them.

PERRY: In 2017 I’m very interested in pursuing the landscape of different policy and principle recommendations from different groups regarding artificial intelligence. I’m also looking forward to expanding our nuclear divestment campaign by trying to introduce divestment to new universities, institutions, communities, and cities.

CONN: In fact, some experts believe nuclear weapons pose a greater threat now than at any time during our history.

TEGMARK: I personally feel that the greatest threat to the world in 2017 is one that the newspapers almost never write about. It’s not terrorist attacks, for example. It’s the small but horrible risk that the U.S. and Russia for some stupid reason get into an accidental nuclear war against each other. We have 14,000 nuclear weapons, and this war has almost happened many, many times. So, actually what’s quite remarkable and really gives a glimmer of hope is that – however people may feel about Putin and Trump – the fact is they are both signaling strongly that they are eager to get along better. And if that actually pans out and they manage to make some serious progress in nuclear arms reduction, that would make 2017 the best year for nuclear weapons we’ve had in a long, long time, reversing this trend of ever greater risks with ever more lethal weapons.

CONN: Some FLI members are also looking beyond nuclear weapons and artificial intelligence, as I learned when I asked Dave about other goals he hopes to accomplish with FLI this year.

STANLEY: Definitely having the volunteer team – particularly the international volunteers – continue to grow, and then scale things up. Right now, we have a fairly committed core of people who are helping out, and we think that they can start recruiting more people to help out in their little communities, and really making this stuff accessible. Not just to academics, but to everybody. And that’s also reflected in the types of people we have working for us as volunteers. They’re not just academics. We have programmers, linguists, people having just high school degrees all the way up to Ph.D.’s, so I think it’s pretty good that this varied group of people can get involved and contribute, and also reach out to other people they can relate to.

CONN: In addition to getting more people involved, Meia also pointed out that one of the best ways we can help ensure a positive future is to continue to offer people more informative content.

CHITA-TEGMARK: Another thing that I’m very excited about regarding our work here at the Future of Life Institute is this mission of empowering people with information. I think information is very powerful and can change the way people approach things: it can change their beliefs, their attitudes, and their behaviors as well. And by creating ways in which information can be readily distributed to the people, and with which they can engage very easily, I hope that we can create changes. For example, we’ve had a series of different apps regarding nuclear weapons that I think have contributed a lot to people’s knowledge and have brought this issue to the forefront of their thinking.

CONN: Yet as important as it is to highlight the existential risks we must address to keep humanity safe, perhaps it’s equally important to draw attention to the incredible hope we have for the future if we can solve these problems. Which is something both Richard and Lucas brought up for 2017.

MALLAH: I’m excited about trying to foster more positive visions of the future, so focusing on existential hope aspects of the future. Which are kind of the flip side of existential risks. So we’re looking at various ways of getting people to be creative about understanding some of the possibilities, and how to differentiate the paths between the risks and the benefits.

PERRY: Yeah, I’m also interested in creating and generating a lot more content that has to do with existential hope. Given the current global political climate, it’s all the more important to focus on how we can make the world better.

CONN: And on that note, I want to mention one of the most amazing things I discovered this past year. It had nothing to do with technology, and everything to do with people. Since starting at FLI, I’ve met countless individuals who are dedicating their lives to trying to make the world a better place. We may have a lot of problems to solve, but with so many groups focusing solely on solving them, I’m far more hopeful for the future. There are truly too many individuals that I’ve met this year to name them all, so instead, I’d like to provide a rather long list of groups and organizations I’ve had the pleasure to work with this year. A link to each group can be found at futureoflife.org/2016, and I encourage you to visit them all to learn more about the wonderful work they’re doing. In no particular order, they are:

Machine Intelligence Research Institute

Future of Humanity Institute

Global Catastrophic Risk Institute

Center for the Study of Existential Risk

Ploughshares Fund

Bulletin of the Atomic Scientists

Open Philanthropy Project

Union of Concerned Scientists

The William Perry Project

ReThink Media

Don’t Bank on the Bomb

Federation of American Scientists

Massachusetts Peace Action

IEEE (Institute of Electrical and Electronics Engineers)

Center for Human-Compatible Artificial Intelligence

Center for Effective Altruism

Center for Applied Rationality

Foresight Institute

Leverhulme Center for the Future of Intelligence

Global Priorities Project

Association for the Advancement of Artificial Intelligence

International Joint Conference on Artificial Intelligence

Partnership on AI

The White House Office of Science and Technology Policy

The Future Society at Harvard Kennedy School

 

I couldn’t be more excited to see what 2017 holds in store for us, and all of us at FLI look forward to doing all we can to help create a safe and beneficial future for everyone. But to end on an even more optimistic note, I turn back to Max.

TEGMARK: Finally, I’d like – because I spend a lot of my time thinking about our universe – to remind everybody that we shouldn’t just be focused on the next election cycle. We have not decades, but billions of years of potentially awesome future for life, on Earth and far beyond. And it’s so important to not let ourselves get so distracted by our everyday little frustrations that we lose sight of these incredible opportunities that we all stand to gain from if we can get along, and focus, and collaborate, and use technology for good.

Artificial Photosynthesis: Can We Harness the Energy of the Sun as Well as Plants?

Click here to see this page in other languages: Russian

In the early 1900s, the Italian chemist Giacomo Ciamician recognized that fossil fuel use was unsustainable. And like many of today’s environmentalists, he turned to nature for clues on developing renewable energy solutions, studying the chemistry of plants and their use of solar energy. He admired their unparalleled mastery of photochemical synthesis—the way they use light to synthesize energy-rich compounds from the most fundamental of substances—and how “they reverse the ordinary process of combustion.”

In photosynthesis, Ciamician realized, lay an entirely renewable process of energy storage. When sunlight reaches the surface of a green leaf, it sets off a reaction inside the leaf. Chloroplasts, energized by the light, trigger the production of chemical products—essentially sugars—which store the energy so that the plant can access it later for its biological needs. The plant harvests the immense and constant supply of solar energy, absorbs carbon dioxide and water, and releases oxygen; there is no other waste.

If scientists could learn to imitate photosynthesis by providing concentrated carbon dioxide and suitable catalysts, they could create fuels from solar energy. Ciamician was taken by the seeming simplicity of this solution. Inspired by small successes in the chemical manipulation of plants, he wondered, “does it not seem that, with well-adapted systems of cultivation and timely intervention, we may succeed in causing plants to produce, in quantities much larger than the normal ones, the substances which are useful to our modern life?”

In 1912, Ciamician sounded the alarm about the unsustainable use of fossil fuels and exhorted the scientific community to explore artificially recreating photosynthesis. But little was done. A century later, however, in the midst of a climate crisis and armed with improved technology and growing scientific knowledge, researchers finally delivered the breakthrough his vision called for.

After more than ten years of research and experimentation, Peidong Yang, a chemist at UC Berkeley, successfully created the first photosynthetic biohybrid system (PBS) in April 2015. This first-generation PBS uses semiconductors and live bacteria to do the photosynthetic work that real leaves do—absorb solar energy and create a chemical product using water and carbon dioxide, while releasing oxygen—but instead of sugars, it produces liquid fuels. The process is called artificial photosynthesis, and if the technology continues to improve, it may become the future of energy.

How Does This System Work?

Yang’s PBS can be thought of as a synthetic leaf. It is a one-square-inch tray that contains silicon semiconductors and living bacteria, in what Yang calls a semiconductor-bacteria interface.

In order to initiate the process of artificial photosynthesis, Yang dips the tray of materials into water, pumps carbon dioxide into the water, and shines a solar light on it. As the semiconductors harvest solar energy, they generate charges to carry out reactions within the solution. The bacteria take electrons from the semiconductors and use them to transform, or reduce, carbon dioxide molecules and create liquid fuels. In the meantime, water is oxidized on the surface of another semiconductor to release oxygen. After several hours or several days of this process, the chemists can collect the product.
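In simplified terms, the system pairs two half-reactions: the semiconductors use solar energy to oxidize water, and the bacteria use the liberated electrons and protons to reduce carbon dioxide. Taking acetic acid (acetate), one of the system’s products, as an example, the overall chemistry looks roughly like this; this is a simplified sketch that omits the bacteria’s internal metabolic pathways:

\[ 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \qquad \text{(water oxidation at the semiconductor)} \]

\[ 2\,\mathrm{CO_2} + 8\,\mathrm{H^+} + 8\,e^- \;\longrightarrow\; \mathrm{CH_3COOH} + 2\,\mathrm{H_2O} \qquad \text{(carbon dioxide reduction by the bacteria)} \]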

With this first-generation system, Yang successfully produced butanol, acetate, polymers, and pharmaceutical precursors, fulfilling Ciamician’s once-far-fetched vision of imitating plants to create the fuels that we need. This PBS achieved a solar-to-chemical conversion efficiency of 0.38%, which is comparable to the conversion efficiency in a natural, green leaf.
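For readers unfamiliar with the metric, solar-to-chemical conversion efficiency is simply the fraction of the sunlight energy striking the device that ends up stored in the chemical bonds of its products:

\[ \eta_{\text{solar-to-chemical}} \;=\; \frac{\text{chemical energy stored in the products}}{\text{solar energy incident on the device}} \]

At 0.38%, roughly 4 joules out of every 1,000 joules of incident sunlight end up stored as fuel, which sounds small but, as noted above, is on par with what a green leaf achieves.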


A diagram of the first-generation artificial photosynthesis, with its four main steps.

Describing his research, Yang says, “Our system has the potential to fundamentally change the chemical and oil industry in that we can produce chemicals and fuels in a totally renewable way, rather than extracting them from deep below the ground.”

If Yang’s system can be successfully scaled up, businesses could build artificial forests that produce the fuel for our cars, planes, and power plants by following the same laws and processes that natural forests follow. Since artificial photosynthesis would absorb and reduce carbon dioxide in order to create fuels, we could continue to use liquid fuel without destroying the environment or warming the planet.

However, in order to ensure that artificial photosynthesis can reliably produce our fuels in the future, it has to be better than nature, as Ciamician foresaw. Our need for renewable energy is urgent, and Yang’s model must be able to provide energy on a global scale if it is to eventually replace fossil fuels.

Recent Developments in Yang’s Artificial Photosynthesis

Since the major breakthrough in April 2015, Yang has continued to improve his system in hopes of eventually producing fuels that are commercially viable, efficient, and durable.

In August 2015, Yang and his team tested his system with a different type of bacteria. The method is the same, except instead of electrons, the bacteria use molecular hydrogen from water molecules to reduce carbon dioxide and create methane, the primary component of natural gas. This process is projected to have an impressive conversion efficiency of 10%, which is much higher than the conversion efficiency in natural leaves.
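The net reaction carried out by these methane-producing bacteria is, in simplified form, the reduction of carbon dioxide by hydrogen:

\[ \mathrm{CO_2} + 4\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O} \]

Here the semiconductors’ role is to use solar energy to split water and supply that hydrogen, while the bacteria handle the carbon chemistry.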

A conversion efficiency of 10% could potentially be commercially viable, but since methane is a gas it is more difficult to use than liquid fuels such as butanol, which can be transferred through pipes. Overall, this new generation of PBS needs to be designed and assembled in order to achieve a solar-to-liquid-fuel efficiency above 10%.


A diagram of this second-generation PBS that produces methane.

In December 2015, Yang advanced his system further by making the remarkable discovery that certain bacteria could grow the semiconductors by themselves. This discovery eliminated the previous two-step process of growing the nanowires and then culturing the bacteria in them. The improved semiconductor-bacteria interface could potentially be more efficient in producing acetate, as well as other chemicals and fuels, according to Yang. And in terms of scaling up, it has the greatest potential.


A diagram of this third-generation PBS that produces acetate.

In the past few weeks, Yang made yet another important breakthrough, elucidating the electron transfer mechanism at the semiconductor-bacteria interface. This sort of fundamental understanding of charge transfer at the interface will provide critical insights for designing the next generation of PBS with better efficiency and durability. He will be releasing the details of this breakthrough shortly.

Even as the PBS continues to be modified and improved, Yang clarifies, “the physics of the semiconductor-bacteria interface for the solar driven carbon dioxide reduction is now established.” As long as he has an effective semiconductor that absorbs solar energy and feeds electrons to the bacteria, the photosynthetic function will initiate, and the remarkable process of artificial photosynthesis will continue to produce liquid fuels.

Why This Solar Power Is Unique

Peter Forbes, a science writer and the author of Nanoscience: Giants of the Infinitesimal, admires Yang’s work in creating this system. He writes, “It’s a brilliant synthesis: semiconductors are the most efficient light harvesters, and biological systems are the best scavengers of CO2.”

Yang’s artificial photosynthesis relies only on solar energy. But it creates a more usable source of energy than solar panels, which are currently the most popular and commercially viable form of solar power. While the semiconductors in solar panels absorb solar energy and convert it into electricity, in artificial photosynthesis the semiconductors absorb solar energy and store it in “the carbon-carbon bond or the carbon-hydrogen bond of liquid fuels like methane or butanol.”

This difference is crucial. The electricity generated from solar panels simply cannot meet our diverse energy needs, but these renewable liquid fuels and natural gases can. Unlike solar panels, Yang’s PBS absorbs and breaks down carbon dioxide, releases oxygen, and creates a renewable fuel that can be collected and used. With artificial photosynthesis creating our fuels, driving cars and operating machinery becomes much less harmful. As Katherine Bourzac nicely puts it, “This is one of the best attempts yet to realize the simple equation: sun + water + carbon dioxide = sustainable fuel.”

The Future of Artificial Photosynthesis

Yang’s PBS has been advancing rapidly, but he still has work to do before the technology can be considered commercially viable. Despite encouraging conversion efficiencies, especially with methane, the PBS is not durable enough or cost-effective enough to be marketable.

In order to improve this system, Yang and his team are working to figure out how to replace bacteria with synthetic catalysts. So far, bacteria have proven to be the most efficient catalysts, and they also have high selectivity—that is, they can create a variety of useful compounds such as butanol, acetate, polymers and methane. But since bacteria live and die, they are less durable than a synthetic catalyst and less reliable if this technology is scaled up.

Yang has been testing PBS systems with live bacteria and synthetic catalysts in parallel in order to discover which type works best. “From the point of view of efficiency and selectivity of the final product, the bacteria approach is winning,” Yang says, “but if down the road we can find a synthetic catalyst that can produce methane and butanol with similar selectivity, then that is the ultimate solution.” Such a system would give us the ideal fuels and the most durable semiconductor-catalyst interface that can be reliably scaled up.

Another concern is that, unlike natural photosynthesis, artificial photosynthesis requires concentrated carbon dioxide to function. This is easy to do in the lab, but if artificial photosynthesis is scaled up, Yang will have to find a feasible way of supplying concentrated carbon dioxide to the PBS. Peter Forbes argues that Yang’s artificial photosynthesis could be “coupled with carbon-capture technology to pull CO2 from smokestack emissions and convert it into fuel”. If this could be done, artificial photosynthesis would contribute to a carbon-neutral future by consuming our carbon emissions and releasing oxygen. This is not the focus of Yang’s research, but it is an integral piece of the puzzle that other scientists must provide if artificial photosynthesis is to supply the fuels we need on a large scale.

When Giacomo Ciamician considered the future of artificial photosynthesis, he imagined a future of abundant energy where humans could master the “photochemical processes that hitherto have been the guarded secret of the plants…to make them bear even more abundant fruit than nature, for nature is not in a hurry and mankind is.” And while the rush was not apparent to scientists in 1912, it is clear now, in 2016.

Peidong Yang has already created a system of artificial photosynthesis that out-produces nature. If he continues to increase the efficiency and durability of his PBS, artificial photosynthesis could revolutionize our energy use and serve as a sustainable model for generations to come. As long as the sun shines, artificial photosynthesis can produce fuels and consume waste. And in this future of artificial photosynthesis, the world would be able to grow and use fuels freely, knowing that the same natural process that created them would recycle the carbon at the other end.

Yang shares this hope for the future. He explains, “Our vision of a cyborgian evolution—biology augmented with inorganic materials—may bring the PBS concept to full fruition, selectively combining the best of both worlds, and providing society with a renewable solution to solve the energy problem and mitigate climate change.”

If you would like to learn more about Peidong Yang’s research, please visit his website at http://nanowires.berkeley.edu/.

The Federal Government Updates Biotech Regulations

Click here to see this page in other languages: Russian

By Wakanene Kamau

This summer’s GMO labeling bill and the rise of genetic engineering techniques to combat Zika — the virus linked to microcephaly and Guillain-Barre syndrome — have cast new light on how the government ensures public safety.

As researchers and companies scramble to apply the latest advances in synthetic biology, like the gene-editing technique CRISPR, the public has grown increasingly wary of embracing technology that they perceive as a threat to their health or the health of the environment. How, and to what degree, can the drive to develop and deploy new biotechnologies be reconciled with the need to keep the public safe and informed?

Last Friday, the federal government took a big step in framing the debate by releasing two documents that will modernize the 1986 Coordinated Framework for the Regulation of Biotechnology (Coordinated Framework). The Coordinated Framework is the outline for the network of regulatory policies that are used to ensure the safety of biotechnology products.

The Update to the Coordinated Framework, one of the documents released last week, is the first comprehensive review of how the federal government presently regulates biotechnology. It provides case studies, graphics, and tables to clarify what tools the government uses to make decisions.

The National Strategy for Modernizing the Regulatory System for Biotechnology Products, the second recently released document, provides the long-term vision for how government agencies will handle emerging technologies. It includes oversight by the Food and Drug Administration (FDA), the U.S. Department of Agriculture (USDA), and the Environmental Protection Agency (EPA).

These documents are the result of work that began last summer, when the Office of Science and Technology Policy (OSTP) announced a yearlong project to revise the way biotechnology innovations are regulated. The central document, The Coordinated Framework for the Regulation of Biotechnology, was last updated over 20 years ago.

The Coordinated Framework was first issued in 1986 as a response to a new gene-splicing technique that was leaving academic laboratories and entering the marketplace. Researchers had learned to take DNA from multiple sources and splice it together to create recombinant DNA. This recombinant DNA, known as rDNA, opened the floodgates for new uses that expanded beyond biomedicine and into industries like agriculture and cosmetics.

As researchers saw increasing applications for use in the environment, namely in genetically engineering animals and plants, concerns arose from a variety of stakeholders calling for attention from the federal government. Special interest groups were wary of the effect of commercial rDNA on public and environmental health; outside investors sought assurances that products would be able to legally enter the market; and fledgling biotech companies struggled to navigate regulatory networks.

This tension led the OSTP to develop an interagency effort to outline how the biotechnology industry would be overseen. The process culminated in a policy framework for how existing legislation would be applied to various kinds of biotechnology, coordinated across three responsible agencies: the Food and Drug Administration (FDA), the U.S. Department of Agriculture (USDA), and the Environmental Protection Agency (EPA).

Broadly, the FDA regulates genetically modified food and food additives, the USDA oversees genetically modified plants and animals, and the EPA tracks microbial pesticides and engineered algae. By 1986, the first iteration of the Coordinated Framework was finalized and issued.

The Coordinated Framework was updated in 1992 to more clearly describe the scope of how federal agencies would exercise authority in cases where the established rule of law left room for interpretation. The central premise of the update was to look at the product itself and not the process by which it was made. The OSTP and federal government did not see new biotechnology methods as inherently risky but recognized that their applications could be.

However, since 1992, there have been a number of technologies that have raised new questions on the scope of agency authority. Among these are new methods for new applications, such as bioreactors for the biosynthesis of industrially important chemicals or CRISPR-Cas9 to develop gene drives to combat vector-borne disease.  Researchers are also increasingly using new methods for old applications, such as zinc finger nucleases and transcription activator-like effector nucleases, in addition to CRISPR-Cas9, for genome editing to introduce beneficial traits in crops.

But what kind of risks do these innovations create and how could the Coordinated Framework be used to mitigate them?

In theory, the Coordinated Framework aligns a new innovation with the federal agency that has the most experience working in its respective field. In practice, however, making decisions between agencies with overlapping interests and experience has been difficult.

The recent debate over the review of a genetically modified mosquito developed by the UK-based start-up Oxitec to combat the Zika virus shows how controversial the subject can be. Oxitec genetically engineered male Aedes aegypti mosquitoes (the primary vector of Zika, along with the dengue, yellow fever, and chikungunya viruses) to carry a gene that is lethal to the offspring they produce with wild female mosquitoes. The plan would be to release the engineered males into the wild, where they mate with native females and crash the local population.
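The suppression logic is easy to see in a toy model. The Python sketch below assumes that wild females mate with wild or engineered males in proportion to their numbers, that offspring fathered by engineered males never reach adulthood, and that the wild population would otherwise multiply six-fold per generation; all of these numbers are hypothetical simplifications that ignore immigration, imperfect lethality, and density effects.

def next_wild_population(wild_adults, released_males, growth_rate=6.0):
    # Half the wild adults are males; a female's mate is wild or engineered in
    # proportion to how many of each are around, and only matings with wild
    # males produce surviving offspring.
    wild_males = wild_adults / 2
    viable_mating_fraction = wild_males / (wild_males + released_males)
    return wild_adults * growth_rate * viable_mating_fraction

population = 10_000
for generation in range(1, 7):
    population = next_wild_population(population, released_males=50_000)
    print(f"generation {generation}: about {population:,.0f} wild adults")

With sustained releases that outnumber wild males, the modeled population collapses within a handful of generations, which is the effect Oxitec’s field trials aim to achieve.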

Using older genetics techniques, this process would have needed approval from the USDA, which has extensive experience with insecticides. However, because the new method is akin to a “new animal drug,” its oversight fell to the FDA. And the FDA created an uproar when it approved field trials of the Oxitec technology in Florida this August.

Confusion and frustration over who is, and who should be, responsible in cases like this one have brought an end to the 20-year silence on the measure. In fact, the need for greater clarity, responsibility, and understanding in the regulatory approval process was reaffirmed last summer, when the OSTP sent a Memo to the FDA, USDA, and EPA announcing the scheduled update to the Coordinated Framework.

Since the Memo was released, the OSTP has organized a series of three “public engagement sessions” (notes available here, here and here) to explain how the Coordinated Framework presently works, as well as to accept input from the public. The release of the Update to the Coordinated Framework and the National Strategy are two measures of accountability. The Administration will accept feedback on the measures for 40 days following a notice of request for public comment, to be published in the Federal Register.

While scientific breakthroughs have the potential to spur wide-ranging innovations, it is important to ensure due respect is given to the potential dangers those innovations present.

You can sign up for updates from the White House on Bioregulation here.

Wakanene is a science writer based in Seattle, WA. You can reach him on Twitter @ws_kamau.

 

Effective Altruism 2016

The Effective Altruism Movement

Edit: The following article has been updated to include more highlights as well as links to videos of the talks.

How can we more effectively make the world a better place? Over 1,000 concerned altruists converged at the Effective Altruism Global conference this month in Berkeley, CA to address this very question. For two and a half days, participants milled around the Berkeley campus, attending talks, discussions, and workshops to learn more about efforts currently underway to improve our ability to not just do good in the world, but to do the most good.

Those who arrived on the afternoon of Friday, August 5 had the opportunity to mingle with other altruists and attend various workshops geared toward finding the best careers, improving communication, and developing greater self-understanding and self-awareness.

But the conference really kicked off on Saturday, August 6, with talks by Will MacAskill and Toby Ord, who both helped found the modern effective altruism movement. Ord gave the audience a brief overview of the centuries of science and philosophy that provided the foundation for effective altruism. “Effective altruism is to the pursuit of good as the scientific revolution is to the pursuit of truth,” he explained. Yet, as he pointed out, effective altruism has only been a real “thing” for five years.


Will MacAskill introduced the conference and spoke of the success the EA movement has had in the last year.


Toby Ord spoke about the history of effective altruism.

 

MacAskill took the stage after Ord to highlight the movement’s successes over the past year, including coverage by such papers as the New York Times and the Washington Post. And more importantly, he talked about the significant increase in membership they saw this year, as well as in donations to worthwhile causes. But he also reminded the audience that a big part of the movement is the process of effective altruism. He said:

“We don’t know what the best way to do good is. We need to figure that out.”

For the rest of the two days, participants considered past charitable actions that had been most effective, problems and challenges altruists face today, and how the movement can continue to grow. There were too many events to attend them all, but there were many highlights.

Highlights From the Conference

When FLI cofounder Jaan Tallinn was asked why he chose to focus on issues such as artificial intelligence, which may or may not be a problem in the future, rather than on mosquito nets, which could save lives today, he compared philanthropy to investing. Higher-risk investments have the potential for a greater payoff later. Similarly, while AI may not seem like much of a threat to many people now, ensuring it remains safe could save billions of lives in the future. Tallinn spoke as part of a discussion on Philanthropy and Technology.


Jaan Tallinn speaking remotely about his work with EA efforts.

Martin Rees, a member of FLI’s Science Advisory Board, argued that we are in denial about the seriousness of the risks we face. At the same time, he said that minimizing the risks associated with technological advances can only be done “with great difficulty.” He encouraged EA participants to figure out which threats can be dismissed as science fiction and which are legitimate, and he encouraged scientists to become more socially engaged.

As if taking up that call to action, Kevin Esvelt talked about his own attempts to ensure gene drive research in the wild is accepted and welcomed by local communities. Gene drives could be used to eradicate such diseases as malaria, schistosomiasis, Zika, and many others, but fears of genetic modification could slow research efforts. He discussed his focus on keeping his work as open and accessible as possible, engaging with the public to allow anyone who might be affected by his research to have as much input as they want. “Closed door science,” he added, “is more dangerous because we have no way of knowing what other people are doing.”  A single misstep with this early research in his field could imperil all future efforts for gene drives.


Kevin Esvelt talks about his work with CRISPR and gene drives.

That same afternoon, Cari Tuna, President of the Open Philanthropy Project, sat down with Will MacAskill for an interview titled “Doing Philosophy Better,” which focused on her work with OPP and effective altruism and how she envisions her future as a philanthropist. She highlighted some of the grants she’s most excited about, which include grants to GiveDirectly, the Center for Global Development, and the Alliance for Safety and Justice. When asked how she thought EA could improve, she emphasized, “We consider ourselves a part of the Effective Altruism community, and we’re excited to help it grow.” But she also said, “I think there is a tendency toward overconfidence in the EA community that sometimes undermines our credibility.” She mentioned that one of the reasons she trusted GiveWell was because of their self-reflection. “They’re always asking, ‘how could we be wrong?'” she explained, and then added, “I would really love to see self-reflection become more of a core value of the effective altruism community.”


Cari Tuna interviewed by Will MacAskill (photo from the Center for Effective Altruism).

The next day, FLI President, Max Tegmark, highlighted the top nine myths of AI safety, and he discussed how important it is to dispel these myths so researchers can focus on the areas necessary to keep AI beneficial. Some of the most distracting myths include arguments over when artificial general intelligence could be created, whether or not it could be “evil,” and goal-oriented issues. Tegmark also added that the best thing people can do is volunteer for EA groups.

During the discussion about the risks and benefits of advanced artificial intelligence, Dileep George, cofounder of Vicarious, reminded the audience why this work is so important. “The goal of the future is full unemployment so we can all play,” he said. Dario Amodei of OpenAI emphasized that having curiosity and trying to understand how technology is evolving can go a long way toward safety. And though he often mentioned the risks of advanced AI, Toby Ord, a philosopher and research fellow with the Future of Humanity Institute, also added, “I think it’s more likely than not that AI will contribute to a fabulous outcome.” Later in the day, Chris Olah, an AI researcher at Google Brain and one of the lead authors of the paper, Concrete Problems in AI Safety, explained his work as trying to build a bridge to futuristic problems by doing empirical research today.

Moderator Riva-Melissa Tez, Dario Amodei, Dileep George, and Toby Ord at the Risks and Benefits of Advanced AI discussion. (Not pictured, Daniel Dewey)

FLI’s Richard Mallah gave a talk on mapping the landscape of AI safety research threads. He showed how there are many meaningful dimensions along which such research can be organized, how harmonizing the various research agendas into a common space allows us to reason about different kinds of synergies and dependencies, and how consideration of the white space in such representations can help us find both unknown knowns and unknown unknowns about the space.

Tara MacAulay, COO at the Centre for Effective Altruism, spoke during the discussion on “The Past, Present, and Future of EA.” She talked about finding the common values in the movement and coordinating across skill sets rather than splintering into cause areas or picking apart who is and who is not in the movement. She said, “The opposite of effective altruism isn’t ineffective altruism. The opposite of effective altruism is apathy, looking at the world and not caring, not doing anything about it . . . It’s helplessness. . . . throwing up our hands and saying this is all too hard.”

MacAulay also moderated a panel discussion called Aggregating Knowledge, which was significant not only for its thoughtful content about accessing, understanding, and communicating all of the knowledge available today, but also because it was an all-woman panel. The panel included Sarah Constantin, Amanda Askell, Julia Galef, and Heidi McAnnaly, who discussed various questions and problems the EA community faces when trying to assess which actions will be most effective. MacAulay summarized the discussion at the end when she said, “Figuring out what to do is really difficult but we do have a lot of tools available.” She concluded with a challenge to the audience to spend five minutes researching some belief they’ve always had about the world to learn what the evidence actually says about it.

Sarah Constantin, Amanda Askell, Julia Galef, Heidi McAnnaly, and Tara MacAulay (photo from the Center for Effective Altruism).

Prominent government leaders also took to the stage to discuss how work with federal agencies can help shape and impact the future. Tom Kalil, Deputy Director for Technology and Innovation, highlighted how much of today’s technology, from cell phones to the Internet, got its start in government labs. Then, Jason Matheny, Director of IARPA, talked about how delays in technology can actually cost millions of lives. He explained that technology can make it less costly to enhance moral development and that “ensuring that we have a future counts a lot.”

Tom Kalil speaks about the history of government research and its impact on technology.

Jason Matheny talks about how employment with government agencies can help advance beneficial technologies.

Robin Hanson, author of The Age of Em, talked about his book and what the future will hold if we continue down our current economic path while the ability to create brain emulation is developed. He said that if creating ems becomes cheaper than paying humans to do work, “that would change everything.” Ems would completely take over the job market and humans would be pushed aside. He explained that some people might benefit from this new economy, but it would vary, just as it does today, with many more people suffering from poverty and fewer gaining wealth.

Robin Hanson talks to a group about how brain emulations might take over the economy and what their world will look like.

 

Applying EA to Real Life

Lucas Perry, also with FLI, was especially impressed by the career workshops offered by 80,000 Hours during the conference. He said:

“The 80,000 Hours workshops were just amazing for giving new context and perspective to work. 80,000 Hours gave me the tools and information necessary to reevaluate my current trajectory and see if it really is the best of all possible paths for me and the world.

In the end, I walked away from the conference realizing I had been missing out on something so important for most of my life. I found myself wishing that effective altruism, and organizations like 80,000 Hours, had been a part of my fundamental education. I think it would have helped immensely with providing direction and meaning to my life. I’m sure it will do the same for others.”

In total, 150 people spoke over the course of the two and a half days. MacAskill concluded the conference with another call to focus on the process of effective altruism, saying:

“Constant self-reflection, constant learning, that’s how we’re going to be able to do the most good.”

 

View from the conference.

Podcast: Could an Earthquake Destroy Humanity?

Earthquakes as Existential Risks

Earthquakes are not typically considered existential or even global catastrophic risks, and for good reason: they’re localized events. While they may be devastating to the local community, rarely do they impact the whole world. But is there some way an earthquake could become an existential or catastrophic risk? Could a single earthquake put all of humanity at risk? In our increasingly connected world, could an earthquake sufficiently exacerbate a biotech, nuclear or economic hazard, triggering a cascading set of circumstances that could lead to the downfall of modern society?

Seth Baum of the Global Catastrophic Risk Institute and Ariel Conn of FLI consider extreme earthquake scenarios to figure out if there’s any way such a risk is remotely plausible. This podcast was produced in a similar vein to MythBusters and xkcd’s What If series.

We only consider a few scenarios in this podcast, but we’d love to hear from other people. Do you have ideas for an extreme situation that could transform a locally devastating earthquake into a global calamity?

This episode features insight from seismologist Martin Chapman of Virginia Tech.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

The Problem with Brexit: 21st Century Challenges Require International Cooperation

Retreating from international institutions and cooperation will handicap humanity as we tackle our greatest problems.

The UK’s referendum in favor of leaving the EU and the rise of nationalist ideologies in the US and Europe are worrying on multiple fronts. Nationalism espoused by the likes of Donald Trump (U.S.), Nigel Farage (U.K.), Marine Le Pen (France), and Heinz-Christian Strache (Austria) may lead to a resurgence of some of the worst problems of the first half of the 20th century. These leaders are calling for policies that would constrain trade and growth, encourage domestic xenophobia, and increase rivalries and suspicion between countries.

Even more worrying, however, is the bigger picture. In the 21st century, our greatest challenges will require global solutions. Retreating from international institutions and cooperation will handicap humanity’s ability to address our most pressing upcoming challenges.

The Nuclear Age

Many of the challenges of the 20th century – issues of public health, urbanization, and economic and educational opportunity – were national problems that could be dealt with at the national level. July 16th, 1945 marked a significant turning point. On that day, American scientists tested the first nuclear weapon in the New Mexican desert. For the first time in history, individual human beings had within their power a technology capable of destroying all of humanity.

Thus, nuclear weapons became the first truly global problem. Weapons with such a destructive force were of interest to every nation and person on the planet. Only international cooperation could produce a solution.

Despite a dangerous arms race between the US and the Soviet Union — including a history of close calls — humanity survived 70 years without a catastrophic global nuclear war. This was in large part due to international institutions and agreements that discouraged wars and further proliferation.

But what if we replayed the Cold War without the U.N. mediating disputes between nuclear adversaries? And without the bitter taste of the Second World War fresh in the minds of all who participated? Would we still have the same benign outcome?

We cannot say what such a revisionist history would look like, but the chances of a catastrophic outcome would surely be higher.

21st Century Challenges

The 21st century will only bring more challenges that are global in scope, requiring more international solutions. Climate change by definition requires a global solution since carbon emissions will lead to global warming regardless of which countries emit them.

In addition, continued development of powerful new technologies — such as artificial intelligence, biotechnologies, and nanotechnologies — will place increasing power in the hands of the people who develop and control them. These technologies have the potential to improve the human condition and solve some of our biggest problems. Yet they also have the potential to cause tremendous damage if misused.

Whether through accident, miscalculation, or madness, misuse of these powerful technologies could pose a catastrophic or even existential risk. If a Cold-War-style arms race for new technologies occurs, it is only a matter of time before a close call becomes a direct hit.

Working Together

As President Obama said in his speech at Hiroshima, “Technological progress without an equivalent progress in human institutions can doom us.”

Over the next century, technological progress can greatly improve the human experience. To ensure a positive future, humanity must find the wisdom to handle the increasingly powerful technologies that it is likely to produce and to address the global challenges that are likely to arise.

Experts have blamed the resurgence of nationalism on anxieties over globalization, multiculturalism, and terrorism. Whatever anxieties there may be, we live in a global world where our greatest challenges are increasingly global, and we need global solutions. If we resist international cooperation, we will battle these challenges with one arm, perhaps both, tied behind our backs.

Humanity must learn to work together to tackle the global challenges we face. Now is the time to strengthen international institutions, not retreat from them.

Existential Risks Are More Likely to Kill You Than Terrorism

People tend to worry about the wrong things.

According to a 2015 Gallup Poll, 51% of Americans are “very worried” or “somewhat worried” that a family member will be killed by terrorists. Another Gallup Poll found that 11% of Americans are afraid of “thunder and lightning.” Yet the average person is at least four times more likely to die from a lightning bolt than a terrorist attack.

Similarly, statistics show that people are more likely to be killed by a meteorite than a lightning strike (here’s how). Yet I suspect that most people are less afraid of meteorites than lightning. In these examples and so many others, we tend to fear improbable events while often dismissing more significant threats.

One finds a similar reversal of priorities when it comes to the worst-case scenarios for our species: existential risks. These are catastrophes that would either annihilate humanity or permanently compromise our quality of life. While risks of this sort are often described as “high-consequence, improbable events,” a careful look at the numbers by leading experts in the field reveals that they are far more likely than most of the risks people worry about on a daily basis.

Let’s use the probability of dying in a car accident as a point of reference. Dying in a car accident is more probable than any of the risks mentioned above. According to the 2016 Global Challenges Foundation report, “The annual chance of dying in a car accident in the United States is 1 in 9,395.” This means that if the average person lived 80 years, the odds of dying in a car crash would be about 1 in 120. (In percentages, that’s roughly 0.01% per year, or 0.8% over a lifetime.)

Compare this to the probability of human extinction stipulated by the influential “Stern Review on the Economics of Climate Change,” namely 0.1% per year.* A human extinction event could be caused by an asteroid impact, supervolcanic eruption, nuclear war, a global pandemic, or a superintelligence takeover. Although this figure appears small, over time it can grow quite significant. For example, it means that the likelihood of human extinction over the course of a century is 9.5%. It follows that your chances of dying in a human extinction event are nearly 10 times higher than dying in a car accident.
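The 9.5% figure follows from simple compounding. Here is a back-of-the-envelope sketch of that arithmetic (the probabilities are the ones quoted above; the helper function is purely illustrative and assumes the annual risk is constant and independent from year to year):

```python
# Back-of-the-envelope arithmetic for the figures quoted above (illustrative only).

def cumulative_risk(annual_probability: float, years: int) -> float:
    """Chance the event happens at least once over `years`, assuming a
    constant, independent probability each year."""
    return 1 - (1 - annual_probability) ** years

# Car accident: a 1-in-9,395 annual chance, over an 80-year lifetime.
print(f"Car accident, 80 years: {cumulative_risk(1 / 9395, 80):.2%}")  # ~0.85%, about 1 in 120

# Stern Review assumption: a 0.1% annual chance of human extinction, over a century.
print(f"Extinction, 100 years:  {cumulative_risk(0.001, 100):.2%}")    # ~9.5%
```

The point is simply that a risk which looks negligible on an annual basis accumulates to something substantial over a lifetime or a century.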

But how seriously should we take the 9.5% figure? Is it a plausible estimate of human extinction? The Stern Review is explicit that the number isn’t based on empirical considerations; it’s merely a useful assumption. The scholars who have considered the evidence, though, generally offer probability estimates higher than 9.5%. For example, a 2008 survey taken during a Future of Humanity Institute conference put the likelihood of extinction this century at 19%. The philosopher and futurist Nick Bostrom argues that it “would be misguided” to assign a probability of less than 25% to an existential catastrophe before 2100, adding that “the best estimate may be considerably higher.” And in his book Our Final Hour, Sir Martin Rees claims that civilization has a fifty-fifty chance of making it through the present century.

My own view more or less aligns with Rees’, given that future technologies are likely to introduce entirely new existential risks. A discussion of existential risks five decades from now could be dominated by scenarios that are unknowable to contemporary humans, just like nuclear weapons, engineered pandemics, and the possibility of “grey goo” were unknowable to people in the fourteenth century. This suggests that Rees may be underestimating the risk, since his figure is based on an analysis of currently known technologies.

If these estimates are believed, then the average person is, respectively, 19, 25, or even 50 times more likely to encounter an existential catastrophe than to perish in a car accident.

These figures vary so much in part because estimating the risks associated with advanced technologies requires subjective judgments about how future technologies will develop. But this doesn’t mean that such judgments must be arbitrary or haphazard: they can still be based on technological trends and patterns of human behavior. In addition, other risks like asteroid impacts and supervolcanic eruptions can be estimated by examining the relevant historical data. For example, we know that an impactor capable of killing “more than 1.5 billion people” occurs every 100,000 years or so, and supereruptions happen about once every 50,000 years.

Nonetheless, it’s noteworthy that all of the above estimates agree that people should be more worried about existential risks than any other risk mentioned.

Yet how many people are familiar with the concept of an existential risk? How often do politicians discuss large-scale threats to human survival in their speeches? Some political leaders — including one of the candidates currently running for president — don’t even believe that climate change is real. And there are far more scholarly articles published about dung beetles and Star Trek than existential risks. This is a very worrisome state of affairs. Not only are the consequences of an existential catastrophe irreversible — that is, they would affect everyone living at the time plus all future humans who might otherwise have come into existence — but the probability of one happening is far higher than most people suspect.

Given the maxim that people should always proportion their fears to the best available evidence, the rational person should worry about the above risks in the following order (from least to most risky): terrorism, lightning strikes, meteorites, car crashes, and existential catastrophes. The psychological fact is that our intuitions often fail to track the dangers around us. So, if we want to ensure a safe passage of humanity through the coming decades, we need to worry less about the Islamic State and al-Qaeda, and focus more on the threat of an existential catastrophe.

*Editor’s note: To clarify, the 0.1% from the Stern Review is used here purely for comparison to the numbers calculated in this article. The number was an assumption made at Stern and has no empirical backing. You can read more about this here.

The Collective Intelligence of Women Could Save the World

Neil deGrasse Tyson was once asked about his thoughts on the cosmos. In a slow, gloomy voice, he intoned, “The universe is a deadly place. At every opportunity, it’s trying to kill us. And so is Earth. From sinkholes to tornadoes, hurricanes, volcanoes, tsunamis.” Tyson humorously described a very real problem: the universe is a vast obstacle course of catastrophic dangers. Asteroid impacts, supervolcanic eruptions, and global pandemics represent existential risks that could annihilate our species or irreversibly catapult us back into the Stone Age.

But nature is the least of our worries. Today’s greatest existential risks stem from advanced technologies like nuclear weapons, biotechnology, synthetic biology, nanotechnology, and even artificial superintelligence. These tools could trigger a disaster of unprecedented proportions. Exacerbating this situation are “threat multipliers” — issues like climate change and biodiversity loss, which, while devastating in their own right, can also lead to an escalation of terrorism, pandemics, famines, and potentially even the use of WTDs (weapons of total destruction).

The good news is that none of these existential threats are inevitable. Humanity can overcome every single known danger. But accomplishing this will require the smartest groups working together for the common good of human survival.

So, how do we ensure that we have the smartest groups working to solve the problem?

Get women involved.

A 2010 study, published in Science, made two unexpected discoveries. First, it established that groups can exhibit a collective intelligence (or c factor). Most of us are familiar with general human intelligence, which describes a person’s intelligence level across a broad spectrum of cognitive tasks. It turns out groups also have a similar “collective” intelligence that determines how successfully they can navigate these cognitive tasks. This is an important finding because “research, management, and many other kinds of tasks are increasingly accomplished by groups — working both face-to-face and virtually.” To optimize group performance, we need to understand what makes a group more intelligent.

This leads to the second unexpected discovery. Intuitively, one might think that groups with really smart members will themselves be really smart. This is not the case. The researchers found no strong correlation between the average intelligence of members and the collective intelligence of the group. Similarly, one might suspect that the group’s IQ will increase if a member of the group has a particularly high IQ. Surely a group with Noam Chomsky will perform better than one in which he’s replaced by Joe Schmo. But again, the study found no strong correlation between the smartest person in the group and the group’s collective smarts.

Instead, the study found three factors linked to group intelligence. The first pertains to the “social sensitivity” of group members, measured by the “Reading the Mind in the Eyes” test. This term refers to one’s ability to infer the emotional states of others by picking up on certain non-verbal clues. The second concerns the number of speaking turns taken by members of the group. “In other words,” the authors write, “groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking.”

The last factor relates to the number of female members: the more women in the group, the higher the group’s IQ. As the authors of the study explained, “c was positively and significantly correlated with the proportion of females in the group.” If you find this surprising, you’re not alone: the authors themselves didn’t anticipate it, nor were they looking for a gender effect.

Why do women make groups smarter? The authors suggest that it’s because women are, generally speaking, more socially sensitive than men, and the link between social sensitivity and collective intelligence is statistically significant.

Another possibility is that men tend to dominate conversations more than women, which can disrupt the flow of turn-taking. Multiple studies have shown that women are interrupted more often than men; that when men interrupt women, it’s often to assert dominance; and that men are more likely to monopolize professional meetings. In other words, there’s robust empirical evidence for what the writer and activist Rebecca Solnit describes as “mansplaining.”

These data have direct implications for existential riskology:

Given the unique, technogenic dangers that haunt the twenty-first century, we need the smartest groups possible to tackle the problems posed by existential risks. We need groups that include women.

Yet the existential risk community is marked by a staggering imbalance of gender participation. For example, a random sample of 40 members of the “Existential Risk” group on Facebook (of which I am an active member) included only 3 women. Similar asymmetries can be found in many of the top research institutions working on global challenges.

This dearth of female scholars constitutes an existential emergency. If the studies above are correct, then the groups working on existential risk issues are not nearly as intelligent as they could be.

The obvious next question is: How can the existential risk community rectify this potentially dangerous situation? Some answers are implicit in the data above: for example, men could make sure that women have a voice in conversations, aren’t interrupted, and don’t get pushed to the sidelines in conversations monopolized by men.

Leaders of existential risk studies should also strive to ensure that women are adequately represented at conferences, that their work is promoted to the same extent as men’s, and that the environments in which existential risk scholarship takes place are free of discrimination. Other factors that have been linked to women avoiding certain fields include the absence of visible role models, the pernicious influence of gender stereotypes, the onerous demands of childcare, a lack of encouragement, and the statistical preference of women for professions that focus on “people” rather than “things.”

No doubt there are other factors not mentioned, and other strategies that could be identified. What can those of us already ensconced in the field do to achieve greater balance? What changes can the community make to foster more diversity? How can we most effectively maximize the collective intelligence of teams working on existential risks?

As Sir Martin Rees writes in Our Final Hour, “what happens here on Earth, in this century, could conceivably make the difference between a near eternity filled with ever more complex and subtle forms of life and one filled with nothing but base matter.” Future generations may very well thank us for taking the link between collective intelligence and female participation seriously.

Note: there’s obviously a moral argument for ensuring that women have equal opportunities, get paid the same amount as men, and don’t have to endure workplace discrimination. The point of this article is to show that even if one brackets moral considerations, there are still compelling reasons for making the field more diverse. (For more, see chapter 14 of my book, which lays out a similar argument.)

Writing the Human Genome

The Human Genome Project made big news in the early 2000s when an international group of scientists successfully completed a decade-long endeavor to map out the entirety of the human genome. Then, last month, genetic researchers caused some minor controversy when a group of about 150 scientists, lawyers and entrepreneurs met behind closed doors to discuss “writing” the human genome – that is, synthesizing the human DNA sequences from scratch.

In response to the uproar, the group published a short article in Science this week, explaining the basic ideas behind their objectives.

The project, HGP-write (human genome project – write), is led by Jef D. Boeke, Andrew Hessel, Nancy J. Kelley, and FLI science advisory board member George Church, though over 20 participants helped pen the Science article. In the article, they explain, “Genome synthesis is a logical extension of the genetic engineering tools that have been used safely within the biotech industry for ~40 years and have provided important societal benefits.”

Recent advances in genetics and biotech, such as the explosion of CRISPR-Cas9 and even the original Human Genome Project, have provided glimpses into a possible future in which we can cure cancer, ward off viruses, and generate healthy human organs. Scientists involved with HGP-write hope this project will finally help us achieve those goals. They wrote:

Potential applications include growing transplantable human organs; engineering immunity to viruses in cell lines via genome-wide recoding (12); engineering cancer resistance into new therapeutic cell lines; and accelerating high-productivity, cost-efficient vaccine and pharmaceutical development using human cells and organoids.

While there are clearly potential benefits to this technology, concerns about the project are to be expected, especially given the closed-door nature of the meeting. In response to the meeting last month, Drew Endy and Laurie Zoloth argued:

Given that human genome synthesis is a technology that can completely redefine the core of what now joins all of humanity together as a species, we argue that discussions of making such capacities real, like today’s Harvard conference, should not take place without open and advance consideration of whether it is morally right to proceed.

The director of the National Institutes of Health, Francis S. Collins, was equally hesitant to embrace the project. In a statement to the New York Times, he said, “whole-genome, whole-organism synthesis projects extend far beyond current scientific capabilities, and immediately raise numerous ethical and philosophical red flags.”

In the Science article, the researchers of HGP-write insist that “HGP-write will require public involvement and consideration of ethical, legal, and social implications (ELSI) from the start.” This is a point Church reiterated to the Washington Post, explaining that there were already ELSI researchers who participated in the original meeting and that he expects more researchers to join as a response to the Science article.

The primary goal of the project is “to reduce the costs of engineering and testing large (0.1 to 100 billion base pairs) genomes in cell lines by over 1000-fold within 10 years.” The HGP-write initiative hopes to launch this year “with $100 million in committed support,” and they plan to complete the project for less than the $3 billion price tag of the original Human Genome Project.

CRISPR, Gene Drive Technology, and Hope for the Future

The following article was written by John Min and George Church.

Imagine, for a moment, a world where we are able to perform genetic engineering on such a large scale as to effectively engineer nature. In this world, parasites that only cause misery and suffering would not exist, only minimal pesticides and herbicides would be necessary in agriculture, and the environment would be better adapted to maximize positive interactions with all human activities while maintaining sustainability. While this may all sound like science fiction, the technology that might allow us to reach this utopia is very real, and if we develop it responsibly, this dream may well become reality.

‘Gene drive’ technology, or more specifically, CRISPR gene drives, has been heralded by the press as a potential solution for mosquito-borne diseases such as malaria, dengue, and most recently, Zika. In general, gene drive is a technology that allows scientists to bias the rate of inheritance of specific genes in wild populations of organisms. A gene is said to ‘drive’ when it is able to increase the frequency of its own inheritance above the expected probability of 50%. In doing so, gene drive systems exhibit an unprecedented ability to directly manipulate genes on a population-wide scale in nature.
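To get a feel for what that inheritance bias means at the population level, here is a toy calculation (a rough sketch under simplifying assumptions: random mating, no fitness cost, and an assumed 95% transmission rate and 1% release frequency that are not drawn from any particular study):

```python
# Toy model of gene drive spread (illustrative sketch only).
# p = frequency of the drive allele in the population.
# d = probability that a heterozygous carrier transmits the drive allele
#     to an offspring (0.5 corresponds to ordinary Mendelian inheritance).

def next_generation(p: float, d: float) -> float:
    q = 1 - p
    # Drive-allele gametes come from drive homozygotes (frequency p^2) and
    # from heterozygotes (frequency 2pq), who transmit it with probability d.
    return p ** 2 + 2 * p * q * d

def spread(p0: float, d: float, generations: int) -> list:
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_generation(freqs[-1], d))
    return freqs

# A strongly biased drive released at 1% frequency nears fixation within
# roughly 15 generations in this example; an ordinary allele stays at 1%.
print([round(f, 3) for f in spread(0.01, d=0.95, generations=20)])
print([round(f, 3) for f in spread(0.01, d=0.50, generations=20)])
```

The point of the toy model is only to show why even a modest bias in transmission compounds quickly across generations, which is what makes the technology so powerful and why its release demands caution.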

The idea of using gene drive systems to propagate engineered genes in natural systems is not new. Indeed, a proposal to construct gene drives using naturally occurring homing nucleases, genes that can specifically cut DNA and insert extra copies of themselves, was published by Austin Burt in 2003 (Burt, 2003). In fact, the concept was discussed even before the earliest studies on naturally driving genetic elements — such as transposons, which are small sections of DNA that can insert extra copies of themselves — over half a century ago (Serebrovskii, 1940; Vanderplank, 1944).

However, it is only with advances in modern genome editing technology, such as CRISPR, that scientists are finally able to target gene drives to any desired location in the genome. Ever since the first CRISPR gene drive design was described in a 2014 publication by Kevin Esvelt and George Church (Esvelt et al., 2014), man-made gene drive systems have been successfully tested in three separate species: yeast, fruit flies, and mosquitoes (DiCarlo et al., 2015; Gantz & Bier, 2015; Gantz et al., 2015).

The term ‘CRISPR’ stands for clustered regularly interspaced short palindromic repeats and describes an adaptive immune system against viral infections originally discovered in bacteria. Nucleases (proteins that cut DNA) in the CRISPR family are generally able to cut DNA at nearly any site specified by a short stretch of RNA sequence, with high precision and accuracy.

The nuclease Cas9, in particular, has become a favorite among geneticists around the world since the publication of a series of high-impact journal articles in late 2012 and early 2013 (Jinek et al., 2012; Cong et al., 2013; Hwang et al., 2013). Using Cas9, scientists are able to create ‘double-stranded breaks,’ or cuts in DNA, at nearly any location specified by a 20-nucleotide piece of RNA sequence.

After being cut, we can take advantage of natural DNA repair mechanisms to persuade cells to incorporate new genetic information into the break. This allows us to introduce new genes into an organism or even bar-code it at a genetic level. By using CRISPR technology, scientists are also able to insert synthesized gene drive systems into a host organism’s genome with the same high level of precision and reliability.
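As a rough illustration of how a 20-nucleotide guide sequence pins down a cut site, here is a minimal sketch. The sequences below are invented, the search ignores mismatches and the opposite strand, and it assumes the ‘NGG’ PAM requirement of the commonly used S. pyogenes Cas9:

```python
import re

def find_cut_sites(genome: str, guide: str) -> list:
    """Positions where the 20-nt guide matches and is followed by an NGG PAM.
    Cas9 cuts about 3 bases upstream of the PAM (simplified, one strand only)."""
    assert len(guide) == 20
    pattern = re.compile(re.escape(guide) + "[ACGT]GG")
    return [m.start() + len(guide) - 3 for m in pattern.finditer(genome)]

guide = "GCTAGCTAGGATCCAGTACA"                    # hypothetical 20-nt guide sequence
genome = "TTT" + guide + "TGG" + "ACGTACGTACGT"   # toy "genome": one target followed by a PAM
print(find_cut_sites(genome, guide))              # -> [20]
```

Real guide-design tools also score partial and off-target matches across both strands of a whole genome; the sketch only captures the basic idea that the target site is specified by sequence.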

Potential applications for CRISPR gene drives are broad and numerous, as the technology is expected to work in any organism that reproduces sexually.

While popular media attention is chiefly focused on the elimination of mosquito-borne diseases, applications also exist in the fight against the rise of Lyme disease in the U.S. Beyond public health, gene drives can be used to eliminate invasive species from non-native habitats, such as mosquitoes in Hawaii. In this case, many native Hawaiian bird species, especially the many honeycreepers, are being driven to extinction by mosquito-borne avian malaria. The removal of mosquitoes in Hawaii would both save these bird populations and make Hawaii even more attractive as a tropical paradise for tourists.

With such rapid expansion of gene drive technology over the past year, it is only natural for there to be some concern and fear over attempting to genetically engineer nature at such a large scale. The only way to truly address these fears is to rigorously test the spreading properties of various gene drive designs within the safety of the laboratory — something that has also been in active development over the last year.

It is also important to remember that mankind has been actively engineering the world around us since the dawn of civilization, albeit with more primitive tools. Using a mixture of breeding and mechanical tools, we have transformed teosinte into modern corn, created countless breeds of dogs and cats, and converted vast stretches of land, from lush forests to deserts, into modern farmland.

Yet, these amazing feats are not without consequence. Most products of our breeding techniques are unable to survive independently in nature, and countless species have become extinct as the result of our agricultural expansion and eco-engineering.

It is imperative that we approach gene drives differently, with increased consideration for the consequences of our actions on both the natural world and ourselves. Proponents of gene drive technology would like to initiate a new research paradigm centered on collective decision-making. As most members of the public will inevitably be affected by a gene drive release, it is only ethical to include the public throughout the research and decision-making process of gene drive development. Furthermore, by being transparent and inviting public criticism, researchers are able to crowd-source the “de-bugging” process, as well as minimize the risk of a gene drive release going awry.

We must come to terms with the reality that thousands of acres of habitat continue to be destroyed annually through a combination of chemical sprays, urban and agricultural expansion, and the introduction of invasive species, to name just a few causes. To improve upon this, I would like to echo the hopes of my mentor, Kevin Esvelt, for the use of “more science, and fewer bulldozers for environmental engineering” in hopes of creating a more sustainable co-existence between man and nature. The recent advancements in CRISPR gene drive technology represent an important step toward this hopeful future.

 

About the author: John Min is a Ph.D. candidate in the BBS program at Harvard Medical School, co-advised by Professor George Church and Professor Kevin Esvelt of the MIT Media Lab. He is currently working on creating a laboratory model for gene drive research.

 

References

Burt, A. (2003). Site-specific selfish genes as tools for the control and genetic engineering of natural populations. Proceedings of the Royal Society B, 270, 921-928.

Cong, L., Ann Ran, F., Cox, D., Lin, S., Barretto, R., Habib, N., . . . Zhang, F. (2013). Multiplex genome engineering using CRISPR/Cas systems. Science, 819-823.

DiCarlo, J. E., Chavez, A., Dietz, S. L., Esvelt, K. M., & Church, G. M. (2015). RNA-guided gene drives can efficiently and reversibly bias inheritance in wild yeast. bioRxiv preprint, DOI:10.1101/013896.

Esvelt, K. M., Smidler, A. L., Catteruccia, F., & Church, G. M. (2014). Concerning RNA-guided gene drives for the alteration of wild populations. eLife, 1-21.

Gantz, V. M., & Bier, E. (2015). The mutagenic chain reaction: A method for converting heterozygous to homozygous mutations. Science, 348, 442-444.

Gantz, V., Jasinskiene, N., Tatarenkova, O., Fazekas, A., Macias, V. M., Bier, E., & James, A. A. (2015). Highly efficient Cas9-mediated gene drive for population modification of the malaria vector mosquito Anopheles stephensi. PNAS, 112(49).

Hwang, W. Y., Fu, Y., Reyon, D., Maeder, M. L., Tsai, S. Q., Sander, J. D., . . . Joung, J. (2013). Efficient genome editing in zebrafish using a CRISPR-Cas system. Nature Biotechnology, 227-229.

Jinek, M., Chylinski, K., Fonfara, I., Hauer, M., Doudna, J. A., & Charpentier, E. (2012). A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity. Science, 816-821.

Serebrovskii, A. (1940). On the possibility of a new method for the control of insect pests. Zool. Zh.

Vanderplank, F. (1944). Experiments in crossbreeding tsetse flies, Glossina species. Nature, 144, 607-608.

 

 

X-risk News of the Week: Nuclear Winter and a Government Risk Report

X-risk = Existential Risk. The risk that we could accidentally (hopefully accidentally) wipe out all of humanity.
X-hope = Existential Hope. The hope that we will all flourish and live happily ever after.

The big news this week landed squarely in the x-risk end of the spectrum.

First up was a New York Times op-ed titled “Let’s End the Peril of a Nuclear Winter,” written by climate scientists Alan Robock and Owen Brian Toon. In it, they describe the horrors of nuclear winter — the frigid temperatures, the starvation, and the mass deaths — that could terrorize the entire world if even a small nuclear war broke out in one tiny corner of the globe.

Fear of nuclear winter was one of the driving forces that finally led leaders of Russia and the US to agree to reduce their nuclear arsenals, and concerns about nuclear war subsided once the Cold War ended. However, recently, leaders of both countries have sought to strengthen their arsenals, and the threat of a nuclear winter is growing again. While much of the world struggles to combat climate change, the biggest risk could actually be that of plummeting temperatures if a nuclear war were to break out.

In an email to FLI, Robock said:

“Nuclear weapons are the greatest threat that humans pose to humanity. The current nuclear arsenal can still produce nuclear winter, with temperatures in the summer plummeting below freezing and the entire world facing famine. Even a ‘small’ nuclear war, using less than 1% of the current arsenal, can produce starvation of a billion people. We have to solve this problem so that we have the luxury of addressing global warming.”

 

Also this week, Director of National Intelligence James Clapper presented the Worldwide Threat Assessment of the US Intelligence Community for 2016 to the Senate Armed Services Committee. The document is 33 pages of potential problems the government is most concerned about in the coming year, a few of which fall into the category of existential risks:

  1. The Internet of Things (IoT). Though this doesn’t technically pose an existential risk, it does have the potential to impact quality of life and some of the freedoms we typically take for granted. The report states: “In the future, intelligence services might use the IoT for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials.”
  2. Artificial Intelligence. Clapper’s concerns are broad in this field. He argues: “Implications of broader AI deployment include increased vulnerability to cyberattack, difficulty in ascertaining attribution, facilitation of advances in foreign weapon and intelligence systems, the risk of accidents and related liability issues, and unemployment. […] The increased reliance on AI for autonomous decision making is creating new vulnerabilities to cyberattacks and influence operations. […] AI systems are susceptible to a range of disruptive and deceptive tactics that might be difficult to anticipate or quickly understand. Efforts to mislead or compromise automated systems might create or enable further opportunities to disrupt or damage critical infrastructure or national security networks.”
  3. Nuclear. Under the category of Weapons of Mass Destruction (WMD), Clapper dedicated the most space to concerns about North Korea’s nuclear weapons. However, he also highlighted concerns about China’s work to modernize its nuclear weapons, and he argues that Russia violated the INF Treaty when it developed a ground-launched cruise missile.
  4. Genome Editing. Interestingly, gene editing was also listed in the WMD category. As Clapper explains, “Research in genome editing conducted by countries with different regulatory or ethical standards than those of Western countries probably increases the risk of the creation of potentially harmful biological agents or products.” Though he doesn’t explicitly refer to the CRISPR-Cas9 system, he does worry that the low cost and ease-of-use for new technologies will enable “deliberate or unintentional misuse” that could “lead to far reaching economic and national security implications.”

The report, though long, is an easy read, and it’s always worthwhile to understand what issues are motivating the government’s actions.

 

With our new series by Matt Scherer about the legal complications of some of the anticipated developments in AI and autonomous weapons, the big news should have been this week’s headlines claiming that the federal government now considers AI drivers to be real drivers. Scherer, however, argues this is bad journalism. He provides his interpretation of the NHTSA letter in his recent blog post, “No, the NHTSA did not declare that AIs are legal drivers.”

 

While the headlines of the last few days may have veered toward x-risk, this week also marks the start of the 30th annual Association for the Advancement of Artificial Intelligence (AAAI) Conference. For almost a week, AI researchers will convene in Phoenix to discuss their developments and breakthroughs, and on Saturday, FLI grantees will present some of their research at the AI Ethics and Society Workshop. This is expected to be an event full of hope and excitement about the future!