Nuclear Winter with Alan Robock and Brian Toon

The UN voted last week to begin negotiations on a global nuclear weapons ban, but for now, nuclear weapons still jeopardize the existence of almost all people on Earth.

I recently sat down with meteorologist Alan Robock from Rutgers University and physicist Brian Toon from the University of Colorado to discuss what is potentially the most devastating consequence of nuclear war: nuclear winter.

Toon and Robock have studied and modeled nuclear winter off and on for over 30 years, and they joined forces ten years ago to use newer climate models to look at the climate effects of a small nuclear war.

The following interview has been heavily edited, but you can listen to it in its entirety here or read the complete transcript here.

Ariel: How is it that you two started working together?

Toon: This was initiated by a reporter. At the time, Pakistan and India were having a conflict over Kashmir and threatening each other with nuclear weapons. A reporter wanted to know what effect this might have on the rest of the planet. I calculated the amount of smoke and found, “Wow, that was a lot of smoke!”

Alan had a great volcano model, so at the American Geophysical Union meeting that year, I tried to convince him to work on this problem. Alan was pretty skeptical.

Robock: I don’t remember being skeptical. I remember being very interested. I said, “How much smoke would there be?” Brian told me 5,000,000 tons of smoke, and I said, “That sounds like a lot!”

We put it into a NASA climate model and found it would be the largest climate change in recorded human history. The basic physics is very simple. If you block out the Sun, it gets cold and dark at the Earth’s surface.

We hypothesized that if each country used half of their nuclear arsenal, that would be 50 weapons on each side. We assumed the simplest bomb, which is the size dropped on Hiroshima and Nagasaki — a 15 kiloton bomb.

The answer is the global average temperature would go down by about 1.5 degrees Celsius. In the middle of continents, temperature drops would be larger and last for a decade or more.

We took models that calculate agricultural productivity and calculated how wheat, corn, soybean, and rice production would change. In the 5 years after this war, which would use less than 1% of the global arsenal on the other side of the world, global food production would go down by 20-40 percent, and for the next 5 years by 10-20 percent.

Ariel: Could you address criticisms of whether or not the smoke would loft that high or spread globally?

Toon: The only people that have been critical are Alan and I. The Departments of Energy and Defense, which should be investigating this problem, have done absolutely nothing. No one has done studies of fire propagation in big cities — no fire department is going to go put out a nuclear fire.

As far as the rising smoke, we’ve had people investigate that and they all find the same things: it goes into the upper atmosphere and then self-lofts. But, these should be investigated by a range of scientists with a range of experiences.

Robock: What are the properties of the smoke? We assume it would be small, single, black particles. That needs to be investigated. What would happen to the particles as they sit in the stratosphere? Would they react with other particles? Would they degrade? Would they grow? There are additional questions and unknowns.

Toon: Alan made lists of the important issues. And we have gone to every agency that we can think of, and said, “Don’t you think someone should study this?” Basically, everyone we tried so far has said, “Well, that’s not my job.”

Ariel: Do you think there’s a chance, then, that as we acquire more information, we’ll find that even smaller nuclear wars could pose similar risks? Or is 100 nuclear weapons the minimum?

Robock: First, it’s hard to imagine how once a nuclear war starts, it could be limited. Communications are destroyed, people panic — how would people even be able to rationally have a nuclear war and stop?

Second, we don’t know. When you get down to small numbers, it depends on what city, what time of year, the weather that day. And we don’t want to emphasize India and Pakistan – any two nuclear countries could do this.

Toon: The most common thing that happens when we give a talk is someone will stand up and say, “Oh, but a war would only involve one nuclear weapon.” But in the only nuclear war we’ve had, the one nuclear power, the United States, used every weapon that it had on civilian targets.

If you have 1000 weapons and you’re afraid your adversary is going to attack you with their 1000 weapons, you’re not likely to just bomb them with one weapon.

Robock: Let me make one other point. If the United States attacked Russia on a first strike and Russia did nothing, the climate change resulting from that could kill almost everybody in the United States. We’d all starve to death because of the climate response. People used to think of this as mutually assured destruction, but really it’s being a suicide bomber: it’s self-assured destruction.

Ariel: What scares you most regarding nuclear weapons?

Toon: Politicians’ ignorance of the implications of using nuclear weapons. Russia sees our efforts to keep Eastern European countries free as an attempt to move military forces near Russia, where [NATO] could quickly attack them. There’s a lack of communication, a lack of understanding of [the] threat, and of how people see different things in different ways. So Russians feel threatened when we don’t even mean to threaten them.

Robock: What scares me is an accident. There have been a number of cases where we came very close to having nuclear war. Crazy people or mistakes could result in a nuclear war. Some teenaged hacker could get into the systems. We’ve been lucky to have gone 71 years without a second nuclear war. The only way to prevent it is to get rid of the nuclear weapons.

Toon: We have all these countries with 100 weapons. All those countries can attack anybody on the Earth and destroy most of the country. This is ridiculous, to develop a world where everybody can destroy anybody else on the planet. That’s what we’re moving toward.

Ariel: Is there anything else you think the public needs to understand about nuclear weapons or nuclear winter?

Robock: I would think about all of the countries that don’t have nuclear weapons. How did they make that decision? What can we learn from them?

The world agreed to a ban on chemical weapons, biological weapons, cluster munitions, land mines — but there’s no ban on the worst weapon of mass destruction, nuclear weapons. The UN General Assembly just voted to begin negotiations next year on a treaty to ban nuclear weapons, which will be a first step towards reducing the arsenals and disarmament. But people have to get involved and demand it.

Toon: We’re not paying enough attention to nuclear weapons. The United States has invested hundreds of billions of dollars in building better nuclear weapons that we’re never going to use. Why don’t we invest that in schools or in public health or in infrastructure? Why invest it in worthless things we can’t use?

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Transcript: Nuclear Winter Podcast with Alan Robock and Brian Toon


Ariel: I’m Ariel Conn with the Future of Life Institute. I’m here with meteorologist Dr. Alan Robock and physicist Dr. Brian Toon, who have both become influential climate scientists and researchers in the field of nuclear winter. The basic idea behind nuclear winter is that a nuclear war would burn cities and industrial areas and the resulting firestorms would loft huge amounts of smoke high enough into the atmosphere to impact the weather for years. The smoke would block the Sun and cause temperatures to plummet. This could impact farming and the food supply globally, and potentially result in the deaths of millions and even billions of people worldwide, even in countries that had no involvement in the original war. Over the last couple of decades, nuclear winter has fallen off of the radar screen. However, in the last 10 years Drs. Robock and Toon have looked into what the climate response would be for just a small nuclear war between new nuclear states, such as India and Pakistan. And they’ve also revisited nuclear winter theory with modern climate models. First, thank you Professor Robock and Professor Toon for talking with me today.

Robock: Happy to be here…

Toon: My pleasure…

Ariel: Thank you. So, Professor Toon, you were one of the 5 authors on the first major paper about nuclear winter, which came out in 1983. But by that time nuclear weapons had been around for nearly 40 years. So I was wondering, what led to the realization that this could be a risk?

Toon: Well, people were concerned about the environmental impacts of nuclear weapons even at their origin. There were congressional studies and hearings about it in the 1950s, involving people like the head of the Weather Bureau at the time, Harry Wexler, and famous mathematicians like von Neumann. People kept investigating these possible problems. The National Academy of Sciences in the United States had studies of this periodically. For example, about 5 years before nuclear winter was discovered, they identified ozone loss as a concern from nitrogen oxides being put into the stratosphere. But what happened with nuclear winter is that our group had actually thought about the environmental effects and hadn’t really identified any clear ones. And then the Alvarez group at Berkeley discovered the asteroid impact that killed the dinosaurs, which made us rethink how particles in the atmosphere, smoke or dust, could cause an extinction. And so we began investigating what would happen from putting into the upper atmosphere all of the smoke that could be generated from fires set in cities and other places like that. So, that was really the genesis. Carl Sagan became involved a little later. His interest was very different. He was interested in what is called the Drake Equation, which is a way you compute the number of advanced civilizations in the galaxy. When he did that calculation he ended up with billions and billions of civilizations, which was a problem because they weren’t contacting us. So, he concluded that these billions and billions of civilizations must be destroying themselves early in their development by having nuclear wars.

Ariel: And, so, what was the initial public response to that work? And I guess to Sagan’s hypothesis as well?

Toon: Well, the initial response was very different depending on who you were. We worked for NASA at the time. NASA forbade us to talk about this and prevented us from giving a talk about it at a scientific meeting. The justification was that it hadn’t been thoroughly reviewed, which allowed Carl to organize a worldwide thorough review, distributing the information across the Earth, and I was very surprised at the time. I saw this as just another science problem, like the asteroids killing the dinosaurs, and I was surprised that there was so much interest in it. And I think there was so much interest partly because of the times. People were concerned about the Star Wars program and the Reagan administration, and even though he was opposed to nuclear weapons he was nevertheless making a big deal about them, and so people were very concerned about a nuclear war. There were 70,000 nuclear warheads at the time. And there were lots of people upset about the large number of weapons and the whole idea of Russia and the United States competing with one another to build more and more weapons that they didn’t need. So, there was a large international concern about it. There was immediately a National Academy study about it, which supported our findings. There hasn’t been one since then, despite the history of having them periodically. There was a World Meteorological Organization SCOPE report about it, which is an international combination of national academies looking at it, which also supported the results at the time. So, in general there was a lot of scientific support for it, but there was also a large attempt to suppress the information by various government agencies. Carl was a very good communicator, and so people understood what Carl had to say and became very concerned about this problem. I have to say I was equally surprised 10 years ago, when Alan and I started working on this problem again, that there was so little interest in it!
People seemed unconcerned about nuclear conflicts and had lost all their interest in investigating nuclear weapons’ effects on the environment, which had been an active issue for 30 or more years. In the 1980s the Department of Defense at least investigated it. Now they’re kind of stonewalling it and don’t want to even talk about it.

Ariel: So, I want to get back to the more recent research here in a minute, but first I wanted to ask you one more question about the history of the theory of nuclear winter. Can you explain a little bit about why the theory fell off the radar in the first place, sort of to the point where most people today aren’t even familiar with it? It was at least getting public attention in the eighties, but then it seems to have sort of dropped off.

Toon: Our group had a lot of other things to work on, like the Ozone Hole. I spent years working on the Ozone Hole. It was an environmental catastrophe about to happen. If we hadn’t done something about that, it could have been devastating by now to the planet. And so, we did what we could do with the models we had. NASA forbade us from working with other modeling groups. They forbade the NASA modeling groups from working on it. So, only a very small number of people were allowed to work on the problem. So we did as much as we could do, which, you know, I think at the time was quite a lot, and then we moved on to other problems, which was typical for us to do. As I said, every group that looked at this basically found that the fundamental parts of it that had been proposed were correct. There were some minor issues about the amount of smoke that would be produced; some people thought it would be a little bit less than we suggested, but then they found that the smoke was more absorbing. So, in bulk there were lots of suggestions for improvements to the theory, and some of them would go one way or the other, and in general they ended up pretty much as we said. And so, the only annoying thing that happened was that about the time we lost interest in it, Starley Thompson and Steve Schneider published a paper in Foreign Affairs, which is not a scientific journal and was not reviewed by scientists, in which they ran a climate model for 20 days or something like that and pronounced that it would be more like fall than winter. Now, how do you know the climate from a 20-day simulation, with a model that we would consider laughable today? There is a whole group of people that has been involved in scientific problems ever since the debate about whether cigarette smoking is bad for you.
The same people that said cigarette smoking is not a problem said the Ozone Hole is not a problem, said nuclear winter was not a problem, and say the greenhouse effect is not a problem. There is a whole group that makes a profession out of this. They picked up this one paper and said, “Oh, the whole thing is not a problem.” At the time we weren’t even paying any attention to Thompson and Schneider’s paper because we thought it was irrelevant and not very well done. Then the Soviet Union fell apart and people concluded that there wasn’t likely to be a nuclear war, and so people lost interest.

Ariel: OK. And then, Professor Robock, I want to move to you now. The earliest paper you have listed on your nuclear winter page is from 1984, and that was a paper published in Nature. I was hoping you could talk a little bit about how you got interested in the subject, and maybe tell us a little bit about what your early research was on.

Robock: I went to the fall American Geophysical Union meeting and I saw there was going to be a talk about nuclear weapons and climate change and I thought, “This looks pretty interesting!” I went to it and it was canceled! But I was still very interested in it, and it turned out that they changed it from nuclear war to nuclear winter and published a paper the next year. So, I had already been working on the effects of volcanic eruptions on climate, and I had a climate model called an energy balance model that could run for several years but simplified the atmospheric circulation. So I put in smoke instead of the volcanic particles, the sulfate aerosols, and found this long-term effect from nuclear war that would last for years. The next year there was a joint Soviet/American teleconference, a two-day meeting at the Shoreham Hotel in Washington, which I went to. And one of the great things about this story is that Russian scientists were doing the same calculations, Vladimir Aleksandrov and Georgiy Stenchikov. And their results, like mine, agreed with the original nuclear winter work that Brian and his colleagues did. And so it was presented to both governments as science that was agreed on, and this really helped influence Gorbachev and Reagan to end the nuclear arms race. And then I started doing some work on, well, this is all theory, climate models are just theory. Is there any way to test it? Most scientists have a laboratory. Our laboratory is the whole planet. We can’t actually have a nuclear war to see, so we were looking at parts of the theory that we could test, and one was forest fires and how much cooling you got under the smoke from forest fires. So, I did several papers studying forest fires that had been observed and how the temperature changed. And I got funding from the Defense Department, the Defense Nuclear Agency, for a couple of years to do this.
And then, after it turned out that all the work actually supported the theory, they stopped giving us money. And again I went on to other things. I went on to the Mount Pinatubo volcano, which erupted in 1991, and that was very interesting, and I studied soil moisture and other things. By the end of the 80s (in 1990 Brian’s group wrote a second paper that sort of summarized all of the results) it was real, and the nuclear arms race was over, the number of weapons was going down, and we went on to other things.

Ariel: And then, how is it that you two started working together? It looks like that was about 10 years ago?

Toon: This was sort of initiated by a reporter calling me up. At the time Pakistan and India were having a conflict over Kashmir and were threatening each other with nuclear weapons. And a reporter wanted to know what effect this might have on the rest of the planet. I thought if you were in India and Pakistan it would probably be pretty bad, but probably the rest of the planet wouldn’t be significantly affected. I didn’t know how many weapons they had or really anything about Pakistan and India, so I felt kind of guilty about saying that without investigating it. So, I spent a year or two of my free time looking into how many weapons they had and thinking about it, and discovered they had a lot of weapons. And I calculated the amount of smoke that might be generated in a simple way and found out, “Well, wow, that was a lot of smoke.” I did a simple energy balance calculation and decided that a huge radiative forcing would result from that. I contacted Rich Turco, who had been working on this for a long time, and we went over it and computed the amount of smoke, and then I thought, “I don’t want to publish this without a real climate model.” We have real climate models now, and I didn’t want to do any back-of-the-envelope things. So, I said, “Well, who can do a good job on this? Alan’s got a really great volcano model. He could do this problem.” And so at the meeting of the American Geophysical Union that year, in December, I don’t remember exactly which year it was, Rich and I went up and talked with Alan and tried to convince him to work on this problem. And Alan was pretty skeptical: “Nothing is going to happen, I’m going to waste my time.” But he did some calculations and became interested because he actually discovered there was a large impact there. Alan probably has his own view of what happened then.

Robock: I don’t remember being skeptical. I remember being very interested. I said, “How much smoke would there be?” And Brian told me 5,000,000 tons of smoke. And I said, “That sounds like a lot!” And so, we went back and put it into our climate model, which was a NASA climate model, by the way, from the Goddard Institute for Space Studies, which we had been using for volcanoes. And we found it wouldn’t be nuclear winter. The temperatures wouldn’t get below freezing in the summertime. But it would be the largest climate change in recorded human history. It would be colder than the Little Ice Age. We were really excited about this. So, we wrote two papers, one about the smoke and one about the climate response, and sent them to Science, which had published the results before, Brian’s results and my work on smoke from forest fires, and they said they weren’t interested. We were really shocked that they weren’t interested. They had been very interested before. And they said, “But, you know, if you get it published somewhere else, we’ll consider a perspectives paper on what it all means.” “OK, thanks very much. Let’s try Nature. They’re the other high-profile science journal.” And they said, “Two papers is too many. Can you combine it into one paper and we’ll send it out for review?” So, we combined it into one paper and sent it out for review, and the review said it was too long. It was very frustrating, because these journals are high profile and they are supposed to do things fast, and this also took quite a long time. And by then our first calculations had been with a climate model that had a very simple ocean, which cut the compute time about in half. We realized, though, that with a full ocean it would be more realistic, so by then we had the time to go back and re-run them with a full ocean.
And by the way, this work was done with Luke Oman, who was my graduate student, and with Georgiy Stenchikov, who was now working with me at Rutgers, one of the original Russian scientists working on this. And we eventually got them published in the European journal Atmospheric Chemistry and Physics. And then we went to Science and said, “OK, can we write the perspectives paper now?” And they said, “What are you talking about?” So, I looked up the email and said, “You said you’d take a paper on this.” “Oh yeah, OK.” And so we did publish a paper in Science, a one-page paper about what it all meant. But, as Brian said, nobody paid any attention. And it was kind of frustrating; nobody cared. There was no controversy about it. No papers were published that tried to prove that there was something wrong with our model, that we did something wrong. The basic physics is very simple. If you block out the Sun, it gets cold and dark at the Earth’s surface, and dry. And the basic physics is very simple: if you heat the stratosphere, it will destroy ozone and you will get more ultraviolet radiation. So we were pretty confident in those results, and since then we’ve had two other modeling groups who have been interested and done the same calculation with different climate models and gotten basically the same result. Every climate model is imperfect, and so it is good to check it with different models, and that’s what we did.
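The “simple physics” Robock describes can be sketched with a zero-dimensional energy-balance calculation. This is only a back-of-the-envelope illustration, not the 3-D NASA model the researchers actually used, and the 2% dimming figure here is an assumption chosen for the example, not a number from their study:

```python
# Zero-dimensional energy balance: the planet's equilibrium emission
# temperature satisfies S(1 - albedo)/4 = sigma * T^4, so
# T = [S(1 - albedo) / (4 * sigma)]^(1/4).
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo (fraction of sunlight reflected)

def emission_temp(solar_constant):
    """Equilibrium emission temperature in kelvin."""
    return (solar_constant * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

baseline = emission_temp(S0)          # about 255 K
dimmed = emission_temp(0.98 * S0)     # smoke blocks 2% of sunlight (assumed)
print(f"cooling from 2% dimming: {baseline - dimmed:.2f} K")
```

Blocking even a couple of percent of sunlight cools this toy planet by over a degree, the same order of magnitude as the roughly 1.5 °C global-average drop the interviewees found; a real model also has to handle the greenhouse effect, ocean heat storage, and circulation, which is why full climate models are needed.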

Ariel: And can you guys tell me a little about what the results were? If India and Pakistan were to have a nuclear war, what would happen?

Robock: We took a scenario in which each country used half of its nuclear arsenal; that would be 50 weapons on each side. And we didn’t know much about their weapons, it’s all secret. So we said, “Let’s assume they’re the simplest bomb you can put together, which is the size that was dropped on Hiroshima and Nagasaki, about 15,000 tons of TNT equivalent, a 15 kiloton bomb.” So, Brian’s group did this. They looked at how much area would burn based on what happened in Hiroshima, extrapolated that to modern mega-cities, and calculated how much smoke would go up into the atmosphere. The total was 6,500,000 tons of smoke, but we basically did 85 cities, with 5,000,000 tons of smoke going into the upper atmosphere. And we put it into the model, and the model would allow the atmosphere to be heated; the smoke would be lofted up and blown around the world in the stratosphere, above where we have weather, and so it would last for years and wouldn’t be washed out. And the answer is the global average temperature would go down by about one and a half degrees Celsius, about two and a half degrees Fahrenheit, but that’s averaged over the whole world. In the middle of continents, temperature drops would be larger, and it would last for a decade or more. And so, OK, it gets cold and the growing season gets a little bit shorter, because you have a frost later in the spring and earlier in the fall, but what does that mean? And so, the next step was we took models that calculate agricultural productivity, in the United States, in China, in the biggest grain-growing regions of the world, and we calculated how wheat, corn, soybean, and rice production would change under colder temperatures, less precipitation, less sunlight, and shorter growing seasons.
And we found that in the 5 years after this tiny little war, using less than 1% of the global arsenal on the other side of the world, global food production would go down by 20 to 40 percent, and for the next 5 years by 10 to 20 percent. So, that means there would be huge stress on countries that import food, and even on countries that grow food. China had hundreds of millions fewer people back when its production of rice was that low. And the next step, of course, is to look at global economic and world food trade models and see how the price of food would change, and make hypotheses about whether people would hoard the food or try to make money by selling it, and how many people would really lose access to food, and how much famine would increase. That’s the next step that we want to do, and we really haven’t done that in a very systematic way. There has been one paper, it wasn’t even a scientific paper, that sort of waved its hands and said 1-2 billion people might starve because of famine, but that really needs to be done much more scientifically.

Ariel: And so that estimate of 1-2 billion, that’s based just off the small war between India and Pakistan?

Robock: Yeah, and then there was a controversy, as Brian said. In the 1980s, we didn’t have very big computers. The biggest computer was the Cray computer at the National Center for Atmospheric Research, where they did 20-day simulations of nuclear war with a model that didn’t include the ocean or the upper atmosphere. The Russian model was much simpler. It was like a PC. It broke the world up into huge boxes. So we said, “Let’s go back with a modern climate model and test: was nuclear winter theory correct?” And we did it, and we found it was definitely correct. The results that were done on the simple models were basically the right answer: you got temperatures below freezing in the grain-growing regions of the world for a couple of years from the amount of smoke that would go in from a war between the US and Russia. Another interesting thing was that in the 1980s our basic scenario was we took a third of the Russian arsenal and a third of the US arsenal and put it on all of the targets in both countries. Now, the number of nuclear weapons is much lower. The New START agreement requires that the US and Russia each get down to about 2,000 weapons. And so we said, “What about a war between the US and Russia today? OK, let’s target those weapons,” and it turns out you still get the same amount of smoke, 150,000,000 tons of smoke. Why? Because in the scenario before, they put one bomb on every possible target they could blow up, and that was only a third of their arsenal then; they still had this huge pile of weapons. So, let’s put two on each one just in case the first one doesn’t work. Well, there are still 9 bombs for every target. Talk about overkill! And we still have one bomb for every target, so we can produce just as much smoke as we did back then. So, nuclear winter is still possible today. That’s what’s really scary, and that’s what people aren’t scared about.

Ariel: So, I have a couple questions I want to go toward, but I think first, you mentioned that if a war took place between India and Pakistan, the weapons used would be about the size of the bombs dropped on Japan. And you also say that 5,000,000 tons of smoke could be all it takes to start a nuclear winter. But I guess I don’t have an idea of how much 5,000,000 tons of smoke is. How does that compare to the smoke from the fires after the Nagasaki and Hiroshima bombs? Or how does it compare to major historic fires, like the one in San Francisco in 1906?

Toon: Well, there are these cities, for example San Francisco. Most people don’t realize that after the earthquake in San Francisco a firestorm developed. There’s a beautiful story written about this by Jack London, describing the gigantic plumes of smoke and the roaring wind blowing into the city. It was the largest maritime evacuation in American history: they sent boats from Oakland to go get the people out of the marina district in San Francisco, away from the fires. But that’s probably like 1 percent of what you would get from a war between India and Pakistan. So the effect of one city burning like that is too small to be noticeable. And of course, when you’re in the middle of a war like the Second World War, people weren’t making measurements. So, we just didn’t get any data from the numerous firestorms that occurred then, because people were busy fighting a war. So, we can’t really tell much from these events, except that a mass fire is terrifying, and there are examples from Dresden, from Vonnegut’s book, which describes the horrors of being in a firestorm. The fires were so intense that they even killed people who were in underground bunkers, and burned up coal that was being stored underground for the winter; they burned the windows and cars and stuff like that. These were incredibly intense fires, obviously very terrifying to experience in the real world.

Robock: The cities today have a lot more material in them that can burn, so for the same-sized bomb you’d probably get a lot more smoke than you would have in the past. There were cities that were firebombed: Tokyo, Dresden, Hamburg, Darmstadt, and a number of smaller cities in Japan too. But there weren’t as many cities that burned then as in this scenario. The closest thing in terms of climate impact would be either an asteroid impact or a large super-volcanic eruption that would put so much material up in the atmosphere that it would basically block out the Sun, and it would get very cold at the Earth’s surface. And we have evidence that there were extinctions and huge disruptions when that happened. The main difference is that a nuclear war would be something that we would do to ourselves, not something beyond our control coming from underground or from outer space.

Toon: Another way to think about the amount of material is that a volcano like Mt. Pinatubo put something like 10 or 20 million tons of sulfuric acid into the upper atmosphere. So that’s 2 to 4 times more than we’re talking about in a simple war. It just happens that sulfuric acid is transparent, like water, so it didn’t do that much to the climate, though it did observably change it. The amount of smoke we’re talking about is in the range of natural phenomena like volcanoes. It’s just that smoke is a much worse material to put into the upper atmosphere, because it absorbs the sunlight.

Robock: It’s black, so it absorbs sunlight and doesn’t let it get transmitted through. It also lasts a lot longer. The sulfate aerosols from volcanic eruptions start falling out immediately: after a year only about a third is left, and after two years it’s basically all gone. But the smoke gets heated by the sunlight and lofted up into the high atmosphere, where it lasts for 10 or 20 years. A third of it is still left after seven years instead of after one year. So it lasts about seven times longer than the particles from volcanic eruptions, and each particle is much more effective at blocking sunlight.
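These residence times correspond to simple first-order (exponential) removal. As a rough illustration only, assuming e-folding lifetimes of about 1 year for volcanic sulfate and about 7 years for self-lofted smoke (the lifetimes quoted in the interview, not the output of an actual climate model), the fractions remaining can be sketched in a few lines:

```python
import math

def fraction_remaining(t_years, e_folding_years):
    """Fraction of aerosol left after t_years, assuming simple
    first-order removal: N(t) = N0 * exp(-t / tau)."""
    return math.exp(-t_years / e_folding_years)

SULFATE_TAU = 1.0  # volcanic sulfate, ~1-year e-folding time (assumed)
SMOKE_TAU = 7.0    # lofted stratospheric smoke, ~7-year e-folding time (assumed)

for t in (1, 2, 7, 14):
    print(f"year {t:2d}: sulfate {fraction_remaining(t, SULFATE_TAU):6.1%} left, "
          f"smoke {fraction_remaining(t, SMOKE_TAU):6.1%} left")
```

Under these assumptions, about 37 percent (roughly a third) of the sulfate remains after one year and about 13 percent after two, while the smoke takes seven years to fall to the same third, matching the lifetimes described above.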

Ariel: So, this is one of the areas where I’ve heard there is still some uncertainty and disagreement, and I was wondering if you could address some of the criticisms of whether the smoke would loft that high or spread globally.

Toon: There is a whole series of issues here, and the only people who have been critical are Alan and I. So we have identified where we think more work needs to be done. The Departments of Energy and Defense, which should be investigating this problem, have done absolutely nothing. They have not published a single paper in the open literature analyzing this problem. We’ve tried to get European scientists involved. Some of them have done climate simulations with results similar to Alan’s, but none of them have gone back and investigated these issues of how much fuel there is in a modern city and how much would burn. There are things you would never imagine, like in Dubai, where a couple of their gigantic skyscrapers recently caught fire because the cladding on the outside of the buildings contains flammable foam. Someone apparently lit one up with a cigarette; there are videos on the web where you can see these incredible fires burning in these large buildings. There are all kinds of new building materials involving foams and plastics, and the insides of buildings are full of books, plastic furniture, and flammable materials. There is no information on how much fuel is in these cities; that needs to be investigated. No one has done studies of fire propagation in big cities, and it’s very complicated. You light fires all over the place, and no fire department is going to put out a nuclear fire, because there will be hundreds or thousands of them and all of these buildings will be catching on fire. There will be small fires that build and grow and can’t be put out. Of course, the fire departments will all have been destroyed by the blast. So no one has investigated how these fires would grow. As far as the rising smoke, we’ve had people investigate that with a series of models at different scales, and they all find the same thing: it goes into the upper atmosphere and then self-lofts.
We’ve had people look at experimental data from large forest fires. They see evidence of smoke going into the upper troposphere and lower stratosphere and self-lofting. But these are things that should be investigated by a range of scientists with a range of backgrounds, and that hasn’t happened.

Robock: Another issue is: what are the properties of the smoke? We assume it would be small, individual black particles and that nothing would happen to their surfaces. Theory tells us the particles might grow and stick to each other, that chemistry might happen on their surfaces, and that they might be coated by things that make them slightly less absorbent. So that also needs to be investigated. There is one paper that just came out where a group in Sweden started to look at that with a very simple model simulation, adding what’s called organic carbon, which is not black carbon; it’s a browner carbon, with different chemicals, that also comes out of fires. So that also needs to be looked at. What would happen to the particles as they sit in the stratosphere for a couple of years? Would they react with other particles? Would they degrade? Or would they grow? Those are details which, again, we’re trying to get money to research. We realize that what we’ve done is the best we can do right now, but there are additional questions, and unknowns by definition are unknown. We might discover things that make the effects worse rather than better; probably we’ll discover both. There were criticisms in the 1980s: “Oh, you haven’t considered everything. Once we figure it out the problem is going to go away.” It turned out that some things made the effects smaller and some made them bigger, and that’s the nature of science. But there are still important questions to look at. We’re just kind of frustrated. The Department of Homeland Security really should fund this. They’ll fund you to study one terrorist bomb in New York City, but when you explain to them, “Well, a war between India and Pakistan is a much greater threat to the US homeland than one terrorist bomb, as horrible as that is,” it’s “Oh, well, that’s not my job. Go talk to some other program manager.” Who of course doesn’t exist.

Toon: Alan made lists of where we think the important issues are, and we have gone to every agency we can think of with these lists and said, “Don’t you think someone should study this?” Basically, everyone we’ve tried so far has said, “Well, that’s not my job.”

Ariel: Do you think there’s a chance, then, that as we acquire more information about how cities would burn, what particulates would be lofted, and how smoke would rise, even smaller nuclear wars could pose similar risks? Or do you think that 100 nuclear weapons is the minimum?

Robock: There are a couple of things to say about that. First of all, it’s hard to imagine how, once a nuclear war starts, it could be limited. Communications are destroyed, people panic, and it’s not clear how people could rationally fight a nuclear war and then stop. Second, we don’t know; it’s possible that one bomb could trigger a firestorm that propagates through a major city. We just assumed that the area that burned in Hiroshima would be the area that would burn, but it could be a much larger area depending on the weather that day. People have asked me: how few nuclear weapons can you use without destroying the world’s climate? I really don’t want to answer that question; I don’t want to make it seem like, OK, 17 is cool, 18 is going to be really bad. When you get down to such small numbers, it really depends on what city, what time of year, and what the weather was that day, so we want to look at scenarios over a huge range and let people decide whether they want to take a chance on that much damage. We don’t know the answer to that question. But it could be a much smaller number of weapons, especially if the weapons are bigger. The US arsenal has much bigger weapons, 100-kiloton and 500-kiloton weapons, not 15-kiloton weapons, and one of those would burn a much larger area. One Trident nuclear submarine has the firepower of 1,000 Hiroshimas. We’ve got 14 of them, and that’s only half of our arsenal, and Russia’s arsenal is the same size. So we have so much overkill. And we don’t want to emphasize India and Pakistan; any two nuclear countries could do this. Israel, China, maybe. India got their nuclear weapons because they were worried about China, and then Pakistan got theirs because they were worried about India. Iran wants to get theirs because they’re worried about the terrible nuclear superpowers in the Mideast, which are the United States and Israel. And so that’s how proliferation takes place.
The only nuclear country that probably couldn’t destroy the world’s climate is North Korea, because apparently they only have about 10 weapons. But every other one could, and it doesn’t have to be an India-Pakistan scenario, although that is one of the scariest, because every week you read about people getting killed on the Kashmiri border, shooting back and forth, and that could really escalate.

Toon: The most common thing that happens when we give a talk about this is that someone will stand up and say, “Oh, but a war would only involve one nuclear weapon.” Somehow they convince themselves that only one weapon would be used. But you have no idea what would happen in a war like that. In the only nuclear war we’ve had, the nuclear power, the United States, used every weapon it had on civilian targets. People want to stand up and say, “Oh, only one bomb would be used, and they’d go blow up a desert somewhere.” Sure, maybe that could happen, but it’s human nature to use every weapon you’ve got. So to me the greatest uncertainty here really is the scenario of a nuclear war: how many weapons would be used? What would the targets be? For example, Alan and I have had discussions with the Swiss, who are trying to do a model, and someone said, “Well, half of the weapons are going to be dropped on rural areas.”

Well, who’s going to bomb a rural area? People want to diminish this by claiming that people are going to bomb targets where there’s no fuel. OK, we don’t know what the scenario is; that’s a great uncertainty. But if you have 1,000 weapons and you’re afraid your adversary is going to attack you with their 1,000 weapons, you’re not very likely to bomb them with just one. The scenario is simply unknown and unpredictable.

Robock: Let me just make one other point. It turns out that if the United States attacked Russia in a first strike and Russia did nothing, the climate change resulting from that would kill everybody in the United States. We’d all starve to death because of the climate response. People used to think of this as mutually assured destruction: if you attack one country, they’ll attack you back. But really it’s being a suicide bomber; it’s self-assured destruction. Even if you attack the other country and they do nothing in response, you’re going to kill yourself. So threatening to use nuclear weapons is acting like a suicide bomber; it just makes no sense at all. And as far as we know, most people in the defense establishment aren’t even aware of this. That’s what’s really scary. They just think of these as bigger bombs, like the ones used before.

Ariel: I’m glad you mentioned self-assured destruction. That was something else I wanted to bring up. We talked about a smaller nuclear war, but what if a larger nuclear war were to occur, say between the US and Russia, or if the US or Russia decided to intervene with their own nuclear weapons in a smaller nuclear war? What are the predictions right now about how nuclear winter would look if a significantly larger number of nuclear weapons were dropped?

Toon: Well, I think the agricultural effects there are very simple. As Alan said, it’s going to be below freezing in the major grain-growing areas of the world for a couple of years. There will be absolutely no food grown at middle latitudes. In the tropics there might still be some food, but the tropics would suffer a big loss of rainfall. When it gets cold, it stops raining, so you’re going to have failures of monsoons in Southeast Asia, and parts of South America are going to have no rain. There’s going to be a worldwide, nearly total loss of food…
Robock: …and you’re going to have enhanced ultraviolet radiation from the ozone destruction, so in the tropics it’s going to be sunburn for everybody unless you stay under cover. Carl [Sagan] used to talk about extinction of the human species, but I think that was an exaggeration; it’s hard to think of a scenario that would produce that. The Southern Hemisphere is a nuclear-free zone, so presumably no bombs would be dropped there. If you lived in New Zealand, where there wouldn’t be that much temperature change because you’re surrounded by ocean, and there are lots of fish and dead sheep around, then you probably would survive. But you wouldn’t have any modern medicine; you’d be back to caveman days. You wouldn’t have any civilization. It’s a horrible thing to contemplate, but we probably couldn’t make ourselves extinct that way.

Toon: People did look at this in the 1980s and concluded that if you couldn’t grow any food for a year or so, and there was no way to import food, which there wouldn’t be if nobody else was growing food, the human population would probably come down to a few hundred million people, because that’s what primitive agriculture could sustain. No one has looked at this in a long time, but at that time they asked: what will happen to New Zealand? And in New Zealand, they found out they didn’t have anybody who knew how to repair their generators, so they’d lose their electrical power supply, and they wouldn’t have fuel being imported for transport and cars and things like that. They wouldn’t be able to run tractors. So yes, they would be able to survive, but it’d be people down at the beach with fishing poles, because their boats wouldn’t have any fuel either…

Robock: No modern medicine, no medicines imported…

Toon: You’d be out growing stuff in your backyard, and what you can grow you eat. And I’ve tried that. I can’t grow enough stuff to feed myself for very long.

Robock: And you need enough weapons to keep your starving neighbors away. It wouldn’t be pleasant.

Ariel: So taking all of that into account, and given the risks in the political climate, what scares you both most right now regarding nuclear weapons?

Toon: I think what scares me the most is politicians’ ignorance of the implications of using nuclear weapons. We’ve got one of the candidates for President of the United States asking why he can’t drop nuclear weapons on Syria, asking why he can’t use nuclear weapons at all, and suggesting that Japan and South Korea develop nuclear weapons. And then you’ve got the even worse situation in Russia, where Putin is suggesting that people locate bomb shelters, where he’s moved nuclear-weapon-capable missiles into the Baltic, and where some of his surrogates are claiming that nuclear war between the United States and Russia is probable in the next few years. This is all probably political noise. But nevertheless, they are telling their citizens in Russia to be prepared for nuclear conflict, because they’re afraid of the US. From the US point of view, Turkey came near revolution, so we took the weapons stored there and moved them to Romania. We see that as moving nuclear weapons out of a dangerous place; Russia sees it as putting nuclear weapons close to them where we can use them. They’re also concerned about our anti-missile missiles, which we claim are there to protect Europe from missiles from Iran; Russia sees those as short-range weapons close to their border which could carry nuclear warheads and attack Russia with little warning. And as NATO has moved forward, trying to keep Eastern European countries free from invasion by Russia, Russia sees that as an attempt to move military forces near Russia where they could quickly attack. So they feel threatened, and in the West we’re not even aware they feel that way.
So there’s a lot of lack of communication going on, a lack of understanding of how each side perceives threats and how people see the same things in different ways. Russians feel threatened when we don’t even mean to threaten them. This is very dangerous when leaders are talking about nuclear conflict. They obviously don’t understand the devastation that would occur from even a limited war.

Robock: In addition to that, what scares me is an accident. There have been a number of cases where we almost had nuclear war: the Russians thought a Norwegian sounding rocket was an attack, or a test program was loaded into a computer that made it look like the US was under attack when it was really a drill. And the Cuban Missile Crisis. There have been a number of cases where we came very close to having nuclear war. And it’s not true that the suitcase a military aide carries around with the President is the only way to start a nuclear war. There are plans for many military commanders to launch nuclear weapons. What if Washington is destroyed? Are they then forbidden from using nuclear weapons? They have plans. So crazy people, or mistakes, could result in a nuclear war. Some teenage hacker could get into the systems. I think we’ve been very lucky to have gone 71 years without a second nuclear war. And the only way to prevent it is to get rid of the nuclear weapons. Why do all of the other nuclear countries, except the US and Russia, have so few weapons? China, Britain, and France have a couple hundred weapons each; that’s more than enough, they feel, for their defense. Israel has maybe 100-200, and Pakistan and India have about 100 each, although Pakistan is building more. But the US and Russia still have thousands. What if we each got down to 200 immediately? That would really lessen the impact on climate if they were ever used, and as I said before, using them is irrational, but at least it would be a first step toward getting rid of them. So there are still way too many in the hands of people who could use them, either intentionally – if some crazy person got in charge; can you imagine that happening? – or by accident.

Toon: People don’t realize how many cities there are. In the 1950s, for example, the US Department of Defense concluded that it could destroy the Soviet Union – half of all its industry and cities – if it could deliver 100 Hiroshima-sized nuclear weapons, and it wanted 200 because it thought half would fail. That’s just counting cities. Right now Russia has fewer than 300 cities with 100,000 people in them. If you had 300 nuclear weapons, you could attack every city in Russia with 100,000 people in it, and those are not very big cities. The United States has a few more than 300. India, as it continues to build up its arsenal – right now it probably has more than 100 weapons – could destroy any country in the world with those 100 weapons. So not only do you not need many weapons, but we’re moving to a situation where we have all these countries with 100 weapons, and India is launching rockets to Mars and to the Moon. They’re going to be able to attack any country in the world and destroy a significant fraction of its cities. So not only do Russia and the United States have more weapons than they need to destroy any country’s cities, but even smaller countries are moving into that situation – France, Britain, India and Pakistan, pretty soon…

Robock: …China, don’t forget China…

Toon: You know, all those countries can attack anybody on the Earth and destroy most of the country. This is ridiculous, to develop a world where everybody can destroy anybody else on the planet. And that’s what we’re moving toward.

Ariel: All right, I’m almost hesitant to ask, but given all of that, is there anything else you think the public needs to understand about nuclear weapons and their risks, or about nuclear winter, that didn’t get covered?

Robock: I would think about all of the countries that don’t have nuclear weapons. How did they make that decision? How to make a nuclear weapon is not a secret anymore; all you need is highly enriched uranium or plutonium, and there are 40 to 50 countries that have that material but have chosen not to make nuclear weapons. Some of them feel protected by treaties like NATO, by countries with weapons, but others don’t, and they’ve decided they’re safer without nuclear weapons. Most of the world is like this: the whole Southern Hemisphere, all of Latin America, has no nuclear weapons. South Africa built nuclear weapons, with the help of Israel, and then gave up its nuclear program. Brazil and Argentina worked on them and decided they’d be better off without them. So what can we learn from them? I think the only way we can go forward is to get rid of our weapons. The world has agreed to a ban on chemical weapons; there’s a ban on biological weapons, a ban on cluster munitions, and a ban on land mines; but there’s no ban on the worst weapon of mass destruction of all, nuclear weapons. The United Nations, partly as a result of our work, had an Open-ended Working Group this year where most of the countries of the world demanded a ban on nuclear weapons. Next week, for the first time, the United Nations General Assembly is going to vote on beginning negotiations next year toward a treaty to ban nuclear weapons. The nuclear powers are voting against it, but most of the world is going to vote for it, it’s going to pass, and there’s going to be a ban on nuclear weapons next year, which will be a first step toward reducing the arsenals and toward disarmament.
So there is some forward movement on this, but people have to get involved and demand it; that was part of the reason the nuclear arms race ended in the 1980s, yet people are now more concerned with other things. We’ve tried to get the attention of President Obama; we want him to justify his Nobel Peace Prize. We’ve talked to his science advisor, who knows about our work, and yet there’s no evidence that this information got to him or to the military. There’s still a need to publicize this, for people to organize and demand that we get rid of these things so that our civilization isn’t destroyed, as may have happened in many other places around the galaxy.

Ariel: Professor Toon, did you want to add anything else?

Toon: No, I think Alan is correct that people have so many things to worry about at the moment that it’s a kind of overload; people can’t deal with any more information about catastrophes. We hear about every disaster in the world; five people die in some African country and everyone hears about it. People are overwhelmed with all these issues about diseases and riots and terrorists and various other things. Nevertheless, we have to pay attention to the world, and we’re not paying enough attention to nuclear weapons. They’re one of the biggest threats facing the human race, and we need to do something about that, because there’s just too much potential for accidents, or for politicians who don’t know what they’re doing, to involve us in a nuclear conflict that could destroy the civilization we’ve built over thousands of years. We have to deal with this problem, and we need to deal with it now, before even more countries become nuclear weapon states. We’re headed in a bad direction at the moment, partly because politicians are ill-informed and the populace is not trying to do something about these problems, so we need people to pay attention and tell their politicians that this is not the way to go. We need to go the other direction and make it clear that these weapons are unusable, that politicians cannot use them, and that it’s a waste of money to have them. The United States has invested hundreds of billions of dollars in building better nuclear weapons that we’re never going to use. Why don’t we invest that in schools? In public health? In infrastructure? Why do we want to invest it in worthless things we can’t use? What’s the point of that?

Robock: It’s really hard to get people’s attention, though; it’s a depressing topic. When I’m asked to give a talk somewhere, I say I want to talk about this, and it’s “Well, could you maybe talk about something else?” As Mark Twain said, denial ain’t just a river in Egypt. It’s really hard to listen to this; it hurts, so people pretend the problem doesn’t exist and assume somebody else will take care of it. I work a lot on global warming, which is a real problem that’s threatening us, but much more slowly than this: it’s gradual climate change, not instant climate change. We’ve got to solve this nuclear problem so that we have the luxury of worrying about global warming.

Ariel: Alright, well Professor Robock and Professor Toon, thank you so much for talking with us today.

Toon: You’re welcome.

Robock: You’re welcome, I hope people listen.



The Historic UN Vote On Banning Nuclear Weapons

By Joe Cirincione

History was made at the United Nations today. For the first time in its 71 years, the global body voted to begin negotiations on a treaty to ban nuclear weapons.

Eight nations with nuclear arms (the United States, Russia, China, France, the United Kingdom, India, Pakistan, and Israel) opposed or abstained from the resolution, while North Korea voted yes. Still, with a vote of 123 for, 38 against, and 16 abstaining, the General Assembly’s First Committee decided “to convene in 2017 a United Nations conference to negotiate a legally binding instrument to prohibit nuclear weapons, leading towards their total elimination.”

The resolution effort, led by Mexico, Austria, Brazil, Ireland, Nigeria, and South Africa, was joined by scores of other nations.

“There comes a time when choices have to be made and this is one of those times,” said Helena Nolan, Ireland’s director of Disarmament and Non-Proliferation. “Given the clear risks associated with the continued existence of nuclear weapons, this is now a choice between responsibility and irresponsibility. Governance requires accountability and governance requires leadership.”

The Obama Administration was in fierce opposition. It lobbied all nations, particularly its allies, to vote no. “How can a state that relies on nuclear weapons for its security possibly join a negotiation meant to stigmatize and eliminate them?” argued Ambassador Robert Wood, the U.S. special representative to the UN Conference on Disarmament in Geneva. “The ban treaty runs the risk of undermining regional security.”

The U.S. opposition is a profound mistake. Ambassador Wood is a career foreign service officer and a good man who has worked hard for our country. But this position is indefensible.

Every president since Harry Truman has sought the elimination of nuclear weapons. Ronald Reagan famously said in his 1984 State of the Union:

“A nuclear war cannot be won and must never be fought. The only value in our two nations possessing nuclear weapons is to make sure they will never be used. But then would it not be better to do away with them entirely?”

In case there was any doubt as to his intentions, he affirmed in his second inaugural address that, “We seek the total elimination one day of nuclear weapons from the face of the Earth.”

President Barack Obama himself stigmatized these weapons, most recently in his speech in Hiroshima this May:

“The memory of the morning of Aug. 6, 1945, must never fade. That memory allows us to fight complacency. It fuels our moral imagination. It allows us to change,” he said, “We may not be able to eliminate man’s capacity to do evil, so nations and the alliances that we form must possess the means to defend ourselves. But among those nations like my own that hold nuclear stockpiles, we must have the courage to escape the logic of fear and pursue a world without them.”

The idea of a treaty to ban nuclear weapons is inspired by similar, successful treaties to ban biological weapons, chemical weapons, and landmines. All started with grave doubts. Many in the United States opposed these treaties. But when President Richard Nixon began the process to ban biological weapons and President George H.W. Bush began talks to ban chemical weapons, other nations rallied to their leadership. These agreements have not yet entirely eliminated these deadly arsenals (indeed, the United States is still not a party to the landmine treaty), but they stigmatized them, hugely increased the taboo against their use or possession, and convinced the majority of countries to destroy their stockpiles.

I am engaged in real, honest debates among nuclear security experts on the pros and cons of this ban treaty. Does it really matter if a hundred-plus countries sign a treaty to ban nuclear weapons but none of the countries with nuclear weapons join? Will this be a serious distraction from the hard work of stopping new, dangerous weapons systems, cutting nuclear budgets, or ratifying the nuclear test ban treaty?

The ban treaty idea did not originate in the United States, nor was it championed by many U.S. groups, nor is it within U.S. power to control the process. Indeed, this last point seems to be one of the major reasons the administration opposes the talks.

But this movement is gaining strength. Two years ago, I covered the last of the three conferences held on the humanitarian impact of nuclear weapons for Defense One. Whatever experts and officials thought about the goals of the effort, I said, “the Vienna conference signals the maturing of a new, significant current in the nuclear policy debate. Government policy makers would be wise to take this new factor into account.”

What began as sincere concerns about the horrendous humanitarian consequences of using nuclear weapons has now become a diplomatic process driving towards a new global accord. It is fueled less by ideology than by fear.

The movement reflects widespread fears that the world is moving closer to a nuclear catastrophe — and that the nuclear-armed powers are not serious about reducing these risks or their arsenals. If anything, these states are increasing the danger by pouring hundreds of billions of dollars into new Cold War nuclear weapons programs.

Fears in the United States that, if elected, Donald Trump would have unfettered control of thousands of nuclear weapons have rippled out from the domestic political debate and exacerbated these concerns abroad. Rising US-Russian tensions, new NATO military deployments near the Russian border, a Russian aircraft carrier cruising through the Strait of Gibraltar, the shock of the Trump candidacy, and the realization, exposed by Trump’s loose talk of using nuclear weapons, that any US leader can unleash a nuclear war with one command, without debate, deliberation, or restraint, have combined to convince many nations that dramatic action is needed before it is too late.

As journalist Bill Press said as we discussed these developments on his show, “He scared the hell out of them.”

There is still time for the United States to shift gears. We should not squander the opportunity to join a process already in motion and to help guide it to a productive outcome. It is a Washington trope that you cannot defeat something with nothing. Right now, the US has nothing positive to offer. The disarmament process is dead and this lack of progress undermines global support for the Non-Proliferation Treaty and broader efforts to stop the spread of nuclear weapons.

The new presidential administration must make a determined effort to mount new initiatives that reduce these weapons, reduce these risks. It should also support the ban treaty process as a powerful way to build global support for a long-standing American national security goal. We must, as President John F. Kennedy said, eliminate these weapons before they eliminate us.

This article was originally posted on the Huffington Post.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we post op-eds that we believe will help spur discussion within our community. Op-eds do not necessarily represent FLI’s opinions or views.

Supervising AI Growth

When Apple released its software application, Siri, in 2011, iPhone users had high expectations for their intelligent personal assistants. Yet despite its impressive and growing capabilities, Siri often makes mistakes. The software’s imperfections highlight the clear limitations of current AI: today’s machine intelligence can’t understand the varied and changing needs and preferences of human life.

However, as artificial intelligence advances, experts believe that intelligent machines will eventually – and probably soon – understand the world better than humans. While it might be easy to understand how or why Siri makes a mistake, figuring out why a superintelligent AI made the decision it did will be much more challenging.

If humans cannot understand and evaluate these machines, how will they control them?

Paul Christiano, a Ph.D. student in computer science at UC Berkeley, has been working on addressing this problem. He believes that to ensure safe and beneficial AI, researchers and operators must learn to measure how well intelligent machines do what humans want, even as these machines surpass human intelligence.


Semi-supervised Learning

The most obvious way to supervise the development of an AI system also happens to be the hard way. As Christiano explains: “One way humans can communicate what they want, is by spending a lot of time digging down on some small decision that was made [by an AI], and try to evaluate how good that decision was.”

But while this is theoretically possible, the human researchers would never have the time or resources to evaluate every decision the AI made. “If you want to make a good evaluation, you could spend several hours analyzing a decision that the machine made in one second,” says Christiano.

For example, suppose an amateur chess player wants to understand a stronger player’s previous move. Spending just a few minutes evaluating the move won’t be enough, but if she spends a few hours, she can consider every alternative and develop a meaningful understanding of it.

Fortunately for researchers, they don’t need to evaluate every decision an AI makes in order to be confident in its behavior. Instead, researchers can choose “the machine’s most interesting and informative decisions, where getting feedback would most reduce our uncertainty,” Christiano explains.

“Say your phone pinged you about a calendar event while you were on a phone call,” he elaborates. “That event is not analogous to anything else it has done before, so it’s not sure whether it is good or bad.” Due to this uncertainty, the phone would send the transcript of its decisions to an evaluator at Google, for example. The evaluator would study the transcript, ask the phone owner how he felt about the ping, and determine whether pinging users during phone calls is a desirable or undesirable action. By providing this feedback, Google teaches the phone when it should interrupt users in the future.
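The uncertainty-driven selection described above can be sketched in a few lines. Everything here, the function name, the example decisions, and the confidence scores, is illustrative and not a real training API:

```python
# A minimal sketch of uncertainty-based selection for human review:
# instead of evaluating every decision, a human is shown only the
# decisions the model is least confident about.

def select_for_review(decisions, confidences, budget):
    """Return the `budget` decisions whose confidence is closest to 0.5,
    i.e. where human feedback would most reduce the model's uncertainty."""
    ranked = sorted(zip(decisions, confidences),
                    key=lambda dc: abs(dc[1] - 0.5))
    return [d for d, _ in ranked[:budget]]

decisions = ["ping during call", "silence at night", "show reminder"]
confidences = [0.52, 0.97, 0.80]   # model's confidence each action was good
print(select_for_review(decisions, confidences, budget=1))
# -> ['ping during call']  (the most uncertain decision gets human review)
```

The phone in Christiano's example behaves like this: the calendar ping sits near 0.5 confidence because nothing analogous has been evaluated before, so it is exactly the decision worth a human evaluator's time.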

This active learning process is an efficient method for humans to train AIs, but what happens when humans need to evaluate AIs that exceed human intelligence?

Consider a computer that is mastering chess. How could a human give appropriate feedback to the computer if the human has not mastered chess? The human might criticize a move that the computer makes, only to realize later that the machine was correct.

With increasingly intelligent phones and computers, a similar problem is bound to occur. Eventually, Christiano explains, “we need to handle the case where AI systems surpass human performance at basically everything.”

If a phone knows much more about the world than its human evaluators, then the evaluators cannot trust their human judgment. They will need to “enlist the help of more AI systems,” Christiano explains.


Using AIs to Evaluate Smarter AIs

When a phone pings a user while he is on a call, the user’s reaction to this decision is crucial in determining whether the phone will interrupt users during future phone calls. But, as Christiano argues, “if a more advanced machine is much better than human users at understanding the consequences of interruptions, then it might be a bad idea to just ask the human ‘should the phone have interrupted you right then?’” The human might express annoyance at the interruption, but the machine might know better and understand that this annoyance was necessary to keep the user’s life running smoothly.

In these situations, Christiano proposes that human evaluators use other intelligent machines to do the grunt work of evaluating an AI’s decisions. In practice, a less capable System 1 would be in charge of evaluating the more capable System 2. Even though System 2 is smarter, System 1 can process a large amount of information quickly, and can understand how System 2 should revise its behavior. The human trainers would still provide input and oversee the process, but their role would be limited.

This training process would help Google understand how to create a safer and more intelligent AI – System 3 – which the human researchers could then train using System 2.

Christiano explains that these intelligent machines would be like little agents that carry out tasks for humans. Siri already has this limited ability to take human input and figure out what the human wants, but as AI technology advances, machines will learn to carry out complex tasks that humans cannot fully understand.


Can We Ensure that an AI Holds Human Values?

As Google and other tech companies continue to improve their intelligent machines with each evaluation, the human trainers will fulfill a smaller role. Eventually, Christiano explains, “it’s effectively just one machine evaluating another machine’s behavior.”

Ideally, “each time you build a more powerful machine, it effectively models human values and does what humans would like,” says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence. To put this in human terms: a complex intelligent machine would resemble a large organization of humans. If the organization does tasks that are too complex for any individual human to understand, it may pursue goals that humans wouldn’t like.

In order to address these control issues, Christiano is working on an “end-to-end description of this machine learning process, fleshing out key technical problems that seem most relevant.” His research will help bolster the understanding of how humans can use AI systems to evaluate the behavior of more advanced AI systems. If his work succeeds, it will be a significant step in building trustworthy artificial intelligence.

You can learn more about Paul Christiano’s work here.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

The Ethical Questions Behind Artificial Intelligence

What do philosophers and ethicists worry about when they consider the long-term future of artificial intelligence? Well, to start, though most people involved in the field of artificial intelligence are excited about its development, many worry that without proper planning an advanced AI could destroy all of humanity.

And no, this does not mean they’re worried about Skynet.

At a recent NYU conference, the Ethics of Artificial Intelligence, Eliezer Yudkowsky from the Machine Intelligence Research Institute explained that AI run amok is less likely to look like the Terminator and more likely to resemble the overeager broom that Mickey Mouse brings to life in the Sorcerer’s Apprentice segment of Fantasia. The broom has a single goal and pursues it relentlessly, regardless of what Mickey does; it even multiplies itself and becomes more efficient. Concerns about a poorly designed AI are similar, except that with artificial intelligence, there will be no sorcerer to stop the mayhem at the end.

To help visualize how an overly competent advanced AI could go wrong, Oxford philosopher Nick Bostrom came up with a thought experiment about a deadly paper-clip-making machine. If you are in the business of selling paper clips then making a paper-clip-maximizing artificial intelligence seems harmless enough. However, with this as its only goal, an intelligent AI might keep making paper clips at the expense of everything else you care about. When it runs out of materials, it will figure out how to break everything around it down to molecular components and reassemble the molecules into paper clips. Soon it will have destroyed life on earth, the earth itself, the solar system, and possibly even the universe — all in an unstoppable quest to build more and more paper clips.

This might seem like a silly concern, but who hasn’t had some weird experience with their computer or some other technology when it went on the fritz? Consider the number of times you’ve sent bizarre messages thanks to autocorrect or, more seriously, the Flash Crash of 2010. Now imagine how such a naively designed — yet very complex — program could be exponentially worse if that system were to manage the power grid or oversee weapons systems.

Even now, with only very narrow AI systems, researchers are finding that algorithmic biases can amplify racism and sexism in the tech world; that cyberattacks are growing in strength and number; and that a military AI arms race may be underway.

At the conference, Bostrom explained that there are two types of problems that AI development could encounter: the mistakes that can be fixed later on, and the mistakes that will only be made once. He’s worried about the latter. Yudkowsky also summarized this concern when he said, “AI … is difficult like space probes are difficult: Once you’ve launched it, it’s out there.”

AI researcher and philosopher Wendell Wallach added, “We are building technology that we can’t effectively test.”

As artificial intelligence gets closer to human-level intelligence, how can AI designers ensure their creations are ethical and behave appropriately from the start? It turns out this question only begets more questions.

What does beneficial AI look like? Will AI benefit all people or only some? Will it increase income inequality? What are the ethics behind creating an AI that can feel pain? Can a conscious AI be developed without a concrete definition of consciousness? What is the ultimate goal of artificial intelligence? Will it help businesses? Will it help people? Will AI make us happy?

“If we have no clue what we want, we’re less likely to get it,” said MIT physicist Max Tegmark.

Stephen Peterson, a philosopher from Niagara University, summed up the gist of all of the questions when he encouraged the audience to wonder not only what the “final goal” for artificial intelligence is, but also how to get there. Scrooge, whom Peterson used as an example, always wanted happiness: the ghosts of Christmases past, present, and future just helped him realize that friends and family would help him achieve this goal more than money would.

Facebook’s Director of AI Research, Yann LeCun, believes that such advanced artificial intelligence is still a very long way off. He compared the current state of AI development to a chocolate cake. “We know how to make the icing and the cherry,” he said, “but we have no idea how to make the cake.”

But if AI development is like baking a cake, it seems AI ethics will require the delicate balance and attention of perfecting a soufflé. And most participants at the two-day event agreed that the only way to ensure permanent AI mistakes aren’t made, regardless of when advanced AI is finally developed, is to start addressing ethical and safety concerns now.

This is not to say that the participants of the conference aren’t also excited about artificial intelligence. As mentioned above, they are. The number of lives that could be saved and improved as humans and artificial intelligence work together is tremendous. The key is to understand what problems could arise and what questions need to be answered so that AI is developed beneficially.

“When it comes to AI,” said University of Connecticut philosopher, Susan Schneider, “philosophy is a matter of life and death.”

OpenAI Unconference on Machine Learning

The following post originally appeared here.

Last weekend, I attended OpenAI’s self-organizing conference on machine learning (SOCML 2016), meta-organized by Ian Goodfellow (thanks Ian!). It was held at OpenAI’s new office, with several floors of large open spaces. The unconference format was intended to encourage people to present current ideas alongside completed work. The schedule mostly consisted of 2-hour blocks with broad topics like “reinforcement learning” and “generative models”, guided by volunteer moderators. I especially enjoyed the sessions on neuroscience and AI and transfer learning, which had smaller and more manageable groups than the crowded popular sessions, and diligent moderators who wrote down the important points on the whiteboard. Overall, I had more interesting conversations but also more auditory overload at SOCML than at other conferences.

To my excitement, there was a block for AI safety along with the other topics. The safety session became a broad introductory Q&A, moderated by Nate Soares, Jelena Luketina and me. Some topics that came up: value alignment, interpretability, adversarial examples, weaponization of AI.


AI safety discussion group (image courtesy of Been Kim)

One value alignment question was how to incorporate a diverse set of values that represents all of humanity in the AI’s objective function. We pointed out that there are two complementary problems: 1) getting the AI’s values to be in the small part of values-space that’s human-compatible, and 2) averaging over that space in a representative way. People generally focus on the ways in which human values differ from each other, which leads them to underestimate the difficulty of the first problem and overestimate the difficulty of the second. We also agreed on the importance of allowing for moral progress by not locking in the values of AI systems.

Nate mentioned some alternatives to goal-optimizing agents – quantilizers and approval-directed agents. We also discussed the limitations of using blacklisting/whitelisting in the AI’s objective function: blacklisting is vulnerable to unforeseen shortcuts and usually doesn’t work from a security perspective, and whitelisting hampers the system’s ability to come up with creative solutions (e.g. the controversial move 37 by AlphaGo in the second game against Lee Sedol).

Been Kim brought up the recent EU regulation on the right to explanation for algorithmic decisions. This seems easy to game due to lack of good metrics for explanations. One proposed metric was that a human would be able to predict future model outputs from the explanation. This might fail for better-than-human systems by penalizing creative solutions if applied globally, but seems promising as a local heuristic.

Ian Goodfellow mentioned the difficulties posed by adversarial examples: an imperceptible adversarial perturbation to an image can make a convolutional network misclassify it with very high confidence. There might be some kind of No Free Lunch theorem where making a system more resistant to adversarial examples would trade off with performance on non-adversarial data.
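The geometry behind adversarial examples can be seen even in a toy linear model: a per-dimension nudge far too small to notice accumulates, across thousands of dimensions, into a large shift in the classifier's score. The sketch below is a deliberately simplified cousin of the fast gradient sign method; the "classifier" is just a random linear score, not a convolutional network:

```python
import random

# Toy illustration of why adversarial examples are possible: for a linear
# score w.x over many dimensions, a perturbation of at most eps per
# dimension, aligned against the prediction, shifts the score by eps * dim.
random.seed(0)
dim = 10_000                        # high-dimensional input, like image pixels
w = [random.choice([-1.0, 1.0]) for _ in range(dim)]   # "classifier" weights
x = [random.gauss(0, 1) for _ in range(dim)]           # a clean input

score = sum(wi * xi for wi, xi in zip(w, x))   # typically ~ sqrt(dim) in size
direction = 1.0 if score > 0 else -1.0

eps = 0.05                          # imperceptibly small per-dimension change
x_adv = [xi - eps * direction * (1.0 if wi > 0 else -1.0)
         for wi, xi in zip(w, x)]

adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))
# The shift is exactly eps * dim = 500, while |score| is only around 100,
# so the predicted class flips even though no single input dimension
# moved by more than 0.05.
```

Real attacks on deep networks follow the same logic, using the network's gradient instead of fixed weights, which is why the resulting misclassifications can come with very high confidence.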

We also talked about dual-use AI technologies, e.g. advances in deep reinforcement learning for robotics that could end up being used for military purposes. It was unclear whether corporations or governments are more trustworthy with using these technologies ethically: corporations have a profit motive, while governments are more likely to weaponize the technology.


More detailed notes by Janos coming soon! For a detailed overview of technical AI safety research areas, I highly recommend reading Concrete Problems in AI Safety.

FLI July 2016 Newsletter

FLI September 2016 Newsletter

How Can AI Learn to Be Safe?

As artificial intelligence improves, machines will soon be equipped with intellectual and practical capabilities that surpass the smartest humans. But not only will machines be more capable than people, they will also be able to make themselves better. That is, these machines will understand their own design and how to improve it – or they could create entirely new machines that are even more capable.

The human creators of AIs must be able to trust these machines to remain safe and beneficial even as they self-improve and adapt to the real world.

Recursive Self-Improvement

This idea of an autonomous agent making increasingly better modifications to its own code is called recursive self-improvement. Through recursive self-improvement, a machine can adapt to new circumstances and learn how to deal with new situations.

To a certain extent, the human brain does this as well. As a person develops and repeats new habits, connections in their brains can change. The connections grow stronger and more effective over time, making the new, desired action easier to perform (e.g. changing one’s diet or learning a new language). In machines though, this ability to self-improve is much more drastic.

An AI agent can process information much faster than a human, and if it does not properly understand how its actions impact people, then its self-modifications could quickly fall out of line with human values.

For Bas Steunebrink, a researcher at the Swiss AI lab IDSIA, solving this problem is a crucial step toward achieving safe and beneficial AI.

Building AI in a Complex World

Because the world is so complex, many researchers begin AI projects by developing AI in carefully controlled environments. Then they create mathematical proofs that can assure them that the AI will achieve success in this specified space.

But Steunebrink worries that this approach puts too much responsibility on the designers and too much faith in the proof, especially when dealing with machines that can learn through recursive self-improvement. He explains, “We cannot accurately describe the environment in all its complexity; we cannot foresee what environments the agent will find itself in in the future; and an agent will not have enough resources (energy, time, inputs) to do the optimal thing.”

If the machine encounters an unforeseen circumstance, then that proof the designer relied on in the controlled environment may not apply. Says Steunebrink, “We have no assurance about the safe behavior of the [AI].”

Experience-based Artificial Intelligence

Instead, Steunebrink uses an approach called EXPAI (experience-based artificial intelligence). EXPAI are “self-improving systems that make tentative, additive, reversible, very fine-grained modifications, without prior self-reasoning; instead, self-modifications are tested over time against experiential evidences and slowly phased in when vindicated, or dismissed when falsified.”
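The loop Steunebrink describes, tentative, reversible modifications phased in only when vindicated by experience, can be caricatured in a few lines. The "agent" below is just a parameter vector and `performance()` is a stand-in for experiential evidence; none of this is Steunebrink's actual architecture:

```python
import random

# A toy sketch of an EXPAI-style loop: the agent trials tentative,
# fine-grained, additive self-modifications and keeps only the ones
# vindicated by evidence from its environment.
random.seed(1)

def performance(params):
    # Hypothetical experiential evidence: closer to an unknown target
    # behavior is better (higher is better, 0 is perfect).
    target = [0.3, -0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]          # the "seed": a minimal starting agent
for _ in range(500):
    i = random.randrange(len(params))
    tentative = list(params)
    tentative[i] += random.uniform(-0.05, 0.05)   # tiny, additive change
    if performance(tentative) > performance(params):
        params = tentative        # vindicated by evidence: phase it in
    # otherwise the modification is simply dismissed (reversibility)
```

The point of the caricature is the control property: because each change is small and adopted only after testing, no single self-modification can swing the agent's behavior far from what its experience supports.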

Instead of trusting only a mathematical proof, researchers can ensure that the AI develops safe and benevolent behaviors by teaching and testing the machine in complex, unforeseen environments that challenge its function and goals.

With EXPAI, AI machines will learn from interactive experience, and therefore monitoring their growth period is crucial. As Steunebrink posits, the focus shifts from asking, “What is the behavior of an agent that is very intelligent and capable of self-modification, and how do we control it?” to asking, “How do we grow an agent from baby beginnings such that it gains both robust understanding and proper values?”

Consider how children grow and learn to navigate the world independently. If provided with a stable and healthy childhood, children learn to adopt values and understand their relation to the external world through trial and error, and by examples. Childhood is a time of growth and learning, of making mistakes, of building on success – all to help prepare the child to grow into a competent adult who can navigate unforeseen circumstances.

Steunebrink believes that researchers can ensure safe AI through a similar, gradual process of experience-based learning. In an architectural blueprint developed by Steunebrink and his colleagues, the AI is constructed “starting from only a small amount of designer-specific code – a seed.” Like a child, the beginnings of the machine will be less competent and less intelligent, but it will self-improve over time, as it learns from teachers and real-world experience.

As Steunebrink’s approach focuses on the growth period of an autonomous agent, the teachers, not the programmers, are most responsible for creating a robust and benevolent AI. Meanwhile, the developmental stage gives researchers time to observe and correct an AI’s behavior in a controlled setting where the stakes are still low.

The Future of EXPAI

Steunebrink and his colleagues are currently creating what he describes as a “pedagogy to determine what kind of things to teach to agents and in what order, how to test what the agents understand from being taught, and, depending on the results of such tests, decide whether we can proceed to the next steps of teaching or whether we should reteach the agent or go back to the drawing board.”

A major issue Steunebrink faces is that his method of experience-based learning diverges from the most popular methods for improving AI. Instead of doing the intellectual work of crafting a proof-backed optimal learning algorithm on a computer, EXPAI requires extensive in-person work with the machine to teach it like a child.

Creating safe artificial intelligence might prove to be more a process of teaching and growth rather than a function of creating the perfect mathematical proof. While such a shift in responsibility may be more time-consuming, it could also help establish a far more comprehensive understanding of an AI before it is released into the real world.

Steunebrink explains, “A lot of work remains to move beyond the agent implementation level, towards developing the teaching and testing methodologies that enable us to grow an agent’s understanding of ethical values, and to ensure that the agent is compelled to protect and adhere to them.”

The process is daunting, he admits, “but it is not as daunting as the consequences of getting AI safety wrong.”

If you would like to learn more about Bas Steunebrink’s research, you can read about his project here. He is also the co-founder of NNAISENSE.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

MIRI October 2016 Newsletter

The following newsletter was originally posted on MIRI’s website.

Our big announcement this month is our paper “Logical Induction,” introducing an algorithm that learns to assign reasonable probabilities to mathematical, empirical, and self-referential claims in a way that outpaces deduction. MIRI’s 2016 fundraiser is also live, and runs through the end of October.

Research updates

General updates

  • We wrote up a more detailed fundraiser post for the Effective Altruism Forum, outlining our research methodology and the basic case for MIRI.
  • We’ll be running an “Ask MIRI Anything” on the EA Forum this Wednesday, Oct. 12.
  • The Open Philanthropy Project has awarded MIRI a one-year $500,000 grant to expand our research program. See also Holden Karnofsky’s account of how his views on EA and AI have changed.

News and links

Sam Harris TED Talk: Can We Build AI Without Losing Control Over It?

The threat of uncontrolled artificial intelligence, Sam Harris argues in a recently released TED Talk, is one of the most pressing issues of our time. Yet most people “seem unable to marshal an appropriate emotional response to the dangers that lie ahead.”

Harris, a neuroscientist, philosopher, and best-selling author, has thought a lot about this issue. In the talk, he clarifies that it’s not likely armies of malicious robots will wreak havoc on civilization like many movies and caricatures portray. He likens this machine-human relationship to the way humans treat ants. “We don’t hate [ants],” he explains, “but whenever their presence seriously conflicts with one of our goals … we annihilate them without a qualm. The concern is that we will one day build machines that, whether they are conscious or not, could treat us with similar disregard.”

Harris explains that one only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI:

  1. Intelligence is a product of information processing in physical systems.
  2. We will continue to improve our intelligent machines.
  3. We do not stand on the peak of intelligence or anywhere near it.

Humans have already created machines whose narrow intelligence exceeds human intelligence in specific domains (computers have long outclassed us at arithmetic, for example). And since mere matter can give rise to general intelligence (as in the human brain), there is nothing, in principle, preventing advanced general intelligence in machines, which are also made of matter.

But Harris says the third assumption is “the crucial insight” that “makes our situation so precarious.” If machines surpass human intelligence and can improve themselves, they will be more capable than even the smartest humans—in unimaginable ways.

Even if a machine is no smarter than a team of researchers at MIT, “electronic circuits function about a million times faster than biochemical ones,” Harris explains. “So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.”

Harris wonders, “how could we even understand, much less constrain, a mind making this sort of progress?”

Harris also worries that the power of superintelligent AI will be abused, furthering wealth inequality and increasing the risk of war. “This is a winner-take-all scenario,” he explains. Given the speed that these machines can process information, “to be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.”
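Both of Harris's figures follow directly from the assumed million-fold speed gap between electronic and biochemical circuits; a quick check of the arithmetic:

```python
# Back-of-the-envelope check of the speedup arithmetic, taking Harris's
# assumed factor of 1,000,000 between electronic and biochemical circuits.
speedup = 1_000_000

# One week of machine time, expressed in human-equivalent years:
week_in_years = 7 * speedup / 365.25
print(round(week_in_years))        # -> 19165, i.e. Harris's "20,000 years"

# A six-month head start over the competition, in human-equivalent years:
print(0.5 * speedup)               # -> 500000.0, "500,000 years ahead"
```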

If governments and companies perceive themselves to be in an arms race against one another, they could develop strong incentives to create superintelligent AI first—or attack whoever is on the brink of creating it.

Though some researchers argue that superintelligent AI will not be created for another 50-100 years, Harris points out, “Fifty years is not that much time to meet one of the greatest challenges our species will ever face.”

Harris warns that if his three basic assumptions are correct, “then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.”


Photo credit to Bret Hartman from TED. Illustration credit to Paul Lachine. You can see more of Paul’s illustrations on his website.

Obama’s Nuclear Legacy

The following article and infographic were originally posted on Futurism.

The most destructive device humanity has ever created is the nuclear bomb. It is a technology capable of unparalleled devastation, one that the United Nations classifies as “the most dangerous weapon on Earth.”

One bomb can destroy a whole city in seconds, and in so doing, end the lives of millions of people (depending on where it is dropped). If that’s not enough, it can throw the natural environment into chaos. We know this because we have used these weapons before.

The first device of this kind was unleashed at approximately 8:15 am on August 6th, 1945, when a US B-29 bomber dropped an atomic bomb on the Japanese city of Hiroshima. It killed around 80,000 people instantly, and over the coming years, many more would succumb to radiation sickness. All in all, it is estimated that over 200,000 people died as a result of the nuclear blasts in Japan.

How far have we come since then? How many bombs do we have at our disposal? Here’s a look at our legacy.