On February 1, a little more than 30 years after it went into effect, the United States announced that it was suspending the Intermediate-Range Nuclear Forces (INF) Treaty. Less than 24 hours later, Russia announced that it was also suspending the treaty.
It stands (or stood) as one of the last major nuclear arms control treaties between the U.S. and Russia, and its collapse signals the most serious nuclear arms crisis since the 1980s. As Malcolm Chalmers, deputy director general of the Royal United Services Institute, said to The Guardian, “If the INF treaty collapses, and with the New Start treaty on strategic arms due to expire in 2021, the world could be left without any limits on the nuclear arsenals of nuclear states for the first time since 1972.”
The INF treaty, which went into effect in 1988, was the first nuclear agreement to outlaw an entire class of weapons. It banned all ground-launched ballistic and cruise missiles — nuclear, conventional, and “exotic” — with a range of 500 km to 5,500 km (310 to 3,400 miles), leading to the elimination of 2,692 short- and medium-range missiles. But more than that, the treaty served as a turning point that helped thaw the icy stalemate between the U.S. and Russia. Ultimately, the trust that it fostered established a framework for future treaties and, in this way, played a critical part in ending the Cold War.
Now, all of that may be undone.
The Blame Game Part 1: Russia vs. U.S.
In defense of the suspension, President Donald Trump said that the Russian government has deployed new missiles that violate the terms of the INF treaty — missiles that could deliver nuclear warheads to European targets, including U.S. military bases. President Trump also said that, despite repeated warnings, President Vladimir Putin has refused to destroy these warheads. “We’re not going to let them violate a nuclear agreement and do weapons and we’re not allowed to,” he said.
In a statement announcing the suspension of the treaty, Secretary of State Mike Pompeo said that countries must be held accountable when they violate a treaty. “Russia has jeopardized the United States’ security interests,” he said, “and we can no longer be restricted by the treaty while Russia shamelessly violates it.” Pompeo continued by noting that Russia’s posturing is a clear signal that the nation is returning to its old Cold War mentality, and that the U.S. must make similar preparations in light of these developments. “As we remain hopeful of a fundamental shift in Russia’s posture, the United States will continue to do what is best for our people and those of our allies,” he concluded.
The controversy about whether Russia is in violation hinges on whether the 9M729 missile can fly more than 500 km. The U.S. claims to have provided evidence of this to Russia, but has not made this evidence public, and further claims that violations have continued since at least 2014. Although none of the U.S.-based policy experts interviewed for this article dispute that Russia is in violation, many caution that this suspension will create a far more unstable environment and that the U.S. shares much of the blame for not doing more to preserve the treaty.
In an emailed statement to the Future of Life Institute, Martin Hellman, an Adjunct Senior Fellow for Nuclear Risk Analysis at the Federation of American Scientists and Professor Emeritus of Electrical Engineering at Stanford University, was clear in his censure of the Trump administration’s decision and reasoning, noting that it follows a well-established pattern of duplicity and double-dealing:
The INF Treaty was a crucial step in ending the arms race. Our withdrawing from it in such a precipitous manner is a grave mistake. In a sense, treaties are the beginning of negotiations, not the end. When differences in perspective arise, including on what constitutes a violation, the first step is to meet and negotiate. Only if that process fails, should withdrawal be contemplated. In the same way, any faults in a treaty should first be approached via corrective negotiations.
Withdrawing in this precipitous manner from the INF treaty will add to concerns that our adversaries already have about our trustworthiness on future agreements, such as North Korea’s potential nuclear disarmament. Earlier actions of ours which laid that foundation of mistrust include George W. Bush killing the 1994 Agreed Framework with North Korea “for domestic political reasons,” Obama attacking Libya after Bush had promised that giving up its WMD programs “can regain [Libya] a secure and respected place among the nations,” and Trump tearing up the Iran agreement even though Iran was in compliance and had taken steps that considerably set back its nuclear program.
In an article published by CNN, Eliot Engel, chairman of the House Committee on Foreign Affairs, and Adam Smith, chairman of the House Committee on Armed Services, echo these sentiments and add that the U.S. government greatly contributed to the erosion of the treaty, clarifying that the suspension could have been avoided if President Trump had collaborated with NATO allies to pressure Russia into ensuring compliance. “[U.S.] allies told our offices directly that the Trump administration blocked NATO discussion regarding the INF treaty and provided only the sparest information throughout the process….This is the latest step in the Trump administration’s pattern of abandoning the diplomatic tools that have prevented nuclear war for 70 years. It also follows the administration’s unilateral decision to withdraw from the Paris climate agreement,” they said.
Russia has also complained about the alleged lack of U.S. diplomacy. In January 2019, Russian diplomats proposed a path to resolution, stating that they would display their missile system and demonstrate that it didn’t violate the INF treaty if the U.S. did the same with its MK-41 launchers in Romania. The Russians felt that this was a fair compromise, as they have long argued that the Aegis missile defense system, which the U.S. deployed in Romania and Poland, violates the INF treaty. The U.S. rejected Russia’s offer, stating that a Russian-controlled inspection would not permit the kind of unfettered access that U.S. representatives would need to verify their conclusions. And ultimately, they insisted that the only path forward was for Russia to destroy the missiles, launchers, and supporting infrastructure.
In response, Russian foreign minister Sergei Lavrov accused the U.S. of being obstinate. “U.S. representatives arrived with a prepared position that was based on an ultimatum and centered on a demand for us to destroy this rocket, its launchers and all related equipment under US supervision,” he said.
The most devastating military threat arguably comes from a nuclear war started not intentionally but by accident or miscalculation. Accidental nuclear war has almost happened many times already, and with 15,000 nuclear weapons worldwide — thousands on hair-trigger alert and ready to launch at a moment’s notice — an accident is bound to occur eventually.
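The claim above, that an accident is "bound to occur eventually," is simple probability: even a small per-year risk compounds into a large long-run risk. The sketch below is purely illustrative; the 1% annual probability is an assumed figure for the example, not an estimate from this article, and it assumes the risk is constant and independent from year to year.

```python
# Illustrative only: the 1% annual probability is an assumption, not an
# estimate. If a risk of probability p can occur independently each year,
# the chance of avoiding it for n straight years is (1 - p)**n, so the
# chance of at least one occurrence is the complement.

def cumulative_risk(p_annual: float, years: int) -> float:
    """Probability of at least one occurrence within `years` years."""
    return 1 - (1 - p_annual) ** years

for n in (10, 50, 100):
    print(f"{n:3d} years: {cumulative_risk(0.01, n):.1%}")
```

Under this toy assumption, a 1% annual risk grows to roughly a 40% chance over 50 years and close to two-thirds over a century, which is the intuition behind the "bound to occur eventually" framing.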
The Blame Game Part 2: China
Other experts, such as Mark Fitzpatrick, Director of the non-proliferation program at the International Institute for Strategic Studies, assert that the “real reason” for the U.S. pullout lies elsewhere — in China.
This belief is bolstered by previous statements made by President Trump. Most notably, at a rally in the fall of 2018, the President told reporters that it is unfair that China faces no limits when it comes to developing and deploying intermediate-range nuclear missiles. “Unless Russia comes to us and China comes to us and they all come to us and say, ‘let’s really get smart and let’s none of us develop those weapons,’ but if Russia’s doing it and if China’s doing it, and we’re adhering to the agreement, that’s unacceptable,” he said.
According to a 2019 report prepared for Congress, China has some 2,000 ballistic and cruise missiles in its inventory, and 95% of these would violate the INF treaty if Beijing were a signatory. It should be noted that Russia and the U.S. are each estimated to have over 6,000 nuclear warheads, while China has approximately 280. Nevertheless, the report states, “The sheer number of Chinese missiles and the speed with which they could be fired constitutes a critical Chinese military advantage that would prove difficult for a regional ally or partner to manage absent intervention by the United States,” adding, “The Chinese government has also officially stated its opposition to Beijing joining the INF Treaty.” Consequently, President Trump stated that the U.S. has no choice but to suspend the treaty.
Along these lines, John Bolton, who became National Security Adviser in April 2018, has long argued that the kinds of missiles banned by the INF treaty would be an invaluable resource when it comes to defending Western nations against what he argues is an increasing military threat from China.
Pranay Vaddi, a fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace, feels differently. Although he does not deny that China poses a serious military challenge to the U.S., Vaddi asserts that withdrawing from the INF treaty is not a viable solution, and he says that proponents of the suspension “ignore the very real political challenges associated with deploying U.S. GBIRs [ground-based intermediate-range missiles] in the Asia Pacific region. They also ignore specific military challenges, including the potential for a missile race and long-term regional and strategic instability.” He concludes, “Before withdrawing from the INF Treaty, the United States should consult with its Asian allies on the threat posed by China, the defenses required, and the consequences of introducing U.S. offensive missiles into the region, including potentially on allied territory.”
The National Security Archive recently published a declassified list of U.S. nuclear targets from 1956, which spanned 1,100 locations across Eastern Europe, Russia, China, and North Korea. The sheer scale of that target list demonstrates how catastrophic a nuclear exchange between the United States and Russia could be.
Six Months and Counting
Regardless of how much blame each respective nation shares, the present course has been set, and if things don’t change soon, we may find ourselves in a very different world a few months from now.
According to the terms of the treaty, if one of the parties breaches the agreement, the other party has the option to terminate or suspend it. It was on this basis that, back in October of 2018, President Trump stated he would be terminating the INF treaty altogether. The suspension announcement is an update to those plans.
Notably, a suspension doesn’t follow the same course as a withdrawal: the treaty continues to exist for a set period. As a result, starting Feb. 1, the U.S. began a six-month notice period. If the two nations don’t reach an agreement and restore the treaty within this window, the treaty will terminate on August 2. At that juncture, both the U.S. and Russia will be free to develop and deploy the previously banned missiles with no oversight or transparency.
The situation is dire, and experts assert that we must immediately reopen negotiations. On Friday, before the official U.S. announcement, German Chancellor Angela Merkel said that if the United States announced it would suspend compliance with the treaty, Germany would use the six-month formal withdrawal period to hold further discussions. “If it does come to a cancellation today, we will do everything possible to use the six-month window to hold further talks,” she said.
Following the U.S. announcement, German Foreign Minister Heiko Maas tweeted, “there will be less security without the treaty.” Likewise, Laura Rockwood, executive director at the Vienna Center for Disarmament and Non-Proliferation, noted that the suspension is a troubling move that will increase — not decrease — tension and conflict. “It would be best to keep the INF in place. You don’t throw the baby out with the bathwater. It’s been an extraordinarily successful arms control treaty,” she said.
Carl Bildt, a co-chair of the European Council on Foreign Relations, agreed with these sentiments, noting in a tweet that the INF treaty’s demise puts many lives in peril. “Russia can now also deploy its Kaliber cruise missiles with ranges around 1.500 km from ground launchers. This would quickly cover all of Europe with an additional threat,” he said.
And it looks like many of these fears are already being realized. In a televised meeting over the weekend, President Putin stated that Russia will actively begin building weapons that were previously banned under the treaty. President Putin also made it clear that none of his departments would initiate talks with the U.S. on any matters related to nuclear arms control. “I suggest that we wait until our partners are ready to engage in equal and meaningful dialogue,” he said.
The photo for this article is from wiki commons: by Mil.ru, CC BY 4.0, https://commons.wikimedia.org/
Last week, U.S. President Donald Trump confirmed that the United States will be pulling out of the landmark Intermediate-Range Nuclear Forces (INF) Treaty. The INF treaty, which was signed in 1987 and went into effect in 1988, banned ground-launched nuclear missiles that have a range of 500 km to 5,500 km (310 to 3,400 miles). Although the agreement covers land-based missiles that carry both nuclear and conventional warheads, it doesn’t cover any air-launched or sea-launched weapons.
Nonetheless, when it was signed by then-U.S. President Ronald Reagan and Soviet leader Mikhail Gorbachev, it led to the elimination of nearly 2,700 short- and medium-range missiles. More significantly, it helped bring an end to a dangerous nuclear standoff between the two nations, and the trust that it fostered played a critical part in defusing the Cold War.
Now, as a result of the recent announcements from the Trump administration, all of this may be undone. As Malcolm Chalmers, deputy director general of the Royal United Services Institute, stated in an interview with The Guardian, “This is the most severe crisis in nuclear arms control since the 1980s. If the INF treaty collapses, and with the New Start treaty on strategic arms due to expire in 2021, the world could be left without any limits on the nuclear arsenals of nuclear states for the first time since 1972.”
Of course, the U.S. isn’t the only player contributing to the unraveling of an arms treaty that helped curb competition and bring an end to the Cold War.
Reports indicate that Russia has been violating the INF treaty since at least 2014, a fact that was previously acknowledged by the Obama administration and which President Trump cited in his INF withdrawal announcement last week. “Russia has violated the agreement. They’ve been violating it for many years, and I don’t know why President Obama didn’t negotiate or pull out,” Trump stated. “We’re not going to let them violate a nuclear agreement and do weapons and we’re not allowed to.…so we’re going to terminate the agreement. We’re going to pull out,” he continued.
Trump also noted that China played a significant role in his decision to pull the U.S. out of the INF treaty. Since China was not a part of the negotiations and is not a signatory, the country faces no limits when it comes to developing and deploying intermediate-range nuclear missiles — a fact that China has exploited in order to amass a robust missile arsenal. Trump noted that the U.S. will have to develop those weapons, “unless Russia comes to us and China comes to us and they all come to us and say, ‘let’s really get smart and let’s none of us develop those weapons,’ but if Russia’s doing it and if China’s doing it, and we’re adhering to the agreement, that’s unacceptable.”
A Growing Concern
Concerns over Russian missile systems that breach the INF treaty are real and valid. Equally valid are the concerns over China’s weapons strategy. However, experts note that President Trump’s decision to leave the INF treaty doesn’t set us on the path to the negotiating table, but rather, toward another nuclear arms race.
Russian officials have been clear in this regard, with Leonid Slutsky, who chairs the foreign affairs committee in Russia’s lower house of parliament, stating this week that a U.S. withdrawal from the INF agreement “would mean a real new Cold War and an arms race with 100 percent probability” and “a collapse of the planet’s entire nonproliferation and disarmament regime.”
This is precisely why many policy experts assert that withdrawal is not a viable option and, in order to achieve a successful resolution, negotiations must continue. Wolfgang Ischinger, the former German ambassador to the United States, is one such expert. In a statement issued over the weekend, he noted that he is “deeply worried” about President Trump’s plans to dismantle the INF treaty and urged the U.S. government to, instead, work to expand the treaty. “Multilateralizing this agreement would be a lot better than terminating it,” he wrote on Twitter.
Even if the U.S. government is entirely uninterested in negotiating, and the Trump administration seeks only to respond with increased weaponry, policy experts assert that withdrawing from the INF treaty is still an unavailing and unnecessary move. As Jeffrey Lewis, the director of the East Asia nonproliferation program at the Middlebury Institute of International Studies at Monterey, notes, the INF doesn’t prohibit sea- or air-based systems. Consequently, the U.S. could respond to Russian and Chinese political maneuverings with increased armament without escalating international tensions by upending longstanding treaties.
Indeed, since President Trump made his announcement, a number of experts have condemned the move and called for further negotiations. EU spokeswoman Maja Kocijancic said that the U.S. and Russia “need to remain in a constructive dialogue to preserve this treaty” as it “contributed to the end of the Cold War, to the end of the nuclear arms race and is one of the cornerstones of European security architecture.”
Most notably, in a statement that was issued Monday, the European Union cautioned the U.S. against withdrawing from the INF treaty, saying, “The world doesn’t need a new arms race that would benefit no one and on the contrary, would bring even more instability.”
In 1983, Soviet military officer Stanislav Petrov prevented what could have been a devastating nuclear war by trusting his gut instinct that the algorithm in his early-warning system wrongly sensed incoming missiles. In this case, we praise Petrov for choosing human judgment over the automated system in front of him. But what will happen as the AI algorithms deployed in the nuclear sphere become much more advanced, accurate, and difficult to understand? Will the next officer in Petrov’s position be more likely to trust the “smart” machine in front of him?
On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official, and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is professor of political science at the University of Pennsylvania, and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.
Topics discussed in this episode include:
- The sophisticated military robots developed by the Soviets during the Cold War
- How technology shapes human decision-making in war
- “Automation bias” and why having a “human in the loop” is much trickier than it sounds
- The United States’ stance on automation with nuclear weapons
- Why weaker countries might have more incentive to build AI into warfare
- How the US and Russia perceive first-strike capabilities
- “Deep fakes” and other ways AI could sow instability and provoke crisis
- The multipolar nuclear world of US, Russia, China, India, Pakistan, and North Korea
- The perceived obstacles to reducing nuclear arsenals
Publications discussed in this episode include:
- Treaty on the Prohibition of Nuclear Weapons
- Scott Sagan’s book The Limits of Safety: Organizations, Accidents, and Nuclear Weapons
- Phil Reiner on “deep fakes” and preventing nuclear catastrophe
- RAND Report: How Might Artificial Intelligence Affect the Risk of Nuclear War?
- SIPRI’s grant from the Carnegie Corporation on emerging threats in nuclear stability
Ariel: Hello, I am Ariel Conn with the Future of Life Institute. I am just getting over a minor cold and while I feel okay, my voice may still be a little off so please bear with any crackling or cracking on my end. I’m going to try to let my guests Paul Scharre and Mike Horowitz do most of the talking today. But before I pass the mic over to them, I do want to give a bit of background as to why I have them on with me today.
September 26th was Petrov Day. This year marked the 35th anniversary of the day that basically World War III didn’t happen. On September 26th in 1983, Petrov, who was part of the Russian military, got notification from the automated early warning system he was monitoring that there was an incoming nuclear attack from the US. But Petrov thought something seemed off.
From what he knew, if the US were going to launch a surprise attack, it would be an all-out strike and not just the five weapons that the system was reporting. Without being able to confirm whether the threat was real or not, Petrov followed his gut and reported to his commanders that this was a false alarm. He later became known as “the man who saved the world” because there’s a very good chance that the incident could have escalated into a full-scale nuclear war had he not reported it as a false alarm.
Now this 35th anniversary comes at an interesting time as well because last month in August, the United Nations Convention on Conventional Weapons convened a meeting of a Group of Governmental Experts to discuss the future of lethal autonomous weapons. Meanwhile, also on September 26th, governments at the United Nations held a signing ceremony to add more signatures and ratifications to last year’s treaty, which bans nuclear weapons.
It does feel like we’re at a bit of a turning point in military and weapons history. On one hand, we’ve seen rapid advances in artificial intelligence in recent years and the combination of AI weaponry has been referred to as the third revolution in warfare after gunpowder and nuclear weapons. On the other hand, despite the recent ban on nuclear weapons, the nuclear powers – which have not signed the treaty – are taking steps to modernize their nuclear arsenals.
This begs the question, what happens if artificial intelligence is added to nuclear weapons? Can we trust automated and autonomous systems to make the right decision as Petrov did 35 years ago? To consider these questions and many others, I have Paul Scharre and Mike Horowitz with me today. Paul is the author of Army of None: Autonomous Weapons in the Future of War. He is a former Army Ranger and Pentagon policy official, currently working as Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security.
Mike Horowitz is professor of political science and the Associate Director of Perry World House at the University of Pennsylvania. He’s the author of The Diffusion of Military Power: Causes and Consequences for International Politics, and he’s an adjunct Senior Fellow at the Center for a New American Security.
Paul and Mike first, thank you so much for joining me today.
Paul: Thank you, thanks for having us.
Mike: Yeah, excited for the conversation.
Ariel: Excellent. So before we get too far into this, I was hoping you could talk a little bit about just what the current status is of artificial intelligence in weapons, and in nuclear weapons more specifically. Is AI being used in nuclear weapon systems today? In 2015, Russia announced a nuclear submarine drone called Status-6, and I’m curious what the status of that is. Are other countries doing anything with AI in nuclear weapons? That’s a lot of questions, so I’ll turn that over to you guys now.
Paul: Okay, all right, let me jump in first and then Mike can jump right in and correct me. You know, I think if there’s anything that we’ve learned from science fiction from War Games to Terminator, it’s that combining AI and nuclear weapons is a bad idea. That seems to be the recurring lesson that we get from science fiction shows. Like many things, the sort of truth here is less dramatic but far more interesting actually, because there is a lot of automation that already exists in nuclear weapons and nuclear operations today and I think that is a very good starting point when we think about going forward, what has already been in place today?
The Petrov incident is a really good example of this. If the Petrov incident captures one simple point, it’s the benefit of human judgment. One of the things that Petrov talks about is that when evaluating what to do in this situation, there was a lot of extra contextual information that he could bring to bear that was outside of what the computer system itself knew. The computer system knew that there had been some flashes that the Soviet satellite early warning system had picked up, that it interpreted as missile launches, and that was it.
But when he was looking at this, he was also thinking about the fact that it’s a brand new system, they just deployed this Oko, the Soviet early warning satellite system, and it might be buggy as all technology is, as particularly Soviet technology was at the time. He knew that there could be lots of problems. But also, he was thinking about what would the Americans do, and from his perspective, he said later, we know because he did report a false alarm, he was able to say that he didn’t think it made sense for the Americans to only launch five missiles. Why would they do that?
If you were going to launch a first strike, it would be overwhelming. From his standpoint, sort of this didn’t add up. That contributed to what he said ultimately was sort of 50/50 and he went with his gut feeling that it didn’t seem right to him. Of course, when you look at this, you can ask well, what would a computer do? The answer is, whatever it was programmed to do, which is alarming in that kind of instance. But when you look at automation today, there are lots of ways that automation is used and the Petrov incident illuminates some of this.
For example, automation is used in early warning systems, both radars and satellite, infrared and other systems to identify objects of interest, label them, and then cue them to human operators. That’s what the computer automated system was doing when it told Petrov there were missile launches; that was an automated process.
We also see in the Petrov incident the importance of the human-automation interface. He talks about there being a flashing red screen, it saying “missile launch” and all of these things being, I think, important factors. We think about how this information is actually conveyed to the human, and that changes the human decision-making as part of the process. So there were partial components of automation there.
In the Soviet system, there have been components of automation in the way launch orders are conveyed, such as rockets that would be launched to fly over the Soviet Union, now Russia, and beam down launch codes. This is, of course, contested, but it reportedly came out after the end of the Cold War that there was even some talk of, and according to some sources actual deployment of, a semi-automated Dead Hand system called Perimeter. The system could be activated by the Soviet leadership in a crisis. Then, if the leadership in Moscow was taken out and did not check in after a certain period of time, launch codes would be passed down to a bunker staffed by a Soviet officer, a human who would make the final call to convey automated launch orders for a retaliatory strike. There was still a human in the loop, but it was one human instead of the Soviet leadership.
Then there are certainly, when you look at some of the actual delivery vehicles, things like bombers, there’s a lot of automation involved in bombers, particularly for stealth bombers, there’s a lot of automation required just to be able to fly the aircraft. Although, the weapons release is controlled by people.
You’re in a place today where all of the weapons decision-making is controlled by people, but they maybe making decisions that are based on information that’s been given to them through automated processes and filtered through automated processes. Then once humans have made these decisions, they may be conveyed and those orders passed along to other people or through other automated processes as well.
Mike: Yeah, I think that’s a great overview, and I would add two things to give some additional context. First, in some ways the nuclear weapons enterprise is already among the most automated when it comes to the use of force, because the stakes are so high. When countries are thinking about using nuclear weapons, whether it’s the United States or Russia or other countries, it’s usually because they perceive an existential threat. Countries have already attempted to build in significant automation and redundancy to try to make their threats more credible.
The second thing is I think Paul is absolutely right about the Petrov incident but the other thing that it demonstrates to me that I think we forget sometimes, is that we’re fond of talking about technological change in the way that technology can shape how militaries act – it can shape the nuclear weapons complex but it’s organizations and people that make choices about how to use technology. They’re not just passive actors, and different organizations make different kinds of choices about how to integrate technology depending on their standard operating procedures, depending on their institutional history, depending on bureaucratic priorities. It’s important I think not to just look at something like AI in a vacuum but to try to understand the way that different nuclear powers, say, might think about it.
Ariel: I don’t know if this is fair to ask but how might the different nuclear powers think about it?
Mike: From my perspective, I think an interesting thing you’re seeing now is the difference in how the United States has talked about autonomy in the nuclear weapons enterprise and some other countries. US military leaders have been very clear that they have no interest in autonomous systems, for example, armed with nuclear weapons. It’s one of the few things in the world of things that one might use autonomous systems for, it’s an area where US military leaders have actually been very explicit.
I think in some ways, that’s because the United States is generally very confident in its second strike deterrent, and its ability to retaliate even if somebody else goes first. Because the United States feels very confident in its second strike capabilities, that makes the, I think, temptation of full automation a little bit lower. In some ways, the more a country fears that its nuclear arsenal could be placed at risk by a first strike, the stronger its incentives to operate faster and to operate even if humans aren’t available to make those choices. Those are the kinds of situations in which autonomy would potentially be more attractive.
In comparisons of nuclear states, it’s generally the weaker one from a nuclear weapons perspective that will, all other things being equal, be more inclined to use automation, because it fears the risk of being disarmed through a first strike.
Paul: This is such a key thing: when you look at what is still a small number of countries that have nuclear weapons, they have very different strategic positions, different sizes of arsenals, different threats that they face, different degrees of survivability, and very different risk tolerances. Within American thinking about nuclear stability, there’s a clear strain of thought about what stability means, but many countries may see this very, very differently. You can see this even during the Cold War, where you had approximate parity between the kinds of arsenals of the US and the Soviet Union, but they still thought about stability very differently.
The Soviets’ semi-automated Dead Hand system, Perimeter, is a great example of this. When it came to light afterwards, people thinking about risk from a US standpoint were just aghast; it’s a bit terrifying to think about something that is even semi-automated, with perhaps just one human involved. But from the Soviet standpoint, this made an incredible amount of strategic sense. And not for the Dr. Strangelove reason that you want to tell the enemy about it to deter them, which is how I think Americans might tend to think about this, because they didn’t actually tell the Americans.
The real rationale on the Soviet side was to reduce the pressure on their leaders to make a use-or-lose decision with their arsenal. If there was something like a Petrov incident, where there were some indications of a launch, maybe some ambiguity about whether there was a genuine American first strike, but they were concerned that their leadership in Moscow might be taken out, they could activate this system and trust that if there was in fact an American first strike that took out the leadership, there would still be a sufficient retaliation, instead of feeling like they had to rush to retaliate.
Countries are going to see this very differently, and that’s of course one of the challenges in thinking about stability: not to fall into the trap of mirror-imaging.
Ariel: This brings up actually two points that I have questions about. I want to get back to the stability concept in a minute but first, one of the things I’ve been reading a bit about is just this idea of perception and how one country’s perception of another country’s arsenal can impact how their own military development happens. I was curious if you could talk a little bit about how the US perceives Russia or China developing their weapons and how that impacts us and the same for those other two countries as well as other countries around the world. What impact is perception having on how we’re developing our military arsenals and especially our nuclear weapons? Especially if that perception is incorrect.
Paul: Yeah, I think the origins of the idea of nuclear stability really speak to this. The idea came out in the 1950s among American strategists when they were looking at the US nuclear arsenal in Europe, and they realized that it was vulnerable to a first strike by the Soviets: American airplanes sitting on the tarmac could be attacked by a Soviet first strike that might wipe out the US arsenal. Knowing this, they might in a crisis feel compelled to launch their aircraft sooner, and that creates a use-or-lose incentive, right? Use the aircraft, launch them, versus have them wiped out.
If the Soviets knew this, then that perception alone, that the Americans might launch their aircraft if things started to get heated, might incentivize the Soviets to strike first. Schelling has a quote about striking them to prevent them from striking us to prevent us from striking them. This sort of gunslinger potential of everyone reaching for their guns to draw them first because someone else might do so: that’s not just a technical problem, it’s also one of perception. I think it’s baked right into this whole idea, and it happens on slower time scales when you look at arms race stability and arms race dynamics, in what countries invest in, building more missiles, more bombers because of the concern about the threat from someone else. But it also happens in a more immediate sense of crisis stability, in the actions that leaders might take immediately in a crisis to anticipate and prepare for what they fear others might do.
Mike: I would add on to that, I think it depends a little bit on how accurate you think the information countries have is. Your evaluation of a country is classically based on their capabilities and their intentions. Generally, we think you can get a decent sense of a country’s capabilities, while intentions are hard to measure. Countries assume the worst, and that’s what leads to the kind of dynamics that Paul is talking about.
I think the perception of other countries’ capabilities, I mean there’s sometimes a tendency to exaggerate the capabilities of other countries, people get concerned about threat inflation, but I think that’s usually not the most important programmatic driver. There’s been significant research now on the correlates of nuclear weapons development, and it tends to be security threats that are generally pretty reasonable in that you have neighbors or enduring rivals that actually have nuclear weapons, and that you’ve been in disputes with and so you decide you want nuclear weapons because nuclear weapons essentially function as invasion insurance, and that having them makes you a lot less likely to be invaded.
And that’s a lesson the United States by the way has taught the world over and over, over the last few decades – you look at Iraq, Libya, et cetera. And so I think the perception of other countries’ capabilities can be important for your actual launch posture. That’s where I think issues like speed can come in, and where automation could come in maybe in the launch process potentially. But I think that in general, it’s sort of deeper issues that are generally real security challenges or legitimately perceived security challenges that tend to drive countries’ weapons development programs.
Paul: This issue of perception of intention in a crisis, is just absolutely critical because there is so much uncertainty and of course, there’s something that usually precipitates a crisis and so leaders don’t want to back down, there’s usually something at stake other than avoiding nuclear war, that they’re fighting over. You see many aspects of this coming up during the much-analyzed Cuban Missile Crisis, where you see Kennedy and his advisors both trying to ascertain what different actions that the Cubans or Soviets take, what they mean for their intentions and their willingness to go to war, but then conversely, you see a lot of concern by Kennedy’s advisors about actions that the US military takes that may not be directed by the president, that are accidents, that are slippages in the system, or friction in the system and then worrying that the Soviets over-interpret these as deliberate moves.
I think right there you see a couple of components where you could see automation and AI being potentially useful. One which is reducing some of the uncertainty and information asymmetry: if you could find ways to use the technology to get a better handle on what your adversary was doing, their capabilities, the location and disposition of their forces and their intention, sort of peeling back some of the fog of war, but also increasing command and control within your own forces. That if you could sort of tighten command and control, have forces that were more directly connected to the national leadership, and less opportunity for freelancing on the ground, there could be some advantages there in that there’d be less opportunity for misunderstanding and miscommunication.
Ariel: Okay, so again, I have multiple questions that I want to follow up with and they’re all in completely different directions. I’m going to come back to perception because I have another question about that but first, I want to touch on the issue of accidents. Especially because during the Cuban Missile Crisis, we saw an increase in close calls and accidents that could have escalated. Fortunately, they didn’t, but a lot of them seemed like they could very reasonably have escalated.
I think it’s ideal to think that we can develop technology that can help us minimize these risks, but I kind of wonder how realistic that is. Something else that you mentioned earlier with tech being buggy, it does seem as though we have a bad habit of implementing technology while it is still buggy. Can we prevent that? How do you see AI being used or misused with regards to accidents and close calls and nuclear weapons?
Mike: Let me jump in here, I would take accidents and split it into two categories. The first are cases like the Cuban Missile Crisis where what you’re really talking about is miscalculation or escalation. Essentially, a conflict that people didn’t mean to have in the first place. That’s different I think than the notion of a technical accident, like a part in a physical sense, you know a part breaks and something happens.
Both of those are potentially important, and AI interacts with both of them. If you think about challenges surrounding the robustness of algorithms, the risk of hacking, the lack of explainability (Paul’s written a lot about this), that I think functions, not exclusively, but in many ways on the technical accident side.
On the miscalculation side, the piece of AI I actually worry about the most is not uses of AI in the nuclear context; it’s conventional deployments of AI, whether autonomous weapons or not, that speed up warfare and thus cause countries to fear that they’re going to lose faster. It’s that fear of losing faster that leads to more dangerous launch postures, more dangerous nuclear decision-making, pre-delegation, all of those things that we worried about in the Cold War and beyond.
I think the biggest risk from an escalation perspective, at least for my money, is actually the way that the conventional uses of AI could cause crisis instability, especially for countries that don’t feel very secure, that don’t think that their second strike capabilities are very secure.
Paul: I think that your question about accidents gets to really the heart of what do we mean by stability? I’m going to paraphrase from my colleague Elbridge Colby, who does a lot of work on nuclear issues and nuclear stability. What you really want in a stable situation is a situation where war only occurs if one side truly seeks it. You don’t get an escalation to war or escalation of crises because of technical accidents or miscalculation or misunderstanding.
There could be multiple different kinds of causes that might lead you to war, and one of those might even be perverse incentives: a deployment posture, for example, that might lead you to say, “Well, I need to strike first because of a fear that they might strike me.” You want to avoid that kind of situation. I think that there’s lots to be said for human involvement in all of these things, and I want to say right off the bat that humans bring to bear the ability to understand judgment and context that AI systems today simply do not have. At least we don’t see that in development based on the state of the technology today. Maybe it’s five years away, 50 years away, I have no idea, but we don’t see it today. I think that’s really important to say up front. Having said that, when we’re thinking about the way these nuclear arsenals are designed in their entirety, the early warning systems, the way data is conveyed throughout the system and presented to humans, the way decisions are made, the way those orders are then conveyed to launch delivery vehicles, it’s worth looking at new technologies and processes and asking, could we make this safer?
We have had a terrifying number of near misses over the years. No actual nuclear use because of accidents or miscalculation, but it’s hard to say how close we’ve been and this is I think a really contested proposition. There are some people that can look at the history of near misses and say, “Wow, we are playing Russian roulette with nuclear weapons as a civilization and we need to find a way to make this safer or disarm or find a way to step back from the brink.” Others can look at the same data set and say, “Look, the system works. Every single time, we didn’t shoot these weapons.”
I will just observe that we don’t have a lot of data points or a long history here, so I think there should be huge error bars on whatever we suggest about the future, and we have very little data at all about how people actually make decisions in response to false alarms in a crisis. We’ve had some instances where there have been false alarms, like the Petrov incident, and a few others, but we don’t really have a good understanding of how people would respond in the midst of a heated crisis like the Cuban Missile Crisis.
When you think about using automation, there are ways that we might try to make this entire socio-technical architecture of responding to nuclear crises and making a decision about reacting, safer and more stable. If we could use AI systems to better understand the enemy’s decision-making or the factual nature of their delivery platforms, that’s a great thing. If you could use it to better convey correct information to humans, that’s a good thing.
Mike: Paul, I would add, if you can use AI to buy decision-makers time, if essentially the speed of processing means that humans then feel like they have more time, which you know decreases their cognitive stress somehow, psychology would suggest, that could in theory be a relevant benefit.
Paul: That’s a really good point, and Thomas Schelling again talks about the real key role that time plays here as a driver of potentially rash actions in a crisis. You know, a false alert of your adversary launching a missile at you has happened a couple of times on both sides, at least two instances each on the American and Soviet sides, during the Cold War and immediately afterwards.
If you have sort of this false alarm but you have time to get more information, to call them on a hotline, to make a decision, then that takes the pressure off of making a bad decision. In essence, you want to sort of find ways to change your processes or technology to buy down the rate of false alarms and ensure that in the instance of some kind of false alarm, that you get kind of the right decision.
But conversely, you would also want to increase the likelihood that if policymakers did make a rational decision to use nuclear weapons, that decision is actually conveyed, because that is part of the essence of deterrence: knowing that if you were to use these weapons, the enemy would respond in kind, and that’s what, in theory, deters use.
Mike: Right, what you want is no one to use nuclear weapons unless they genuinely mean to, but if they genuinely mean to, we want that to occur.
Paul: Right, because that’s what’s going to prevent the other side from doing it. There’s this paradox that Scott Sagan refers to in his book on nuclear accidents as the “always/never dilemma”: the weapons should always work when use is intentional but never be used by accident or miscalculation.
Ariel: Well, I’ve got to say I’m hoping they’re never used intentionally either. I’m not a fan, personally. I want to touch on this a little bit more. You’re talking about all these ways that the technology could be developed so that it is useful and does hopefully help us make smarter decisions. Is that what you see playing out right now? Is that how you see this technology being used and developed in militaries or are there signs that it’s being developed faster and possibly used before it’s ready?
Mike: I think in the nuclear realm, countries are going to be very cautious about using algorithms, autonomous systems, whatever terminology you want to use, to make fundamental choices or decisions about use. To the extent that there’s risk in what you’re suggesting, I think those risks are probably, for my money, higher outside the nuclear enterprise, simply because that’s an area where militaries are inherently a little more cautious. If you did have an accident, I think it would probably be because you had automated some element of the warning process and your future Petrovs essentially had automation bias: they trusted the algorithms too much and didn’t use judgment, as Paul was suggesting. And that’s a question of training and doctrine.
For me, it goes back to what I suggested before about how technology doesn’t exist in a vacuum. The risks to me depend on training and doctrine in some ways as much as on the technology itself. But the nuclear weapons enterprise is an area where militaries in general will be a little more cautious than outside of the nuclear context, simply because the stakes are so high. I could be wrong though.
Paul: I don’t really worry too much that you’re going to see countries set up a process that would automate entirely the decision to use nuclear weapons. That’s just very hard to imagine. This is the most conservative area where countries will think about using this kind of technology.
Having said that, I would agree that there are lots more risks outside of the nuclear launch decision that could pertain to nuclear operations, or could be in the conventional space but have spillover to nuclear issues. Some of them could involve the use of AI in early warning systems and the automation bias risk: information conveyed to people in a way that doesn’t capture the nuance of what the system is actually detecting, with the potential for accidents when people over-trust the automation. There are plenty of examples of humans over-trusting automation in a variety of settings.
But some of these could be far afield, in things that are not military at all. Look at a technology like AI-generated deep fakes, and imagine a world where, in a crisis, someone releases a video or audio clip of a national political leader making some statement, and that further inflames the crisis and perhaps introduces uncertainty about what someone might do. That’s actually really frightening; it could be a catalyst for instability, and it could come from outside the military domain entirely. Hats off to Phil Reiner, who works on these issues out in California and who has raised this concern about deep fakes.
But I think that there’s a host of ways that you could see this technology raising concerns about instability that might be outside of nuclear operations.
Mike: I agree with that. I think the biggest risks here are from the way that a crisis, the use of AI outside the nuclear context, could create or escalate a crisis involving one or more nuclear weapons states. It’s less AI in the nuclear context, it’s more whether it’s the speed of war, whether it’s deep fakes, whether it’s an accident from some conventional autonomous system.
Ariel: That sort of comes back to a perception question that I didn’t get a chance to ask earlier and that is, something else I read is that there’s risks that if a country’s consumer industry or the tech industry is designing AI capabilities, other countries can perceive that as automatically being used in weaponry or more specifically, nuclear weapons. Do you see that as being an issue?
Paul: If you’re in general concerned about militaries importing commercially driven technology like AI into the military space and using it, I think it’s reasonable to expect that militaries are going to look for technology to gain advantages. The one thing I would say might help calm some of those fears is that the best friend of someone who’s concerned about that is the slowness of military acquisition processes, which move at a glacial pace and are actually a huge hindrance to a lot of technology adoption.
I think it’s valid to ask for any technology how its use would affect, positively or negatively, global peace and security, and if something looks particularly dangerous, to have a conversation about that. I think it’s great that there are a number of researchers in different organizations thinking about this. It’s great that FLI is, that you’ve raised this; there are good people at RAND (Ed Geist and Andrew Lohn have written a report on AI and nuclear stability); Laura Saalman and Vincent Boulanin at SIPRI work on this, funded by the Carnegie Corporation; and Phil Reiner, who I mentioned a second ago (I blanked on his organization, it’s Technology for Global Security), is thinking about a lot of these challenges. But I wouldn’t leap to assume that just because something is out there, militaries are always going to adopt it. Militaries have their own strategic and bureaucratic interests at stake that are going to influence what technologies they adopt and how.
Mike: I would add to that, if the concern is that countries see US consumer and commercial advances and then presume there’s more going on than there actually is, maybe, but I think it’s more likely that countries like Russia and China and others think about AI as an area where they can generate potential advantages. These are countries that have trailed the American military for decades and have been looking for ways to potentially leap ahead or even just catch up. There are also more autocratic countries that don’t trust their people in the first place and so I think to the extent you see incentives for development in places like Russia and China, I think those incentives are less about what’s going on in the US commercial space and more about their desire to leverage AI to compete with the United States.
Ariel: Okay, so I want to shift slightly but also still continuing with some of this stuff. We talked about the slowness of the military to take on new acquisitions and transform, I think, essentially. One of the things that to me, it seems like we still sort of see and I think this is changing, I hope it’s changing, is treating a lot of military issues as though we’re still in the Cold War. When I say I’ve been reading stuff, a lot of what I’ve been reading has been coming from the RAND report on AI and nuclear weapons. And they talk a lot about bipolarism versus multipolarism.
If I understand this correctly, bipolarism is a bit more like what we saw with the Cold War where you have the US and allies versus Russia and whoever. Basically, you have that sort of axis between those two powers. Whereas today, we’re seeing more multipolarism where you have Russia and the US and China and then there’s also things happening with India and Pakistan. North Korea has been putting itself on the map with nuclear weapons.
I was wondering if you can talk a bit about how you see that impacting how we continue to develop nuclear weapons, how that changes strategy and what role AI can play, and correct me if I’m wrong in my definitions of multipolarism and bipolarism.
Mike: Sure. When you talk about a bipolar nuclear situation during the Cold War, essentially what that reflects is that the United States and the then-Soviet Union had the only two nuclear arsenals that mattered. Either the United States or the Soviet Union could essentially destroy any other country in the world, even after absorbing a hit from its nuclear arsenal. Whereas since the end of the Cold War, you’ve had several other countries, including China, as well as India, Pakistan, and to some extent now North Korea, who have not just developed nuclear arsenals but developed more sophisticated nuclear arsenals.
That’s part of the ongoing debate in the United States (whether it’s even debatable is, I think, a question): whether the United States is now vulnerable to China’s nuclear arsenal, meaning the United States could no longer launch a disarming first strike against China. In general, you’ve ended up in a more multipolar nuclear world, in part because I think the United States and Russia, for their own reasons, spent a few decades not really investing in their underlying nuclear weapons complexes, and I think the fear of a developing multipolar nuclear structure is one reason why the United States, under the Obama administration and continuing in the Trump administration, has ramped up its efforts at nuclear modernization.
I think AI could play in here in some of the ways that we’ve talked about, but I think AI in some ways is not the star of the show. The star of the show remains the desire by countries to have secure retaliatory capabilities and on the part of the United States, to have the biggest advantage possible when it comes to the sophistication of its nuclear arsenal. I don’t know what do you think, Paul?
Paul: I think the way the international system, and its polarity if you will, mostly impacts this issue is that cooperation gets much harder as the number of actors needed to cooperate increases, when the “n” goes from 2 to 6 or 10 or more. AI is a relatively diffuse technology: while there are only a handful of actors internationally at the leading edge, the technology proliferates fairly rapidly, and so will be widely available for many different actors to use.
To the extent that there are some types of applications of AI that might be seen as problematic in the nuclear context, either in nuclear operations or related or incidental to them, it’s much harder to try to control that when you have to get more people on board to agree. For example, hypothetically, let’s say there were only two global actors who could make deep-fake high-resolution videos. You might say, “Listen, let’s agree not to do this in a crisis, or let’s agree not to do this for manipulative purposes to try to stoke a crisis.” When anybody can do it on a laptop, then forget about it, right? That’s a world we’ve got to live with.
You certainly see this historically when you look at different arms control regimes. There was a flurry of arms control actually during the Cold War both bipolar between the US and USSR, but then also multi-lateral ones that those two countries led because you have a bipolar system. You saw attempts earlier in the 20th century to do arms control that collapsed because of some of these dynamics.
During the ’20s, the naval treaties governing the number and tonnage of battleships that countries built collapsed because there was one defector, initially Japan, who thought they’d gotten a raw deal in the treaty, defecting, and then others following suit. We’ve seen this since the end of the Cold War with the end of the Anti-Ballistic Missile (ABM) Treaty, and now with the degradation of the INF Treaty, with Russia cheating on it and the INF being under threat. The concern is that both the United States and Russia were reacting to what other countries were doing. In the case of the ABM Treaty, the US was concerned about ballistic missile threats from North Korea and Iran and deployed limited missile defense systems; Russia was concerned that those were either secretly aimed at them or might have the effect of degrading their deterrent posture; and the US withdrew entirely from the ABM Treaty to be able to deploy them. That’s one unraveling.
In the case of the INF Treaty, Russia looked at what China, which is not a signatory to the INF Treaty, was building, and is now building missiles that violate the treaty. That’s a much harder dynamic when you have multiple countries at play, with countries having to respond to security threats that may be diverse and asymmetric, coming from different actors.
Ariel: You’ve touched on this a bit already but especially with what you were just talking about and getting various countries involved and how that makes things a bit more challenging – what specifically do you worry about if you’re thinking about destabilization? What does that look like?
Mike: I would say destabilization for “whom” is the operative question. There’s been a lot of empirical research now suggesting that the United States never really fully bought into mutually assured destruction; the United States gave lip service to the idea while still pursuing avenues for nuclear superiority, even during the Cold War. In some ways, a United States that felt its nuclear deterrent was inadequate would be a United States that probably invested a lot more in capabilities one might view as destabilizing, if it perceived challenges from multiple different actors.
But I would tend to think about this in the context of individual pairs of states or small groups of states: China worries about America’s nuclear arsenal, India worries about China’s nuclear arsenal, Pakistan worries about India’s nuclear arsenal, and all of them would be terribly offended that I just said that. These relationships are complicated, and in some ways what generates instability is, I think, a combination of deteriorating political relations and a decreased feeling of security as the technological sophistication of potential adversaries’ arsenals grows.
Paul: I think I’m less concerned about countries improving their arsenals or military forces over time to try to gain an edge on adversaries. That’s sort of a normal process for militaries and countries, and I don’t think it’s particularly problematic, to be honest with you, unless you get to a place where the amount of expenditure is so outrageous that it strains the economy, or you see them pursuing a race for a technology with a winner-take-all mentality once they get there, right: “Oh, and then I need to use it.” Whoever gets to nuclear weapons first then uses nuclear weapons and gains the upper hand.
That creates incentives for launching a preventive war once you achieve the technology, which I think is going to be very problematic. Otherwise, upgrading and improving an arsenal is, I think, normal behavior. I’m more concerned with how you either use technology beneficially or avoid certain kinds of applications that might create risks of accidents and miscalculation in a crisis.
For example, as we’re seeing countries acquire more drones and deploy them in military settings, I would love to see an international norm against putting nuclear weapons on a drone, an uninhabited vehicle. I think that is more problematic, from a technical risk and technical accident standpoint, than using them on an aircraft that has a human on board, or on a missile, which doesn’t have a person on board but is a one-way vehicle; it wouldn’t be sent on patrol.
While I think it’s highly unlikely that, say, the United States would do this, in fact, they’re not even making their next generation B-21 Bomber uninhabited-
Mike: Right, the US has actively moved to not do this, basically.
Paul: Right, US Air Force generals have spoken out repeatedly saying they want no part of such a thing. But we haven’t seen the US voice this concern publicly in any formal way, and I actually think it could be beneficial to state it more concretely, for example in a speech by the Secretary of Defense. That might signal to other countries, “Hey, we actually think this is a dangerous thing.” I could imagine other countries having a different calculus, or seeing more capability advantages to using drones in this fashion, but I think that could be dangerous and harmful. That’s just one example.
Automation bias is something I’m actually really deeply concerned about. As we use AI in tools to gain information, and as the way these tools function becomes more complicated and more opaque to humans, you could run into a situation where people get a false alarm but have begun to over-trust the automation. I think that’s actually a huge risk, in part because you might not see it coming: people would say, “Oh, humans are in the loop. Humans are in charge, it’s no problem.” But in fact, we’re conveying information to people in a way that leads them to surrender judgment to the machines, even if that’s just using automation in information collection and has nothing to do with nuclear decision-making.
Mike: I think that those are both right, though I think I may be skeptical in some ways about our ability to generate norms around not putting nuclear weapons on drones.
Paul: I knew you were going to say that.
Mike: Not because I think it’s a good idea; it’s clearly a bad idea, and the country it’s the worst idea for is the United States. If a North Korea, or an India, or a China thinks that having that option generates stability and makes them feel more secure, I think it will be hard to talk them out of it if their alternative would be, say, land-based silos that they think would be more vulnerable to a first strike.
Paul: Well, I think it depends on the country, right? I mean countries are sensitive at different levels to some of these perceptions of global norms of responsible behavior. Like certainly North Korea is not going to care. You might see a country like India being more concerned about sort of what is seen as appropriate responsible behavior for a great power. I don’t know. It would depend upon sort of how this was conveyed.
Mike: That’s totally fair.
Ariel: Man, I have to say, all of this is not making it clear to me why nuclear weapons are that beneficial in the first place. We don’t have a ton of time so I don’t know that we need to get into that but a lot of these threats seem obviously avoidable if we don’t have the nukes to begin with.
Paul: Let’s just respond to that briefly. I think there are two schools of thought here in terms of why nukes are valuable. One is that nuclear weapons reduce the risk of conventional war, so you’re going to get less state-on-state warfare. If you had a world with no nuclear weapons at all, obviously the risk of nuclear armageddon would go to zero, which would be great; that’s not a risk we want to be running.
Mike: Now the world is safe for major conventional war.
Paul: Right, but then you’d have more conventional war, like we saw in World War I and World War II, and that led to tremendous devastation. So that’s one school of thought. There’s another that basically says the only thing nuclear weapons are good for is to deter others from using nuclear weapons. That’s what former Secretary of Defense Robert McNamara has said, and he’s certainly by no means a radical leftist. There’s a strong school of thought among former defense and security professionals that getting to global zero would be good. But how you get there in a safe way, even if everyone agreed that’s definitely where we want to go, and that it’s worth a trade-off in greater conventional war to take away the threat of armageddon, is certainly not at all clear.
Mike: The challenge is that when you go down to lower numbers (we talked before about how the United States and Russia have had the most significant nuclear arsenals, both in numbers and sophistication), small numbers matter more, and so the arsenals of every nuclear power essentially become important. And because countries don’t trust each other, that could increase the risk that somebody tries to gun to be number one as you get closer to zero.
Ariel: I guess one of the things that isn’t obvious to me: even if we’re not aiming for zero, let’s say we’re aiming to decrease the number of nuclear weapons globally to the hundreds, not the 15,000-ish we’re at at the moment. I worry that a lot of the advancing technology we’re seeing with AI and automation (though possibly not; maybe this would be happening anyway) is also driving the need for modernization, and so we’re seeing modernization happening rather than a decrease in weapons.
Mike: I think you’re right to point out the drive for modernization as a trend. Part of it is simply the age of the arsenals and their components, for countries including the United States. You have components designed to have a lifespan of, say, 30 years that have been used for 60 years. And the people who built some of those components in the first place have now mostly passed away, so it’s even hard to build some of them again.
I think it’s totally fair to say that emerging technologies including AI could play a role in shaping modernization programs. Part of the incentive for it I think has simply to do with a desire for countries, including but not limited to the United States, to feel like their arsenals are reliable, which gets back to perception, what you raised before, though that’s self-perception in some ways more than anything else.
Paul: I think Mike’s right that reliability is what’s motivating modernization, primarily. It’s a concern that these things are aging and might not work. If you’re in a situation where it’s unclear whether they would work, then that could actually reduce deterrence and create incentives for others to attack you, so you want your nuclear arsenal to be reliable.
There’s probably a component of that too: as people are modernizing, they’re trying to seek advantage over others. But I think it’s worth it, when you take a step back and look at where we are today, with this legacy of the Cold War and the nuclear arsenals that are in place, to ask how confident we are in mutual deterrence not leading to nuclear war in the future. I’m not super confident. I’m sort of in the camp that the history of near-miss accidents is pretty terrifying, and there’s probably a lot of luck at play.
From my perspective, as we think about going forward, on the one hand there’s an argument to be made for “let it all go to rust,” and if you could get countries to do that collectively, all of them, maybe there’d be big advantages there. If that’s not possible, and countries are modernizing their arsenals for the sake of reliability, then maybe take a step back and think about how you redesign these systems to be more stable, to increase deterrence, and to reduce the risk of false alarms and accidents overall, sort of “soup to nuts” when you’re looking at the architecture.
I do worry that that’s not a major feature when countries are looking at modernization: they’re thinking about increasing the reliability of their systems working, the “always” component of the “always/never” dilemma, and about getting an advantage on others, but there may not be enough thought going into the “never” component of how we ensure that we continue to buy down the risk of accidents or miscalculation.
Ariel: I guess the other thing I would add that I guess isn’t obvious is, if we’re modernizing our arsenals so that they are better, why doesn’t that also mean smaller? Because we don’t need 15,000 nuclear weapons.
Mike: I think there are actually people out there that view effective modernization as something that could enable reductions. Some of that depends on politics and depends on other international relations kinds of issues, but I certainly think it’s plausible that the end result of modernization could make countries feel more confident in nuclear reductions, all other things equal.
Paul: I mean, certainly the US and Russia have been working slowly to reduce their arsenals through a number of treaties. There was a big push in the Obama Administration to look for ways to continue to do so, but countries are going to want these to be mutual reductions, right? Not unilateral.
At a certain level of the US and Russian arsenals going down, you’re going to get tied into what China’s doing, with the size of their arsenal becoming relevant, and you’re also going to get tied into other strategic concerns for some of these countries when it comes to other technologies, like space-based weapons or anti-space weapons or hypersonic weapons. The negotiations become more complicated.
That doesn’t mean that they’re not valuable or worth doing, because while stability should be the goal, having fewer weapons overall is helpful in the sense that if there is, God forbid, some kind of nuclear exchange, there’s just less destructive capability overall.
Ariel: Okay, and I’m going to end it on that note because we are going a little bit long here. There are quite a few more questions that I wanted to ask. I don’t even think we got into actually defining what AI on nuclear weapons looks like, so I really appreciate you guys joining me today and answering the questions that we were able to get to.
Paul: Thank you.
Mike: Thanks a lot. Happy to do it and happy to come back anytime.
Paul: Yeah, thanks for having us. We really appreciate it.
[end of recorded material]
To celebrate that today is not the 35th anniversary of World War III, Stanislav Petrov, the man who helped avert an all-out nuclear exchange between Russia and the U.S. on September 26, 1983, was honored with the $50,000 Future of Life Award at a ceremony at the Museum of Mathematics in New York.
Former United Nations Secretary General Ban Ki-Moon said: “It is hard to imagine anything more devastating for humanity than all-out nuclear war between Russia and the United States. Yet this might have occurred by accident on September 26 1983, were it not for the wise decisions of Stanislav Yevgrafovich Petrov. For this, he deserves humanity’s profound gratitude. Let us resolve to work together to realize a world free from fear of nuclear weapons, remembering the courageous judgement of Stanislav Petrov.”
Stanislav Petrov’s daughter Elena holds the 2018 Future of Life Award flanked by her husband Victor. From left: Ariel Conn (FLI), Lucas Perry (FLI), Hannah Fry, Victor, Elena, Steven Mao (exec. producer of the Petrov film “The Man Who Saved the World”), Max Tegmark (FLI)
Although the U.N. General Assembly, just blocks away, heard politicians highlight the nuclear threat from North Korea’s small nuclear arsenal, none mentioned the greater threat from the many thousands of nuclear weapons in the United States and Russian arsenals that have nearly been unleashed by mistake dozens of times in the past in a seemingly never-ending series of mishaps and misunderstandings.
One of the closest calls occurred thirty-five years ago, on September 26, 1983, when Stanislav Petrov chose to ignore the Soviet early-warning detection system that had erroneously indicated five incoming American nuclear missiles. With his decision to ignore algorithms and instead follow his gut instinct, Petrov helped prevent an all-out US-Russian nuclear war, as detailed in the documentary film “The Man Who Saved the World”, which will be released digitally next week. Since Petrov passed away last year, the award was collected by his daughter Elena. Meanwhile, Petrov’s son Dmitry missed his flight to New York because the U.S. embassy delayed his visa. “That a guy can’t get a visa to visit the city his dad saved from nuclear annihilation is emblematic of how frosty US-Russian relations have gotten, which increases the risk of accidental nuclear war”, said MIT Professor Max Tegmark when presenting the award. Arguably the only recent reduction in the risk of accidental nuclear war came when Donald Trump held a summit with Vladimir Putin in Helsinki earlier this year, which was, ironically, met with widespread criticism.
In Russia, soldiers often didn’t discuss their wartime actions out of fear that it might displease their government, and so Elena first heard about her father’s heroic actions in 1998, 15 years after the event occurred. Even then, she and her brother only learned of what their father had done when a German journalist reached out to the family for an article he was working on. It’s unclear if Petrov’s wife, who died in 1997, ever knew of her husband’s heroism. Until his death, Petrov maintained a humble outlook on the event that made him famous. “I was just doing my job,” he’d say.
But most would agree that he went above and beyond his job duties that September day in 1983. The alert of five incoming nuclear missiles came at a time of high tension between the superpowers, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. Earlier in the month, the Soviet Union had shot down a Korean Air Lines passenger plane that strayed into its airspace, killing almost 300 people, and Petrov had to consider this context when he received the missile notifications. He had only minutes to decide whether or not the satellite data were a false alarm. Since the satellite was found to be operating properly, following procedures would have led him to report an incoming attack. Going partly on gut instinct, and believing the United States was unlikely to fire only five missiles, he told his commanders that it was a false alarm before he knew that to be true. Later investigations revealed that reflections of the Sun off of cloud tops had fooled the satellite into thinking it was detecting missile launches.
Last year’s Nobel Peace Prize Laureate, Beatrice Fihn, who helped establish the recent United Nations treaty banning nuclear weapons, said, “Stanislav Petrov was faced with a choice that no person should have to make, and at that moment he chose the human race — to save all of us. No one person and no one country should have that type of control over all our lives, and all future lives to come. 35 years from that day when Stanislav Petrov chose us over nuclear weapons, nine states still hold the world hostage with 15,000 nuclear weapons. We cannot continue relying on luck and heroes to safeguard humanity. The Treaty on the Prohibition of Nuclear Weapons provides an opportunity for all of us and our leaders to choose the human race over nuclear weapons by banning them and eliminating them once and for all. The choice is the end of us or the end of nuclear weapons. We honor Stanislav Petrov by choosing the latter.”
University College London Mathematics Professor Hannah Fry, author of the new book “Hello World: Being Human in the Age of Algorithms”, participated in the ceremony and pointed out that as ever more human decisions get replaced by automated algorithms, it is sometimes crucial to keep a human in the loop – as in Petrov’s case.
The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. It is given by the Future of Life Institute (FLI), a non-profit also known for supporting AI safety research with Elon Musk and others. “Although most people never learn about Petrov in school, they might not have been alive were it not for him”, said FLI co-founder Anthony Aguirre. Last year’s award was given to Vasili Arkhipov, who singlehandedly prevented a nuclear attack on the US during the Cuban Missile Crisis. FLI is currently accepting nominations for next year’s award.
Stanislav Petrov around the time he helped avert WWIII
With the U.S. pulling out of the Iran deal and canceling (and potentially un-canceling) the summit with North Korea, nuclear weapons have been front and center in the news this month. But will these disagreements lead to a world with even more nuclear weapons? And how did the recent nuclear situations with North Korea and Iran get so tense? (Update: The North Korea summit happened! But to understand what the future might look like with North Korea and Iran, it’s still helpful to understand the past.)
To learn more about the geopolitical issues surrounding North Korea’s and Iran’s nuclear situations, as well as to learn how nuclear programs in these countries are monitored, Ariel spoke with Melissa Hanham and Dave Schmerler on this month’s podcast. Melissa and Dave are both nuclear weapons experts with the Center for Nonproliferation Studies at Middlebury Institute of International Studies, where they research weapons of mass destruction with a focus on North Korea. Topics discussed in this episode include:
- the progression of North Korea’s quest for nukes,
- what happened and what’s next regarding the Iran deal,
- how to use open-source data to monitor nuclear weapons testing, and
- how younger generations can tackle nuclear risk.
In light of the on-again/off-again situation regarding the North Korea Summit, Melissa sent us a quote after the podcast was recorded, saying:
“Regardless of whether the summit in Singapore takes place, we all need to set expectations appropriately for disarmament. North Korea is not agreeing to give up nuclear weapons anytime soon. They are interested in a phased approach that will take more than a decade, multiple parties, new legal instruments, and new technical verification tools.”
You can listen to the podcast above or read the transcript below.
Ariel: Hello. I am Ariel Conn with the Future of Life Institute. This last month has been a rather big month concerning nuclear weapons, with the US pulling out of the Iran deal and the on again off again summit with North Korea.
I have personally been doing my best to keep up with the news but I wanted to learn more about what’s actually going on with these countries, some of the history behind the nuclear weapons issues related to these countries, and just how big a risk nuclear programs in these countries could become.
Today I have with me Melissa Hanham and Dave Schmerler, who are nuclear weapons experts with the Center for Nonproliferation Studies at Middlebury Institute of International Studies. They both research weapons of mass destruction with a focus on North Korea. Melissa and Dave, thank you so much for joining us today.
Dave: Thanks for having us on.
Melissa: Yeah, thanks for having us.
Ariel: I just said that you guys are both experts in North Korea, so naturally what I want to do is start with Iran. That has been the bigger news story of the two countries this month because the US did just pull out of the Iran deal. Before we get any further, can you just, if it’s possible, briefly explain what was the Iran deal first? Then we’ll get into other questions about it.
Melissa: Sure. The Iran deal was an agreement made between the … It’s formally known as the JCPOA and it was an agreement made between Iran and several countries around the world including the European Union as well. The goal was to freeze Iran’s nuclear program before they achieved nuclear weapons while still allowing them civilian access to medical isotopes, and power, and so on.
At the same time, the agreement was that the US and others would roll back sanctions on Iran. The way that they verified that agreement was through a procurement channel, if-needed onsite inspections, and regular reporting from Iran. As you mentioned, the US has withdrawn from the Iran deal, which really means they have violated its terms, while Iran, the European Union, and others have said that they wish to continue in the JCPOA.
Ariel: If I’ve been reading correctly, the argument on the US side is that Iran wasn’t holding up their side of the bargain. Was there actually any evidence for that?
Dave: I think the American side for pulling out was more based on Iran having lied about having a nuclear weapons program at one point in time, leading up to the deal, which is strange, because that was the motivation for the deal in the first place: to stop them from continuing their nuclear weapons research and investment. So, I’m not quite sure how else to frame it outside of that.
Melissa: Yeah, Israeli Prime Minister Netanyahu made this presentation where he revealed all these different archived documents in Iran, and mostly what they indicated was that Iran had an ongoing nuclear weapons program before the JCPOA, which is what we knew, and that they were planning on executing that program. For people like me, that felt like the justification for the JCPOA in the first place.
Ariel: And so, you both deal a lot with, at least Melissa I know you deal a lot with monitoring. Dave, I believe you do, too. With something like the Iran deal, if we had continued with it, what is the process involved in making sure the weapons aren’t being created? How do we monitor that?
Melissa: It’s a really difficult multilayered technical and legal proposition. You have to get the parties involved to agree to the terms, and then you have to be able to technically and logistically implement the terms. In the Iran deal, there were some things that were included and some things that were not included. Not because it was not technically possible, but because Iran or the other parties would not agree to it.
It’s kind of a strange marriage between diplomacy and technology, in order to execute these agreements. One of the criticisms of the Iran deal was that missiles weren’t included, so sure enough, Dave was monitoring many, many missile launches, and our colleague, Shea Cotton, even made a database of North Korean missile launches, and Americans really hated that Iran was launching these missiles, and we could see that they were happening. But the bottom line was that they were not part of the JCPOA agreement. That agreement focused only on nuclear, and the reason it did was because Iran refused to include missiles or human rights and these other kinds of things.
Dave: That’s right. Negotiating Iran’s missile program is a bit of another issue entirely. Iran’s missile program began before their nuclear program did. It’s accelerated, development has corresponded to their own security concerns within the region, and they have at the moment, a conventional ballistic missile force. The Iranians look at that program as being a completely different issue.
Ariel: Just quickly, how do you monitor a missile test? What’s involved in that? What do you look for? How can you tell they’re happening? Is it really obvious, or is there some sort of secret data you access?
Dave: A lot of the work that we do — Melissa and I, Shea Cotton, Jeffrey Lewis, and some other colleagues — is entirely based on information from the public. It’s all open source research, so if you know what you’re looking for, you can pull all the same information that we do from various sources of free information. The Iranians will often put propaganda or promo videos of their missile tests and launches as a way to demonstrate that they’re becoming a more sophisticated, technologically modern, ballistic missile producing nation.
We also get reports from the US government that are published in news sources. Whether from the US government themselves, or from reporters who have connections or access to the inside, and we take all this information, and Melissa will probably speak to this a bit further, but we fuse it together with satellite imagery of known missile test locations. We’ll reconstruct a much larger, more detailed chain of events as to what happened when Iran does missile testing.
Melissa: I have to admit, there’s just more open source information available about missile tests, because they’re so spread out over large areas and they have very large physical attributes to the sites, and of course, something lights up and ignites, and it takes off into the air where everyone can see it. So, monitoring a missile launch is easier than monitoring a specific facility in a larger network of facilities, for a nuclear program.
Ariel: So now that Trump has pulled out of the Iran deal, what happens next with them?
Melissa: Well, I think it’s probably a pretty bad sign. What I’ve heard from colleagues who work in or around the Trump administration is that confidence was extremely high on progress with North Korea, and so they felt that they didn’t need the Iran deal anymore. In part, the reason that they violated it was because they felt that they had so much already going on in North Korea, and those hopes were really false. There was a huge gap between reality and those hopes. It can be frustrating as an open source analyst who says these things all the time on Twitter, or in reports, that clearly nobody reads. But no, things are not going well in North Korea. North Korea is not unilaterally giving over their nuclear weapons, and if anything, violating the Iran deal has made North Korea more suspicious of the US.
Ariel: I’m going to use that to transition to North Korea here in just a minute, but I guess I hadn’t realized that there was a connection between things seeming to go well in North Korea and the US pulling out of the Iran deal. You mentioned hopes about North Korea making the Iran deal seem unnecessary, but what is the connection there? How does that work?
Melissa: Well, so the Iran deal represented diplomatic negotiation with an outcome among many parties that came to a concrete result. It happened under the Obama administration, which I think is why there is some distaste for it under the Trump administration. That doesn’t matter to North Korea. That doesn’t matter to other states. What matters is whether the United States appears to be able to follow through on a promise that may pass one administration to another.
The US has, in a way, violated some norms about diplomatic behavior by withdrawing from this agreement. That’s not to say that the US hasn’t done it before. I remember Clinton signing the, I think, Rome Statute for the International Criminal Court, then Bush unsigning it; it never got ratified. But it’s bad for our reputation. It makes us look like we’re not using international law the way other countries expect us to.
Ariel: All right. So before we move officially to North Korea, is there anything else, Melissa and Dave, that either of you want to mention about Iran that you think is either important for people to know about, that they don’t already, or that is important to reiterate?
Melissa: No. I guess let’s go to North Korea. That’s our bread and butter.
Ariel: All right. Okay, so yeah, North Korea’s been in the news for a while now. Before we get to what’s going on right now, I was hoping you could both talk a little bit about some of the background with North Korea, and how we got to this point. North Korea was once part of the Non-Proliferation Treaty, and they pulled out. Why were they in it in the first place? What prompted them to pull out? We’ll go from there.
Melissa: Okay, I’ll jump in, although Dave should really tell me if I keep talking over him. North Korea withdrew from the NPT, or so it said. It’s actually diplomatically very complex what they did, but North Korea either was or is a member of the Nuclear Non-Proliferation Treaty, the NPT, depending on who you ask. That is in large part because they were, and then they announced their withdrawal in 2003, and eventually we no longer think of them as officially being a member of the NPT, but of course, there were some small gaps over the notification period that they gave in order to withdraw, so I think my understanding is that some of the organizations involved actually keep a little North Korean nameplate for them.
But no, we don’t really think of them as being a member of the NPT, or the IAEA. Sadly, while that may not be legally settled, they’re out; they’re not abiding by traditional regimes or norms on this issue.
Ariel: And can you talk a little bit about, or do we know what prompted them to withdraw?
Melissa: Yeah. I think they really, really wanted nuclear weapons. I mean, I’m sorry to be glib about it, but … Yeah, they were seeking nuclear weapons since the ’50s. Kim Il-sung said he wanted nuclear weapons, he saw the power of the US’ weapons that were dropped on Japan. The US threatened North Korea during the Korean War with use of nuclear weapons, so yeah, they had physicists working on this issue for a long time.
They joined the NPT, they wanted access to the peaceful uses of nuclear power, they were very duplicitous in their work, but no, they kept working towards nuclear weapons. I think they reached a point where they probably thought that they had the technical capability, and they were dissatisfied with the norms and status as a pariah state, so yeah, they announced they were withdrawing, and then they exploded something three years later.
Ariel: Now that they’ve had a program in place then I guess for, what? Roughly 15 years then?
Melissa: Oh, my gosh. Math. Yeah. No, so I was sitting in Seoul. Dave, do you remember where you were when they had their first nuclear test?
Dave: This was … a long time ago. I think I was still in high school.
Melissa: I mean, this is a challenge to our whole field, right? There are generations passing through, so there are people who remember 1945. I don’t. But I’m not going to reveal my age. I was fresh out of grad school, and working in Seoul, when North Korea tested its first nuclear device.
It was like cognitive dissonance around the world. I remember the just shock of the response out of pretty much every country. I think China had a few minutes notice ahead of everybody else, but not much. So yes, we did see the reactor getting built, yes, we did see activity happening at Yongbyon, no we deeply misunderstood and underestimated North Korea’s capabilities.
So, when that explosion happened, it was surprising, to people in the open source world anyway. People scrambled. I mean, that was my first major gig; that’s why I still do this today. We had an office at the International Crisis Group of about six people, and all our Korean speakers were immediately sucked into other responsibilities, so it was up to me to try to piece together all these little puzzle pieces: the seismic information, the radionuclides that were actually leaked in that first explosion, figuring out what a Constant Phoenix was and who was collecting what, and putting it all together to try to understand what kind of warhead they may or may not have exploded, if it was even a warhead at that point.
Ariel: I’m hoping that you can explain how monitoring works. I’m an ex-seismologist, so I actually do know a little bit about the seismic side of monitoring nuclear weapons testing, but I’m assuming a lot of listeners do not. I’m not as familiar with things like the radionuclide testing, and the Constant Phoenix you mentioned was a new phrase for me as well. I was hoping you could explain what you go through to monitor and confirm whether or not a nuclear weapon has been tested. And before you do that, real quick: did you actually see that first … Could you see the explosion?
Melissa: No. I was in Seoul, so I was a long ways away, and I didn’t really … Of course, I did not see or feel anything. I was in an office in downtown Seoul, so I remember actually how casual the citizens of Seoul were that day. I remember feeling kind of nervous about the whole thing. I was registered with the Canadian embassy in Seoul, and we actually had, when you registered with the embassy, we had instructions of what to do in case of an emergency.
I remember thinking, “Gosh, I wonder if this is an emergency,” because I was young and fresh out of school. But no, I mean, as I looked down out of our office windows, sure enough at noon, the doors opened up and all my Korean colleagues streamed out to lunch together, and really behaved pretty traditionally, the way everyone normally does.
South Koreans have always been very stoic about these tests, and I think they're taken more anxiously by foreigners like me. But I do also remember there were air raid sirens going off that day, and I never got an explanation of why. I remember they tested them when I lived there, but I'm not sure why the sirens were going off that day.
Ariel: Okay. Let’s go back to how the monitoring works, and Dave, I don’t know if this is something that you can also jump in on?
Dave: Yeah, sure. I think I’ll let Melissa start and I’ll try to fill in any gaps, if there are any.
Melissa: So, the Comprehensive Test Ban Treaty Organization is an organization based in Vienna, but they have stations all over the world, and they're continually monitoring for nuclear explosions. The Constant Phoenix is a WC-135. It's a US Air Force aircraft, so the information coming out of it is not open source and I don't get to see it, but what I can do, or what investigative journalists sometimes do, is notice when it's taking off from Guam or an Air Force base, and then I know at least that the US Air Force thinks it's going to be sensing something. It's a specialty aircraft; I mean, it's basically an airplane, but it has many, many interesting sensor arrays all over it that sniff the air. What they're trying to detect are xenon isotopes, and these are isotopes that are possibly released from an underground nuclear test, depending on how well the tunnel was sealed.
In that very first nuclear explosion in 2006, some noble gases were released, and I think they were detected by the WC-135. I also remember back then, although this was a long time ago, that there were a few sensing stations in South Korea that detected them as well. What I remember from that time is that the ratio of xenon isotopes was definitely telling us that this was a nuclear weapon. This wasn't a big hoax where they'd exploded a bunch of dynamite or something like that, which actually would be a really big hoax, and hard to pull off. We could see that it was a nuclear test, probably a fission device. The challenge with detecting these gases is that they decay very quickly, so we have, one, not always sensed radionuclides after North Korea's nuclear tests, and, two, if we do sense them, sometimes they're decayed enough that we can't establish anything more than that it was a nuclear test and not a chemical explosion.
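The detection window Melissa describes is set by radioactive decay. As a rough illustration, the fraction of a xenon isotope remaining after a test falls off exponentially with its half-life; the half-life values below are approximate and included only for the sketch:

```python
import math

# Approximate half-lives (days) of xenon isotopes used in test monitoring.
# These values are rough illustrations, not authoritative data.
HALF_LIFE_DAYS = {
    "Xe-135": 0.38,    # roughly 9 hours
    "Xe-133m": 2.2,
    "Xe-133": 5.2,
    "Xe-131m": 11.9,
}

def remaining_fraction(isotope: str, days_elapsed: float) -> float:
    """Fraction of the original isotope still present after `days_elapsed` days."""
    half_life = HALF_LIFE_DAYS[isotope]
    return math.exp(-math.log(2) * days_elapsed / half_life)

# A week after a test, the short-lived isotopes are essentially gone,
# which is why the sampling window is so tight.
for iso in HALF_LIFE_DAYS:
    print(f"{iso}: {remaining_fraction(iso, 7.0):.4%} remaining after 7 days")
```

This is why, as Melissa notes, a sample collected even a few days late may confirm only that a nuclear test occurred, without revealing much about the device.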
Dave: Yeah, so since Melissa did a great job of explaining how the process works, maybe I can offer a more recent mechanism for how we interact with these tests as they occur. Usually, most of the people in our field follow a set number of seismic-linked Twitter accounts that will give you updates on when some part of the world is shaking for one reason or another.
They'll put out a tweet, or maybe you'll get an email update saying, "There was an earthquake in California," because we get earthquakes all the time, or in Japan. Then, all of a sudden, you hear there's an earthquake in North Korea and everyone pauses. You look at this little tweet or email, and you can also get them sent to your phone via text message if you sign up for whichever region of the world you're interested in, and you look at which province the earthquake was in.
If it registers in the right province, you're like, "Okay." What's next is we'll look at the data that comes out immediately. CTBTO will come out with information, usually within a couple of days, if not immediately after, and we'll look at the seismic waves. While I don't study these waves myself, the type of seismic signature you get from a nuclear explosion is like a fingerprint: it's very distinctive, and different from the type of seismic signature you get from an earthquake.
We'll take that and compare it to previous tests, which the United States and Russia have done infinitely more of than any other country in the world, and we'll see if those match. As North Korea has tested more nuclear devices, the signatures have become more consistent. If that matches up, we'll have a soft confirmation that they did it, and then we'll wait for government news and press releases to give us the final nail confirming that there was a nuclear test.
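Dave's "fingerprint" idea is often made quantitative with the classic mb:Ms discriminant: for events of comparable size, explosions excite body waves (magnitude mb) strongly relative to surface waves (magnitude Ms), while earthquakes radiate surface waves more efficiently. A minimal sketch of that screen, with an illustrative threshold rather than an operational one:

```python
def looks_like_explosion(mb: float, ms: float, threshold: float = 1.0) -> bool:
    """Crude mb:Ms screen. Explosions tend to produce a high body-wave
    magnitude (mb) relative to surface-wave magnitude (Ms), while earthquakes
    radiate surface waves more efficiently. The threshold here is purely
    illustrative, not an operational value."""
    return (mb - ms) > threshold

# Illustrative (made-up) magnitude pairs:
print(looks_like_explosion(mb=5.1, ms=3.5))  # explosion-like -> True
print(looks_like_explosion(mb=5.1, ms=5.0))  # earthquake-like -> False
```

Real screening combines this with depth, location, and waveform comparison against past tests, as described in the conversation above.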
Melissa: Yeah, so as Dave said, as a citizen scientist, I love just setting up the USGS alert, and then if there’s an earthquake near the village of Punggye-ri, I’m like, “Ah-hah, I got you” because it’s not a very seismically active area. When the earthquakes happen that are related to an underground nuclear test, they’re shallow. They’re not deep, geological events.
Yeah, there are some giveaways: people like to do tests on the hour or the half hour, and mother nature doesn't care. But some resources for your listeners, if they want to get involved: you can go to the USGS website and set up your own alert. The CTBTO has not just seismic stations, but the radionuclide stations I mentioned, as well as infrasound and hydroacoustic and other types of facilities all over the world. There's a really cool map on their website where they show the over… I think it's nearly 300 stations all around the world now that are devoted exclusively to monitoring nuclear tests.
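For listeners who want to try the USGS route themselves, the USGS also exposes its earthquake catalog through a public FDSN web service. The sketch below only builds a query URL for events near the test site; the coordinates and parameter values are approximate illustrations:

```python
from urllib.parse import urlencode

# Approximate coordinates near the Punggye-ri test site (illustrative).
PUNGGYE_RI_LAT, PUNGGYE_RI_LON = 41.3, 129.1

def usgs_query_url(lat: float, lon: float, radius_km: int = 100,
                   min_magnitude: float = 4.0) -> str:
    """Build a query URL for the public USGS earthquake catalog (its FDSN
    event web service). Fetching the URL returns GeoJSON describing catalog
    events within `radius_km` of the given point."""
    params = urlencode({
        "format": "geojson",
        "latitude": lat,
        "longitude": lon,
        "maxradiuskm": radius_km,
        "minmagnitude": min_magnitude,
    })
    return "https://earthquake.usgs.gov/fdsnws/event/1/query?" + params

print(usgs_query_url(PUNGGYE_RI_LAT, PUNGGYE_RI_LON))
```

A shallow event near these coordinates, on the hour, in a seismically quiet area is exactly the "Ah-hah, I got you" pattern Melissa describes.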
They get their information out, I think, in seven minutes, though I don't necessarily get that information in the first seven minutes, because I'm not a state member, a state party. But they will give out information very soon afterwards. And actually, based on the seismic data, our colleague Jeffrey Lewis and some other young, smart people of the world threw together a map, not using CTBTO data, but using the seismic stations of, I think, Iran, China, Japan, and South Korea. If you go to their website, it's called SleuthingFromTheInternet.com, you can set up little alerts there too, or scan all the activities that are happening.
That was really just intended I think to be a little bit transparent with the seismic data and try to see data from different country stations, and in part, it was conceived because I think the USGS was deleting some of their explosions from the database and someone noticed. So now the idea is that you take a little bit of data from all these different countries, and that you can compare it to each other.
The last place I would suggest is to go to the IRIS seismic monitoring station, because just as Dave was mentioning, each seismic event has a different P wave, and so it shows up differently, like a fingerprint. And so, when IRIS puts out information, you can very quickly see how the different explosions in North Korea compare to each other, relatively, and so that can be really useful, too.
Dave: I will say, though, that sometimes you might get a false alarm. I believe it was with the last nuclear test: there was one reporting station, an automatic alert system run out of the UK, that didn't report it. No one caught that it hadn't, and then it reported it about a week later. So, for all of half an hour until we figured it out, there was a bit of a pause, because there was some concern they might have done another test, which would have been the seventh, but it turned out to just be delayed reporting.
Dave: Most of the time these things work out really well, but you always have to look for secondary and third sources of confirmation when these types of events happen.
Ariel: So a quick aside, we will have links to everything that you both just brought up in the transcript, so anyone interested in following up with any of these options, will be able to. I’m also going to share a fun fact that I learned, and that was, we originally had a global seismic network in order to monitor nuclear weapons testing. That’s why it was set up. And it’s only because we set that up that we actually were able to prove the plate tectonics theory.
Melissa: Oh, cool.
Dave: That’s really cool.
Melissa: Yeah. No, the CTBTO is really interesting, because even though the treaty isn't in force yet, they have these amazing scientific resources, and they've done all kinds of things. Like, they can hear whales moving around with their hydroacoustic technology, and when Iran had a major explosion at their solid motor missile facility, they detected that as well.
Ariel: Yeah. It’s fun. Like I said, I did seismology a while ago so I’m signed up for lots of fun alerts. It’s always fun to learn about where things are blowing up in the earth’s surface.
Melissa: Well, that’s really the magic of open source to me. I mean, it used to be that a government came out and said, “Okay, this is what happened, and this is what we’re going to do about it.” But the idea that me, like a regular person in the world, can actually look up this primary information in the moments that it happens, and make a determination for myself, is really empowering. It makes me feel like I have the agency I want to have in understanding the world, and so I have to admit, that day in South Korea, when I was sitting there in the office tower and it was like, “Okay, all hands on deck, everyone’s got to write a report” and I was trying to figure it out, I was like, “I can’t believe I’m doing this. I can’t believe I can do this.” It’s such a different world already.
Ariel: Yeah. That is really amazing. I like your description. It’s really empowering to know that we have access to this information. So, I do want to move on and with access to this information, what do we know about what’s going on in North Korea right now? What can you tell us about what their plans are? Do we think the summit will happen? I guess I haven’t kept up with whatever the most recent news is. Do we think that they will actually do anything to get rid of their nuclear weapons?
Dave: I think at this point, the North Koreans feel really comfortable with the amount of information and progress they've made in their nuclear weapons program. That's why they're willing to talk. This program was primarily a means to create a security assurance for the North Koreans, because the Americans and South Koreans and whatnot have always been interested in regime change, removing North Korea from the equation, trying to end the thing that started in the 1950s, the Korean War, right? So there'd just be one Korea, and we wouldn't have to worry about North Korea, or this mysterious Hermit Kingdom, above the 38th parallel.
With that said, there’s been a lot of speculation as to why the North Koreans are willing to talk to us now. Some people have been floating around the idea that maximum pressure, I think that was the word used, with sanctions and whatnot, has brought the North Koreans to their knees, and now they’re willing to give up their nukes, as we’ve been hearing about.
But the way the North Koreans use "denuclearization" is very important. On one hand, that could mean that they're willing to give up their nuclear weapons and to denuclearize the state itself, but the way the North Koreans use it is much broader. It's used more in the sense of denuclearizing the peninsula; it's not aimed specifically at themselves.
Now that they've finally achieved some type of reasonable success with their nuclear weapons program, they're more in a position where they think they can talk to the United States as equals, and denuclearization falls into the terminology as it's used by other nuclear weapons states: "In a better world we won't need these types of horrible weapons, but we don't live in that world today, so we will stand behind the effort to denuclearize, but not right now."
Melissa: Yeah, I think we can say that if we look at North Korea’s capabilities first, and then why they’re talking now, we can see that in the time when Dave and I were cutting our teeth, they were really ramping up their nuclear and missile capabilities. It wasn’t immediately obvious, because a lot of what was happening was inside a laboratory or inside a building, but then eventually they started doing nuclear tests and then they did more and more missile tests.
It used to be that a missile test was just a short range missile off the coast; sometimes it was political grandstanding. But if you look, our colleague Shea Cotton made a missile database that shows every North Korean missile test, and you can see that under Kim Jong-un, those tests really started to ramp up. I think, Dave, you started at CNS in like 2014?
Dave: Right around then.
Melissa: Right around then, so they jumped up to like 19 missile tests that year. I can say this because I’m looking at the database right now, and they started doing really more interesting things than ever before, too. Even though diplomatically and politically we were still thinking of them as being backwards, as not having a very good capability, if we looked at it quantitatively, we could say, “Well, they’re really working on something.”
So Dave actually was really excellent at geolocating. When they did engine tests, we could measure the bell of the engine and get a sense of what those engines were about. We could see solid fuel motors being tested, and this went all the way up until the ICBM launch last fall, and then they were satisfied.
Ariel: So when you say engine testing, what does that mean? What engine?
Dave: The North Korean ballistic missile fleet used to be entirely tied to this really old Soviet missile called the Scud. If anyone's played video games in the late '90s or early 2000s, that was the small missile that you always had to take out, or something along that line, and it was fairly primitive. It was a design that the North Koreans hadn't demonstrated they were able to move beyond. Then, in the last three years, the North Koreans started to field more complicated missiles, showing through engine tests more experimental, more advanced designs that we had seen in other parts of the world previously. Some people were skeptical, doubting that the North Koreans were actually making serious progress. Then last year, they tested their first intermediate range ballistic missile, which can reach Guam, something they'd been trying to do for a while without success. Then they made that missile larger and made their first ICBM.
Then they made that missile even larger, came up with a much more ambitious engine design using two engines instead of one. They had a much more advanced steering system, and they came up with the Hwasong-15 which is their longest range ICBM. It’s a huge shift from the way we were having this conversation 5 to 10 years ago, where we were looking at their space launch vehicles, which were, again, modified Scuds that were stretched out and essentially tied together, to an actual functioning ICBM fleet.
The technological shift in pair with their nuclear weapons developments have really demonstrated that the North Koreans are no longer this 10 to 20 year, around the corner threat, that they actually possess the ability to launch nuclear weapons at the United States.
Melissa: And back when they had their first nuclear test in 2006, people were like, “It’s a device.” I think for years, we still call it a device. But back then, the US and others kept moving the goalposts. They were saying, “Well, all right. They had a nuclear device explode. We don’t know how big it was, they have no way of delivering it. We don’t know what the yield was. It probably fizzled.” It was dismissive.
So, from that period, 2006 to today, it's been a really remarkable change. Almost every criticism that North Korea has faced, right down to the heat shield on their ICBM, has been answered vociferously with propaganda photos and videos that we in turn can analyze. And yeah, I think they have demonstrated, essentially, that they can explode something, and that they can launch a missile that can carry something that can explode.
The only thing they haven’t done, and Dave can chime in here, is explode a nuclear weapon on the tip of a missile. Other countries have done this, and it’s terrifying, and because Dave is such a geographically visual person, I’ll let him describe what that might look like. But if we keep goading them, if we keep telling them they’re backwards, eventually they’re going to want to prove it.
Dave: Yeah, so off of Melissa's point, this is something that I believe Jeffrey might have coined. It's called the Juche Bird, a play on Frigate Bird, which was a live nuclear warhead test that the Americans conducted. To prove that the system in its entirety, the nuclear device, the missile, the reentry shield, all works, and that it's not just small, random successes in different parts of a much larger program, the North Koreans would take a live nuclear weapon, put it on the end of a long range missile, launch it into the air, and detonate it at a specific location to show that they can actually use the purported weapon system.
Melissa: So if you’re sitting in Japan or South Korea, but especially Japan, and you imagine North Korea launching an intermediate range or intercontinental ballistic missile over your country, with a nuclear weapon on it, in order to execute an atmospheric test, that makes you extremely nervous. Extremely nervous, and we all should be a little bit nervous, because it’s really hard for anyone in the open source, and I would argue in the intelligence community, to know, “Well, this is just an atmospheric test. This isn’t the beginning of a war.”
We would have to trust that they pick up the trajectory of that missile really fast and determine that it’s not heading anywhere. That’s the challenge with all of these missile tests, is no one can tell if there’s a warhead on it, or not a warhead on it, and then we start playing games with ballistic missile defense, and that is a whole new can of worms.
Ariel: What do you guys think is the risk that North Korea or any other country for that matter, would intentionally launch a nuclear weapon at another country?
Melissa: For me, it's accidents, and an accident can unfold a couple of different ways. One way would be, perhaps, the US is performing joint exercises. North Korea has some sensing equipment up on mountain peaks, and Dave has probably found every single one, but it's not perfect. It's not great, and if the picture that comes back to them is a little fuzzy, maybe this is no longer a joint exercise; maybe this is the beginning of an attack, and they decide to engage.
They’ve long said that they believe that a war will start based on the pretext of a joint exercise. In reverse scenario, what if North Korea does launch an ICBM with a nuclear warhead, in order to perform a test, and the US or Japan or South Korea think, “Well, this is it. This is the war.” And so it’s those accidental scenarios that I worry about, or even perhaps what happens if a test goes badly? Or, someone is harmed in some way?
I worry that these states would have a hard time politically rolling back where they feel they have to be, based on these high stakes.
Dave: I agree with Melissa. I think the highest risk we have also depends on our nuclear posture and on accidents. There have been accidents in the past where someone at a monitoring base picks up a bunch of blips on a radar and people start initiating response protocols, and luckily we've been able to avoid that running to completion in the past.
Now, with the North Koreans, this could also work in their direction, as well. I can’t imagine that their sensing technology is up to par with what the United States has, or had, back when these accidents were a real thing and they happened. So if the North Koreans see a military exercise that they don’t feel comfortable with, or they have some type of technical glitch on their side, they might notionally launch something, and that would be the start of a conflict.
Ariel: One of the final questions that I have for both of you. I’ve read that while nuclear weapons are scary, the greater threat with North Korea could actually be their conventional weapons. Could either of you speak to that?
Dave: Yeah, sure. North Korea has a very large conventional army. Some people might try to make jokes about how modern that army is, but military force only needs to be so modern with the type of geographical game that’s in play on the Korean Peninsula. Seoul is really not that far from the DMZ, and it’s a widely known fact that North Korea has tons of artillery pointed at Seoul. They’ve had these things pointed there since the end of the Korean War, and they’re all entrenched.
You might be able to hit some of them, but you're not going to hit all of them. This type of artillery, in connection with their conventional ballistic missile force, and here we're talking about missiles that aren't carrying a WMD, is a really big threat for some type of conventional action.
Seoul is a huge city. The metropolitan area at least has a population of over 20 million people. I’m not sure if you’ve ever been to Seoul, it’s a great, beautiful city, but traffic is horrible, and if everyone’s trying to leave the city when something happens, everyone north of the river is screwed, and congestion on the south side, it would just be a total disaster. Outside of the whole nuclear aspect of this dangerous relationship, the conventional forces North Korea has are equally as terrifying.
Melissa: I think Dave’s bang on, but the only thing I would add is that one of the things that’s concerning about having both nuclear and conventional forces is how you use your conventional forces with that extra nuclear guarantee. This is something that our boss, Jeffrey Lewis, has written about extensively. But do you use that extra measure of security and just preserve it, save it? Does Kim Jong-un go home at night to his family and say, “Yes, I feel extra safe today because I have my nuclear security?”
Or do you use that extra nuclear security to increase the number of provocations that you carry out conventionally? Because we've had these crises break out over the sinking of the Cheonan naval vessel, or the shelling of Yeonpyeong, near the border. In both cases, South Koreans died, but the question is whether North Korea will feel emboldened by its nuclear security, and whether it will carry out more conventional provocations.
Ariel: Okay, and so for the last question that I want to ask, we’ve talked about all these things that could go wrong, and there’s really just never anything that positive about a nuclear weapons discussion, but I still want to end with is there anything that gives you hope about this situation?
Dave: That's a tough question. I mean, on one side, we have a nuclear armed North Korea, and this is something we knew was coming for quite some time. If anything, one thing that I know I have been advocating, and I believe Melissa has as well, is conversation and dialogue between North [Korea] and all the other associated parties, including the United States, as a way to open some line of communication, hopefully so that accidents don't happen.
‘Cause North Korea’s not going to be giving up their nukes anytime soon. Even though the talks that you may be having aren’t going to be as productive as you would want them to be, I believe conversation is critical at this moment, because the other alternatives are pretty bad.
Melissa: I guess I’ll add on that we have Dave now, and I know it sounds like I’m teasing my colleague, but it’s true. Things are bad, things are bad, but we’re turning out generation after generation of young, brilliant, enthusiastic people. Before 2014, we didn’t have a Dave, and now we have a Dave, and Dave is making more Daves, and every year we’re matriculating students who care about this issue, who are finding new ways to engage with this issue, that are disrupting entrenched thinking on this issue.
Nuclear weapons are old. They are scary, they are the biggest explosion that humans have ever made, but they are physical and finite, and the technology is aging, and I do think with new creative, engaging ways, the next generation’s going to come along and they’re going to be able to address this issue with new hacks. These can be technical hacks, they can be along the side of verification and trust building. These can be diplomatic hacks.
The grassroots movements we see all around the world, that are taking place to ban nuclear weapons, those are largely motivated by young people. I’m on this bridge where I get to see… I remember the Berlin Wall coming down, I also get to see the students who don’t remember 9/11, and it’s a nice vantage point to be able to see how history’s changing, and while it feels very scary and dark in this moment, in this administration, we’ve been in dark administrations before. We’ve faced much more terrifying adversaries than North Korea, and I think it’s going to be generations ahead who are going to help crack this problem.
Ariel: Excellent. That was a really wonderful answer. Thank you. Well, thank you both so much for being here today. I’ve really enjoyed talking with you.
Melissa: Thanks for having us.
Dave: Yeah, thanks for having us on.
Ariel: For listeners, as I mentioned earlier, we will have links to anything we discussed on the podcast in the transcript of the podcast, which you can find from the homepage of FutureOfLife.org. So, thanks again for listening, like the podcast if you enjoyed it, subscribe to hear more, and we will be back again next month.
[end of recorded material]
What are the odds of a nuclear war happening this century? And how close have we been to nuclear war in the past? Few academics focus on the probability of nuclear war, but many leading voices like former US Secretary of Defense, William Perry, argue that the threat of nuclear conflict is growing.
On this month's podcast, Ariel spoke with Seth Baum and Robert de Neufville from the Global Catastrophic Risk Institute (GCRI), who recently coauthored a report titled A Model for the Probability of Nuclear War. The report examines 60 historical incidents that could have escalated to nuclear war and presents a model for determining the odds that we could have some type of nuclear war in the future.
Topics discussed in this episode include:
- the most hair-raising nuclear close calls in history
- whether we face a greater risk from accidental or intentional nuclear war
- China’s secrecy vs the United States’ transparency about nuclear weapons
- Robert’s first-hand experience with the false missile alert in Hawaii
- and how researchers can help us understand nuclear war and craft better policy
Links you might be interested in after listening to the podcast:
- A Model for the Impacts of Nuclear War
- Some actions you can take to reduce the risk of nuclear weapons
- The cost of nuclear weapons: what do we miss out on to upgrade the arsenal?
You can listen to this podcast above or read the transcript below.
Ariel: Hello, I’m Ariel Conn with the Future of Life Institute. If you’ve been listening to our previous podcasts, welcome back. If this is new for you, also welcome, but in any case, please take a moment to follow us, like the podcast, and maybe even share the podcast.
Today, I am excited to present Seth Baum and Robert de Neufville with the Global Catastrophic Risk Institute (GCRI). Seth is the Executive Director and Robert is the Director of Communications, he is also a super forecaster, and they have recently written a report called A Model for the Probability of Nuclear War. This was a really interesting paper that looks at 60 historical incidents that could have escalated to nuclear war and it basically presents a model for how we can determine what the odds are that we could have some type of nuclear war in the future. So, Seth and Robert, thank you so much for joining us today.
Seth: Thanks for having me.
Robert: Thanks, Ariel.
Ariel: Okay, so before we get too far into this, I was hoping that one or both of you could just talk a little bit about what the paper is and what prompted you to do this research, and then we’ll go into more specifics about the paper itself.
Seth: Sure, I can talk about that a little bit. So the paper is a broad overview of the probability of nuclear war, and it has three main parts. One is a detailed background on how to think about the probability, explaining the difference between the concept of probability and the concept of frequency, and related background in probability theory that's relevant for thinking about nuclear war. Then there is a model that scans across a wide range, maybe the entire range, but at least a very wide range, of scenarios that could end up in nuclear war. And finally, there is a data set of historical incidents that at least had some potential to lead to nuclear war, and those incidents are organized in terms of the scenarios that are in the model. The historical incidents give us at least some indication of how likely each of those scenario types is.
Ariel: Okay. At the very, very start of the paper, you guys say that nuclear war doesn’t get enough scholarly attention, and so I was wondering if you could explain why that’s the case and what role this type of risk analysis can play in nuclear weapons policy.
Seth: Sure, I can talk to that. The paper, I believe, specifically says that the probability of nuclear war does not get much scholarly attention. In fact, we put a fair bit of time into trying to find every previous study that we could, and there was really, really little that we were able to find. Maybe we missed a few things, but my guess is that this is just about all that's out there, and it's really not very much at all. We can only speculate on why there has not been more research of this type; my best guess is that the people who have studied nuclear war, and there's a much larger literature on other aspects of nuclear war, just do not approach it from a risk perspective as we do, and are inclined to think about nuclear war from other perspectives and focus on other aspects of it.
So the intersection of people who are both interested in studying nuclear war and tend to think in quantitative risk terms is a relatively small population of scholars, which is, at least, my best guess for why there's been so little research.
Robert: Yeah, it’s a really interesting question. I think that the tendency has been to think about it strategically, something we have control over, somebody makes a choice to push a button or not, and that makes sense from some perspective. I think there’s also a way in which we want to think about it as something unthinkable. There hasn’t been a nuclear detonation in a long time and we hope that there will never be another one, but I think that it’s important to think about it this way so that we can find the ways that we can mitigate the risk. I think that’s something that’s been neglected.
Seth: Just one quick clarification: there have been very recent nuclear detonations, but those have all been test detonations, not detonations in conflict.
Robert: Fair enough. Right, not a use in anger.
Ariel: That actually brings up a question that I have. As you guys point out in the paper, we’ve had one nuclear war and that was World War II, so we essentially have one data point. How do you address probability with so little actual data?
Seth: I would say “carefully,” and this is why the paper itself is very cautious with respect to quantification. We don’t actually include any numbers for the probability of nuclear war in this paper.
Calculating probabilities is easy when you have a large data set for that type of event. If you want to calculate the probability of dying in a car crash, for example, there's lots of data on that, because it's something that happens with a fairly high frequency. For nuclear war, there's just one data point, and it came under circumstances very different from what we have right now: World War II. Maybe there would be another world war, but no two world wars are the same. So we have to, instead, look at all the different types of evidence that we can bring in to get some understanding of how nuclear war could occur, which includes evidence about the process of going from calm into periods of tension, all the way to the actual decision to initiate nuclear war. And then we also look at a wider set of historical data, which is something we did in this paper, looking at incidents that did not end up as nuclear wars but pushed at least a little bit in that direction, to see what we can learn about how likely it is for things to go in the direction of nuclear war, which tells us at least something about how likely it is to go all the way.
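As a toy illustration of Seth's point, and emphatically not the model in the GCRI paper, Laplace's rule of succession is the textbook way to turn a handful of observations into a hedged probability estimate:

```python
def rule_of_succession(events: int, trials: int) -> float:
    """Laplace's rule of succession: with a uniform prior on an unknown
    per-trial probability, observing `events` successes in `trials` trials
    gives a posterior mean of (events + 1) / (trials + 2). This is a toy
    illustration of estimating from sparse data, not GCRI's actual model."""
    return (events + 1) / (trials + 2)

# One nuclear war in roughly 73 years of observation, treating each year as
# an independent trial -- a strong and very debatable assumption:
print(round(rule_of_succession(1, 73), 4))  # -> 0.0267
```

The point of the toy is exactly Seth's caution: a single data point forces the estimate to lean heavily on prior assumptions, which is why the paper avoids publishing headline numbers and instead decomposes the problem into scenario types with their own historical evidence.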
Ariel: Robert, I wanted to turn to you on that note, you were the person who did a lot of work figuring out what these 60 historical events were. How did you choose them?
Robert: Well, I wouldn’t really say I chose them; I tried to just find every event that was there. There are a few things that we left out because we thought they fell below some threshold of seriousness, but in theory you could probably expand the scope even a little wider than we did. To some extent we just looked at what’s publicly known. I think the data set is really valuable, I hope it’s valuable, but one of the issues with it is that it’s kind of a convenience sample of the things that we know about, and some areas, some parts of history, are much better reported on than others. For example, we know a lot about the Cuban Missile Crisis in the 1960s, a lot of research has been done on that, and there are times when the US government has been fairly transparent about incidents, but we know less about other periods and other countries as well. We don’t have incidents from China’s nuclear program, but that doesn’t mean there weren’t any, it just means it’s hard to find out, and that’s an area that would be really interesting to do more research on.
Ariel: So, what was the threshold you were looking at to say, “Okay, I think this could have gone nuclear”?
Robert: Yeah, that’s a really good question. It’s somewhat hard to say; a lot of these things are judgment calls. If you look at the history of incidents, I think a number of them have been blown a little bit out of proportion. As they’ve been retold, people like to say we came close to nuclear war, and that’s not always true. There are other incidents which are genuinely hair-raising, and then there are some that seem very minor. Say there was some safety incident on an Air Force base where they didn’t follow procedures: you could maybe tell yourself a story in which that led to a nuclear war, but at some point you make a judgment call and say, well, that doesn’t seem like a serious issue.
But it wasn’t like we have a really clear, well-defined line. In some ways, we’d like to broaden the data set so that we can include even smaller incidents just because the more incidents, the better as far as understanding, not the more incidents the better as far as being safe.
Ariel: Right. I’d like this question to go to both of you, as you were looking through these historical events, you mentioned that they were already public records so they’re not new per se, but were there any that surprised you, and which were one or two that you found the most hair-raising?
Robert: Well, I would say one that surprised me, and this may just be because of my ignorance of certain parts of geopolitical history, but there was an incident with the USS Liberty in the Mediterranean, in which the Israelis mistook it for an Egyptian destroyer and decided to take it out, essentially, not realizing it was actually an American research vessel, and they did. What happened was the US scrambled planes to respond. The problem was that the planes they would ordinarily have scrambled were out on some other sortie, some exercise, something like that, and they ended up scrambling planes which had a nuclear payload on them. These planes were recalled pretty quickly; they mentioned this to Washington and the Secretary of Defense got on the line and said, “No, recall those planes,” so it didn’t get that far necessarily. But I found it a really shocking incident because it was a friendly fire confusion, essentially, and there were a number of cases like that in which nuclear weapons were involved because they happened to be on equipment, where they shouldn’t have been, that was used to respond to some kind of real or false emergency. That seems like a bigger issue than I would’ve at first expected: just the fact that nuclear weapons are lying around somewhere where they could be involved with something.
Ariel: Wow, okay. And Seth?
Seth: Yeah. For me this was a really eye-opening experience. I had some familiarity with the history of incidents involving nuclear weapons, but there turned out to be much more that’s gone on over the years than I really had any sense of. Some of that is because I’m not a historian, this is not my specialty, but there were any number of events in which it appears that nuclear weapons were, or at least may have been, seriously considered for use in a conflict.
Just to pick one example, the period from 1954 to 1955 is known as the first Taiwan Straits Crisis (the second crisis, by the way, in 1958, also included plans for nuclear weapons use). In the first one, plans were made up by the United States: the Joint Chiefs of Staff allegedly recommended that nuclear weapons be used against China if the conflict intensified, and President Eisenhower was apparently pretty receptive to this idea. In the end, a ceasefire was negotiated, so it didn’t come to that. Had that ceasefire not been made, the historical record is not clear on whether the US would’ve used nuclear weapons; maybe even the US leadership hadn’t made any final decision on the matter. But there were any number of these events, especially in the years or decades after World War II when nuclear weapons were still relatively new, in which the use of nuclear weapons in conflict seemed to at least get a serious consideration that I might not have expected.
I’m accustomed to thinking of nuclear weapons as having a fairly substantial taboo attached to them, but I feel like the taboo has perhaps strengthened over the years, such that leadership now is less inclined to give the use of nuclear weapons serious consideration than it was back then. That may be mistaken, but that’s the impression I get: that we may be more fortunate than we realize to have gotten through the first couple decades after World War II without an additional nuclear war, and that it might be less likely at this time, though still not entirely impossible by any means.
Ariel: Are you saying that you think the risk is higher now?
Seth: I think the risk is probably higher now. I think I would probably say that the risk is higher now than it was, say, 10 years ago because various relations between nuclear armed states have gotten worse, certainly including between the United States and Russia, but whether the probability of nuclear war is higher now versus in, say, the ’50s or the ’60s, that’s much harder to say. That’s a degree of detail that I don’t think we can really comment on conclusively based on the research that we have at this point.
Ariel: Okay. In a little while I’m going to want to come back to current events and ask about that, but before I do that I want to touch first on the model itself, which lists four steps to a potential nuclear war: initiating the event, crisis, nuclear weapon use and full-scale nuclear war. Could you talk about what each of those four steps might be? And then I’m going to have follow-up questions about that next.
Seth: I can say a little bit about that. The model you’re describing is one that was used by our colleague, Martin Hellman, in a paper he did on the probability of nuclear war, and that was probably the first paper to develop the study of the probability of nuclear war using the sort of methodology that we use in this paper, which is to develop nuclear war scenarios.
So the four steps in this model are four steps to go from a period of calm into a full-scale nuclear war. His paper was looking at the probability of nuclear war based on an event similar to the Cuban Missile Crisis, and what’s distinctive about the Cuban Missile Crisis is that we may have come close to going directly to nuclear war without any other type of conflict first. So that’s where the initiating event and the crisis in this model come from: it’s the idea that there will be some sort of event that leads to a crisis, and the crisis will go straight to nuclear weapons use, which could then scale to a full-scale nuclear war. The value of breaking it into those four steps is that you can then look at each step in turn, think through the conditions for each of them to occur and maybe the probability of going from one step to the next, which you can use to evaluate the overall probability of that type of nuclear war. That’s for one specific type of nuclear war. Our paper then tries to scan across the full range of different types of nuclear war, different nuclear war scenarios, and put that all into one broader model.
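As a rough illustration of how such a step-wise decomposition works, the probability of the scenario is just the product of the conditional probabilities of each step. Every number below is a placeholder chosen for illustration, not an estimate from Hellman's paper or from the paper under discussion.

```python
# Hypothetical sketch of a four-step scenario decomposition.
# All probabilities are illustrative placeholders, not real estimates.

p_initiating_event = 0.1  # per year: some initiating event occurs
p_crisis = 0.2            # given the event, it escalates into a crisis
p_nuclear_use = 0.05      # given a crisis, a nuclear weapon is used
p_full_scale = 0.3        # given use, escalation to full-scale nuclear war

# Chain rule: multiply the unconditional first step by each
# conditional step to get the annual probability of the full scenario.
p_scenario = p_initiating_event * p_crisis * p_nuclear_use * p_full_scale
print(f"annual probability of this scenario type: {p_scenario:.6f}")
```

The attraction of the decomposition is that each factor can be debated and evidenced separately, rather than arguing about one opaque overall number.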
Ariel: Okay. Yeah, your paper talks about 14 scenarios, correct?
Seth: That’s correct, yes.
Ariel: Okay, yeah. So I guess I have two questions for you: one, how did you come up with these 14 scenarios, and are there maybe a couple that you think are most worrisome?
Seth: So the first question we can definitely answer, we came up with them through our read of the nuclear war literature and our overall understanding of the risk and then iterating as we put the model together, thinking through what makes the most sense for how to organize the different types of nuclear war scenarios, and through that process, that’s how we ended up with this model.
As far as which ones seem to be the most worrisome, I would say a big question is whether we should be more worried about intentional versus accidental, or inadvertent nuclear war. I feel like I still don’t actually have a good answer to that question. Basically, should we be more worried about nuclear war that happens when a nuclear armed country decides to go ahead and start that nuclear war versus one where there’s some type of accident or error, like a false alarm or the detonation of a nuclear weapon that was not intended to be an act of war? I still feel like I don’t have a good sense for that.
Maybe the one thing I do feel is that it seems less likely that we would end up in a nuclear war from a detonation of a nuclear weapon that was not intentionally an act of war just because it feels to me like those events are less likely to happen. This would be nuclear terrorism or the accidental detonation of nuclear weapons, and even if it did happen it’s relatively likely that they would be correctly diagnosed as not being an act of war. I’m not certain of this. I can think of some reasons why maybe we should be worried about that type of scenario, but especially looking at the historical data it felt like those historical incidents were a bit more of a stretch, a bit further away from actually ending up in nuclear war.
Robert, I’m actually curious, your reaction to that, if you agree or disagree with that.
Robert: Well, I don’t think that non-state actors using a nuclear weapon is the big risk right now. But as far as whether it’s more likely that we’re going to get into a nuclear war through some kind of human error or a technological mistake, or whether it will be a deliberate act of war, I can think of scary things that have happened on both sides. I mean, the major thing that looms in one’s mind when you think about this is the Cuban Missile Crisis, and that’s an example of a crisis in which there were a lot of incidents during the course of that crisis where you think, well, this could’ve gone really badly, this could’ve gone the other way. So a crisis like that where tensions escalate and each country, or in this case the US and Russia, each thought the other might seriously threaten the homeland, I think are very scary.
On the other hand, there are incidents like the 1995 Norwegian rocket incident, which I find fairly alarming. In that incident, what happened was Norway was launching a scientific research rocket for studying the weather and had informed Russia that they were going to do this, but somehow that message hadn’t got passed along to the radar technicians, so the radar technician saw what looked like a submarine launched ballistic missile that could have been used to do an EMP, a burst over Russia which would then maybe take out radar and could be the first move in a full-scale attack. So this is scary because this got passed up the chain and supposedly, President Boris Yeltsin, it was Yeltsin at the time, actually activated the nuclear football in case he needed to authorize a response.
Now, we don’t really have a great sense of how close anyone came to this, and there may be a little hyperbole after the fact, but this kind of thing seems like it could get there. And 1995 wasn’t a time of big tension between the US and Russia, so this kind of thing is also pretty scary. I don’t really know; I think which risk you would find scarier depends a little bit on the current geopolitical climate. Right now, I might be most worried that the US would launch a bloody-nose attack against North Korea and North Korea would respond with a nuclear weapon. So it depends a little bit. I don’t know the answer either, I guess, is my answer.
Ariel: Okay. You guys brought up a whole bunch of things that I had planned to ask about, which is good. I mean, one of my questions had been are you more worried about intentional or accidental nuclear war, and I guess the short answer is, you don’t know? Is that fair to say?
Seth: Yeah, that’s pretty fair to say. The short answer is, at least at this time, they both seem very much worth worrying about.
As far as which one we should be more worried about, this is actually a very important detail to try to resolve for policy purposes, because it speaks directly to how we should manage our nuclear weapons. For example, if we are especially worried about accidental or inadvertent nuclear war, then we should keep nuclear weapons on a relatively low launch posture. They should not be on hair-trigger alert, because when things are on a high-alert status, it takes relatively little for the nuclear weapons to be launched, which makes it easier for a mistake to lead to a launch. Whereas if we are more worried about intentional nuclear war, then there may be some value to having them on a high-alert status in order to have a more effective deterrent and convince the other side not to launch their nuclear weapons. So this is an important matter to try to resolve, but at this point, based on the research that we have so far, it remains, I think, somewhat ambiguous.
Ariel: I do want to follow up on that. From everything I’ve read, there doesn’t seem to be any real benefit to having our intercontinental ballistic missiles, which as I understand it are the ones on hair-trigger alert, on that footing, because the submarines and the bombers still have the capability to strike back. Do you disagree with that?
Seth: I can’t say for sure whether or not I do disagree with that because it’s not something that I have looked at closely enough, so I would hesitate to comment on that matter. My general understanding is that hair-trigger alert is used as a means to enhance deterrence in order to make it less likely that either side would use their nuclear weapons in the first place, but regarding the specifics of it, that’s not something that I’ve personally looked at closely enough to really be able to comment on.
Robert: I think Seth’s right that it’s a question that needs more research in a lot of ways, and we shouldn’t answer it in the context of… We didn’t figure out the answer to that in this paper. I will say, I would personally sleep better if they weren’t on hair-trigger alert. My suspicion is that the big risk is not that one side launches some kind of decapitating first strike, I don’t think that’s really a very high risk, so I’m not as concerned as someone else might be about how well we need to deter that, how quickly we need to be able to respond. Whereas I am very concerned about the possibility of an accident, because… I mean, reading these incidents will make you concerned about it, I think. Some of them are really frightening. So that’s my intuition, but, as Seth says, I don’t think we really know. At least in terms of this model, there’s more studying we need to do.
Seth: If I may, to one of your earlier questions regarding motivations for doing this research in the first place: take a very basic nuclear weapons policy question like “should nuclear weapons be on hair-trigger alert, is that safer or more dangerous?” We can talk a little bit about what the trade-offs might be, but we don’t really have much to say about how that trade-off actually would be resolved. This is where I think it’s important for the international security community to be trying harder to analyze the risks in structured and perhaps even quantitative terms, so that we can answer these questions more rigorously than just, this is my intuition, this is your intuition. That’s really, I think, one of the main values of doing this type of research: to be able to answer these important policy questions with more confidence, and also perhaps more consensus across different points of view, than we would otherwise be able to have.
Ariel: Right. I had wanted to continue with some of the risk questions, but while we’re on the points that you’re making, Seth, what do you see moving forward with this paper? I mean, it was a bummer to read the paper and not get what the probabilities of nuclear war actually are, just a model for how we can get there, how do you see either you, or other organizations, or researchers, moving forward to start calculating what the probability could actually be?
Seth: The paper does not give us final answers for what the probability would be, but it definitely makes some important steps in that direction. Additional steps would include things like exploring the historical incidents data set more carefully, checking to see if there may be important incidents that have been missed, and asking, for each incident, how close we really think it came to nuclear war. And this is something the literature on these incidents actually diverges on. There are some people who look at these incidents and see them as really close calls; other people look at them and see them as evidence that the system works as it should: sure, there were some alarms, but the alarms were handled the way they should be handled, and the tools are in place to make sure those don’t end in nuclear war. So exactly how close these various incidents got is one important way forward towards quantifying the probability.
Another one is to come up with some sense of what the actual population of historical incidents is relative to the data set that we have. We are presumably missing some number of historical incidents. Some of them might be smaller and less important, but there might be some big ones that happened that we don’t know about, because they’re only covered in literatures in other languages (we only did research in English), or because all of the evidence about them is classified in government records by whichever governments were involved in the incident, and so we need to-
Ariel: Actually, I do actually want to interrupt with a question real quick there, and my apologies for not having read this closer, I know there were incidents involving the US, Russia, and I think you guys had some about Israel. Were there incidents mentioning China or any of the European countries that have nuclear weapons?
Seth: Yeah, I think there were probably incidents involving all of the nuclear armed countries, certainly involving China. For example, China had a war with the Soviet Union over their border some years ago and there was at least some talk of nuclear weapons involved in that. Also, the one I mentioned earlier, the Taiwan Straits Crises, those involved China. Then there were multiple incidents between India and Pakistan, especially regarding the situation in Kashmir. With France, I believe we included one incident in which a French nuclear bomber got a faulty signal to take off in combat and then it was eventually recalled before it got too far. There might’ve been something with the UK also. Robert, do you recall if there were any with the UK?
Robert: Yes, there was: during the Falklands War, apparently, they left with nuclear depth charges. It’s honestly not really clear to me why you would use a nuclear depth charge, and there’s not any evidence they ever intended to use them, but they sent out nuclear armed ships, essentially, to deal with a crisis in the Falklands.
There’s also, I think, an incident in South Africa as well when South Africa was briefly a nuclear state.
Ariel: Okay. Thanks. It’s not at all disturbing.
Robert: It’s very disturbing. I will say, I think that China is the one we know the least about. In some of the incidents that Seth mentioned involving China, the nuclear armed power that might have used nuclear weapons was the United States. So there is the Soviet-China incident, but we don’t really know a lot about the Chinese program and Chinese incidents. I think some of that is because it’s not reported in English, and to some extent it’s also that it’s classified and the Chinese are not as open about what’s going on.
Seth: Yeah, the Chinese are definitely much, much less transparent than the United States, as are the Russians. I mean, the United States might be the most transparent out of all of the nuclear armed countries.
I remember some years ago when I was spending time at the United Nations, I got the impression that the Russians and the Chinese were actually not quite sure what to make of the Americans’ transparency, that they found it hard to believe that the US government was not just putting out loads of propaganda and misinformation. It didn’t make sense to them that we actually just put out a lot of honest data about government activities here, that that’s the standard, and that you can actually trust this information, this data. So yeah, we may be significantly underestimating the number of incidents involving China, and perhaps Russia and other countries, because their governments are less transparent.
Ariel: Okay. That definitely addresses a question that I had, and my apologies for interrupting you earlier.
Seth: No, that’s fine. But this is one aspect of the research that still remains to be done that would help us figure out what the probabilities might be. It would be a mistake to just calculate them based on the data set as it currently stands, because this is likely to be only a portion of the actual historical incidents that may have ended in nuclear war.
So these are the sorts of details and nuances that were, unfortunately, beyond the scope of the project that we were able to do, but it would be important work for us or other research groups to do to take us closer to having good probability estimates.
Ariel: Okay. I want to ask a few questions that, again, are probably going to be you guys guessing as opposed to having good, hard information, and I also wanted to touch a little bit on some current events. So first, one of the things that I hear a lot is that if a nuclear war is going to happen, it’s much more likely to happen between India and Pakistan than, say, the US and Russia or US and … I don’t know about US and North Korea at this point, but I’m curious what your take on that is, do you feel that India and Pakistan are actually the greatest risk or do you think that’s up in the air?
Robert: I mean, it’s a really tough question. I would say that India and Pakistan is one of the scariest situations for sure. I don’t think they have actually come that close, but it’s not that difficult to imagine a scenario in which they would. I mean, these are nuclear powers that occasionally shoot at each other across the line of control, so I do think that’s very scary.
But I also think, and this is an intuition, this isn’t a conclusion that we have from the paper, but I also think that the danger of something happening between the United States and Russia is probably underestimated, because we’re not in the Cold War anymore, relations aren’t necessarily good, it’s not clear what relations are, but people will say things like, “Well, neither side wants a war.” Obviously neither side wants a war, but I think there’s a danger of the kind of inadvertent escalation, miscalculation, and that hasn’t really gone away. So that’s something I think is probably not given enough attention. I’m also concerned about the situation in North Korea. I think that that is now an issue which we have to take somewhat seriously.
Seth: I think the last five years or so have been a really good learning opportunity for all of us on these matters. I remember having conversations with people about this maybe five years ago, and they thought the thought of a nuclear war between the United States and Russia was just ridiculous, that that was antiquated Cold War talk, that the world had changed. And they were right in their characterization of the world as it was at that moment, but I was always uncomfortable with that, because the world could change again. And sure enough, in the last five years the world has changed very significantly, in ways that I think most people would agree make the probability of nuclear war between the United States and Russia substantially higher than it was five years ago, especially starting with the Ukraine crisis.
There’s also just a lot of basic volatility in the international system that I think is maybe underappreciated; we might like to think of it as being more deterministic, more logical than it actually is. The classic example is that World War I maybe almost didn’t happen, that it only happened because a very specific sequence of events led to the assassination of Archduke Ferdinand, and had that gone a little bit differently, he wouldn’t have been assassinated, World War I wouldn’t have happened, and the world we live in now would be very different from what it is. Or, to take a more recent example, it’s entirely possible that had the FBI director in 2016 not made an unusual decision regarding the disclosure of information about one candidate’s emails a couple weeks before the election, the outcome of the 2016 US election might’ve gone differently, and international politics would look quite different than it does right now. Who knows what will happen next year or the year after that.
So I think we can maybe make some generalizations about which conflicts seem more likely or less likely, especially at the moment, but we should be really cautious about what we think it’s going to be overall over 5, 10, 20, 30 year periods just because things really can change substantially in ways that may be hard to see in advance.
Robert: Yeah, for me, one of the lessons of World War I is not so much that it might not have happened, I think it probably would have anyway (although Seth is right, things can be very contingent), but it’s more that nobody really wanted World War I. At the time people thought it wouldn’t happen because it was sort of bad for everyone, and no one thought, “Well, this is in our interest to pursue it.” But wars can happen that way, where countries end up thinking, for one reason or another, that they need to do one thing or another that leads to war, when in fact everyone would have preferred to get together and avoid it. It’s a suboptimal equilibrium. So that’s one thing.
The other thing is that, as Seth says, things change. I’m not that concerned about what’s going on in the week that we’re recording this, but this week we had the Russian ambassador saying he would shoot down US missiles aimed at Syria, and the United States’ president responding on Twitter that they’d better get ready for his smart missiles. This, I suspect, won’t escalate to a nuclear war; I’m not losing that much sleep over it. But this is the kind of thing that you would like to see a lot less of, this is the kind of thing that’s worrying, and maybe you wouldn’t have anticipated this 10 years ago.
Seth: When you say you’re not losing much sleep on this, you’re speaking as someone who has, as I understand it, very recently, actually, literally lost sleep over the threat of nuclear war, correct?
Robert: That’s true. I was woken up early in the morning by an alert saying a ballistic missile was coming to my state, and that was very upsetting.
Ariel: Yes. So we should clarify, Robert lives in Hawaii.
Robert: I live in Hawaii. And because I take the risk of nuclear war seriously, I might’ve been more upset than some people, although I think that a large percentage of the population of Hawaii thought to themselves, “Maybe I’m going to die this morning. In fact, maybe my family’s going to die, and my neighbors and the people at the coffee shop, and our cats and the guests who are visiting us,” and it really brought home the danger. It should be obvious that nuclear war is unthinkable, but when you actually face the idea … I also had relatively recently read Hiroshima, John Hersey’s account of the aftermath of the bombing of Hiroshima, and it was easy to put myself in that situation and say, “Well, maybe I will be suffering from burns or looking for clean water.” And of course, obviously, none of us deserve it. We may be responsible for US policy in some way because the United States is a democracy, but my friends, my family, my cat, none of us want any part of this. We don’t want to get involved in a war with North Korea. So this really, I’d say, it really hit home.
Ariel: Well, I’m sorry you had to go through that.
Robert: Thank you.
Ariel: I hope you don’t have to deal with it again. I hope none of us have to deal with that.
I do want to touch on what you’ve both been talking about, though, in terms of trying to determine the probability of a nuclear war over the short term where we’re all saying, “Oh, it probably won’t happen in the next week,” but in the next hundred years it could. How do you look at the distinction in time in terms of figuring out the probability of whether something like this could happen?
Seth: That’s a good technical question. Arguably, we shouldn’t be talking about the probability of nuclear war as one thing. If anything, we should talk about the rate, or the frequency of it, that we might expect. If we’re going to talk about the probability of something, that something should be a fairly specific distinct event. For example, an example we use in the paper, what’s the probability of a given team, say, the Cleveland Indians, winning the World Series? It’s good to say what’s the probability of them winning the World Series in, say, 2018, but to say what’s the probability of them winning the World Series overall, well, if you wait long enough, even the Cleveland Indians will probably eventually win the World Series as long as they continue to play them. When we wrote the paper we actually looked it up, and it said that they have about a 17% chance of winning the 2018 World Series even though they haven’t won a World Series since like 1948. Poor Cleveland- sorry, I’m from Pittsburgh so I get to gloat a little bit.
But yeah, we should distinguish between saying what is the probability of any nuclear war happening this week or this year, versus how often we might expect nuclear wars to occur or what the total probability of any nuclear war happening over a century or whatever time period it might be.
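One common way to relate the two quantities, under a strong simplifying assumption on my part rather than anything the speakers endorse, is to model nuclear war as a Poisson process with a constant annual rate; the probability of at least one war then grows with the time horizon. The rate below is purely illustrative.

```python
import math

# Sketch under a simplifying assumption: nuclear war modeled as a
# Poisson process with a constant, purely illustrative annual rate.
# P(at least one event in t years) = 1 - exp(-rate * t)

annual_rate = 0.01  # hypothetical: one war per century on average

for years in (1, 10, 100):
    p_at_least_one = 1 - math.exp(-annual_rate * years)
    print(f"{years:>3} years: P(at least one war) = {p_at_least_one:.3f}")
```

This captures the point in the World Series analogy: even a small per-year probability implies a substantial chance over a century, which is why the per-week and per-century questions feel so different.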
Robert: Yeah. I think that over the course of the century, I mean, as I say, I’m probably not losing that much sleep on any given week, but over the course of a century if there’s a probability of something really catastrophic, you have to do everything you can to try to mitigate that risk.
I think, honestly, some terrible things are going to happen in the 21st century. I don’t know what they are, but that’s just how life is. Maybe they will involve a nuclear war of some kind. But you can also differentiate among types of nuclear war. If one nuclear bomb is used in anger in the 21st century, that’s terrible, but it wouldn’t be all that surprising or mean the destruction of the human race. But then there are the kinds of nuclear wars that could potentially trigger a nuclear winter by kicking so much soot up into the atmosphere and blocking out the sun, which might actually threaten not just the people killed in the initial bombing but the entire human race. That is something we need to take, in some sense, even more seriously, even though the chance of it is probably a fair amount smaller than the chance of one nuclear weapon being used. Not that one nuclear weapon being used wouldn’t be an incredibly catastrophic event as well, but I think with that kind of risk you really need to be very careful to try to minimize it as much as possible.
Ariel: Real quick, I got to do a podcast with Brian Toon and Alan Robock a little while ago on nuclear winter, so we’ll link to that in the transcript for anyone who wants to learn about nuclear winter, and you brought up a point that I was also curious about, and that is: what is the likelihood, do you guys think, of just one nuclear weapon being used and limited retaliation? Do you think that is actually possible or do you think if a nuclear weapon is used, it’s more likely to completely escalate into full-scale nuclear war?
Robert: I personally do think that’s possible, because a number of the scenarios that would involve using a nuclear weapon are not between the United States and Russia, or even the United States and China, so I think that some scenarios involve only a few nuclear weapons. If it were an incident with North Korea, you might worry that it would spread to Russia or China, but you can also see a scenario in which North Korea uses one or two nuclear weapons. Even with India and Pakistan, which have, what, like a hundred or so nuclear weapons each, I wouldn’t necessarily assume they would use them all. So there are scenarios in which just one or a few nuclear weapons would be used. I suspect those are the most likely scenarios, but it’s really hard to know. We don’t know the answer to that question.
Seth: There are even scenarios between the United States and Russia that involve one or just a small number of nuclear weapons, and the Russian military has the concept of the de-escalatory nuclear strike, which is the idea that if there is a major conflict that is emerging and might not be going in a favorable way for Russia, especially since their conventional military is not as strong as ours, they may use a single nuclear weapon, basically, to demonstrate their seriousness on the matter in hopes of persuading us to back down. Now, whether or not we would actually back down or escalate it into an all-out nuclear war, I don’t think that’s something that we can really know in advance, but it’s at least plausible. It’s certainly plausible that that’s what would happen, and presumably Russia considers this plausible, which is why they talk about it in the first place. Not to just point fingers at Russia, this is essentially the same thing NATO had at an earlier point in the Cold War, when the Soviet Union had the larger conventional military and our plan was to use nuclear weapons on a limited basis in order to prevent the Soviet Union from conquering Western Europe with their military, so it is possible.
I think this is one of the biggest points of uncertainty for the overall risk, is if there is an initial use of nuclear weapons, how likely is it that additional nuclear weapons are used and how many and in what ways? I feel like despite having studied this a modest amount, I don’t really have a good answer to that question. This is something that may be hard to figure out in general because it could ultimately depend on things like the personalities involved in that particular conflict, who the political and military leadership are and what they think of all of this. That’s something that’s pretty hard for us as outside analysts to characterize. But I think, both possibilities, either no escalation or lots of escalation, are possible as is everything in between.
Ariel: All right, so we’ve gone through most of the questions that I had about this paper now, thank you very much for answering those. You guys have also published a working paper this month called A Model for the Impacts of Nuclear War, but I was hoping you could maybe give us a quick summary of what is covered in that paper and why we should read it.
Seth: Risk overall is commonly quantified as the probability of some type of event multiplied by the severity of the impacts. So our first paper was on the probability side, this one’s on the impact side, and it scans across the full range of different types of impacts that nuclear war could have, looking at the five major impacts of nuclear weapons detonation, which are thermal radiation, blast, ionizing radiation, electromagnetic pulse and then finally, human perceptions, the ways that the detonation affects how people think and, in turn, how we act. We, in this paper, built out a pretty detailed model that looks at all of the different details, or at least a lot of the various details, of what each of those five effects of nuclear weapons detonations would have and what that means in human terms.
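The framing Seth describes, risk as probability times severity with severity spread across the five detonation effects, can be sketched in a few lines. All the numbers below are placeholders for illustration only, not values from the paper:

```python
# Placeholder severity scores for the five detonation effects
# the paper enumerates; real values would come from the model.
impact_channels = {
    "thermal_radiation": 0.0,
    "blast": 0.0,
    "ionizing_radiation": 0.0,
    "electromagnetic_pulse": 0.0,
    "human_perceptions": 0.0,
}

def total_risk(probability, severities):
    """Expected harm: event probability multiplied by the
    summed severity across all impact channels."""
    return probability * sum(severities.values())

# Purely illustrative numbers:
example = dict(impact_channels, blast=5.0, thermal_radiation=3.0)
print(total_risk(0.01, example))  # 0.08
```

This is only a toy decomposition; the paper's actual model breaks each of the five effects into many sub-effects, but the probability-times-severity structure is the same.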
Ariel: Were there any major or interesting findings from that that you want to share?
Seth: Well, the first thing that really struck me was, “Wow, there are a lot of ways of being killed by nuclear weapons.” Most of the time when we think about nuclear detonations and how you can get killed by them, you think about, all right, there’s the initial explosion and whether it’s the blast itself or the buildings falling on you, or the fire, it might be the fire, or maybe it’s a really high dose of radiation that you can get if you’re close enough to the detonation, that’s probably how you can die. In our world of talking about global catastrophic risks, we also will think about the risk of nuclear winter and in particular, the effect that that can have on global agriculture. But there’s a lot of other things that can happen too, especially related to the effect on physical infrastructure, or I should say civil infrastructure, roads, telecommunications, the overall economy when cities are destroyed in the war, those take out potentially major nodes in the global economy that can have any number of secondary effects, among other things.
It’s just a really wide array of effects, and that’s one thing that I’m happy for with this paper is that for, perhaps, the first time, it really tries to lay out all of these effects in one place and in a model form that can be used for a much more complete accounting of the total impact of nuclear war.
Ariel: Wow. Okay. Robert, was there anything you wanted to add there?
Robert: Well, I agree with Seth, it’s astounding, the range, the sheer panoply of bad things that could happen, but I think that once you get into a situation where cities are being destroyed by nuclear weapons, or really anything being destroyed by nuclear weapons, it can get unpredictable really fast. You don’t know the effect on the global system. A lot of times, I think, when you talk about catastrophic risk, you’re not simply talking about the impact of the initial event, but the long-term consequences it could have — starting more wars, ongoing famines, a shock to the economic system that can cause political problems — so these are things that we need to look at more. I mean, it would be the same with any kind of thing we would call a catastrophic risk. If there were a pandemic disease, the main concern might not be that the pandemic disease would wipe out everyone, but that the aftermath would cause so many problems that it would be difficult to recover from. I think that would be the same issue if there were a lot of nuclear weapons used.
Seth: Just to follow up on that, some important points here, one is that the secondary effects are more opaque. They’re less clear. It’s hard to know in advance what would happen. But then the second is the question of how much we should study them. A lot of people look at the secondary effect and say, “Oh, it’s too hard to study. It’s too unclear. Let’s focus our attention on these other things that are easier to study.” And maybe there’s something to be said for that where if there’s really just no way of knowing what might happen, then we should at least focus on the part that we are able to understand. I’m not convinced that that’s true, maybe it is, but I think it’s worth more effort than there has been to try to understand the secondary effects, see what we can say about them. I think there are a number of things that we can say about them. The various systems are not completely unknown, they’re the systems that we live in now and we can say at least a few intelligent things about what might happen to those after a nuclear war or after other types of events.
Ariel: Okay. My final question for both of you then is, as we’re talking about all these horrible things that could destroy humanity or at the very least, just kill and horribly maim way too many people, was there anything in your research that gave you hope?
Seth: That’s a good question. I feel like one thing that gave me some hope is that, when I was working on the probability paper, it seemed that at least some of the events and historical incidents that I had been worried about might not have actually come as close to nuclear war as I previously thought they had. Also, a lot of the incidents were earlier within, say, the ’40s, ’50s, ’60s, and less within the recent decades. That gave me some hope that maybe things are moving in the right direction.
But the other is that as you lay out all the different elements of both the probability and the impacts and see it in full how it all works, that really often points to opportunities that may be out there to reduce the risk and hopefully, some of those opportunities can be taken.
Robert: Yeah, I’d agree with that. I’d say there were certainly things in the list of historical incidents that I found really frightening, but I also thought that in a large number of incidents, the system, more or less, worked the way it should have, they caught the error of whatever kind it was and fixed it quickly. It’s still alarming, I still would like there not to be incidents, and you can imagine that some of those could’ve not been fixed, but they were not all as bad as I had imagined at first. So that’s one thing.
I think the other thing is, and I think Seth you were sort of indicating this, there’s something we can do, we can think about how to reduce the risk, and we’re not the only ones doing this kind of work. I think that people are starting to take efforts to reduce the risk of really major catastrophes more seriously now, and that kind of work does give me hope.
Ariel: Excellent. I’m going to end on something that … It was just an interesting comment that I heard recently, and that was: Of all the existential risks that humanity faces, nuclear weapons actually seem the most hopeful because there’s something that we can so clearly do something about. If we just had no nuclear weapons, nuclear weapons wouldn’t be a risk, and I thought that was an interesting way to look at it.
Seth: I can actually comment on that idea. I would add that you would need not just to not have any nuclear weapons, but also not have the capability to make new nuclear weapons. There is some concern that if there aren’t any nuclear weapons, then in a crisis there may be a rush to build some in order to give that side the advantage. So in order to really eliminate the probability of nuclear war, you would need to eliminate both the weapons themselves and the capacity to create them, and you would probably also want to have some monitoring measures so that the various countries had confidence that the other sides weren’t cheating. I apologize for being a bit of a killjoy on that one.
Robert: I’m afraid you can’t totally reduce the risk of any catastrophe, but there are ways we can mitigate the risk of nuclear war and other major risks too. There’s work that can be done to reduce the risk.
Ariel: Okay, let’s end on that note. Thank you both very much!
Seth: Yeah. Thanks for having us.
Robert: Thanks, Ariel.
Ariel: If you’d like to read the papers discussed in this podcast or if you want to learn more about the threat of nuclear weapons and what you can do about it, please visit futureoflife.org and find this podcast on the homepage, where we’ll be sharing links in the introduction.
[end of recorded material]
- Continuing Dangers from Nuclear Weapons
- International Initiatives Toward Disarmament
- Political Initiatives
- Campus Organizing for Peace & Justice: What Works? What Doesn’t? Where Next?
- Actions for the Coming Period: Shout Heard Round the World
- Resisting the Trillion Dollar Nuclear Weapons Escalation
- Congressional Budget-Civilian vs Pentagon
- Don’t Bank on the Bomb Divestment Campaigns
- Preventing Nuclear Weapons Use
Sunday Morning Planning Breakfast
Student-led session to design and implement programs enhancing existing campus groups, and organizing new ones; extending the network to campuses in Rhode Island, Connecticut, New Jersey, New Hampshire, Vermont and Maine.
For more information, contact Jonathan King at <firstname.lastname@example.org>, or call 617-354-2169
On Thursday, the Bulletin of the Atomic Scientists inched their iconic Doomsday Clock forward another thirty seconds. It is now two minutes to midnight.
Citing the growing threats of climate change, increasing tensions between nuclear-armed countries, and a general loss of trust in government institutions, the Bulletin warned that we are “making the world security situation more dangerous than it was a year ago—and as dangerous as it has been since World War II.”
The Doomsday Clock hasn’t been this close to midnight since 1953, a year after the US and Russia tested the hydrogen bomb, a bomb up to 1,000 times more powerful than the bombs dropped on Hiroshima and Nagasaki. And as in 1953, this year’s announcement highlighted the increased global tensions around nuclear weapons.
As the Bulletin wrote in their statement, “To call the world nuclear situation dire is to understate the danger—and its immediacy.”
Between the US, Russia, North Korea, and Iran, the threats of aggravated nuclear war and accidental nuclear war both grew in 2017. As former Secretary of Defense William Perry said in a statement, “The events of the past year have only increased my concern that the danger of a nuclear catastrophe is increasingly real. We are failing to learn from the lessons of history as we find ourselves blundering headfirst towards a second cold war.”
The threat of nuclear war has hovered in the background since the weapons were invented, but with the end of the Cold War, many were pulled into what now appears to have been a false sense of security. In the last year, aggressive language and plans for new and upgraded nuclear weapons have reignited fears of nuclear armageddon. The recent false missile alerts in Hawaii and Japan were perhaps the starkest reminders of how close nuclear war feels, and how destructive it would be.
But the nuclear threat isn’t all the Bulletin looks at. 2017 also saw the growing risk of climate change, a breakdown of trust in government institutions, and the emergence of new technological threats.
Climate change won’t hit humanity as immediately as nuclear war, but with each year that the international community fails to drastically reduce fossil fuel emissions, the threat of catastrophic climate change grows. In 2017, the US pulled out of the Paris Climate Agreement and global carbon emissions grew 2% after a two-year plateau. Meanwhile, NASA and NOAA confirmed that the past four years are the hottest four years they’ve ever recorded.
For emerging technological risks, such as widespread cyber attacks, the development of autonomous weaponry, and potential misuse of synthetic biology, the Bulletin calls for the international community to work together. They write, “world leaders also need to seek better collective methods of managing those advances, so the positive aspects of new technologies are encouraged and malign uses discovered and countered.”
Pointing to disinformation campaigns and “fake news”, the Bulletin’s Science and Security Board writes that they are “deeply concerned about the loss of public trust in political institutions, in the media, in science, and in facts themselves—a loss that the abuse of information technology has fostered.”
Turning Back the Clock
The Doomsday Clock is a poignant symbol of the threats facing human civilization, and it received broad media attention this week through British outlets like The Guardian and The Independent, Australian outlets such as ABC Online, and American outlets from Fox News to The New York Times.
“[The clock] is a tool,” explains Lawrence Krauss, a theoretical physicist at Arizona State University and member of the Bulletin’s Science and Security Board. “For one day a year, there are thousands of newspaper stories about the deep, existential threats that humanity faces.”
The Bulletin ends its report with a list of priorities to help turn back the Clock, chock-full of suggestions for government and industrial leaders. But the authors also insist that individual citizens have a crucial role in tackling humanity’s greatest risks.
“Leaders react when citizens insist they do so,” the authors explain. “Citizens around the world can use the power of the internet to improve the long-term prospects of their children and grandchildren. They can insist on facts, and discount nonsense. They can demand action to reduce the existential threat of nuclear war and unchecked climate change. They can seize the opportunity to make a safer and saner world.”
You can read the Bulletin’s full report here.
For most of us, 2017 has been a roller coaster, from increased nuclear threats to incredible advancements in AI to crazy news cycles. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. In this episode, the FLI team discusses the past year and the momentum we’ve built, including: the Asilomar Principles, our 2018 AI safety grants competition, the recent Long Beach workshop on Value Alignment, and how we’ve honored one of civilization’s greatest heroes.
Ariel: I’m Ariel Conn with the Future of Life Institute. As you may have noticed, 2017 was quite the dramatic year. In fact, without me even mentioning anything specific, I’m willing to bet that you already have some examples forming in your mind of what a crazy year this was. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. But I’ll let Max Tegmark, president of FLI, tell you a little more about that.
Max: I think it’s important when we reflect back at the year’s news to understand how things are all connected. For example, the drama we’ve been following with Kim Jong Un and Donald Trump and Putin with nuclear weapons is really very connected to all the developments in artificial intelligence, because in both cases we have a technology which is so powerful that it’s not clear that we humans have sufficient wisdom to manage it well. And that’s why I think it’s so important that we all continue working towards developing this wisdom further, to make sure that we can use these powerful technologies like nuclear energy, like artificial intelligence, like biotechnology and so on to really help rather than to harm us.
Ariel: And it’s worth remembering that part of what made this such a dramatic year was that there were also some really positive things that happened. For example, in March of this year, I sat in a sweltering room in New York City, as a group of dedicated, caring individuals from around the world discussed how they planned to convince the United Nations to ban nuclear weapons once and for all. I don’t think anyone in the room that day realized that not only would they succeed, but by December of this year, the International Campaign to Abolish Nuclear Weapons, led by Beatrice Fihn, would be awarded the Nobel Peace Prize for their efforts. And while we did what we could to help that effort, our own big story had to be the Beneficial AI Conference that we hosted in Asilomar, California. Many of us at FLI were excited to talk about Asilomar, but I’ll let Anthony Aguirre, Max, and Victoria Krakovna start.
Anthony: I would say pretty unquestionably the big thing that I felt was most important and felt most excited about was the big meeting in Asilomar and centrally putting together the Asilomar Principles.
Max: I’m going to select the Asilomar conference that we organized early this year, whose output was the 23 Asilomar Principles, which has since been signed by over a thousand AI researchers around the world.
Vika: I was really excited about the Asilomar conference that we organized this year. This was the sequel to FLI’s Puerto Rico Conference, which was at the time a real game changer in terms of making AI safety more mainstream and connecting people working in AI safety with the machine learning community and integrating those two. I think Asilomar did a great job of continuing to build on that.
Max: I’m very excited about this because I feel that it really has helped mainstream AI safety work. Not just near-term AI safety stuff, like how to transform today’s buggy and hackable computers into robust systems that you can really trust, but also mainstream larger issues. The Asilomar Principles actually contain the word superintelligence, contain the phrase existential risk, contain the phrase recursive self-improvement, and yet they have been signed by really a who’s who in AI. So from now on, it’s impossible for anyone to dismiss these kinds of concerns, this kind of safety research, by saying that’s just people who have no clue about AI.
Anthony: That was a process that started in 2016, brainstorming at FLI and then the wider community and then getting rounds of feedback and so on. But it was exciting both to see how much cohesion there was in the community and how much support there was for getting behind some sort of principles governing AI. But also, just to see the process unfold because one of the things that I’m quite frustrated about often is this sense that there’s this technology that’s just unrolling like a steam roller and it’s going to go where it’s going to go, and we don’t have any agency over where that is. And so to see people really putting thought into what is the world we would like there to be in ten, fifteen, twenty, fifty years and how can we distill what it is that we like about that world into principles like these…that felt really, really good. It felt like an incredibly useful thing for society as a whole but in this case, the people who are deeply engaged with AI, to be thinking through in a real way rather than just how can we put out the next fire, or how can we just turn the progress one more step forward, to really think about the destination.
Ariel: But what’s that next step? How do we transition from Principles that we all agree on to actions that we can also all get behind. Jessica Cussins joined FLI later in the year, but when asked what she was excited about as far as FLI was concerned, she immediately mentioned the implementation of things like the Asilomar Principles.
Jessica: I’m most excited about the developments we’ve seen over the last year related to safe, beneficial and ethical AI. I think FLI has been a really important player in this. We had the Beneficial AI conference in January that resulted in the Asilomar AI Principles. It’s been really amazing to see how much traction those principles have gotten and to see a growing consensus around the importance of being thoughtful about the design of AI systems, the challenges of algorithmic bias, of data control and manipulation, and accountability and governance. So the thing I’m most excited about right now is the growing number of initiatives we’re seeing around the world related to ethical and beneficial AI.
Anthony: What’s been great to see is the development of ideas both from FLI and from many other organizations of what policies might be good. What concrete legislative actions there might be or standards, organizations or non-profits, agreements between companies and so on might be interesting.
But I think, we’re only at the step of formulating those things and not that much action has been taken anywhere in terms of actually doing those things. Little bits of legislation here and there. But I think we’re getting to the point where lots of governments, lots of companies, lots of organizations are going to be publishing and creating and passing more and more of these things. I think seeing that play out and working really hard to ensure that it plays out in a way that’s favorable in as many ways and as many people as possible, I think is super important and something we’re excited to do.
Vika: I think that Asilomar principles are a great common point for the research community and others to agree what we are going for, what’s important.
Besides having the principles as an output, the event itself was really good for building connections between different people from interdisciplinary backgrounds, from different related fields who are interested in the questions of safety and ethics.
And we also had this workshop that was adjacent to Asilomar where our grant winners actually presented their work. I think it was great to have a concrete discussion of research and the progress we’ve made so far and not just abstract discussions of the future, and I hope that we can have more such technical events, discussing research progress and making the discussion of AI safety really concrete as time goes on.
Ariel: And what is the current state of AI safety research? Richard Mallah took on the task of answering that question for the Asilomar conference, while Tucker Davey has spent the last year interviewing various FLI grant winners to better understand their work.
Richard: I presented a landscape of technical AI safety research threads. This lays out hundreds of different types of research areas and how they are related to each other. All different areas that need a lot more research going into them than they have today to help keep AI safe and beneficent and robust. I was really excited to be at Asilomar and to have co-organized Asilomar and that so many really awesome people were there and collaborating on these different types of issues. And that they were using that landscape that I put together as sort of a touchpoint and way to coordinate. That was pretty exciting.
Tucker: I just found it really inspiring interviewing all of our AI grant recipients. It’s kind of been an ongoing project interviewing these researchers and writing about what they’re doing. Just for me, getting recently involved in AI, it’s been incredibly interesting to get either a half an hour, an hour with these researchers to talk in depth about their work and really to learn more about a research landscape that I hadn’t been aware of before working at FLI. Really, being a part of those interviews and learning more about the people we’re working with and these people that are really spearheading AI safety was really inspiring to be a part of.
Ariel: And with that, we have a big announcement.
Richard: So, FLI is launching a new grants program in 2018. This time around, we will be focusing more on artificial general intelligence, artificial super intelligence and ways that we can do technical research and other kinds of research today. On today’s systems or things that we can analyze today, things that we can model or make theoretical progress on today that are likely to actually still be relevant at the time, where AGI comes about. This is quite exciting and I’m excited to be part of the ideation and administration around that.
Max: I’m particularly excited about the new grants program that we’re launching for AI safety research. Since AI safety research itself has become so much more mainstream, since we did our last grants program three years ago, there’s now quite a bit of funding for a number of near term challenges. And I feel that we at FLI should focus on things more related to challenges and opportunities from super intelligence, since there is virtually no funding for that kind of safety research. It’s going to be really exciting to see what proposals come in and what research teams get selected by the review panels. Above all, how this kind of research hopefully will contribute to making sure that we can use this powerful technology to create a really awesome future.
Vika: I think this grant program could really build on the impact of our previous grant program. I’m really excited that it’s going to focus more on long term AI safety research, which is still the most neglected area.
AI safety has really caught on in the past two years, and there’s been a lot more work on that going on, which is great. And part of what this means is that we at FLI can focus more on the long term. The long term work has also been getting more attention, and this grant program can help us build on that and make sure that the important problems get solved. This is really exciting.
Max: I just came back from spending a week at the NIPS Conference, the biggest artificial intelligence conference of the year. It’s fascinating how rapidly everything is proceeding. AlphaZero has now defeated not just human chess players and Go players, but it has also defeated human AI researchers, who, after spending 30 years handcrafting artificial intelligence software to play computer chess, got all their work completely crushed by AlphaZero, which just learned to do much better than that from scratch in four hours.
So, AI is really happening, whether we like it or not. The challenge we face is simply to complement that with AI safety research and a lot of good thinking to make sure that this helps humanity flourish rather than flounder.
Ariel: In the spirit of flourishing, FLI also turned its attention this year to the movement to ban lethal autonomous weapons. While there is great debate around how to define autonomous weapons and whether or not they should be developed, more people tend to agree that the topic should at least come before the UN for negotiations. And so we helped create the video Slaughterbots to help drive this conversation. I’ll let Max take it from here.
Max: Slaughterbots, autonomous little drones that can go anonymously murder people without any human control. Fortunately, they don’t exist yet. We hope that an international treaty is going to keep it that way, even though we almost have the technology to do them already. We just need to integrate and then mass-produce tech we already have. So to help with this, we made this video called Slaughterbots. It was really impressive to see it get over forty million views and make the news throughout the world. I was very happy that Stuart Russell, whom we partnered with in this, also presented this to the diplomats at the United Nations in Geneva when they were discussing whether to move towards a treaty, drawing a line in the sand.
Anthony: Pushing on the autonomous weapons front, it’s been really scary, I would say, to think through that issue. But a little bit like the issue of AI in general, there’s a potential scary side, but there’s also a potentially helpful side in that I think this is an issue that is a little bit tractable. Even a relatively small group of committed individuals can make a difference. So I’m excited to see how much movement we can get on the autonomous weapons front. It doesn’t seem at all like a hopeless issue to me, and I think, I hope, 2018 will be kind of a turning point for that issue. It’s kind of flown under the radar, but it really is coming up now, and it will be at least interesting. Hopefully, it will be exciting and happy and so on as well as interesting. It will at least be interesting to see how it plays out on the world stage.
Jessica: For 2018, I’m hopeful that we will see the continued growth of the global momentum against lethal autonomous weapons. Already, this year a lot has happened at the United Nations and across communities around the world, including thousands of AI and robotics researchers speaking out and saying they don’t want to see their work used to create these kinds of destabilizing weapons of mass destruction. One thing I’m really excited for 2018 is to see a louder, rallying call for an international ban of lethal autonomous weapons.
Ariel: Yet one of the biggest questions we face when trying to anticipate autonomous weapons, artificial intelligence in general, and even artificial general intelligence is: when? When will these technologies be developed? If we could answer that, then solving problems around those technologies could become both more doable and possibly more pressing. This is an issue Anthony has been considering.
Anthony: Of most interest has been the overall set of projects to predict artificial intelligence timelines and milestones. This is something that I’ve been doing through this prediction website, Metaculus, which I’ve been a part of, and also something where I took part in a very small workshop run by the Foresight Institute over the summer. It’s a super important question, because I think the overall urgency with which we have to deal with certain issues really depends on how far away they are. It’s also an instructive one, in that even posing the questions of what we want to know exactly really forces you to think through what it is that you care about, how you would estimate things, and what different considerations there are in terms of this sort of big question.
We have this sort of big question, like: when is really powerful AI going to appear? But when you dig into that, what exactly is “really powerful”? What does “appear” mean? Does it mean in an academic setting? Does it mean it becomes part of everybody’s life?
So there are all kinds of nuances to that overall big question that lots of people are asking. Just getting into refining the questions, trying to pin down what it is that we mean, making them exact so that people can make precise, numerical predictions about them: I think it’s been really, really interesting and elucidating to me in understanding what all the issues are. I’m excited to see how that continues to unfold as we get more questions and more predictions and more expertise focused on it. I’m also a little bit nervous, because the timelines seem to be getting shorter and shorter and the urgency of the issue seems to be getting greater and greater. So that’s a bit of a fire under us, I think, to keep acting and keep a lot of intense effort on making sure that as AI gets more powerful, we get better at managing it.
Ariel: One of the current questions AI researchers are struggling with is the problem of value alignment, especially when considering more powerful AI. Meia Chita-Tegmark and Lucas Perry recently co-organized an event to get more people thinking creatively about how to address this.
Meia: So we just organized a workshop about the ethics of value alignment together with a few partner organizations, the Berggruen Institute and also CFAR.
Lucas: This was a workshop that recently took place in California, and just to remind everyone, value alignment is the process by which we bring AI’s actions, goals, and intentions into alignment with what is deemed to be good, or with human values, preferences, goals, and intentions.
Meia: And we had a fantastic group of thinkers there. We had philosophers. We had social scientists, AI researchers, political scientists. We were all discussing this very important issue of how do we get an artificial intelligence that is aligned to our own goals and our own values.
It was really important to have the perspectives of ethicists and moral psychologists, for example, because this question is not just about the technical aspect of how you actually implement it, but also about whose values we want implemented, who should be part of the conversation, who gets excluded, and what process we want to establish to collect all the preferences and values that we want implemented in AI. That was really fantastic. It was a very nice start to what I hope will continue to be a really fruitful collaboration between different disciplines on this very important topic.
Lucas: I think one essential take-away from that was that value alignment is truly something that is interdisciplinary. It has normally been couched and understood in the context of technical AI safety research, but value alignment, at least in my view, also inherently includes ethics and governance. It seems that the project of creating beneficial AI through efforts in value alignment can really only happen when we have lots of different people from lots of different disciplines working together on this supremely hard issue.
Meia: I think the issue with AI is something that, first of all, concerns such a great number of people. It concerns all of us. It will impact, and already is impacting, all of our experiences. There are different disciplines that look at this impact in different ways.
Of course, technical AI researchers will focus on developing this technology, but it’s very important to think about how this technology co-evolves with us. For example, I’m a psychologist. I like to think about how it impacts our own psyche, how it impacts the way we act in the world, the way we behave. Stuart Russell many times likes to point out that one danger that can come with very intelligent machines is a subtle one: not necessarily what they will do, but what we will not do because of them. He calls this enfeeblement. What are the capacities that are being stifled because we no longer engage in some of the cognitive tasks that we’re now delegating to AIs?
So that’s just one example of how psychologists can help bring more light and make us reflect on what it is that we want from our machines, how we want to interact with them, and how we want to design them so that they actually empower us rather than enfeeble us.
Lucas: Yeah, I think that one essential thing to FLI’s mission and goal is the generation of beneficial AI. To me, and I think to many other people coming out of this Ethics of Value Alignment conference, what “beneficial” exactly entails and what beneficial looks like is still a really open question, both in the short term and in the long term. I’d be really interested in seeing both FLI and other organizations pursue questions in value alignment more vigorously: issues regarding the ethics of AI, and issues regarding values and the sort of world that we want to live in.
Ariel: And what sort of world do we want to live in? If you’ve made it this far through the podcast, you might be tempted to think that all we worry about is AI. And we do think a lot about AI. But our primary goal is to help society flourish. And so this year, we created the Future of Life Award to be presented to people who act heroically to ensure our survival and hopefully move us closer to that ideal world. Our inaugural award was presented in honor of Vasili Arkhipov who stood up to his commander on a Soviet submarine, and prevented the launch of a nuclear weapon during the height of tensions in the Cold War.
Tucker: One thing that particularly stuck out to me was our inaugural Future of Life Award and we presented this award to Vasili Arkhipov who was a Soviet officer in the Cold War and arguably saved the world and is the reason we’re all alive today. He’s now passed, but FLI presented a generous award to his daughter and his grandson. It was really cool to be a part of this because it seemed like the first award of its kind.
Meia: So, of course with FLI, we have all these big projects that take a lot of time. But I think for me, one of the more exciting and heartwarming and wonderful moments that I was able to experience due to our work here at FLI was a train ride from London to Cambridge with Elena and Sergei, the daughter and the grandson of Vasili Arkhipov. Vasili Arkhipov is the Russian naval officer who helped prevent a third world war during the Cuban missile crisis. The Future of Life Institute awarded him the Future of Life prize this year. He is unfortunately no longer alive, but his daughter and his grandson were there in London to receive it.
Vika: It was great to get to meet them in person and to all go on stage together and have them talk about their attitude towards the dilemma that Vasili Arkhipov has faced, and how it is relevant today, and how we should be really careful with nuclear weapons and protecting our future. It was really inspiring.
At that event, Max was giving his talk about his book, and then at the end we had the Arkhipovs come up on stage, and it was kind of fun for me to translate their speech for the audience. I could not fully transmit all the eloquence, but I thought it was a very special moment.
Meia: It was just so amazing to really listen to their stories about the father, the grandfather, and look at photos that they had brought all the way from Moscow. This person has become a hero for so many people who are really concerned about existential risk, so it was nice to imagine him in his capacity as a son, as a grandfather, as a husband, as a human being. It was very inspiring and touching.
One of the nice things was that they showed a photo of him with notes that he had written on the back. It was his favorite photo. One of the comments he made is that he felt it was the most beautiful photo of himself because there was no glint in his eyes, just this pure concentration. I thought that said a lot about his character. He rarely smiled in photos, too, and always looked very pensive. Very much like you’d imagine a hero who saved the world would be.
Tucker: It was especially interesting for me to work on the press release for this award and to reach out to people from different news outlets, like The Guardian and The Atlantic, and to actually see them write about this award.
I think something like the Future of Life Award is inspiring because it highlights people in the past that have done an incredible service to civilization, but I also think it’s interesting to look forward and think about who might be the future Vasili Arkhipov that saves the world.
Ariel: As Tucker just mentioned, this award was covered by news outlets like the Guardian and the Atlantic. And in fact, we’ve been incredibly fortunate to have many of our events covered by major news. However, there are even more projects we’ve worked on that we think are just as important and that we’re just as excited about that most people probably aren’t aware of.
Jessica: So people may not know that FLI recently joined the Partnership on AI. This was the group founded by Google, Amazon, Facebook, Apple, and others to think about issues like safety, fairness, and impact from AI systems. So I’m excited about this because I think it’s really great to see this kind of social commitment from industry, and it’s going to be critical to have the support and engagement from these players to really see AI being developed in a way that’s positive for everyone. So I’m really happy that FLI is now one of the partners in what will likely be an important initiative for AI.
Anthony: I attended the first meeting of the Partnership on AI in October. At that meeting there was so much discussion of some of the principles themselves, both directly and in a broad sense; so much discussion, from all of the key organizations engaged with AI, almost all of whom had representation there, about how we are going to make these things happen. If we value transparency, if we value fairness, if we value safety and trust in AI systems, how are we going to actually get together and formulate best practices and policies, and groups and data sets and things, to make all that happen? To see the speed at which the field has moved from purely “wow, we can do this” to “how are we going to do this right, how are we going to do this well, and what does this all mean” has been a ray of hope, I would say.
AI is moving so fast, but it was good to see that the sort of wisdom race hasn’t been conceded entirely: there is a dedicated group of people working really hard to figure out how to do it well.
Ariel: And then there’s Dave Stanley, who has been the force around many of the behind-the-scenes projects that our volunteers have been working on that have helped FLI grow this year.
Dave: Another project that has very much been ongoing relates to the website: our effort to take the English content about AI safety and nuclear weapons, which has been fairly influential in English-speaking countries, and make it available in a lot of other languages to maximize the impact that it’s having.
Right now, thanks to the efforts of our volunteers, we have 55 translations available on our website in nine different languages: Russian, Chinese, French, Polish, Spanish, German, Hindi, Japanese, and Korean. All in all, this represents about 1,000 hours of volunteer time. I’d just like to give a shoutout to some of the volunteers who have been involved. They are Alan Yan, Kevin Wang, Kazue Evans, Jake Beebe, Jason Orlosky, Li Na, Bena Lim, Alina Kovtun, Ben Peterson, Carolyn Wu, Zhaoran Joanna Wang, Mayumi Nakamura, Derek Su, Dipti Pandey, Marvin, Vera Koroleva, Grzegorz Orwiński, Szymon Radziszewicz, Natalia Berezovskaya, Vladimir Nimensky, Natalia Kuzmenko, George Godula, Eric Gastfriend, Olivier Grondin, Claire Park, Kristy Wen, Yishuai Du, and Revathi Vinoth Kumar.
Ariel: As we’ve worked to establish AI safety as a global effort, Dave and the volunteers were behind the trip Richard took to China, where he participated in the Global Mobile Internet Conference in Beijing earlier this year.
Dave: So basically, this was something that was actually prompted and largely organized by one of FLI’s volunteers, George Godula, who’s based in Shanghai right now.
Basically, this was partially motivated by the fact that China has recently been promoting a lot of investment in artificial intelligence research, and they’ve made it a national objective to become a leader in AI research by 2025. So FLI and the team have been making efforts to build connections with China, raise awareness about AI safety (at least our view on AI safety), and engage in dialogue there.
It culminated with George organizing this trip for Richard, and a large portion of the FLI volunteer team participating in support for that trip: identifying contacts for Richard to connect with over there, researching the landscape, and providing general support. That’s been coupled with an effort to take some of the existing articles FLI has on its website about AI safety and translate them into Chinese, to make them accessible to that audience.
Ariel: In fact, Richard has spoken at many conferences, workshops and other events this year, and he’s noted a distinct shift in how AI researchers view AI safety.
Richard: This is a single example of many such things I’ve done throughout the year. Yesterday I gave a talk about AI safety and beneficence to a bunch of machine learning and artificial intelligence researchers and entrepreneurs in Boston, where I’m based. Every time I do this, it’s really fulfilling that so many of these people, who really are pushing the leading edge of what AI does in many respects, realize that these are extremely valid concerns and that there are new types of technical avenues to help keep things better for the future. The fact that I’m not receiving pushback anymore, as compared to many years ago when I would talk about these things, shows that people really are trying to engage and understand and kind of weave themselves into whatever is going to turn into the best outcome for humanity, given the type of leverage that advanced AI will bring us. I think people are starting to really get what’s at stake.
Ariel: And this isn’t just the case among AI researchers. Throughout the year, we’ve seen this discussion about AI safety broaden into various groups outside of traditional AI circles, and we’re hopeful this trend will continue in 2018.
Meia: I think that 2017 has been fantastic for starting this project of getting more thinkers from different disciplines to really engage with the topic of artificial intelligence, but I think we have just managed to scratch the surface of this topic in this collaboration. So I would really like to work more on strengthening this conversation and this flow of ideas between different disciplines. I think we can achieve so much more if we can make sure that we hear each other, that we go past our own disciplinary jargon, and that we truly are able to communicate and join each other in research projects where we can bring different tools and different skills to the table.
Ariel: The landscape of AI safety research that Richard presented at Asilomar at the start of the year was designed to enable greater understanding among researchers. Lucas rounded off the year with another version of the landscape, this one looking at ethics and value alignment with the goal, in part, of bringing more experts from other fields into the conversation.
Lucas: One thing that I’m also really excited about for next year is seeing our conceptual landscapes of both AI safety and value alignment being used in more educational contexts and in contexts in which they can foster interdisciplinary conversations regarding issues in AI. I think their virtue is that they map out the conceptual landscape of both AI safety and value alignment, but also include definitions and descriptions of jargon. Given this, they function both as a means of introducing people to AI safety, value alignment, and AI risk, and as a means of introducing experts to the conceptual mappings of the spaces that other experts are engaged with, so they can learn each other’s jargon and really have conversations that are fruitful and streamlined.
Ariel: As we look to 2018, we hope to develop more programs, work on more projects, and participate in more events that will help draw greater attention to the various issues we care about. We hope to not only spread awareness, but also to empower people to take action to ensure that humanity continues to flourish in the future.
Dave: There are a few things coming up that I’m really excited about. The first is that we’re going to try to release some new interactive apps on the website. Hopefully these will be pages that can gather a lot of attention and educate people about the issues we’re focused on, mainly nuclear weapons: answering questions to give people a better picture of the geopolitical and economic factors that motivate countries to keep their nuclear weapons, and how this relates to public support, based on polling data, for whether the general public wants to keep these weapons or not.
Meia: One thing that has also made me very excited in 2017, and that I’m looking forward to seeing the evolution of in 2018, was the public’s engagement with this topic. I’ve had the luck to be in the audience for many of the book talks that Max has given for his book “Life 3.0: Being Human in the Age of Artificial Intelligence,” and it was fascinating just listening to the questions. They’ve become so much more sophisticated and nuanced than a few years ago. I’m very curious to see how this evolves in 2018, and I hope that FLI will contribute to this conversation and to making it richer. I’d like people in general to get much more engaged with this topic, and to refine their understanding of it.
Tucker: Well, I think in general it’s been amazing to watch FLI this year because we’ve made big splashes in so many different things with the Asilomar conference, with our Slaughterbots video, helping with the nuclear ban, but I think one thing that I’m particularly interested in is working more this coming year to I guess engage my generation more on these topics. I sometimes sense a lot of defeatism and hopelessness with people in my generation. Kind of feeling like there’s nothing we can do to solve civilization’s biggest problems. I think being at FLI has kind of given me the opposite perspective. Sometimes I’m still subject to that defeatism, but working here really gives me a sense that we can actually do a lot to solve these problems. I’d really like to just find ways to engage more people in my generation to make them feel like they actually have some sense of agency to solve a lot of our biggest challenges.
Ariel: Learn about these issues and more, join the conversation, and find out how you can get involved by visiting futureoflife.org.
The following policy memo was written and posted by the Stanley Foundation.
How might a nuclear crisis play out in today’s media environment? What dynamics in this information ecosystem—with social media increasing the velocity and reach of information, disrupting journalistic models, creating potent vectors for disinformation, and changing how political leaders interact with constituencies—might challenge decision making during crises between nuclear-armed states?
This memo discusses facets of the modern information ecosystem and how they might affect decision making involving the use of nuclear weapons, based on insights from a multidisciplinary roundtable. The memo concludes with more questions than answers. Because the impact of social media on international crisis stability is recent, there are few cases from which to draw conclusions. But because the catastrophic impact of a nuclear exchange is so great, there is a need to further investigate the mechanisms by which the current information ecosystem could influence decisions about the use of these weapons. To that end, the memo poses a series of questions to inspire future research to better understand new—or newly important—dynamics in the information ecosystem and international security environment.
The following article was written by Dr. Lisbeth Gronlund and originally posted on the Union of Concerned Scientists blog.
The July 2015 Iran Deal, which places strict, verified restrictions on Iran’s nuclear activities, is again under attack by President Trump. This time he’s kicked responsibility over to Congress to “fix” the agreement and promised that if Congress fails to do so, he will withdraw from it.
As the New York Times reported, in response to this development over 90 prominent scientists sent a letter to leading members of Congress yesterday urging them to support the Iran Deal—making the case that continued US participation will enhance US security.
Many of these scientists also signed a letter strongly supporting the Iran Deal to President Obama in August 2015, as well as a letter to President-elect Trump in January. In all three cases, the first signatory is Richard L. Garwin, a long-standing UCS board member who helped develop the H-bomb as a young man and has since advised the government on all matters of security issues. Last year, he was awarded a Presidential Medal of Freedom.
What’s the Deal?
If President Trump did pull out of the agreement, what would that mean? First, the Joint Comprehensive Plan of Action (JCPoA) (as it is formally named) is not an agreement between just Iran and the US—but also includes China, France, Germany, Russia, the UK, and the European Union. So the agreement will continue—unless Iran responds by quitting as well. (More on that later.)
The Iran Deal is not a treaty, and did not require Senate ratification. Instead, the United States participates in the JCPoA by presidential action. However, Congress wanted to get into the act and passed the Iran Nuclear Agreement Review Act of 2015, which requires the president to certify every 90 days that Iran remains in compliance.
President Trump has done so twice, but declined to do so this month and instead called for Congress—and US allies—to work with the administration “to address the deal’s many serious flaws.” Among those supposed flaws is that the deal covering Iran’s nuclear activities does not also cover its missile activities!
Key House and Senate leaders are drafting legislation that would amend the Iran Nuclear Agreement Review Act to strengthen enforcement, prevent Iran from developing an intercontinental ballistic missile, and make all restrictions on Iran’s nuclear activity permanent under US law. This approach has several problems.
First, according to the International Atomic Energy Agency, which verifies the agreement, Iran remains in compliance. This was echoed by Norman Roule, who retired this month after working at the CIA for three decades. He served as the point person for US intelligence on Iran under multiple administrations. He told an NPR interviewer, “I believe we can have confidence in the International Atomic Energy Agency’s efforts.”
Second, the Iran Deal was the product of several years of negotiations. Not surprisingly, recent statements by the United Kingdom, France, Germany, the European Union, and Iran make clear that they will not agree to renegotiate the agreement. It just won’t happen. US allies are highly supportive of the Iran Deal.
Third, Congress can change US law by amending the Iran Nuclear Agreement Review Act, but this will have no effect on the terms of the Iran Deal. This may be a face-saving way for President Trump to stay with the agreement—for now. However, such amendments will lay the groundwork for a future withdrawal and give credence to President Trump’s claims that the agreement is a “bad deal.” That’s why the scientists urged Congress to support the Iran Deal as it is.
The End of a Good Deal?
If President Trump pulls out of the Iran Deal and reimposes sanctions against Iran, our allies will urge Iran to stay with the deal. But Iran has its own hardliners who want to leave the deal—and a US withdrawal is exactly what they are hoping for.
If Iran leaves the agreement, President Trump will have a lot to answer for. Here is an agreement that significantly extends the time it would take for Iran to produce enough material for a nuclear weapon, and that would give the world an alarm if they started to do so. For the United States to throw that out the window would be deeply irresponsible. It would not just undermine its own security, but that of Iran’s neighbors and the rest of the world.
Congress should do all it can to prevent this outcome. The scientists sent their letter to Senators Corker and Cardin, who are the Chairman and Ranking Member of the Senate Foreign Relations Committee, and to Representatives Royce and Engel, who are the Chairman and Ranking Member of the House Foreign Affairs Committee, because these men have a special responsibility on issues like these.
Let’s hope these four men will do what’s needed to prevent the end of a good deal—a very good deal.
London, UK – On October 27, 1962, a soft-spoken naval officer named Vasili Arkhipov single-handedly prevented nuclear war during the height of the Cuban Missile Crisis. Arkhipov’s submarine captain, thinking their sub was under attack by American forces, wanted to launch a nuclear weapon at the ships above. Arkhipov, with the power of veto, said no, thus averting nuclear war.
Now, 55 years after his courageous actions, the Future of Life Institute has presented the Arkhipov family with the inaugural Future of Life Award to honor humanity’s late hero.
Arkhipov’s surviving family members, represented by his daughter Elena and grandson Sergei, flew into London for the ceremony, which was held at the Institute of Engineering & Technology. After explaining Arkhipov’s heroics to the audience, Max Tegmark, president of FLI, presented the Arkhipov family with their award and $50,000. Elena and Sergei were both honored by the gesture and by the overall message of the award.
Elena explained that her father “always thought that he did what he had to do and never considered his actions as heroism. … Our family is grateful for the prize and considers it a recognition of his work and heroism. He did his part for the future so that everyone can live on our planet.”
The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. Arkhipov, whose courage and composure potentially saved billions of lives, was an obvious choice for the inaugural event.
“Vasili Arkhipov is arguably the most important person in modern history, thanks to whom October 27 2017 isn’t the 55th anniversary of World War III,” FLI president Max Tegmark explained. “We’re showing our gratitude in a way he’d have appreciated, by supporting his loved ones.”
The award also aims to foster a dialogue about the growing existential risks that humanity faces, and the people that work to mitigate them.
Jaan Tallinn, co-founder of FLI, said: “Given that this century will likely bring technologies that can be even more dangerous than nukes, we will badly need more people like Arkhipov — people who will represent humanity’s interests even in the heated moments of a crisis.”
On October 27 1962, during the Cuban Missile Crisis, eleven US Navy destroyers and the aircraft carrier USS Randolph had cornered the Soviet submarine B-59 near Cuba, in international waters outside the US “quarantine” area. Arkhipov was one of the officers on board. The crew had had no contact with Moscow for days and didn’t know whether World War III had already begun. Then the Americans started dropping small depth charges at them which, unbeknownst to the crew, they’d informed Moscow were merely meant to force the sub to surface and leave.
“We thought – that’s it – the end”, crewmember V.P. Orlov recalled. “It felt like you were sitting in a metal barrel, which somebody is constantly blasting with a sledgehammer.”
What the Americans didn’t know was that the B-59 crew had a nuclear torpedo that they were authorized to launch without clearing it with Moscow. As the depth charges intensified and temperatures onboard climbed above 45ºC (113ºF), many crew members fainted from carbon dioxide poisoning, and in the midst of this panic, Captain Savitsky decided to launch their nuclear weapon.
“Maybe the war has already started up there,” he shouted. “We’re gonna blast them now! We will die, but we will sink them all – we will not disgrace our Navy!”
The combination of depth charges, extreme heat, stress, and isolation from the outside world almost lit the fuse of full-scale nuclear war. But it didn’t. The decision to launch a nuclear weapon had to be authorized by three officers on board, and one of them, Vasili Arkhipov, said no.
Amidst the panic, the 34-year old Arkhipov remained calm and tried to talk Captain Savitsky down. He eventually convinced Savitsky that these depth charges were signals for the Soviet submarine to surface, and the sub surfaced safely and headed north, back to the Soviet Union.
It is sobering that very few have heard of Arkhipov, although his decision was perhaps the most valuable individual contribution to human survival in modern history. PBS made a documentary, The Man Who Saved the World, documenting Arkhipov’s moving heroism, and National Geographic profiled him as well in an article titled “You (and Almost Everyone You Know) Owe Your Life to This Man.”
The Cold War never became a hot war, in large part thanks to Arkhipov, but the threat of nuclear war remains high. Beatrice Fihn, Executive Director of the International Campaign to Abolish Nuclear Weapons (ICAN) and this year’s recipient of the Nobel Peace Prize, hopes that the Future of Life Award will help draw attention to the current threat of nuclear weapons and encourage more people to stand up to that threat. Fihn explains: “Arkhipov’s story shows how close to nuclear catastrophe we have been in the past. And as the risk of nuclear war is on the rise right now, all states must urgently join the Treaty on the Prohibition of Nuclear Weapons to prevent such catastrophe.”
Of her father’s role in preventing nuclear catastrophe, Elena explained: “We must strive so that the powerful people around the world learn from Vasili’s example. Everybody with power and influence should act within their competence for world peace.”
We at FLI offer our enthusiastic congratulations to the International Campaign to Abolish Nuclear Weapons (ICAN), this year's winners of the Nobel Peace Prize. We could not be more honored to have had the opportunity to work with ICAN during their campaign to ban nuclear weapons.
Over 70 years have passed since the bombs were first dropped on Hiroshima and Nagasaki, but finally, on July 7 of this year, 122 countries came together at the United Nations to establish a treaty outlawing nuclear weapons. Behind the effort was the small, dedicated team at ICAN, led by Beatrice Fihn. They coordinated with hundreds of NGOs in 100 countries to guide a global discussion and build international support for the ban.
In a statement, they said: “By harnessing the power of the people, we have worked to bring an end to the most destructive weapon ever created – the only weapon that poses an existential threat to all humanity.”
There’s still more work to be done to decrease nuclear stockpiles and rid the world of nuclear threats, but this incredible achievement by ICAN provides the hope and inspiration we need to make the world a safer place.
Perhaps most striking, as seen below in many of the comments by FLI members, is how such a small, passionate group was able to make such a huge difference in the world. Congratulations to everyone at ICAN!
Statements by members of FLI:
Anthony Aguirre: “The work of Bea inspiringly shows that a passionate and committed group of people working to make the world safer can actually succeed!”
Ariel Conn: “Fear and tragedy might monopolize the news lately, but behind the scenes, groups like ICAN are changing the world for the better. Bea and her small team represent great hope for the future, and they are truly an inspiration.”
Tucker Davey: “It’s easy to feel hopeless about the nuclear threat, but Bea and the dedicated ICAN team have clearly demonstrated that a small group can make a difference. Passing the nuclear ban treaty is a huge step towards a safer world, and I hope ICAN’s Nobel Prize inspires others to tackle this urgent threat.”
Victoria Krakovna: “Bea’s dedicated efforts to protect humanity from itself are an inspiration to us all.”
Richard Mallah: “Bea and ICAN have shown such dedication in working to curb the ability of a handful of us to kill most of the rest of us.”
Lucas Perry: “For me, Bea and ICAN have beautifully proven and embodied Margaret Mead’s famous quote, ‘Never doubt that a small group of thoughtful, committed people can change the world. Indeed, it is the only thing that ever has.’”
David Stanley: “The work taken on by ICAN’s team is often not glamorous, yet they have acted tirelessly for the past 10 years to protect us all from these abhorrent weapons. They are the few to whom so much is owed.”
Max Tegmark: “It’s been an honor and a pleasure collaborating with ICAN, and the attention brought by this Nobel Prize will help the urgently needed efforts to stigmatize the new nuclear arms race.”
Learn more about the treaty here.
If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group, 80,000 Hours, tries to answer.
To learn more, I spoke with Rob Wiblin and Brenton Mayer of 80,000 Hours. The following are highlights of the interview, but you can listen to the full podcast above or read the transcript here.
Can you give us some background about 80,000 Hours?
Rob: 80,000 Hours has been around for about six years and started when Benjamin Todd and Will MacAskill wanted to figure out how they could do as much good as possible. They started looking into things like the odds of becoming an MP in the UK or if you became a doctor, how many lives would you save. Pretty quickly, they were learning things that no one else had investigated.
They decided to start 80,000 Hours, which would conduct this research in a more systematic way and share it with people who wanted to do more good with their career.
80,000 hours is roughly the number of hours that you’d work in a full-time professional career. That’s a lot of time, so it pays off to spend quite a while thinking about what you’re going to do with that time.
On the other hand, 80,000 hours is not that long relative to the scale of the problems that the world faces. You can’t tackle everything. You’ve only got one career, so you should be judicious about what problems you try to solve and how you go about solving them.
How do you help people have more of an impact with their careers?
Brenton: The main thing is a career guide. We’ll talk about how to have satisfying careers, how to work on one of the world’s most important problems, how to set yourself up early so that later on you can have a really large impact.
The second thing we do is career coaching, where we try to apply that advice to individuals.
What is earning to give?
Rob: Earning to give is the career approach where you try to make a lot of money and give it to organizations that can use it to have a really large positive impact. I know people who can make millions of dollars a year doing the thing they love and donate most of that to effective nonprofits, supporting 5, 10, 15, possibly even 20 people to do direct work in their place.
Can you talk about research you’ve been doing regarding the world’s most pressing problems?
Rob: One of the first things we realized is that if you’re trying to help people alive today, your money can go further in the developing world. We just need to scale up solutions to basic health problems and economic issues that have been resolved elsewhere.
Moving beyond that, what other groups in the world are extremely neglected? Factory farmed animals really stand out. There’s very little funding focused on improving farm animal welfare.
The next big idea was, of all the people that we could help, what fraction are alive today? We think that it’s only a small fraction. There’s every reason to think humanity could live for another 100 generations on Earth and possibly even have our descendants alive on other planets.
We worry a lot about existential risks and ways that civilization can go off track and never recover. Thinking about the long-term future of humanity is where a lot of our attention goes and where I think people can have the largest impact with their career.
Regarding artificial intelligence safety, nuclear weapons, biotechnology and climate change, can you consider different ways that people could pursue either careers or “earn to give” options for these fields?
Rob: One would be to specialize in machine learning or other technical work and use those skills to figure out how can we make artificial intelligence aligned with human interests. How do we make the AI do what we want and not things that we don’t intend?
Then there’s the policy and strategy side, trying to answer questions like how do we prevent an AI arms race? Do we want artificial intelligence running military robots? Do we want the government to be more involved in regulating artificial intelligence or less involved? You can also approach this if you have a good understanding of politics, policy, and economics. You can potentially work in government, military or think tanks.
Things like communications, marketing, organization, project management, and fundraising operations — those kinds of things can be quite hard to find skilled, reliable people for. And it can be surprisingly hard to find people who can handle media or do art and design. If you have those skills, you should seriously consider applying to whatever organizations you admire.
[For nuclear weapons] I’m interested in anything that can promote peace between the United States and Russia and China. A war between those groups or an accidental nuclear incident seems like the most likely thing to throw us back to the Stone Age or even pre-Stone Age.
I would focus on ensuring that they don’t get false alarms; trying to increase trust between the countries in general and the communication lines so that if there are false alarms, they can quickly defuse the situation.
The best opportunities [in biotech] are in early surveillance of new diseases. If there’s a new disease coming out, a new flu for example, it takes a long time to figure out what’s happened.
And when it comes to controlling new diseases, time is really of the essence. If you can pick it up within a few days or weeks, then you have a reasonable shot at quarantining the people and following up with everyone that they’ve met and containing it. Any technologies that we can invent or any policies that will allow us to identify new diseases before they’ve spread to too many people are going to help with natural pandemics, as well as any kind of synthetic biology risks or accidental releases of diseases from biological researchers.
Brenton: A Wagner and Weitzman paper suggests that there’s about a 10% chance of warming of more than 4.8 degrees Celsius, and a 3% chance of more than 6 degrees Celsius. These are really disastrous outcomes. If you’re interested in climate change, we’re pretty excited about you working on these very bad scenarios. Sensible things to do would be improving our ability to forecast; thinking about the positive feedback loops that might be inherent in Earth’s climate; thinking about how to enhance international cooperation.
Rob: It does seem like solar power and storage of energy from solar power is going to have the biggest impact on emissions over at least the next 50 years. Anything that can speed up that transition makes a pretty big contribution.
Rob, can you explain your interest in long-term multigenerational indirect effects and what that means?
Rob: If you’re trying to help people and animals thousands of years in the future, you have to help them through a causal chain that involves changing the behavior of someone today and then that’ll help the next generation and so on.
One way to improve the long-term future of humanity is to do very broad things that improve human capabilities like reducing poverty, improving people’s health, making schools better.
But in a world where the more science and technology we develop, the more power we have to destroy civilization, it becomes less clear that broadly improving human capabilities is a great way to make the future go better. If you improve science and technology, you both improve our ability to solve problems and create new problems.
I think about what technologies can we invent that disproportionately make the world safer rather than more risky. It’s great to improve the technology to discover new diseases quickly and to produce vaccines for them quickly, but I’m less excited about generically pushing forward the life sciences because there’s a lot of potential downsides there as well.
Another way that we can robustly prepare humanity to deal with the long-term future is to have better foresight about the problems that we’re going to face. That’s a very concrete thing you can do that puts humanity in a better position to tackle problems in the future — just being able to anticipate those problems well ahead of time so that we can dedicate resources to averting those problems.
For the past 25 years, a series of treaties have allowed the US and Russia to greatly reduce their nuclear arsenals—from well over 10,000 each to fewer than 2,000 deployed long-range weapons each. These Strategic Arms Reduction Treaties (START) have enhanced US security by reducing the nuclear threat, providing valuable information about Russia’s nuclear arsenal, and improving predictability and stability in the US-Russia strategic relationship.
Twenty-five years ago, US policy-makers of both parties recognized the benefits of the first START agreement: on October 1, 1992, the Senate voted overwhelmingly—93 to 6—in favor of ratifying START I.
The end of START?
With increased tensions between the US and Russia and an expanded range of security threats for the US to worry about, this longstanding foundation is now more valuable than ever.
The most recent agreement—New START—will expire in early February 2021, but can be extended for another five years if the US and Russian presidents agree to do so. In a January 28 phone call with President Trump, Russian President Putin reportedly raised the possibility of extending the treaty. But instead of being extended, or even maintained, the START framework is now in danger of being abandoned.
President Trump has called New START “one-sided” and “a bad deal,” and has even suggested the US might withdraw from the treaty. His advisors are clearly opposed to doing so. Secretary of State Rex Tillerson expressed support for New START in his confirmation hearing. Secretary of Defense James Mattis, while recently stating that the administration is currently reviewing the treaty “to determine whether it’s a good idea,” has previously also expressed support, as have the head of US Strategic Command and other military officials.
Withdrawal seems unlikely, especially given recent anonymous comments by administration officials saying that the US still sees value in New START and is not looking to discard it. But given the president’s attitude toward the treaty, it may still take some serious pushing from Mattis and other military officials to convince him to extend it. Worse, even if Trump is not re-elected, and the incoming president is more supportive of the treaty, there will be little time for a new administration, taking office in late January 2021, to do an assessment and sign on to an extension before the deadline. While UCS and other treaty supporters will urge the incoming administration to act quickly, if the Trump administration does not extend the treaty, it is quite possible that New START—and the security benefits it provides—will lapse.
The Beginning: The Basics and Benefits of START I
The overwhelming bipartisan support for a treaty cutting US nuclear weapons, demonstrated by the START I ratification vote, seems unbelievable today. At the time, however, both Democrats and Republicans in Congress, as well as the first President Bush, recognized the importance of the historic agreement, the first to require an actual reduction, rather than simply a limitation, in the number of US and Russian strategic nuclear weapons.
By the end of the Cold War, the US had about 23,000 nuclear warheads in its arsenal, and the Soviet Union had roughly 40,000. These numbers included about 12,000 US and 11,000 Soviet deployed strategic warheads—those mounted on long-range missiles and bombers. The treaty limited each country to 1,600 strategic missiles and bombers and 6,000 warheads, and established procedures for verifying these limits.
The limits on missiles and bombers, in addition to limits on the warheads themselves, were significant because START required the verifiable destruction of any excess delivery vehicles, which gave each side confidence that the reductions could not be quickly or easily reversed. To do this, the treaty established a robust verification regime with an unprecedented level of intrusiveness, including on-site inspections and exchanges of data about missile telemetry.
Though the groundwork for START I was laid during the Reagan administration, ratification and implementation took place during the first President Bush’s term. The treaty was one among several measures taken by the elder Bush that reduced the US nuclear stockpile by nearly 50 percent during his time in office.
START I entered into force in 1994 and had a 15-year lifetime; it required the US and Russia to complete reductions by 2001, and maintain those reductions until 2009. However, both countries actually continued reductions after reaching the START I limits. By the end of the Bush I administration, the US had already reduced its arsenal to just over 7,000 deployed strategic warheads. By the time the treaty expired, this number had fallen to roughly 3,900.
The Legacy of START I
Building on the success of START I, the US and Russia negotiated a follow-on treaty—START II—that required further cuts in deployed strategic weapons. These reductions were to be carried out in two steps, but when fully implemented would limit each country to 3,500 deployed strategic warheads, with no more than 1,750 of these on submarine-launched ballistic missiles.
Phase II also required the complete elimination of multiple independently targetable re-entry vehicles (MIRVs) on intercontinental ballistic missiles. This marked a major step forward, because MIRVs were a particularly destabilizing configuration. Since just one incoming warhead could destroy all the warheads on a MIRVed land-based missile, MIRVs create pressure to “use them or lose them”—an incentive to strike first in a crisis. Otherwise, a country risked losing its ability to use those missiles to retaliate in the case of a first strike against it.
While both sides ratified START II, it was a long and contentious process, and entry into force was complicated by provisions attached by both the US Senate and Russian Duma. The US withdrawal from the Anti-Ballistic Missile (ABM) treaty in 2002 was the kiss of death for START II. The ABM treaty had strictly limited missile defenses. Removing this limit created a situation in which either side might feel it had to deploy more and more weapons to be sure it could overcome the other’s defense. But the George W. Bush administration was now committed to building a larger-scale defense, regardless of Russia’s vocal opposition and clear statements that doing so would undermine arms control progress.
Russia responded by announcing its withdrawal from START II, finally ending efforts to bring the treaty into force. A proposed START III treaty, which would have called for further reductions to 2,000 to 2,500 warheads on each side, never materialized; negotiations had been planned to begin after entry into force of START II.
After the failure of START II, the US and Russia negotiated the Strategic Offensive Reductions Treaty (SORT, often called the “Moscow Treaty”). SORT required each party to reduce to 1,700 to 2,200 deployed strategic warheads, but was a much less formal treaty than START. It did not include the same kind of extensive verification regime and, in fact, did not even define what was considered a “strategic warhead,” instead leaving each party to decide for itself what it would count. This meant that although SORT did encourage further progress to lower numbers of weapons, overall it did not provide the same kind of benefits for the US as START had.
Recognizing the deficiencies of the minimal SORT agreement, the Obama administration made negotiation of New START an early priority, and the treaty was ratified in 2010.
New START limits each party to 1,550 deployed strategic nuclear warheads by February 2018. The treaty also limits the number of deployed intercontinental ballistic missiles, submarine-launched ballistic missiles, and long-range bombers equipped to carry nuclear weapons to no more than 700 on each side. Altogether, no more than 800 deployed and non-deployed missiles and bombers are allowed for each side.
In reality, each country will deploy somewhat more than 1,550 warheads—probably around 1,800 each—because of a change in the way New START counts warheads carried by long-range bombers. START I assigned a number of warheads to each bomber based on its capabilities. New START simply counts each long-range bomber as a single warhead, regardless of the actual number it does or could carry. The less stringent limits on bombers are possible because bombers are considered less destabilizing than missiles. The bombers’ detectability and long flight times—measured in hours vs. the roughly thirty minutes it takes for a missile to fly between the United States and Russia—mean that neither side is likely to use them to launch a first strike.
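The gap between the treaty-accountable total and the real deployed force comes down to the bomber counting rule described above. A minimal sketch of that bookkeeping (all force numbers below are illustrative assumptions, not official figures):

```python
def new_start_count(missile_warheads, deployed_bombers):
    """Treaty-accountable total under New START: each deployed
    long-range bomber counts as a single warhead, regardless of
    how many weapons it does or could carry."""
    return missile_warheads + deployed_bombers

def actual_deployed(missile_warheads, deployed_bombers, avg_bomber_load):
    """What the force could actually deliver (illustrative only)."""
    return missile_warheads + deployed_bombers * avg_bomber_load

# Hypothetical force: 1,490 missile warheads plus 60 deployed bombers
# averaging 6 weapons each.
print(new_start_count(1490, 60))     # -> 1550: exactly at the treaty limit
print(actual_deployed(1490, 60, 6))  # -> 1850: near the ~1,800 real warheads noted above
```

The same counting rule explains why both sides can be "at" the 1,550 limit while deploying roughly 1,800 real warheads.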
Both the United States and Russia have been moving toward compliance with the New START limits, and as of July 1, 2017—when the most recent official exchange of data took place—both are under the limit for deployed strategic delivery vehicles and close to meeting the limit for deployed and non-deployed strategic delivery vehicles. The data show that the United States is currently slightly under the limit for deployed strategic warheads, at 1,411, while Russia, with 1,765, still has some cuts to make to reach this limit.
Even in the increasingly partisan atmosphere of the 2000s, New START gained support from a wide range of senators, as well as military leaders and national security experts. The treaty passed in the Senate with a vote of 71 to 26; thirteen Republicans joined all Democratic senators in voting in favor. While this is significantly closer than the START I vote, as then-Senator John F. Kerry noted at the time, “in today’s Senate, 70 votes is yesterday’s 95.”
And the treaty continues to have strong support—including from Air Force General John Hyten, commander of US Strategic Command, which is responsible for all US nuclear forces. In Congressional testimony earlier this year, Hyten called himself “a big supporter” of New START and said that “when it comes to nuclear weapons and nuclear capabilities, that bilateral, verifiable arms control agreements are essential to our ability to provide an effective deterrent.” Another Air Force general, Paul Selva, vice chair of the Joint Chiefs of Staff, agreed, saying in the same hearing that when New START was ratified in 2010, “the Joint Chiefs reviewed the components of the treaty—and endorsed it. It is a bilateral, verifiable agreement that gives us some degree of predictability on what our potential adversaries look like.”
The military understands the benefits of New START. That President Trump has the power to withdraw from the treaty despite support from those who are most directly affected by it is, as he would say, “SAD.”
That the US president fails to understand the value of US-Russian nuclear weapon treaties that have helped to maintain stability for more than two decades is a travesty.
Update 9/25/17: 53 countries have now signed and 3 have ratified.
Today, 50 countries took an important step toward a nuclear-free world by signing the United Nations Treaty on the Prohibition of Nuclear Weapons. This is the first treaty to legally ban nuclear weapons, just as we’ve seen done previously with chemical and biological weapons.
A Long Time in the Making
In 1933, Leo Szilard first came up with the idea of a nuclear chain reaction. Only a few years later, the Manhattan Project was underway, culminating in the nuclear attacks against Hiroshima and Nagasaki in 1945. In the following decades of the Cold War, the U.S. and Russia amassed arsenals that peaked at over 70,000 nuclear weapons in total, though that number is significantly less today. The U.K., France, China, Israel, India, Pakistan, and North Korea have also built up their own, much smaller arsenals.
Over the decades, the United Nations has established many treaties relating to nuclear weapons, including the non-proliferation treaty, START I, START II, the Comprehensive Nuclear Test Ban Treaty, and New START. Though a few other countries began nuclear weapons programs, most of those were abandoned, and the majority of the world’s countries have rejected nuclear weapons outright.
Now, over 70 years since the bombs were first dropped on Japan, the United Nations finally has a treaty outlawing nuclear weapons.
The Treaty on the Prohibition of Nuclear Weapons was adopted on July 7, with a vote of approval from 122 countries. As part of the treaty, the states who sign agree that they will never “[d]evelop, test, produce, manufacture, otherwise acquire, possess or stockpile nuclear weapons or other nuclear explosive devices.” Signatories also promise not to assist other countries with such efforts, and no signatory will “[a]llow any stationing, installation or deployment of any nuclear weapons or other nuclear explosive devices in its territory or at any place under its jurisdiction or control.”
Not only had 50 countries signed the treaty at the time this article was written, but 3 of them had already ratified it. The treaty will enter into force 90 days after it’s ratified by 50 countries.
The International Campaign to Abolish Nuclear Weapons (ICAN) is tracking progress of the treaty, with a list of countries that have signed and ratified it so far.
At the ceremony, UN Secretary General António Guterres said, “The Treaty on the Prohibition of Nuclear Weapons is the product of increasing concerns over the risk posed by the continued existence of nuclear weapons, including the catastrophic humanitarian and environmental consequences of their use.”
Still More to Do
Though countries that don’t currently have nuclear weapons are eager to see the treaty ratified, no one is foolish enough to think that will magically rid the world of nuclear weapons.
“Today we rightfully celebrate a milestone. Now we must continue along the hard road towards the elimination of nuclear arsenals,” Guterres added in his statement.
There are still over 15,000 nuclear weapons in the world today. While that’s significantly less than we’ve had in the past, it’s still more than enough to kill most people on earth.
The U.S. and Russia hold most of these weapons, but as we’re seeing from the news out of North Korea, a country doesn’t need to have thousands of nuclear weapons to present a destabilizing threat.
Susi Snyder, author of Pax’s Don’t Bank on the Bomb and a leading advocate of the treaty, told FLI:
“The countries signing the treaty are the responsible actors we need in these times of uncertainty, fire, fury, and devastating threats. They show it is possible and preferable to choose diplomacy over war.”
Earlier this summer, some of the world’s leading scientists also came together in support of the nuclear ban with this video that was presented to the United Nations:
The signing of the treaty came within a week of both the news of Stanislav Petrov’s death and of Petrov Day itself. On September 26, 1983, Petrov chose to follow his gut rather than rely on what turned out to be faulty satellite data. In doing so, he prevented what could have easily escalated into full-scale global nuclear war.
September 26, 1983: Soviet Union Detects Incoming Missiles
A Soviet early warning satellite showed that the United States had launched five land-based missiles at the Soviet Union. The alert came at a time of high tension between the two countries, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. In addition, earlier in the month the Soviet Union shot down a Korean Airlines passenger plane that strayed into its airspace, killing almost 300 people. Stanislav Petrov, the Soviet officer on duty, had only minutes to decide whether or not the satellite data were a false alarm. Since the satellite was found to be operating properly, following procedures would have led him to report an incoming attack. Going partly on gut instinct and believing the United States was unlikely to fire only five missiles, he told his commanders that it was a false alarm before he knew that to be true. Later investigations revealed that reflection of the sun on the tops of clouds had fooled the satellite into thinking it was detecting missile launches (Accidental Nuclear War: a Timeline of Close Calls).
Petrov is widely credited for having saved millions if not billions of people with his decision to ignore satellite reports, preventing accidental escalation into what could have become a full-scale nuclear war. This event was turned into the movie “The Man Who Saved the World,” and Petrov was honored at the United Nations and given the World Citizen Award.
All of us at FLI were saddened to learn that Stanislav Petrov passed away this past May. News of his death was announced this weekend. Petrov was to be honored during the release of a new documentary, also called The Man Who Saved the World, in February of 2018. Stephen Mao, who is an executive producer of this documentary, told FLI that though they had originally planned to honor Petrov in person at February’s Russian theatrical premier, “this will now be an event where we will eulogize and remember Stanislav for his contribution to the world.”
Jakob Staberg, the movie’s producer, said:
“Stanislav saved the world but lost everything and was left alone. Taking part in our film, The Man Who Saved the World, his name and story came out to the whole world. Hopefully the actions of Stanislav will inspire other people to take a stand for good and not to forget that the nuclear threat is still very real. I will remember Stanislav’s own humble words about his actions: ‘I just was at the right place at the right time’. Yes, you were Stanislav. And even though you probably would argue that I am wrong, I am happy it was YOU who was there in that moment. Not many people would have the courage to do what you did. Thank you.”
You can read more about Petrov’s life and heroic actions in the New York Times obituary.
By Kirsten Gronlund
Late last month, North Korea launched a ballistic missile test whose trajectory arced over Japan. And this past weekend, Pyongyang flaunted its nuclear capabilities with an underground test of what it claims was a hydrogen bomb: a more complicated—and powerful—alternative to the atomic bombs it has previously tested.
Though North Korea has launched rockets over its eastern neighbor twice before—in 1998 and 2009—those previous launches carried satellites, not warheads. And the reasoning behind those two previous launches was seemingly innocuous: eastern-directed launches use the earth’s spin to most effectively put a satellite in orbit. Since 2009, North Korea has taken to launching its satellites southward, sacrificing maximal launch conditions to keep the peace with Japan. This most recent launch, however, seemed intentionally designed to aggravate tensions not only with Japan but also with the U.S. And while there is no way to verify North Korea’s claim that it tested a hydrogen bomb, in such a tense environment the claim itself is enough to provoke Washington.
What We Know
In light of these and other recent developments, I spoke with Dr. David Wright, an expert on North Korean nuclear missiles at the Union of Concerned Scientists, to better understand the real risks associated with North Korea’s nuclear program. He described what he calls the “big question”: now that its missile program is advancing rapidly, can North Korea build good enough—that is, small enough, light enough, and rugged enough—nuclear weapons to be carried by these missiles?
Pyongyang has now successfully detonated nuclear weapons in six underground tests, but these tests have been carried out in ideal conditions, far from the reality of a ballistic launch. Wright and others believe that North Korea likely has warheads that can be delivered via short-range missiles that can reach South Korea or Japan. They have deployed such missiles for years. But it remains unclear whether North Korean warheads would be deliverable via long-range missiles.
Until last Monday’s launch, North Korea had sought to avoid provoking its neighbors by not conducting missile tests that would pass over other countries. Instead it has tested its missiles by shooting them upwards on highly lofted trajectories that land them in the Sea of Japan. This has caused some confusion about the range that North Korean missiles have achieved. Wright, however, uses height data from these launches to calculate the potential range that its missiles would reach on standard trajectories.
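The basic physics behind converting a lofted apogee into a standard-trajectory range can be sketched with a crude back-of-envelope rule. The sketch below assumes a flat Earth and no drag, so it understates real ICBM ranges (Wright's actual calculations account for Earth's curvature and rotation); the 3,700 km apogee is an illustrative figure, not a reported one:

```python
import math

G = 9.81  # m/s^2, surface gravity

def burnout_speed(apogee_m):
    """Speed at burnout needed to coast to the given apogee on a
    near-vertical lofted trajectory (vacuum, flat Earth): v = sqrt(2*g*h)."""
    return math.sqrt(2 * G * apogee_m)

def min_range_km(apogee_km):
    """Rule of thumb: the same burnout speed fired at 45 degrees gives
    range R = v^2 / g = 2*h. Curvature and rotation lengthen real
    ranges beyond this, so treat it as a rough lower bound."""
    return 2 * apogee_km

# A lofted test peaking around 3,700 km implies a standard-trajectory
# range of at least ~7,400 km under these simplifying assumptions.
print(min_range_km(3700))  # -> 7400
```

This is why a missile that splashes down relatively close to its launch site in the Sea of Japan can still indicate intercontinental range.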
To date, North Korea’s farthest test launch—in July of this year—had the range to reach large cities on the U.S. mainland. That range, however, depends on the weight of the warhead carried, a factor that remains unknown. Thus, while North Korea is capable of launching missiles that could hit the U.S., it is unclear whether such missiles could actually deliver a nuclear warhead to that range.
A second key question, according to Wright, is one of numbers: how many missiles and warheads do the North Koreans have? Dr. Siegfried Hecker, former head of Los Alamos weapons laboratory, makes the following estimates based in part on visits he has made to North Korea’s Yongbyon laboratory. In terms of nuclear material, Hecker suggests that the North Koreans have “20 to 40 kilograms plutonium and 200 to 450 kilograms highly enriched uranium.” This material, he estimates, would “suffice for perhaps 20 to 25 nuclear weapons, not the 60 reported in the leaked intelligence estimate.” Based on past underground tests, the largest estimated yield of a North Korean warhead was about that of the bomb that destroyed Hiroshima—which, though potentially devastating, is still about 20 times smaller than most U.S. warheads. The test this past weekend exceeded that previous largest yield by a factor of five or more.
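Hecker's weapon count follows from simple division of the stockpile by the material each device requires. A minimal sketch, using the midpoints of his stockpile ranges and illustrative per-weapon requirements of my own choosing (roughly 5 kg of plutonium or 20 kg of highly enriched uranium per weapon, typical unclassified ballpark figures, not Hecker's own):

```python
def weapons_from_stockpile(stockpile_kg: float, kg_per_weapon: float) -> int:
    """Number of whole weapons a fissile-material stockpile supports."""
    return int(stockpile_kg // kg_per_weapon)

# Midpoints of Hecker's estimated stockpile ranges:
pu_mid_kg, heu_mid_kg = 30.0, 325.0

# Illustrative per-weapon requirements (assumptions, not Hecker's figures):
pu_weapons = weapons_from_stockpile(pu_mid_kg, 5.0)     # -> 6
heu_weapons = weapons_from_stockpile(heu_mid_kg, 20.0)  # -> 16
print(pu_weapons + heu_weapons)  # -> 22, within Hecker's "20 to 25"
```

Varying the assumed per-weapon requirements over plausible ranges moves the total up or down, which is why published estimates are given as ranges rather than a single number.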
As for missiles, Wright says estimates suggest that North Korea may have a few hundred short- and medium-range missiles. The number of long-range missiles, however, is unknown—as is the speed with which new ones could be built. In the near term, Wright believes the number is likely to be small.
What seems clear is that Kim Jong Un, following his father’s death, began pouring money and resources into developing weapons technology and expertise. Since Kim Jong Un took power, the country’s rate of missile tests has skyrocketed: since last June alone, it has performed roughly 30 tests.
It has also unveiled a surprising number of new types of missiles. For years, the longest-range North Korean missiles reached about 1300 km—just putting Japan within range. In mid-May of this year, however, North Korea launched a missile with a potential range (depending on its payload) of more than 4000 km, for the first time putting Guam—which is 3500 km from North Korea—in reach. Then in July, that range increased again. The first launch in that month could reach 7000 km; the second—their current record—could travel more than 10,000 km, about the distance from North Korea to Chicago.
An Existential Risk?
On its own, the North Korean nuclear arsenal does not pose an existential risk—it is too small. According to Wright, the consequences of a North Korean nuclear strike, if successful, would be catastrophic—but not on an existential scale. He worries, though, about how the U.S. might respond. As Wright puts it, “When people start talking about using nuclear weapons, there’s a huge uncertainty about how countries will react.”
That said, the U.S. has overwhelming conventional military capabilities that could devastate North Korea. A nuclear response would not be necessary to neutralize any further threat from Pyongyang. But some would argue that failing to respond in kind would weaken deterrence. “I think,” says Wright, “that if North Korea launched a nuclear missile against its neighbors or the United States, there would be tremendous pressure to respond with nuclear weapons.”
Wright notes that moments of crisis have been shown to produce unpredictable responses: “There would be no reason for the U.S. to use nuclear weapons, but there is evidence to suggest that in high pressure situations, people don’t always think these things through. For example, we know that there have been war simulations that the U.S. has done where the adversary using anti-satellite weapons against the United States has led to the U.S. using nuclear weapons.”
Wright also worries about accidents, errors, and misinterpretations. While North Korea does not have the ability to detect launches or incoming missiles, it does have extensive anti-aircraft radar. Wright offers the following example of a misinterpretation that could stem from North Korean detection of U.S. aircraft.
The U.S. has repeatedly said that it is keeping all options on the table—including a nuclear strike. It also talks about preemptive military strikes against North Korean launch sites and support areas, which would include targets in the Pyongyang area. North Korea knows this.
The aircraft that it would use in such a strike are likely its B-1 bombers. The B-1 once carried nuclear weapons but, per a treaty with Russia, has been modified to rid it of its nuclear capabilities. Despite U.S. attempts to emphasize this fact, however, Wright says that “statements we’ve seen from North Korea make you wonder whether it really has confidence that the B-1s haven’t been re-modified to carry nuclear weapons again”; the North Koreans, for example, repeatedly refer to the B-1 as nuclear-capable.
Now imagine that U.S. intelligence detects launch preparations of several North Korean missiles. The U.S. interprets this as the precursor to a launch toward Guam, which North Korea has previously threatened. The U.S. then launches a conventional preemptive strike, using B-1s, to destroy those missiles. In such a crisis, Wright reminds us, “Tensions are very high, people are making worst-case assumptions, they’re making fast decisions, and they’re worried about being caught by surprise.” It is feasible that, having detected the incoming B-1 bombers flying toward Pyongyang, North Korea would assume them to be carrying nuclear weapons. Under this assumption, it might fire short-range ballistic missiles at South Korea. This illustrates how misinterpretations could drive a crisis.
“Presumably,” says Wright, “the U.S. understands the risk of military attacks and such a scenario is unlikely.” He remains hopeful that “the two sides will find a way to step back from the brink.”