
Philip Reiner on Nuclear Command, Control, and Communications

Published 6 October, 2022

00:00:00 Introduction

00:00:50 Nuclear command, control, and communications

00:03:52 Old technology in nuclear systems

00:12:18 Incentives for nuclear states

00:15:04 Selectively enhancing security

00:17:34 Unilateral de-escalation

00:18:04 Nuclear communications

00:24:08 The CATALINK System

00:31:25 AI in nuclear command, control, and communications

00:40:27 Russia's war in Ukraine

You can listen to the podcast above or read the transcript below. 

Transcript

Gus Docker: Welcome to the Future of Life Institute Podcast. I'm the new host and my name is Gus Docker. The nuclear arsenals of the world are controlled by complex systems, and if these systems fail, billions of lives around the world are in danger. So on this episode of the podcast, I talk with Philip Reiner about what he's learned about these systems throughout his decades-long career spanning the White House, the Pentagon, and now as CEO of the Institute for Security and Technology. We talk about how these systems work, how they might fail and what we could do to improve them. Towards the end of the episode, Philip also shares his thoughts about what's going on in Ukraine and what this means for nuclear risk.

Philip, welcome to the podcast.

Philip Reiner: Thank you for having me. It's great to be here.

Nuclear command, control, and communications

Gus Docker: What are NC3s?

Philip Reiner: Nuclear command, control, and communications. So NC3. This is actually a fairly American way of referring to the systems in place that allow for the management of nuclear operations.

NC3 in and of itself is a very American characterization of what all nuclear weapon states have to maintain in order to ensure, one, that they are able to utilize their nuclear arsenals when they need to, but also that those systems won't be used for nefarious purposes when they aren't being turned to for actual nuclear deployment. The easiest way to summarize it is that NC3 is the means by which presidential authority is executed.

And I think it's really important to note here, right? As an American, as a former Pentagon official, as a former White House official, that is the way we think of it here in the United States. Other nuclear weapon states don't necessarily think of it in those same terms. Each approaches it very much in their own way. It's also worth noting that if you look back at the history of NC3 in the United States, you can find probably 15 different variations on the characterization, right?

It's very much contingent on, you know, for instance, you think about the example of the UK. They have a submarine-based deterrent. Their NC3 is going to be vastly different from what is required here in the United States. If you look at, say, Pakistan: a smaller arsenal, a much different command authority in terms of who's actually in charge of the maintenance and utilization of these things in the lead-up to a conflict. Military versus civilian. All of those sorts of elements result in a different command and control architecture. I would argue that the United States probably has, I think it's safe to say, one of the most sophisticated and complex systems of systems when it comes to NC3, and we've seen that there are over 200 different systems within the NC3 system. So when you speak of NC3, it's not one monolithic thing, right? It's actually all these other incredible components: signals processing, data analytics, lots of the communication elements.

So it's very different, say, for North Korea, right? Which is only a nascent nuclear state, where again, they haven't had the decades of thinking through these sorts of challenges like the United States has, or say Russia has.

Old technology in nuclear systems

Gus Docker: When we're thinking about NC3 systems, what technology is actually involved at this point? I have heard rumors that the technology is often very old, but that there might be reasons for keeping old technology.

Philip Reiner: I have heard a Strategic Command commander say that at this point in its history, the development of US NC3 might necessitate a reversion to analog in certain elements because of the risks inherent to going digital, right? Anybody who spends time in this domain asserts, and I think it's fairly commonly accepted at this point: assume breach.

If you have digital systems, they're vulnerable. It's almost inherent, from the silicon all the way up, that someone is going to find a way to mess with them or break into your systems. And so when you're dealing with something as sensitive, as inherently dangerous, as nuclear weapons, you have to think very carefully about what you're going to digitize, whether it's the communications elements or the actual command and control elements. Now, some countries have more resources when it comes to that sort of thing.

When you think about, say again, Pakistan: they may have the imperative to move much more quickly towards systems that give them more immediate capability, without having the luxury of time, right? They perceive that imminent risk from India, so they'll move down a path of digitizing their systems so that they can move more quickly or communicate more rapidly.

But what has that created in terms of the vulnerabilities inherent to the systems they rely on? So there is a real trade-off analysis that needs to go into any sort of command and control architecture. Now, it goes without saying, right? Any system that relies on legacy components is going to need to be not only maintained, but probably retrofitted and updated at some point. And what we see in the United States, and I think this goes for Russia very much as well, is a vast architecture that was developed over the course of decades, really since the early 1950s. And many of the integral components of this, as the combatant commander has talked about publicly, aren't up to snuff. These components aren't being built anymore.

Gus Docker: I'm wondering whether there are trade-offs between speed of implementation and security of the resulting system. You mentioned the dilemma that Pakistan is in, where they are building up capability but do not have the, we could call it luxury, of building out security at the same time. How do you think about this trade-off between security and speed?

Philip Reiner: It is a lesson that we've discussed in great detail, for instance with a counterpart of ours, Eric Grosse, the former head of security and privacy at Google, who reminds us all of the adage that complexity is the enemy of security.

And these NC3 systems are so incredibly complex that even the commander of US Strategic Command has noted publicly that he doesn't really understand how it all works, looking at the vastness of the system and all the interconnected components. At the end of the day, even 20- or 30-some-odd years ago, I think you had the US National Security Advisor publicly saying he had three minutes to figure out what he was being told and take it to the president, who would then have four minutes. And that cycle, that time compression, has only intensified over the years since then.

So that question that you note of speed is absolutely an imperative in a world where those sorts of risks are what you face. I think a lot of the work that has been done and continues to be pushed now is how do we actually extend those time cycles? How do we actually build in more space?

So that those decision makers can actually have more freedom to think and debate, and not have to rush to deploy or potentially even use these weapons. It's a real conundrum, right? Because you have to have these systems in place in order to move the quickest. You have to be able to move faster than your adversary.

And it's not just to maintain an advantage, right? Let me come back to that in a second. But you also need to take into account the piece you were talking about: if you're moving so quickly down the path of adopting these kinds of technologies, you're actually making yourself less safe. You're actually making the entire dynamic less stable.

Gus Docker: What could be an example of moving too fast so that you're making yourself less safe?

Philip Reiner: Well, I think a very clear example that one could point to is Russian indications that they're moving down the path of having an automated autonomous nuclear torpedo, right?

Whether those technologies are realistic, and whether or not they'll get to a point where they can develop and deploy it, is a matter of debate. Look at Ukraine. Some of what everyone assumed Russia was going to be capable of was actually a bit of a paper tiger.

But their insinuation, right, is that they want to move down that path so that they can have something that's not only basically on hair trigger, but is off the United States coastline and is an immediate nuclear risk to US infrastructure, not only military but also civilian.

That speed, that tightening and coupling of all of those decision-making elements, is I think their reaction to what they see as a disadvantage vis-à-vis the United States, technologically and even more broadly, right? Economically, et cetera. And so they'll resort to the adoption of these capabilities so that they can get inside of our decision cycle.

The US then starts to make changes in order to react to that sort of behavior on the Russian side, which actually probably at the end of the day, makes Russia less safe.

We're going to take steps in order to remove the threat posed by that system. One of the lessons of the Cold War was the ability to actually communicate with your adversaries. So the United States and the USSR at the time were able to have in-depth conversations and understand to a great degree what the other was capable of.

And so the speed at which you could move was something that you actually discussed and put on the table, and there's a reason for that: I am looking to deter you from moving first, right? Bolt-out-of-the-blue nuclear strikes are not something that folks are really super worried about today, but this was very realistic back in the day, right?

Bolt out of the blue meaning someone who could, out of nowhere, strike you and basically wipe you out so that you have no ability to strike back. There is an imperative, and it's somewhat counterintuitive: if you're the United States or Russia, you have to be able to prove that you can survive a nuclear strike and strike back, in order to deter that first-strike inclination.

Man, that's messy, right? You're basically telling me that you need to have more nuclear weapons in order to prevent nuclear war, and that you need systems in place to ensure not just that you have the nuclear weapons, but that those weapons will work after there's a nuclear strike.

Incentives for nuclear states

Gus Docker: When you update and maintain these systems, you then incentivize other states to also update and maintain their systems in a cycle of escalation where no country is interested in escalating, but all countries are incentivized to escalate.

Philip Reiner: This is the security dilemma. It goes back thousands of years: I do the things that I think are going to make me safer, but in the end they actually make me less safe. This is not something that's ever been solved, and I don't know that it ever will be, right? What's interesting about NC3, though, and often overlooked, is that if you are able to head down a path, and I'm convinced of this, where you can actually show that your systems are reliable and resilient and that they do the things they're supposed to do, that is actually inherently stabilizing.

Let's say you're India and Pakistan, and there isn't a whole lot of clarity as to how the systems will perform and whether decision makers will be able to rely on them when they need to. That is an inherently destabilizing reality.

We should really do everything within our power to ensure that states have these capabilities so that the risk of initial use goes way down. It gets even more counterintuitive, in my opinion, and again, this is not something that's discussed very broadly.

Incredibly reliable NC3 systems could actually help reduce nuclear arsenals. We could actually see a world with fewer nuclear weapons in it if you can make sure that these NC3 systems are going to work the way they're supposed to. If you think about the end of the Cold War, that was one of the things that allowed for some of the dramatic reductions in nuclear warheads, because you could point to systems that would be more reliable and more resilient.

Then you don't need as many weapons. It's actually a confidence-building measure, which is entirely counterintuitive at first, right? Really, you're going to build out your ability to effectively and precisely use nuclear weapons? Yes, and it can actually be inherently stabilizing.

So that's one of the reasons why the situation in North Korea, for instance, is so frightening: they don't have these sorts of systems. They don't have the command and control, and the positive and negative controls, that we would really hope for in a nuclear weapon state to make sure that things don't go wrong, that accidents don't happen.

Selectively enhancing security

Gus Docker: Are there ways to selectively enhance safety in NC3 systems as opposed to enhancing both safety and capability?

Philip Reiner: It's a fascinating question. Yeah, absolutely. I think there are both positive and negative controls that can be decided upon that will improve the credibility and reliability of these systems without necessarily conferring some sort of military capability that would be destabilizing.

There are lists of things you could go through that could be part of a confidence-building exercise. And I think the United States and the Soviet Union went through that back during the Cold War, where they actually had discussions about the types of things they were working toward and what they would stay away from.

Absolutely, I think that is possible. Necessity is the mother of invention. That was because there were tens of thousands of nuclear warheads pointing at each other, and it was an imperative to have those dialogues. A lot of that sort of political space is a little bit harder to imagine today.

One could take unilateral steps heading down that path, and I think to a certain degree the United States chooses to do so. It's somewhat further afield, but I think a good example is the US unilateral announcement that it would not develop ASAT capabilities.

Gus Docker: What are these capabilities you just mentioned?

Philip Reiner: This was a unilateral move on the United States' part, actually announced by the vice president, that the United States would not pursue the development of anti-satellite technologies, right? That we would take off the table the development of our ability to knock others' satellites out... to destroy their satellite infrastructure. We've seen rather sophisticated moves in that direction by the Chinese, by way of example, in recent years. And so this was a move on the United States' part to say we're going to set a norm that actually moves in the opposite direction.

There are those within the US system, I think, who would very ardently argue that it's incredibly risky to unilaterally move in that direction. But the Biden administration has done so and is now, as far as I understand it, working very earnestly at the UN to establish this as a norm that others are supportive of, to try and move away from that, because of all the risks inherent to having those kinds of technologies.

Unilateral de-escalation

Gus Docker: These unilateral de-escalation moves, are they a general solution to the security problem?

Philip Reiner: It's one tool, right? It's one means of engaging in these sorts of confidence-building steps. There's unilateral, there's minilateral, there's multilateral, there's bilateral. There's a broad variety of ways that you can try to come at these sorts of dynamics and, you know, put ideas on the table that will help reduce pressure, reduce risk, and enhance trust.

Nuclear communications

Gus Docker: We should talk about the communication part of NC3. It seems to me that communication is an almost pure good in these situations. If states can communicate more with each other, that seems perfectly great. What do you think of that? Is it the case that more communication is always better, or could there be downsides to having states communicate more effectively with each other?

Philip Reiner: One of the things we have spent a great deal of time on is crisis communications and crisis communications solutions, and this harks back to something I mentioned earlier with Eric Grosse, the former security head from Google. Something we've been thinking about is this CATALINK system. It's an attempt to pivot off of this idea that complexity is the enemy of security. How do you actually build a very robust, resilient communications capability that is available to all nuclear heads of state to avert crises?

And per your question, this is predicated on the hypothesis that the ability to communicate with one another is absolutely a good thing. There are all sorts of hotlines that historically have played a role in reducing conflict. But if you think about nuclear hotlines, look back to what happened after the Cuban Missile Crisis and the US-Soviet hotline that was established after that. There were times when political actors attempted to use that hotline in a negative way, where they attempted to manipulate it and manipulate their counterpart.

There's an element of trust here that is really necessary for those communications channels to be effective. If you also look at the current dynamics, say with China, they won't pick up the phone. So even if you do have the system in place and it's trusted, reliable, and secure, folks on the other end need to be authorized to actually pick up and speak or communicate, whether text-based or phone-based, whatever it might be. Without a doubt, the ability to communicate with one another reduces risk. The ability to actually understand where the other person is coming from reduces risk, I think.

I always look back to my time at the White House, when we would go through cycles where we wouldn't be able to meet with our counterparts. It would go months, six months at a time, and just being able to go sit down with your counterparts would change the dynamic almost immediately.

You look at how long, for instance, Xi Jinping didn't leave China. That's incredibly worrying, right? You have got to be able to sit down, you have got to have communication. And in today's world, there are all sorts of different ways to communicate with one another.

Relying on WhatsApp may not necessarily be what you want nuclear heads of state turning to in order to communicate with one another. So that's why we've come up with this idea of CATALINK, which could be additive to existing command and control, additive to existing communications capabilities. But without any doubt, having the ability to communicate with one another is absolutely essential to reducing risk.

Gus Docker: Could you tell us more about the current situation of communications between nuclear states? Is it actually the case that they're communicating using WhatsApp?

Philip Reiner: Well, I think there's a variety of different secure messaging platforms that folks have become reliant on, in diplomatic circles and elsewhere. WhatsApp tends to be the one you hear about most often. So there's the day-to-day: intense diplomatic negotiations may be happening over those types of platforms. But then there's the level of things we're talking about, when a crisis may actually be brewing.

What sorts of communications capabilities do you have in place then? There is a very reliable hotline between the United States and Russia. It's not only at the head-of-state level; there are also means of communication for, say, the US Secretary of Defense to talk to his counterpart.

What we've seized upon in the current reality is that those are solely bilateral means of communication, but we don't live in a bilateral reality. So if there is a potential nuclear crisis, like what we've seen as a result of what's going on in Ukraine, that is not just a US-Russia problem.

You've got two other nuclear-armed states in the midst of that situation with France and the UK, not to mention the fact that China is almost guaranteed to be a part of that situation if Russia were to actually use nuclear weapons against Ukraine. How are all of those states going to be able to communicate with each other?

It's all bilateral, if it exists at all. One of the very interesting things that was brought to our attention was that the ability for even NATO, Russia, and the UK, or NATO, Russia, and France, to communicate with each other simultaneously doesn't exist. You think about that, right?

You can have a group chat on Signal with folks in seven different countries, a totally encrypted, no-one's-ever-going-to-know-what's-in-that-discussion means of communicating with each other. But we currently don't have that means for heads of state from states with nuclear weapons. In our opinion, it's beyond simple to get after that challenge, right? Folks may default to WhatsApp because it's what's in their pocket. Why don't we work on giving them something else they can rely on that's more secure and robust and dedicated, right?

Distinctly built and dedicated to this sort of challenge. Maybe WhatsApp and Signal serve a purpose day to day, but we'd need something a little more dedicated and robust, I think, for these kinds of challenges. That's what the CATALINK system is trying to get after.

The CATALINK System

Gus Docker: Let's walk through the different parts of the CATALINK system and maybe you could tell us what problems are solved by these parts.

Philip Reiner: The basics of CATALINK are really three primary components. There's a device, we call it The Puck, and that's the communication device you would rely on. It's a basic text-based messaging capability built from the bottom up. Again, this notion of complexity as the enemy of security: we've worked with folks at Google and Intel and elsewhere to think about how you actually build something from the bottom up, from the silicon up, that is verifiably secure. So that's The Puck. Then there's what we call The Broker. This is the interface through which the signals from The Puck are translated into whatever sort of communications network you're relying on.

And we're designing this so that it's very flexible: you can determine whatever mechanism you want those messages to go over, whether that be satellite-based or cell-based, what have you.
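To make that pattern concrete, here is a minimal sketch, in Python, of a transport-agnostic broker of the kind described. It is purely illustrative: the class and method names are invented for this example and are not taken from the actual CATALINK codebase.

```python
# A minimal sketch of a transport-agnostic "Broker": the device hands it
# a payload, and it tries each configured delivery path in priority order.
# All names here are hypothetical, not from the real CATALINK code.
from abc import ABC, abstractmethod


class Transport(ABC):
    """One pluggable delivery path, e.g. satellite or cellular."""

    @abstractmethod
    def send(self, payload: bytes) -> bool:
        """Attempt delivery; return True on success."""


class SatelliteTransport(Transport):
    def send(self, payload: bytes) -> bool:
        # Hand the payload to a satellite modem (stubbed out here).
        print(f"satellite: {len(payload)} bytes")
        return True


class CellularTransport(Transport):
    def send(self, payload: bytes) -> bool:
        print(f"cellular: {len(payload)} bytes")
        return True


class Broker:
    """Translates messages from the device onto whichever networks the
    deploying state has chosen to configure, in priority order."""

    def __init__(self, transports: list[Transport]):
        self.transports = transports

    def deliver(self, payload: bytes) -> bool:
        for transport in self.transports:
            try:
                if transport.send(payload):
                    return True
            except Exception:
                continue  # fall through to the next configured path
        return False


broker = Broker([SatelliteTransport(), CellularTransport()])
broker.deliver(b"encrypted message from The Puck")
```

The design choice the sketch highlights is that each state plugs in its own transports; the broker itself knows nothing about any particular network.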

Gus Docker: So that's redundancy in the network that transmits these messages.

Philip Reiner: Inherently. This is where it gets most interesting though, right? And incredibly challenging. In the event that you actually have a nuclear exchange, or even in the lead-up to an exchange, what you'll see in most planning, and you saw this to a certain degree with Ukraine, is that communications architectures are going to be targeted first, almost a hundred percent of the time. Whether that be the satellite network, the cell towers, the internet, the ability to get information through is going to be one of the first victims in the lead-up to any conflict. So if you're relying on this multinational CATALINK system, what is your fallback for if and when those other options are not available to you? That's the third component we've been working on.

We call it The ROCCS. The intention is to build a mesh network that will be available when everything else fails. Honestly, it's an incredibly complex technical challenge to get after. It's something we're working on, and we'd love to hear from people who work on these kinds of things already and who would be interested in playing a part in any of these pieces.
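The basic mesh idea can be illustrated with a toy controlled-flooding relay, sketched below. The real ROCCS design is far harder (radio links, partitions, authentication, jamming); everything here, names included, is a hypothetical simplification.

```python
# A toy controlled-flooding mesh: each node drops duplicates and expired
# messages, records the payload, and rebroadcasts to its neighbors with a
# decremented hop budget. Invented for illustration, not the ROCCS design.
import uuid
from dataclasses import dataclass, field


@dataclass
class Message:
    msg_id: str
    ttl: int
    payload: bytes


@dataclass
class Node:
    name: str
    neighbors: list["Node"] = field(default_factory=list)
    seen: set = field(default_factory=set)
    inbox: list = field(default_factory=list)

    def receive(self, msg: Message) -> None:
        if msg.msg_id in self.seen or msg.ttl <= 0:
            return  # drop duplicates and expired messages
        self.seen.add(msg.msg_id)
        self.inbox.append(msg.payload)
        relayed = Message(msg.msg_id, msg.ttl - 1, msg.payload)
        for peer in self.neighbors:
            peer.receive(relayed)


# Three nodes in a line: a can still reach c through b even though
# a and c share no direct link -- the point of a mesh fallback.
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors = [b]
b.neighbors = [a, c]
c.neighbors = [b]
a.receive(Message(str(uuid.uuid4()), ttl=5, payload=b"hello"))
assert c.inbox == [b"hello"]
```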

There are two or three things about this that we've been getting after that set it apart, I think, from other ideas that are out there. First, the code for The Puck is actually up on GitHub.

If folks are interested, they can come to us and we can point them toward it. We want this to be open, we want it to be transparent, and we want it to be something that folks around the world can really work on together, so that it actually is trusted and has integrity and is resilient, right?

So that folks can hack on it together and really try to break it. That's a big part of what we're trying to do. And one of the other pieces is that we've worked with folks who do formal methods to think about how to actually prove that something does what it's supposed to do.
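Formal verification of the real Puck and Broker code is a much deeper effort than can be shown here, but a lightweight cousin of the same idea is a property-based test: assert that a property holds for all inputs, not just a few hand-picked cases. The sketch below uses the Python hypothesis library on a message-framing function invented for this example.

```python
# A property-based test (via the hypothesis library) asserting that a toy
# length-prefixed framing scheme round-trips for arbitrary payloads. The
# framing functions are invented; they stand in for real protocol code.
from hypothesis import given
from hypothesis import strategies as st


def frame(payload: bytes) -> bytes:
    """Prefix the payload with a 4-byte big-endian length."""
    return len(payload).to_bytes(4, "big") + payload


def unframe(data: bytes) -> bytes:
    length = int.from_bytes(data[:4], "big")
    return data[4 : 4 + length]


@given(st.binary(max_size=2**16))
def test_framing_round_trips(payload: bytes) -> None:
    # The property: decoding an encoded message recovers it exactly.
    assert unframe(frame(payload)) == payload


if __name__ == "__main__":
    test_framing_round_trips()  # hypothesis runs many generated cases
```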

Getting to a point with the code for The Puck and The Broker where we can run it through that sort of verification process is an integral part of this as well. Lastly, one of the most important things, and this is what we've talked about with representatives from various states, is that we have no interest in imposing this on anybody else.

We'll develop it, and then you take it and integrate it into your systems however you see fit. I'm not going to tell you what hardware to use. I'm not going to tell you how to build it into your architecture. Those are things you can figure out, and then you would trust it even more, because it's something you had to figure out on your own.

We'll build the architecture, we'll explain to you how to do it, but then you can run with it. That's an attempt on our part to break with tradition a little bit, and I think in doing so we'll build more trust and more interest in participating from each of the states concerned.

Gus Docker: The code is open source and states might be able to verify that the code does what it says it does. What about the hardware level? Could other states be worried that the hardware has been tampered with and that this system is therefore not to be trusted?

Philip Reiner: The idea is that states will take this and pick and use their own hardware. We've had that very frank conversation with a number of leaders from states that have nuclear weapons and have made very clear that we work with folks who are building fairly secure devices from the bottom up, but we wouldn't necessarily say that's what you have to use. This would be something you can take and integrate into your own system. You pick your own hardware, so that you can trust it to the utmost degree. That's absolutely something we talk about. Trying to impose some form of hardware on folks is just not going to go anywhere, so we really want to leave that up to the states themselves.

Gus Docker: What prevents the CATALINK system from being implemented?

Philip Reiner: We now have support and funding from the governments of Switzerland and Germany. We're very proud and happy to be working with them, on both the political and the technical level, and extremely grateful for their support.

The conversations we had, by way of example, at the NPT Review Conference in New York were incredibly encouraging in terms of the states that are supportive of this as a tangible example of risk reduction. Really, it's just the hard work of speaking to those who would need to use it, so that they get involved and it moves from this idea-and-prototype phase into actual implementation.

It is arguably less a technical question than a political one. I think it's fair to say that any state with nuclear weapons, as soon as you start talking to them about giving them communications capabilities that touch nuclear decision making, is going to be somewhat incredulous, right?

At the notion that, what, this NGO in California has come up with some device that I want to give the president to use in these situations? Yeah, that's the idea. We try to leverage some of the capabilities of these technical communities to solve what is, in my opinion, a market failure.

This is not a problem that folks haven't known about. It has been a long-standing challenge. If you look back at some of the literature from the 1980s, I think it was in an edited volume that Ash Carter and others put forward in 1987, this came up multiple times: how do you actually communicate with your adversary in the midst of a crisis, when everything's being destroyed?

We don't talk enough about that part of risk reduction. I think we've seen some great success in our conversations to date. Honestly, the biggest thing between here and there is just continuing to galvanize support for it and getting the support from the technical end to build it out.

The ROCCS piece in particular: building this out so that folks can turn to it and actually start thinking about how to integrate it.

AI in nuclear command, control, and communications

Gus Docker: Let's talk about the role of artificial intelligence in nuclear command control and communication systems. To what extent is AI being used today in these systems?

Philip Reiner: I think the literature that's out there points to the use of technical capabilities that folks today don't even really think of as artificial intelligence, going back decades, for signals processing and data aggregation. There were tools deployed, I would argue probably in the seventies, that at the time were referred to as artificial intelligence.

But folks wouldn't even think of them that way anymore. I think what you're getting at is more the conversation around machine learning and deep learning, and their potential integration into these systems. There are any number of conversations around the relative merits of integrating these capabilities into nuclear command, control, and communications.

There was an article that came out, probably 2018 or 2019, from Adam Lowther and Curtis McGiffin. The article basically made the argument that the United States needs to develop and deploy a dead hand system, right? An automated system that would respond and launch a nuclear attack if the state were to be decapitated and the signals, the messages, could not get through, et cetera.

And the thing we spoke about when that came out was that here were two folks writing that article who have probably been involved in related conversations in classified circles. You almost have to assume there are debates around that sort of thing within classified circles, and you begin to wonder, to what extent is it really being considered?

And I think it's safe to say that at this point, at least to my knowledge, it's really very much opaque. I don't know the extent to which folks are moving down that path, but it gets to some of the questions you were asking earlier in terms of that tightening decision cycle in which you need to be able to move quickly.

And what's important to think about here is the signaling. Again, we talked about it a little bit from the Russian side in terms of the capabilities they're discussing. You look at what Russia talks about, you look at what China is signaling its intentions are when it comes to command and control and artificial intelligence-supported decision support systems, et cetera: they're heading down that path.

They are looking for advantages to be gained by deploying machine learning and deep learning capabilities into all variety of command and control-related sub-components. And so there you are in that dynamic again, right? If the United States sees China heading down that path, and it's not entirely clear where they are in their capabilities today or where they might be in a few years, you're almost being irresponsible not to head down that same path, to figure out what you're capable of and what you may need to be doing in order to stay ahead and/or deter another nation from getting ahead of you and putting you at risk.

To your question, it's something that is, I think, getting a lot of attention and a lot of discussion. One of the things we've pointed to is that the discussion, at least the public discussion, has not been very specific. It's been very broad. Artificial intelligence: what are you talking about, specifically? Nuclear command, control, and communications: what are you talking about?

There are over 200 subsystems within the NC3 system of systems. Where does deep learning even factor into that, potentially? We've gone through the lists of the sub-components that are at least publicly known, and the number that could potentially be enhanced by integrating machine learning capabilities, in terms of signal modulation or what have you, is not insignificant. I think there are definite advantages to moving down that path. What folks tend to think about, though, is the automation of final launch decisions.

At this point, we have not seen any indication from the United States, by way of example, that they're heading down a path like that. Quite the opposite. They continue to publicly state that they will not turn those sorts of decisions over to machines, however powerful those machines become. That's a little less clear when it comes to Russia and China, to be quite honest. It's not as clearly stated, and some of their activities would point otherwise.

Gus Docker: And how could this be dangerous? I'm thinking these systems could be unreliable, or they might be open to being hacked. How could it be dangerous to implement AI in NC3 systems?

Philip Reiner: How much time do you have? There are so many layers to the risks inherent to this. Digital systems are almost inherently vulnerable, and folks out there working on AI red teams will tell you that machine learning systems inherit cyber vulnerabilities that have not been solved elsewhere, right? So that is not something that can be easily dismissed or overlooked. There are basically three tiers of consideration you need to give this when thinking about the risk it introduces: the strategic, the operational, and the tactical. We were just talking about the tactical-level considerations, which is perturbations: somebody's messing with a model, and you're getting information out of that model that's telling you one thing that's actually completely inaccurate. How do you deal with that in a system of systems that is so complex that you're not even entirely clear on what your system is?
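A toy illustration of what such a perturbation can do, invented for this transcript rather than drawn from any real NC3 system: a tiny, targeted nudge to a linear model's input flips its output even though the input barely changes. Real attacks, such as the fast gradient sign method, follow the same logic at much higher dimension.

```python
# A targeted perturbation flipping a toy linear "classifier". The sizes
# and the scenario are invented; only the mechanism is real.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of a toy linear model
x = rng.normal(size=100)   # a "sensor reading"

score = w @ x              # model output before the attack
# Nudge every feature by epsilon in the direction that moves the score
# the most: the gradient of (w @ x) with respect to x is just w.
# epsilon is chosen to be just large enough to flip the score's sign.
epsilon = (abs(score) + 1.0) / np.sum(np.abs(w))
x_adv = x - epsilon * np.sign(w) * np.sign(score)

print(f"clean score: {score:+.2f}, adversarial: {w @ x_adv:+.2f}")
print(f"change per feature: {epsilon:.3f} (features are ~1.0 in scale)")
```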

So you're introducing a vulnerability, a layer of risk, that is difficult to actually ascertain. That's technical, almost scientific at its core. Maybe you can get after the error rates that compound within that stack, better understand them, and then have a conversation with your adversary about that.
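Back-of-the-envelope arithmetic, not from the conversation itself, shows why that compounding matters. Assuming (unrealistically) independent failures, even a 0.1% per-component error rate grows quickly across a stack the size of the roughly 200 NC3 subsystems mentioned earlier.

```python
# How small per-component error rates compound across a deep stack.
# Assumes independent failures, a simplification real systems violate.
per_component_error = 0.001
for n in (10, 50, 200):  # 200 echoes the ~200 NC3 subsystems mentioned
    p_any_failure = 1 - (1 - per_component_error) ** n
    print(f"{n:>3} components -> P(at least one error) = {p_any_failure:.1%}")
# prints roughly: 10 -> 1.0%, 50 -> 4.9%, 200 -> 18.1%
```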

That's pretty tough just in and of itself. But then you start thinking about the operational level. These are incredibly tightly coupled systems. What I mean by that is that "signal, decision, action" has to be so tight that you don't really have time to stop. This is the inherent risk: when you have systems that are so tightly coupled, there are going to be accidents, there are going to be errors, there are going to be false negatives, there are going to be mistakes. And then you introduce black-box systems, or systems that are incredibly brittle, or systems that are overfitted, or systems with all these other problems that really haven't been figured out.

When these systems are in the wild, you begin to have operational-level considerations of risk that even the airline industry, or autonomous vehicles writ large, haven't entirely wrapped their heads around. There are a lot of emergent properties to those systems that folks never anticipated.

That's something you need to grapple with. And then of course there's what we talked about earlier in terms of the strategic-level considerations. You are watching me, I'm watching you. The signaling that takes place, and the credibility that's necessary with nuclear command, control, and communications: that I am going to be able to do what I say I'm going to do, and I trust that you are too. If those things are unsound and unreliable, that's inherently destabilizing. And so as we head down a path where these sorts of capabilities might be integrated into nuclear command and control, how do you actually create transparency?

How do you create dialogue at a senior level so that folks can reduce that sort of strategic-level risk? That's very difficult. You know, it is often said that the benefits do not outweigh the risks: you haven't figured out the answers to the problems well enough to head down that path, and it doesn't really give you that much more of an advantage in the end. But there are others who would argue against that and say that it does, that it actually gives you capabilities that are really beyond even our comprehension at this point.

Gus Docker: You would definitely have to drive down the error rate of these automated systems to an unprecedented level.

Philip Reiner: That's right.

Gus Docker: Much lower than in self driving cars, for example.

Philip Reiner: Yeah.

Gus Docker: And we've seen that this is extremely difficult to do.

Philip Reiner: Yeah.

Russia's war in Ukraine

Gus Docker: I would like to touch upon the conflict between Russia and Ukraine. What is your overall take on how this war has changed the risk of nuclear weapons use?

Philip Reiner: I think at its core it's been deeply unsettling, as someone who has spent decades working with and around and on these sorts of challenges. When you think about nuclear deterrence and the risk of nuclear weapons, how catastrophically violent they are, I think we often lose touch with the actual impact that their use would have. And we find ourselves in a generational shift where a lot of leaders and decision makers, even within the system today, seem increasingly inclined toward thinking about nuclear war fighting as an option. That's deeply concerning, because in my opinion we're moving toward a world with more nuclear weapons and more nuclear weapon states.

Everything we've talked about here with NC3 today becomes that much more important in a world with more states with nuclear weapons. What we've seen from the really awful and absolutely abhorrent situation in Ukraine points to one of the real longstanding challenges when it comes to having nuclear weapons and the deterrent purpose they're supposed to serve.

And what we've seen is Russia's willingness, quite honestly, to break a number of longstanding norms that it had been committed to, i.e., the nuclear assurances for Ukraine that, having given up its nuclear arsenal, it would not be put at risk like this. What does that signal to other states that are considering moving down this path?

You know, now they're going through the motions of these referenda and annexing Ukraine's territory, so that once that's done and they say that's part of Russia, they will say that attacks against that territory could result in a nuclear response.

This is beyond what I think anybody would have accepted as acceptable behavior even just a few years ago. It's an incredibly risky and dangerous dynamic, and it's moving the world in a direction, honestly, that I never in my life thought I would see.

It's inherently unsettling, to say the least. And then you see Russia going even further, putting a nuclear power plant basically in the middle of an armed conflict and using it as blackmail and a massive potential radiological weapon, almost in the heart of Europe. That is reckless almost beyond imagination.

I think it definitely increases the risk. All of that being said, again, as someone who spent a lot of time around these things over the years, maybe, just maybe, this will actually motivate people to change some of the trajectories that we've been on.

Maybe this will actually point to the need to have greater constraints in place. Maybe this will actually galvanize international actors to move more collaboratively toward making clear that this sort of behavior is just absolutely unacceptable. I hate to say it, but you think about the China-Taiwan dynamic: what does this scenario mean for that?

I think it's the opprobrium and absolute disgust and rejection of Russia's actions in Ukraine by the international community that matters. That speaks volumes to the leaders and decision makers in Beijing. And so it is an incredibly critical situation. Anyone who tells you they know what Putin is going to do is lying.

I don't think anybody really knows what's going to happen, or how much he means it when he says he'd be willing to use a low-yield tactical nuclear weapon in Ukraine to advance Russia's objectives. Hopefully this has positive outcomes. But at the end of the day, as President Putin sees increasing losses and setbacks on the ground, that is an unacceptable outcome for him.

It leaves him with very few options. This is where the National Security Advisor has his work cut out for him in advising President Biden on what decisions to take, how far to push, and how to make sure that we don't go too far and push this clearly embattled leader into a corner to the point where he makes the really horrible decision to deploy and use nuclear weapons.

If he did, I don't know that any of us understand just how much that would change our world.

Gus Docker: How critical is the situation right now compared to, say, the Cuban Missile Crisis?

Philip Reiner: It's a very different dynamic, right? The lines of communication between the US and Russia are, as far as we can tell, very open, and the messages we're sending are getting through. Whether or not they're being heard is a completely different thing.

You know, if Russia were to actually move down the path of using low-yield nuclear weapons against Ukraine, the response from the United States and NATO, I don't know that it would necessitate a nuclear response, whereas an attack on the United States would. So it's a little bit of a different dynamic, however horrifying and escalatory that would be. The escalation ladder in that scenario is not anything anyone has ever actually had to live through. We don't know.

Gus Docker: It's an unknown situation. It's a situation the world has not been in before.

Philip Reiner: The imperative is upon our leaders to take all of the inputs they have, and hopefully they're not relying on a brittle predictive analytics platform that is telling them what to do.

Yeah. This goes back to the risks of embedding automated decision support systems into your nuclear command and control. You want to retain that human element in this sort of scenario, to think about ways to find de-escalation paths.

Gus Docker: Philip, I can't say this has been a happy conversation, but it's been very informative and scary also. Thank you for talking with me.

Philip Reiner: Absolutely, my pleasure. Thanks for having me. Lots of work to do.

Gus Docker: That's it for this week's episode. On next week's episode, we'll be looking into how combining artificial intelligence with nuclear command and control systems could be dangerous, and we'll be looking into concrete scenarios for how this could increase risk.

