
Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Lethal Autonomous Weapons

Published
February 25, 2021

  • The current state of the deployment and development of lethal autonomous weapons and swarm technologies
  • Drone swarms as a potential weapon of mass destruction
  • The risks of escalation, unpredictability, and proliferation with regard to autonomous weapons
  • The difficulty of attribution, verification, and accountability with autonomous weapons
  • Autonomous weapons governance as norm setting for global AI issues

You can check out the new lethal autonomous weapons website at autonomousweapons.org.

Transcript

Lucas Perry: Welcome to the Future of Life Institute Podcast. I'm Lucas Perry. Today's conversation is with Stuart Russell and Zachary Kallenborn on lethal autonomous weapons. In particular, this conversation focuses on the highest-risk aspects of such weapons: their industrial scalability and their miniaturization, which together could lead to swarms of lethal autonomous weapons and, in turn, to their classification as weapons of mass destruction. Stuart Russell is back on the podcast for his 3rd time. He is a Professor of Computer Science and holder of the Smith-Zadeh Chair in Engineering at the University of California, Berkeley. He has served as the vice chair of the World Economic Forum's Council on AI and Robotics and as an advisor to the United Nations on arms control. He is the author, with Peter Norvig, of the definitive and universally acclaimed textbook on AI, Artificial Intelligence: A Modern Approach. Zachary Kallenborn is a Policy Fellow at the Schar School of Policy and Government, a Research Affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), and a Senior Consultant at ABS Group. His work has been published in outlets like Foreign Policy, Slate, War on the Rocks, and the Nonproliferation Review.

The release of this episode coincides with the launch of the brand new autonomousweapons.org website. The website is an effort by the Future of Life Institute and contains information about the technology of autonomous weapons and the ongoing debate about governance issues pertaining to such weapons. So, if you're interested in learning more in addition to this podcast, head on over to autonomousweapons.org. To introduce this podcast, here is a key author behind the autonomousweapons.org website, Emilia Javorsky:

Emilia Javorsky: Hi, I'm Emilia Javorsky, and in my role at the Future of Life Institute I lead our advocacy efforts on lethal autonomous weapons. A question that comes up a lot is, "Why is this issue so important?" Most people, not all, acknowledge that the words "lethal," "autonomous," and "weapons" in sequence sound like a very bad idea, but today we are bombarded with so many important and underserved causes to care about. Why does this issue in particular warrant taking up bandwidth in our thinking? I'd argue that anyone who cares about a positive outcome for our collective future with AI should care about this issue. AI is, and increasingly will be, the most powerful technology we've ever developed. It has the power to completely transform society for the better, but to realize these benefits we need to develop and deploy the technology with wisdom. That means drawing clear lines between acceptable and unacceptable uses of the technology. These early decisions will have a tremendous impact on steering the trajectory of AI toward uses that are positive, safe, trust-building, and beneficial to society, or toward uses that are destabilizing, trigger backlash, and have a net negative impact on our world. Much of our celebration of biotechnology today, for curing disease and generating vaccines, is arguably due to the early decision to stigmatize its weaponization in the form of the bioweapons ban. By contrast, the fallout of nuclear weapons has curtailed the potential of peaceful uses of nuclear power for decades. The question at the heart of the lethal autonomous weapons conversation is, "Do we as a global community want to cede the decision to kill humans to algorithms, to AI?" I strongly believe that if we fail to develop a governance mechanism that establishes agreement that AI shouldn't be allowed to kill people based on sensor data, we are completely toast with regard to our pursuit of a positive future. If we can't set this precedent, we are unlikely to be able to make, and in many ways will have morally abdicated our authority to make, the wise decisions that lie ahead and that are needed to steer the deployment of AI in society toward a positive outcome. We have to get this question right, and we have to set the precedent that humans must retain control over the use of lethal force.

And beyond this key ethical dimension of drawing a red line on uses of AI, these weapons systems would also be highly destabilizing to society. These are weapons that have been called "unpredictable by design," meaning it's highly unlikely that typical training data would be sufficient to predict the behavior of these weapons in real-world settings, in rapidly evolving circumstances, and in the context of encountering other lethal autonomous weapons systems in an adversarial setting. By removing human control and judgement, there's a strong risk of accidental or intentional escalation of conflict, and the inherent scalability of technology that relies on software means that these weapons systems would pose substantial proliferation risks, not only among states but also to non-state actors. All of these destabilizing risks are playing out in the context of a global AI arms race, where the incentive is to rush these systems onto the battlefield as soon as possible, and we're not talking on the order of decades. The age of lethal autonomous weapons is here. There are systems around today that are capable of selecting and targeting humans without human control. So the window for us to draw that line is rapidly closing, and we really need to take action before it's too late. This is why I'm really excited about the conversation that lies ahead between Zak and Stuart, two individuals whose work has served to highlight both of these categories of risk: the ethical dimensions of lethal autonomous weapons, and the security and stability risks of these systems. So I'm really looking forward to the conversation ahead.

Lucas Perry: All right, let's start off here with a bit of background about this issue. Could you guys explain simply what a lethal autonomous weapon is and what forms they are currently taking today?

Zachary Kallenborn: Yes, I suppose I'll go ahead. There's certainly quite a bit of discussion about what exactly we mean by autonomous when we're talking about these systems. The primary concern is what happens if we let machines make decisions about who to kill and who not to. In terms of specific developments, there have been quite a lot in recent years, but it's still a relatively nascent technology. When we're talking about lethal autonomous weapons, I think for the most part we're talking about more sophisticated weapons, like the SGR-A1 gun turret, developed by South Korea and deployed, I believe, at the border between South Korea and North Korea, which has the capability of selecting and engaging personnel targets.

That said, when we talk about autonomous weapons, I think it actually goes well beyond what we're seeing at the moment, because if you think about landmines, in a sense landmines are a very simplistic form of autonomy. Militaries very explicitly exclude landmines from their discussions of lethal autonomous weapons. But in a sense, a landmine is a system that ingests information from the outside world and makes a decision, based on physical properties and physical stimuli, about who to blow up and who not to.

Stuart Russell: I would add, just for the record so to speak, that there are the beginnings of an agreed-upon definition, one that the United Nations and, I think to some extent, the United States think makes sense, which is a weapon that can locate, select, and engage human targets without human supervision. We can define any category of weapons that we want, and you can expand or contract the definition to rule in or rule out different kinds of weapons systems.

There is no right definition, but whenever you write a definition, there are weapons that fall in and out, and you want to know: have I left out some weapons systems that I should be worried about and that need to be subject to regulation? Or have I included some weapon systems that are actually, as far as weapons systems ever are, innocuous and don't raise any particular concerns? So engineering these definitions is a way of actually putting off the real issue, which is: what are we going to do about it?

Zachary Kallenborn: To an extent, but I think there is value in getting a precise definition, at least insofar as you need some common definition in order to engage in the global conversations about it. If I use "lethal autonomous weapons" to refer not just to weapons that target humans but potentially to those that target vehicles or other things, that changes our understanding of where the risks are.

And it potentially prevents movement towards something more robust at the policy level.

Stuart Russell: Can we talk about technologies that are being created now and advertised and sold? I think that many people still regard the possibility of lethal autonomous weapons as science fiction. For example, quite recently we've heard the Russian ambassador to the talks in Geneva say that these things are 20 or 30 years in the future, and that it's a complete waste of time to be talking about them now.

Zachary Kallenborn: I think that's definitely accurate, and I think it goes beyond just lethal autonomous weapons. Only now are countries beginning to recognize the threat of drone systems by themselves. And that's not even an emerging technology; we've had remote-controlled airplanes since, I think, '68, which was the first remote-controlled airplane. But only now are states realizing, hey, this is a huge issue, much less talking about cooperative drones working together in the form of swarms or other types of systems.

Lucas Perry: All right, so can we get a bit more specific about the kinds of systems that already exist and are being deployed today? The short definition is that these are systems that can autonomously select and engage targets through the use of force. For a decade or so, I'm not sure how many years, we've had, for example, turrets on aircraft carriers that can autonomously engage incoming missiles. In Israel, there's the Iron Dome, a defense system. And now there's the expansion of autonomy towards selecting and engaging human targets: the beginning of the integration of machine learning and neural nets into the selection, acquisition, and engagement of human targets, but also other kinds of military targets, like tanks and other vehicles.

Stuart Russell: I would argue that it actually doesn't matter how the AI system does it. To a large extent, although not completely, it doesn't matter whether it's created using machine learning or neural nets or Bayes nets or rule-based systems or whatever. What matters is the autonomy and the capability. As for examples, probably one of the best known that you can actually buy today, and have been able to buy for several years, is the Israeli Harpy missile, also called the Harop. This is what's called a loitering weapon, which means a weapon that can wander around. It's a small aircraft, about 11 feet long, I think, and it can wander around a designated geographical region for up to 6 hours. And it's given a targeting criterion, which could be a radar signature that resembles an anti-aircraft system, or something that looks like a tank, because it has onboard camera systems and can use image recognition algorithms as well.

Whenever it finds something that meets its targeting criterion, it can dive-bomb it, and it carries a 50-pound explosive payload, which is very destructive and can blow up whatever it is. So that seems to me to meet all the criteria for being a lethal autonomous weapon. A version was used by Azerbaijan and actually blew up a school bus, which was claimed to be carrying soldiers, although I think there's still controversy about exactly why it was targeted. It certainly didn't look like a tank. That's Israel Aircraft Industries; they are now advertising something called the Mini Harpy, which is a much smaller fixed-wing aircraft with maybe an 18-inch or two-foot wingspan, and the advertising materials clearly emphasize the autonomous attack capabilities. If you go to Turkey, there's a company called STM, which is advertising something called the Kargu drone, and they explicitly say autonomous hit capabilities, human face recognition, human tracking, human targeting, in all their advertising materials, basically telling you this is a lethal autonomous weapon that you can buy in as large a quantity as you can afford and use to kill your enemies.

Zachary Kallenborn: Yeah, I want to jump back to the point Stuart made earlier, that it doesn't really matter what technology you're talking about, whether it's neural nets or machine learning. I think that's really right, especially from a risk perspective. At the end of the day, when we're talking about risk, what really matters is the threat to human life, the outcomes of how these weapons are used, and the particular risk of accident. While there may be differences between the different techniques being used, in the end what matters is safeguarding folks. I would also add, on the question of the broader spread of the technology, that there's definitely very clear interest across many different militaries. As you saw, the National Security Commission on Artificial Intelligence recently mentioned that the United States is interested in potentially pursuing autonomous weapons, provided that they're rigorously tested and have appropriate ethical controls. They see value in it, particularly through the advantages of speed in decision-making, because in a military conflict that's really fast-moving, that can be extremely advantageous.

And I think that's likely to grow even further when we talk about the spread of drones and autonomous systems generally, even systems that retain human control over targeting. As we see a broader spread of drones and robotics, there are very clear incentives towards much higher degrees of autonomy, particularly to mitigate concerns over electronic warfare and jamming, where a military stops the signals from coming into a particular robotic system; if you have an autonomous system, you don't need to worry about that. So there may be military reasons to move in that direction. And certainly we've seen increasing interest in drones generally across many states, especially after the recent conflict between Armenia and Azerbaijan, where Azeri drones proved quite devastating to Armenian tanks and a wide variety of capabilities.

Lucas Perry: So are there any key properties or features of autonomous weapons that would be wise to highlight here, ones that are part of why they pose such a high risk for the evolution of conflict and global governance?

Stuart Russell: So I think historically, and this discussion has been going on for about a decade, the first set of concerns had to do with what we might call collateral damage: because we might program them poorly, or we might not understand what they're going to do once they're deployed, they might end up killing large numbers of civilians by accident. Related to that is the idea of accidental escalation: they might misinterpret activities by a neighboring country and initiate a hostile response, which would then be interpreted by that country's autonomous weapons, correctly, as a hostile attack, leading to a very rapid escalation, possibly on a timescale such that humans could not contain it before it became too serious.

So those, I think, are still real concerns. The first one, the ability of the AI system to correctly distinguish between legitimate and non-legitimate targets and so on, is obviously a moving target in the sense that as the technology improves, we would expect the instances of collateral damage to decrease. And many people argue that we in fact have a moral obligation to create such weapons because they can reduce collateral damage to civilians. But I think the real concern, and this has been at the forefront of the AI community's concern for the last five or six years, is that as a logical consequence of autonomy, you also get scalability. Scalability is a word that computer scientists use a lot, and it basically means: can you do a thousand times as much stuff if you have a thousand times as much hardware? Google, for example, answers billions and billions of queries every day, not by having billions and billions of people, but by having lots of computers.

So they have a scalable service: they can make it twice as big by buying twice as many computers. And the problem is that when you have scalable death, you have a weapon of mass destruction. It's precisely because you don't need a human to supervise and intervene in each individual attack that you can create these scalable weapons, where instead of buying a handful of Kargu drones, each of which is about the size of a football, you buy a couple of hundred thousand of them, and then you launch a very large-scale attack that wipes out an entire city or an entire ethnic group. So you're creating a low-cost, easily proliferated weapon of mass destruction that's probably going to be used against civilians. Even though you could make AI systems that are very careful, you could also make AI systems that aren't very careful, to be used against civilians. So why we would want to create and manufacture and distribute such a technology, I don't know. I can't think of a good reason to do it.
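To put rough numbers on the scalability point, here is a minimal back-of-the-envelope sketch in Python. The figures are illustrative assumptions, not numbers from the discussion: the idea is simply that a human-supervised force is capped by the number of trained operators, while a fully autonomous one is capped only by the number of devices purchased.

```python
# Illustrative assumptions only; none of these figures come from the episode.
operators = 100                  # trained human supervisors available
engagements_per_operator = 4     # supervised engagements one person can manage per hour
drones = 100_000                 # devices fielded in a fully autonomous swarm
engagements_per_drone = 1        # attacks each autonomous drone can carry out per hour

supervised_capacity = operators * engagements_per_operator
autonomous_capacity = drones * engagements_per_drone

print(f"Human-supervised: {supervised_capacity:,} engagements/hour")
print(f"Fully autonomous: {autonomous_capacity:,} engagements/hour")
# The autonomous figure grows with hardware purchased rather than with people
# trained, which is what 'scalable' means in the computer-science sense above.
```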

Zachary Kallenborn: Yeah, so I'll expand on that, because I feel like this is probably going to end up being a fairly long answer. I think there are particular issues with scalability, and I think they go even beyond the issues that Stuart rightly raised. I also personally have some concerns that even improved targeting with autonomous weapons could be a problem in and of itself, for other reasons, particularly related to chemical and biological warfare. So first, to expand on the scalability issue: I definitely share that concern about having numerous drones. We know already, just from open-source reporting, that numerous states are exploring this technology and exploring it very quickly. Just in the past few months there have been announcements that the U.S. Naval Postgraduate School is doing research on what would happen if you had a million drones all operating together, where some are boats, some are flying, some are under the sea: how would you deal with that from both an offensive and a defensive perspective?

Now, that was very early-stage modeling, and I'm sure they're very, very far away from actually doing something at that scale, but they're very clearly thinking about it. There was also a recent announcement from India: they showed off in a recent army parade a group of 75 drones that they claimed were autonomous. There's some pretty clear question about whether they actually were, and to what extent they really collaborated as a true drone swarm, but there was very clearly at least interest in it. Now, I think the concerns go even beyond scalability, because the error risks around targeting become many-fold more complex. Suppose we assume an advanced military, like the United States or someone else who is very sensitive to these ethical concerns, does a whole bunch of work to make these systems really foolproof.

So their error rate becomes really, really small. But if you have 10,000 drones, even a really small error rate may end up producing all sorts of errors. And you have potential errors coming from multiple directions. When we're talking about drone swarms, much of the research focuses not just on having a whole bunch of drones, but on drones that are talking to one another, able to communicate and share information, and from an error perspective that creates extra levels of problems. Because if one of those drones makes a mistake and then communicates it to all 10,000, you now have 10,000 drones that have made a mistake. So you have potential propagation of error throughout the entire system. And there are also concerns about emergent behavior: one of the interesting properties of swarming, and why many militaries are interested in it, is the emergent behavior that comes from individual units following simple rules, which together create complex behavior.

It's the same way with starlings, or I can't remember which particular bird it is, where you see these really fancy flocking formations that the birds produce based on really simple rules about how they interact. So how does that work from an error perspective? What if your error isn't in a particular autonomous system misidentifying a civilian as a soldier or vice versa, but rather in small bits of information that may all be correctly read and yet ultimately lead to problematic conclusions when you move at scale? And from a scaling perspective that's also a particularly huge issue, because there has been a fair amount of research looking at how we manage that level of complexity, and the simple reality is that humans can't manage huge numbers of drones. In fact, the key limiter on scaling has been drone autonomy, because imagine a security guard sitting at a desk: they can fairly easily watch a monitor with 50 different video feeds, because nothing is happening.

It's just a bunch of empty hallways. But if you're talking about a military conflict, you can't do nearly as much of that, because your drone is moving about, you have things entering and leaving the battlespace, and folks are very deliberately trying to camouflage or disguise their activities. So that creates huge incentives to move towards much higher levels of autonomy. And I think that's where you get the true weapon-of-mass-destruction idea, because those are the two fundamental properties we've seen with traditional weapons of mass destruction, namely chemical, biological, and nuclear weapons: it's not just that you can inflict a lot of harm, it's that you very much can't control that harm. If the wind blows in the wrong direction, your chemical agent, instead of hitting a soldier, hits a bunch of civilians.
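To make the error-scaling point above concrete, here is a minimal sketch in Python. The per-drone error rate is an illustrative assumption, not a measured figure: the point is that a large enough swarm multiplies even a tiny misidentification rate into a near-certainty of mistakes, and a single shared misclassification is further multiplied by every drone that acts on it.

```python
# Illustrative assumptions only; the error rate and swarm sizes are not from any real system.
per_drone_error = 0.001   # assume a 0.1% chance that a given drone misidentifies a target

for n in (10, 1_000, 10_000, 100_000):
    expected_errors = n * per_drone_error
    p_at_least_one = 1 - (1 - per_drone_error) ** n
    print(f"{n:>7} drones: ~{expected_errors:.1f} expected misidentifications, "
          f"P(at least one) = {p_at_least_one:.3f}")

# And if one drone's mistaken classification is broadcast to, and acted on by,
# the whole swarm, that single error is effectively multiplied by the swarm size.
```

At 10,000 drones, even this optimistic error rate yields around ten expected misidentifications, which is the sense in which a careful individual weapon does not imply a careful swarm.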

And that leads me to the second aspect, where I think control itself may create its own types of risks, particularly with weapons that are already uncontrollable, like chemical and biological agents. If you can increase your level of control, that could potentially be a real problem from a chemical and biological norms perspective, and particularly for proliferation. Part of the reason militaries have given up chemical and biological weapons is that they concluded these weapons aren't really useful: they were so concerned that the agent would just spray right back in their faces, or harm larger civilian groups and other groups, that it wasn't advantageous to what they were trying to achieve. But if you have autonomous weapons that can improve those targeting capabilities in a substantial way, then you remove the disincentive against having those weapons in general.

And that's particularly concerning given the current issues surrounding chemical and biological weapons, where some of these norms seem to be fraying a little bit: the assassinations using Novichok agents in the United Kingdom, and allegations from Bellingcat that Russia's current chemical weapons program is actually much more developed than is publicly known. I haven't dived into the report myself, but from some of my own research it at least seems plausible. So if you could combine lethal autonomous weapons with these already horrible agents, it could be a really major concern from a global security perspective.

Stuart Russell: Yeah. So I think this is an example of the blowback problem. We hear over and over again from the U.S., and I heard this directly in the White House, that although these things are possible, the U.S. would never develop or use weapons in such a way. Now, to believe that, you have to gloss over the history of the U.S.'s own biological and chemical weapons programs. But even if you take it at face value, it seems to me actually quite naive, because once that technology is created and manufactured on a mass scale, just as has happened with automatic rifles, it spreads around the world in vast numbers.

There are 75 million Kalashnikovs in private hands, for example. So the idea that we can manufacture millions of swarm devices, but they will only remain in the hands of highly trained, highly civilized Western militaries, is completely naive. And one of the reasons behind the U.S. abandonment of biological weapons, in addition to the military difficulties of using them, was the argument, made first by biologists and then by Henry Kissinger to Nixon, that such a weapon would reduce the security of the United States, because it was very cheap, would quickly proliferate, and would be used by our enemies against U.S. civilian populations. And I think all of those arguments apply to lethal drone swarms as well.

Zachary Kallenborn: I think that's right, though I think we have to add a little bit of nuance, because there are necessarily going to be some limiters in terms of how much it spreads and what types of weapons. If we think about drone swarms, it's not just a bunch of off-the-shelf quadcopters interacting together; you're potentially looking at large unmanned ground vehicles, with some sort of artillery or tank interacting with other similarly advanced systems, and stuff like that is going to be at least harder to proliferate than the computing, the algorithms, and the other processing technology that goes into quadcopters. And we can see this if we look particularly at non-state actors, which is where my previous experience was before jumping into drone swarm issues. A number of non-state actors have been quite interested in chemical and biological weapons, but even fairly robust and well-resourced organizations have had considerable failures when it comes to some of these more advanced weapons. There are all sorts of incidents of people getting ricin and trying to poison people, often failing and realizing that it's not a very good inhalant, but my favorite example is Aum Shinrikyo. They were the folks who carried out the 1995 sarin gas attacks on the Tokyo subway. Before that, they attempted to pursue a fairly robust biological weapons program, and, I don't really like using the term, but I think it's appropriate in many ways, it was laughable, even though we're talking about an attempt to kill lots of people. At one point they were trying to make botulinum toxin, an extremely deadly toxin that kills at the scale of a handful of micrograms or milligrams, extremely, extremely small amounts.

One of their folks actually fell into a vat of what they thought was the botulinum toxin they were producing, and the fellow emerged completely unharmed. Given how little it takes to kill, that suggests they really did not do a very good job of actually cultivating and developing it. And these were fairly well-resourced folks: estimates suggest they had somewhere in the neighborhood of a billion dollars' worth of resources. They were also working with former Russian military, mining for uranium in Australia, and doing all sorts of odd things.

But the point is, when we talk about the simpler things, like getting large numbers of quadcopters and getting them to work together, I think that is a real concern from a proliferation perspective. When we're talking about something truly massive, million-scale, working across multiple domains, I think that proliferates to some of the more advanced states, like China, Russia, the United States, maybe even Turkey and some of the middling powers, but maybe not necessarily to an Armenia or Azerbaijan or something like that.

Stuart Russell: Yeah, that's why I specifically talked about proliferation of swarm weapons, by which I mean quadcopters or fixed-wing aircraft: large numbers of small devices that can spread in the same way that Kalashnikovs have spread. And on the subject of biological and chemical weapons, I think it's important to point out that one of the reasons it's been difficult for non-state actors is that there are no companies in the chemical and biological weapons business manufacturing those weapons and selling them, whereas there already are companies doing that with lethal autonomous weapons. And unless we do something, there will be a lot more, and they'll be manufactured in much larger quantities. Inevitably, as with small arms, those will filter out into the wider world.

Lucas Perry: All right. So there's a range of places where we can see these systems beginning to emerge. There are various versions of autonomous weapons systems increasingly in the military, so you talked about vehicles, or tanks, or particular kinds of drones, and these are their own form of autonomous weapon. In terms of swarms, there are collective drone swarms, which can range from wingspans of, Stuart was saying, something like 11 feet, down to commercial quadcopters and smaller. And Stuart is suggesting that the increasing miniaturization of these quadcopter-like things is what potentially might look something like the Kalashnikov of tomorrow: those kinds of systems could proliferate because they're sufficiently cheap, and there's this scalability aspect to the capacity to kill people.

So the WMD aspect is due to the scalability, like how Google's computation scales with its hardware. The scalability comes from both the hardware and the software, the autonomy, which takes human decision-making out of the system. So before we get into more of the particular risks, I'm curious whether both of you could paint a little more of a concrete picture of the current state of production. You've mentioned that there are companies investing in creating these systems, unlike with chemical and biological weapons. So what is the state of industry in the creation of autonomous weapons? And is there any more clarity you can add on the extent to which global superpowers are pursuing these weapons?

Stuart Russell: I think this is a great pair of questions, because the technology itself, from the point of view of AI, is entirely feasible. When the Russian ambassador made the remark that these things are 20 or 30 years off in the future, I responded that, with three good grad students and possibly the help of a couple of my robotics colleagues, it would be a term project to build a weapon that could come into the United Nations building, find the Russian ambassador, and deliver a package to him.

Lucas Perry: So you think that would take you eight months to do?

Stuart Russell: Less, a term project.

Lucas Perry: Oh, a single term? I thought you said two terms.

Stuart Russell: Six to eight weeks. All the pieces exist: we have demonstrated the ability of a quadcopter to fly into a building and explore it while building a map as it goes, face recognition, body tracking. You can buy a Skydio drone, which you basically key to your face and body, and then it follows you around making a movie of you as you surf in the ocean or hang glide or whatever it is you want to do. So in some sense I almost wonder why the publicly known technology, at least, is not further advanced than it is. I mentioned the Harpy and the Kargu, and there are a few others; there's a Chinese weapon called the Blowfish, which is a small helicopter with a machine gun mounted on it. These are real physical things that you can buy, but I'm not aware that they're able to function as a cohesive tactical unit in large numbers.

Yeah, as a swarm of 10,000? I don't think we've seen demonstrations of that capability. We've seen demonstrations of 50, 100, I think 250 in one of the recent US demonstrations, but with relatively simple tactical and strategic decision-making, really just showing the capability to deploy them and have them function in formations, for example. But on the tactical and strategic decision-making side, when you look at the progress of AI in video games such as Dota and StarCraft and others, these systems are already beating professional human gamers at managing and deploying fleets of hundreds or thousands of units in long drawn-out struggles. So you put those two technologies together: the physical platforms, and the tactical and strategic decision-making and communication among the units.

Stuart Russell: It seems to me that if there were a Manhattan Project-style effort, where you invested the resources and also brought in the scientific and technical talent required, then certainly in less than two years you could be deploying exactly the kind of mass swarm weapons that we're so concerned about. And those kinds of projects could start at any moment, or they may already have started, and lead to these kinds of really problematic weapon systems very quickly.

Zachary Kallenborn: I think that's right. I have a few data points to add. The most robust example I've seen, I think, is the Perdix swarm: they launched 103 drones out of an F/A-18 back in 2015 or 2016. I think that's the one you were alluding to, where they formed up, traveled across a battlefield, and then did some other formation. What always struck me as really interesting about that is that the system was actually designed by MIT engineering students. Those are probably some of the most brilliant engineering students in the entire world, but they are students when it comes down to it. Sorry, what?

Stuart Russell: Second most brilliant.

Zachary Kallenborn: Okay, yeah, fair, fair, fair. But certainly up at the top, anyway, I think we can agree on that. But they were students; this isn't some huge skunkworks project that got thrown $100,000,000 with a bunch of high-level military people who've been making weapons for 30 years. These are students, and I think that's really important. Even though we haven't really seen too many examples of this, it's very clear that a number of states have expressed very real interest and are moving towards it. As I mentioned previously, there was the Indian swarm of 75 drones. I'm not convinced it was actually a swarm, or even all that autonomous, but it does show they were very clearly interested in it. Recently Spain also announced it is working towards a swarm, not an armed swarm, but one for intelligence purposes. I know Turkey has quite a few programs in this area. And the US military has announced several swarming-related programs, looking, for example, at having Marine infantry units that can control a really small group of drones.

Yeah, I do find it interesting as well that we haven't seen it yet. In fact, I have a little Google search alert for any drone-swarm-related news, and I've had a few Reddit threads pop up from current master's students, or it may even be at the bachelor's level, saying, 'I'm working on a swarm project as my term project.' Hopefully not to kill someone, but they're working on it at a very low level and already figuring out how these things work.

What was I going to say? I was going to go in a particular direction and I forgot what it was.

Yeah, so from a proliferation perspective, absolutely. And I think that also creates all sorts of challenges, particularly because most of this comes down to the algorithms and the computer control systems that allow you to get all of these drones to collaborate, which makes it much easier to spread.

Oh yeah, that's what it was. What strikes me is that I think part of the reason this has been somewhat slow is that adversarial groups are often fairly slow to adopt new technology, and they do so largely in response to the conditions set by security forces. As I said, early remote-controlled planes have been around since the fifties or sixties, but the earliest example I could find of any terrorist group actually being interested in them was Aum Shinrikyo, in 1995, so 20 or 30 years later. And you never really saw major interest until most recently, with ISIS, in 2013 I believe, during the battle of Mosul, where they used 300-some-odd drones in various missions during a single month.

So I think what's different now is that you're starting to get much greater awareness of it, and in part that's a response to some of the security apparatuses that developed after 9/11 and some of the responses to terrorism. States have started to recognize that we need to worry about ground-based attacks with vehicle-borne explosives and things like that, and so our security measures have been designed around them. I live in DC, and you can very easily tell what a government building is because they always have bollards, big concrete things, all around them, so that if someone tries to blow up the entranceway with an IED, they can't get to it.

Now, when we talk about drones and some of these quadcopters and things like that, the advantage is that you can just fly over many of these ground-based defenses; they become irrelevant. So terrorist actors who would previously have been using vehicle-borne explosives can now shift to a new posture. Even though there are some costs, these drones carry smaller explosives, so they're not as useful as truck bombs that may have 50 or 100 pounds of explosives, they can get access to targets that they couldn't previously reach, things that are high up in the air, like radar installations, or potentially other types of sensitive targets that may be of concern.

Stuart Russell: I think the other thing that's changed is that the early remotely controlled planes the military worked with were large and very expensive, and that's true of the Predator drones and Global Hawks and so on. These are big things that are not easy for a non-state actor to acquire clandestinely and operate, because you need quite a lot of technical expertise; I've heard up to 15 people per drone are required for launch, recovery, health maintenance, monitoring, targeting, navigation, et cetera. But when you've got small drones the size of a pizza box, whether fixed-wing or quadcopter, it's much more within the capability of a small organization to field them, and they've been pretty effective, I think, in this kind of asymmetric warfare, because infantry tactics just haven't adjusted to the idea that at any moment something can arrive out of the sky and blow your head off. That's going to change very rapidly, and we might see battlefields that are almost devoid of human beings, because there's just no way for a human being to survive in that kind of environment.

So one possible approach to this risk of proliferation and use by non-state actors would be a selective ban: for the tanks and the submarines, the major powers are going to do whatever they want, but the small anti-personnel weapons, those are the ones we need to control, because those are the ones that will proliferate and be used for genocide and you name it. And there's an interesting precedent for this in a 19th-century treaty called the St. Petersburg agreement, which actually banned explosive ordnance below, I believe, 400 grams of explosive. So you could have large artillery shells that blow up buildings, but you couldn't have small explosive bullets mainly designed to blow holes in people.

And that was partly for humanitarian reasons: 'If I'm going to shoot you, then you're out of combat, and I don't need to blow a hole in you as well.' So for whatever reason, there's a precedent for the idea that we ban weapons below a certain size because of the potential negative consequences of their use.

And that seems to me a workable version of the approach. It's sort of realpolitik, right? If you believe we can't achieve a blanket global ban on lethal autonomous weapons because of the opposition of major powers like the United States and Russia, then perhaps, in their own self-interest, and I keep coming back to this idea that it's in the self-interest of countries not to want to face these weapons, either on the battlefield or in their own cities, we could put a ban on weapons below a certain size. It might be a five-kilogram or twenty-kilogram payload, or whatever it might be. And if that's what we have to do in order to get agreement from the major powers to move forward, then perhaps that's a tactic that might in the long run be more effective than insisting on a total ban.

Zachary Kallenborn: I think that's right, and I would go a bit farther than that, because I think the risks go beyond just proliferation when it comes to the broader military powers. A lot of the more military-oriented literature looking at the significance of drone swarming specifically, as opposed to swarming as a tactic, tends to focus on the threat it poses to existing weapons systems and large platforms. The United States military has been heavily invested, for I don't know how long, but certainly decades at least, in these really big, exquisite, expensive platforms, like the F-35 and the F-22, things that cost hundreds of millions of dollars to make. But if you can get a whole bunch of cheap weapons that can potentially cause considerable harm to, or even just disable, one of those, that creates a huge problem for the US military, as well as for militaries in general, because certainly others have invested in such platforms as well. So I think there's even potentially a military argument to be made that these present pretty serious concerns.

There was actually a really interesting study over at the Naval Postgraduate School, done by students, I think master's students, who are all current military folks. They did some mathematical modeling: if you had, I think it was, eight drones attacking a US destroyer as it was currently equipped at the time, a few years ago, about four of them would typically get through its defenses. And they were looking at, I think, four of the Israeli Harops and four commercial off-the-shelf type drones.

And you can think about all sorts of situations where that would be pretty terrifying from a military perspective, even if you're talking about some of these really cheap drones. Consider what happens if a bunch of drones fly into, say, the engine of a really expensive fighter jet or something of that sort. Even the concern about that could be really impactful, because it potentially restricts access to an airfield: if you have 500 cheap quadcopters flying around, you may lose the ability to fly planes out of there.

And I would add, on the point about restricting the spread, that there are other precedents as well, both in the US and internationally, when it comes to the spread of weapons to non-state actors. The big one internationally is UN Resolution 1540. I'm not a huge expert on it, but I understand the gist is that it mandates that states pass laws to restrict people within their country from getting access to chemical, biological, radiological, and nuclear weapons. That same sort of logic could apply very well here. That's partially why I think the framing of weapons of mass destruction is useful, to start connecting drone swarms with some of the existing weapons we already think about in this way, because the solution sets are going to be very similar. So I think modifying something like UN Resolution 1540 to incorporate these weapons systems is very possible, and it could even happen at the domestic level.

As I understand it, US law around weapons of mass destruction terrorism takes a much more expansive view than how militaries generally think about weapons of mass destruction: the FBI and other folks who focus on this consider explosives to fall under the scope of weapons of mass destruction, in part because, when we're talking about non-state actors, even relatively limited capabilities could cause pretty significant harm if they were used to assassinate a head of state or blow up a chemical facility or an office building or something like that. So I think some of these mechanisms absolutely exist and can be leveraged to great success.

Stuart Russell: I know that you've argued in some of your papers that drone swarms should be characterized as weapons of mass destruction. Is there pushback? I've come across the view that weapon of mass destruction equals CBRN: chemical, biological, radiological, nuclear. And possibly because of the various legal consequences of expanding that definition, I've come across a lot of resistance to characterizing drone swarms that way.

Zachary Kallenborn: I think so, to an extent, and for a handful of reasons. 'Weapons of mass destruction' is a term that I think has occasionally been abused for political reasons, both within bureaucracies and in higher-level politics. Certainly we can talk about opinions on the Iraq War, and very clearly the claim about weapons of mass destruction was a big part of that. So the term starts fitting in with all of the politics that went into that conflict, on which I have a lot of views that I won't go into too much, but I think there's some of that coming along with it. And yes, in more recent years weapons of mass destruction have typically been associated with chemical, biological, radiological, and nuclear weapons.

But historically, if you look at where 'weapons of mass destruction' as a term came from, that was never the case. It originally came from, I think, 1954, I can't remember the exact year, but around then, from some discussions at, I believe, the United Nations, where they established what researchers who have very seriously and exhaustively studied weapons of mass destruction consider to be the most authoritative definition: weapons with capabilities like those of chemical, biological, and nuclear weapons, but not necessarily limited to them. In fact, in many of their discussions they argued that we have no reason to believe another weapons system will never emerge that is comparable and deserves the same sort of consideration; at the moment, it just happens to be these weapons. So I think that's part of it: there's some inertia, a sense that we've never really thought about expanding this. But among the folks I know who think more seriously about the term and what it means, there seems to be some interest.

So recently the National Defense University, their Center for Weapons of Mass Destruction Studies, or Center for the Study of Weapons of Mass Destruction, I can't remember, and I apologize to those folks if they're listening for not getting their name right, published a really lengthy study looking at the future of countering weapons of mass destruction. They included a pretty big section on unmanned aerial systems, and my argument that this could be a weapon of mass destruction is a pretty major focus of that. I don't believe they explicitly said "we agree with the conclusion," but the fact that they believe it's at least plausible enough to engage with is, I think, a persuasive sign that there is some interest. Going beyond that, I think there are also bureaucratic questions about what exactly this means, or not bureaucratic, but broader political issues about what we focus on.

Some of the pushback I've gotten argues, no, weapons of mass destruction are really just nuclear weapons, and drone swarms don't reach that scale. I don't really dispute that claim. At extreme scales, if we're talking about a million drones, a drone swarm could match the Nagasaki or Hiroshima nuclear weapons, or I should say Fat Man and Little Boy, since those were the weapons, in level of harm. But I find it hard to believe you would start seeing something like Tsar Bomba, the roughly 50-megaton nuclear weapon the Soviet Union detonated in the early 1960s; I can't imagine a swarm reaching that capability. And from a broader risk perspective, it makes sense that nuclear weapons should still be more important than drone swarms. But I think drone swarms are nonetheless important because of all the risks we've discussed.

So I think the idea is nascent, as far as I've seen, and I haven't been very convinced by any of the pushback I've gotten. I know there was that discussion you and Paul Scharre had in IEEE, which was interesting, but personally I didn't find any of the counter-arguments all that convincing.

Stuart Russell: Yeah. And there are some characteristics of drone swarms such that, although I agree the physical impact of Tsar Bomba is not likely to be matched, the geographical scale certainly could be. And as for the naive vision of what war is about, which is to kill as many people as possible, soldiers will tell you, no, that's not the purpose of war at all. The purpose of war is to get another entity to do what you want. And if you can do that without killing any of them, even better.

Zachary Kallenborn: I think they'll immediately quote Clausewitz at you, that war is politics by other means, but yeah, go ahead.

Stuart Russell: Yeah. So a large drone swarm could certainly control and threaten an entire metropolitan area the size of New York without necessarily killing everybody, killing only some small fraction of selected targets as a demonstration of power, and then simply remaining in residence as a way of controlling the rest of the population. So it can have the same scale of impact. And I think the fact that it doesn't destroy all the buildings is an advantage. There's not much value to anybody in a large smoking radioactive crater, but an intact city and an intact industrial infrastructure that is now under your physical control is in fact one of the ideal outcomes of warfare.

Lucas Perry: All right. So I'd like to pivot here into particular risky aspects of autonomous weapons, moving through them sequentially and probably spending about five to eight minutes on each one, so we'll have to be a little more brief with each of these sections to make it through all of them. Let's start with the inherent unpredictability of lethal autonomous weapons systems, both individually and collectively: if you have swarms interacting with swarms, or autonomous weapons systems operating in relation to other countries' autonomous weapons systems that they're not familiar with, whose behavior may lead to catastrophic outcomes on either side. Can you speak a little to this inherent unpredictability, especially in multi-agent scenarios, where multiple autonomous agents on one side interact with multiple autonomous agents from another side?

Stuart Russell: So I think this comes back to a caveat I made earlier: sometimes it does matter how the system is built, and whether it's using machine learning in particular, because a system that follows a fixed algorithmic decision-making approach is vulnerable to countermeasures; the other side adapts to that fixed strategy and finds a way to overcome it. To give you an example, if you're playing rock-paper-scissors and you just play rock every time, then the other side figures out that you're playing rock every time, plays paper every time, and you lose. So in these kinds of adversarial settings, the ability to adapt, to devise defeating strategies against the opponent, is crucial to survival; if you don't do that, you'll be exploited. And that creates, as you say, this escalating unpredictability, because you don't know what the adversary's strategy is going to evolve into, and therefore you don't know what your own strategy is going to evolve into to counter it. So it's actually extremely hard to model what could happen when two large heterogeneous groups of agents are engaged in a struggle with each other. And we've seen this, of course, in human warfare in the past: what was happening by the end of a war often didn't look very much like what was anticipated at the beginning.
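As an illustration of the adaptation dynamic Stuart describes, here is a minimal sketch in Python (purely illustrative, not anything built or discussed in the episode): a fixed "always rock" player against a hypothetical adaptive opponent that simply counters whatever move it has seen most often.

```python
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def fixed_player(_history):
    # A fixed strategy: always play rock, regardless of the opponent.
    return "rock"

def adaptive_player(history):
    # Counter the opponent's most frequent past move; play randomly at first.
    if not history:
        return random.choice(list(BEATS))
    most_common_move = Counter(history).most_common(1)[0][0]
    return BEATS[most_common_move]

def play(rounds=1000):
    seen = []            # the adaptive player's record of the fixed player's moves
    adaptive_wins = 0
    for _ in range(rounds):
        fixed_move = fixed_player(None)
        adaptive_move = adaptive_player(seen)
        seen.append(fixed_move)
        if adaptive_move == BEATS[fixed_move]:
            adaptive_wins += 1
    return adaptive_wins

print(play(), "wins out of 1000 for the adaptive player")  # 999 or 1000: the fixed strategy is fully exploited
```

Once both sides adapt in this way, the trajectory of play, and by analogy an engagement between two learning swarms, becomes very hard to predict in advance, which is the unpredictability being discussed here.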

Zachary Kallenborn: I think that's right. And I think we already know, to an extent, that this is a problem, at least for swarms. There was some work done at the Naval Postgraduate School by Tim Chung, who now works at DARPA. He ran a 50-versus-50 swarm fight, and he set it up with all homogeneous drones, just quadcopter-type things, so not as much complexity, basically to see, in a very controlled way, what factors influence outcomes in these sorts of engagements. As I recall, the key issue was speed: not only the physical speed, the velocity at which they're actually moving, but the decision-making speed, where you have 50 different drones that need to quickly recognize and respond to all of these different targets.

And that ends up being, in an algorithmic sense, a matter of how quickly they can process and detail information. That obviously creates a problem when we talk about error and risk, because the faster you're making decisions, as we know just from being human, the more errors tend to creep in. There's probably a thoughtful reference we could make to Kahneman's System 1 and System 2 type thinking, I can't remember the exact details, but folks get the point: fast decisions tend to create more errors, and as things get larger and interact together, those risks amplify.
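Here is a toy sketch in Python of the speed/accuracy tradeoff Zak describes (the accuracy figure is an assumption for illustration, not from any real system): a classifier that aggregates fewer noisy sensor readings per target decides faster but misclassifies more often, so pressure to out-pace an opposing swarm pushes error rates up.

```python
import random

def decide(true_label, looks, per_look_accuracy=0.8):
    # Majority vote over `looks` independent noisy readings of one target.
    correct_votes = sum(random.random() < per_look_accuracy for _ in range(looks))
    return true_label if correct_votes > looks / 2 else not true_label

def error_rate(looks, trials=100_000):
    # Fraction of targets whose true label (here, True) is misclassified.
    wrong = sum(not decide(True, looks) for _ in range(trials))
    return wrong / trials

for looks in (1, 3, 7, 15):
    print(f"{looks:>2} looks per decision -> error rate ~{error_rate(looks):.3f}")
# Fewer looks means faster decisions but more mistakes; a swarm optimized purely
# for decision speed sits at the fast, error-prone end of this tradeoff.
```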

Lucas Perry: Yeah, right. This is making me think of OODA loops, which I know from the Center for Applied Rationality; they take the idea from jet pilots. The way they frame dogfighting, whoever can make the most decision updates per second in the dogfight is the one who's going to win. So with drones, there's both this aspect of speeding up the cycle at which the swarm is making decisions in order to outmaneuver the enemy swarm, and then the factor of that speed increasing the likelihood of error.

Zachary Kallenborn: Yes, as well as expanding that over multiple different agents. Because when we're talking about drone swarms, we're talking about potentially 50 to 100 thousand agents. So it becomes even more complicated, because you now have a hundred thousand, or however many, points where that decision process could get messed up.
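A toy calculation, with numbers invented here purely for illustration, helps show why Kallenborn's point about scale matters: even a small per-decision error rate becomes a near-certainty of some error once thousands of agents are deciding at once.

```python
# Hypothetical figures for illustration only: the per-decision error rate and
# swarm sizes are not from the conversation or any real system.

def p_any_error(per_agent_error: float, n_agents: int) -> float:
    """Probability that at least one agent errs, assuming independent
    decisions (a simplifying assumption)."""
    return 1.0 - (1.0 - per_agent_error) ** n_agents

for n in (50, 1_000, 10_000, 100_000):
    print(f"{n:>7} agents: {p_any_error(0.001, n):.3f}")
# A 0.1% error rate per decision gives roughly a 5% chance of an error at
# 50 agents, but an error is essentially certain by tens of thousands.
```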

Lucas Perry: All right. So let's get into escalation risks then, because this seems pretty related: these systems are inherently unpredictable, that unpredictability rises as you have multiple agents interacting with multiple agents, and there's this race towards faster and faster decision-making in order to gain a strategic edge, say in drone conflicts. So what kind of perspective can you guys offer on escalation risks at borders between countries, or between autonomous weapon systems, that could lead to both accidental and intentional risky outcomes? Especially since, as more powerful technologies proliferate through the 21st century, the capacity to do damage grows alongside this increasing capacity for escalation.

Zachary Kallenborn: Okay. Yeah, when it comes to escalation risks, the challenge is: what happens if this swarm gets out of control, or even just an element of that swarm, or any type of lethal autonomous weapon system, and it kills an actor that wasn't intended and who happens to be particularly important? When I think about escalation, I often jump back to World War One, where the assassination of... my head is always really bad for history. I can never remember names and dates.

Lucas Perry: Archduke Ferdinand.

Zachary Kallenborn: Yes, thank you. There we go, yeah. The assassination ended up setting off a chain of events that led to a much broader conflict. So depending on the context in which one of these weapons is used and how it's used, you potentially have pretty major problems. What happens if some high-level military person or political leader is coming through an area and the drone happens to go out of control and attacks their convoy, or something like that? That could be huge. Hopefully they have the intelligence and wherewithal to know, let's not drive by an active conflict zone, but nonetheless people make mistakes and drones go out of control. So I think there are very real risks there. And I think there are also escalation risks in other ways. One of the things I found really interesting when I was looking at the conflict between Armenia and Azerbaijan is that the Azeri drones were reportedly, and I'm not a hundred percent confident this is true, but at least it seems possible,

actually run by Turkish military folks who were flying the drones. What that means is that if the control system for the drones were attacked and blown up, Turkey would now be involved in the conflict in a more direct way. And if you apply that scenario to other situations where two states are fighting each other, one is using drones, and the other attacks some of the ground control stations used to manage that larger, complex swarm, you have all sorts of concerns about cascading effects, where you're potentially pulling in other military powers, which creates bigger problems of instability. And even if the US and other countries don't have stakes in that particular conflict, there may be some level of obligation to get involved to resolve issues. That in and of itself is an expenditure of resources and soft power, along with a variety of other potential costs, beyond the obvious one that we don't want more people to die.

Stuart Russell: I think there's a potential for escalation risks even before a conflict begins. We've obviously seen this in the Cold War with false alarms about nuclear attacks. I imagine they happened on both sides, but we're aware of several times when Soviet nuclear missiles were almost launched in response to what they imagined to be US attacks; because there was at least some human in the loop, those were averted. And the difficulty is that, once you have autonomous weapons capabilities, war can move so much faster that it might not be practical to have humans in the loop coordinating defensive responses, or even deciding that a defensive response is needed, because by the time the human has been woken up and dragged out of bed and informed about what's going on, it's too late: you've already lost the war, 75% of your installations have been wiped out.

And so there's always a temptation to say, because of this accelerating OODA loop, we have to have an autonomous response capability against attacks. The problem is, yes, you respond autonomously to attacks, but you will also respond autonomously to things that are perceived to be attacks even when they aren't. So any error in the detection and classification of potential attacks could lead to an autonomous response, which would be a real response, which would then lead to an accelerated response from the other side. And this is something the US military have explicitly expressed concern about.

And in recent discussions between the US and China that were reported in a joint article by Fu Ying and John Allen, they talked about this as one area where everyone agrees we would like to put protocols in place that will prevent this kind of escalation. I think, in this area of international negotiations over autonomous weapons, anything that we can agree on and make progress on would be beneficial, because it sets a precedent. It creates those channels of communication. It builds trust between the technologists working within each of the military hierarchies and could potentially expand to greater levels of agreement.

Zachary Kallenborn: I'll add just very briefly that how those incentives play out could create all sorts of interesting and potentially harmful dynamics. If you keep humans at some level in the loop, whether over ethical concerns or just in some minor role, those humans also potentially become a target for other militaries, because if you kill the humans, you potentially cause much greater harm. These are folks who have been trained for a long time, and often what the public and the politics resonate with is, oh, that's my son or my daughter who was killed.

So you have more of a strategic effect, but that creates interesting incentives: okay, if the humans are going to be targeted, then we need to put them in defensible and very carefully guarded places, which may be in the homeland somewhere. But that then creates an incentive for the adversary to say, well, let's strike the humans where they are. So instead of having a bunch of drones battle in the middle of the ocean or something like that, let's just strike directly at the homeland. That could be particularly problematic, depending on how those incentives evolve.

Stuart Russell: Yeah. I think it's worth mentioning that many people ask, well, wouldn't it be great if the drones fight it out and nobody has to die?

Zachary Kallenborn: Yeah.

Stuart Russell: And I think this is just a misunderstanding, because you could also ask, well, why don't we just play baseball to decide who wins the war? We could do that too, but we don't, because just because I am losing a game of baseball doesn't mean I'm going to pay you eternal tribute and let you enslave my population. I'm only going to let you enslave my population if I have no physical choice, because you have destroyed my defenses and begun killing my population and my government hierarchy and my military hierarchy. So it's only when unsustainable costs are imposed that one side is going to surrender. There is no circumstance that I can really imagine in which there could be a bloodless war.

Lucas Perry: That's a really good point, I appreciate that. This is a good point then, Stuart, about there never being a bloodless war. So there's the risk of these weapons lowering the barrier to conflict because of an increased willingness to engage: autonomous weapons systems destroying each other isn't the story of sons and daughters coming home in body bags and caskets, and there's the extent to which that stigma conditions and controls behavior. So if only machines are on the line at the edges of conflict, rather than people, it seems like there's this risk of lowering the barrier to conflict, which still leads to an escalation of real conflict. Do you guys have any perspective on this?

Zachary Kallenborn: Yeah, I've heard those arguments. Personally, I've never found them overly persuasive. That was more of an intuition than anything all that concrete, but I did see a study that came out recently finding that states that have used drones have generally tended to be more careful when using them, which sort of rationalizes my own intuitive sense. What makes sense to me is that militaries generally are going to be very cognizant of escalation issues, deliberate escalation issues, I should say.

The folks that I follow who are think tank policy wonks are constantly talking about controlling the ladder of escalation when a conflict emerges, and I think that's something states are already cognizant of. So to me, outside of error, which I do think is an issue where you lose that control of the escalation ladder, I'm not all that convinced that lethal autonomous weapons would have a meaningful effect. There would probably need to be some discussions at the Pentagon: okay, how are we going to use this? Here's where it's going to be useful, here's where it's not, and here's some of the care we want to take, at least from an escalation perspective.

Lucas Perry: Sorry, so there might be a distinction here then: perhaps the risk of escalation is increased if it's accidental escalation, but remains the same for intentional escalation.

Zachary Kallenborn: I think so, yeah.

Stuart Russell: Yeah, I think it also depends on the political environment. For example, right now the US is operating drones in regions of Pakistan. There's no way we would be deploying US troops on the ground in Pakistan; if we didn't have drones, we would not be involved in military conflict in Pakistan right now. But it hasn't led to major escalation because the Pakistani government, and it's a very complicated situation, is unofficially tolerating it. They don't view it as an attack on their own government and regime. And so it hasn't escalated, even after several years of operations.

Zachary Kallenborn: Right.

Stuart Russell: But you could imagine, in another circumstance, that the same level of military activity using drones would lead to a direct response against the homeland of the United States. Then things would probably get out of hand very quickly.

Lucas Perry: So they also function to increase the willingness of superpowers to engage in conflict abroad, where it would be too politically or socially expensive to actually have boots on the ground. You can use autonomous weapons systems as a means of lowering the cost of entering into conflict.

Stuart Russell: I think it lowers the cost of entry, but it also means the conflict can be maintained at a low temperature. Whereas if you put troops there and troops start getting killed, that tends to raise the temperature and raise the stakes, and no government wants to be seen retreating from a battle because a few of their soldiers have died, and so on and so forth. With autonomous weapons, you can expand or contract your level of activity without serious political consequences at home.

So I think there are a lot of complicated effects going on here, and I agree with Zach that it's too simplistic just to say it lowers the threshold for war, because war itself is not a Boolean thing. It's not either-or; it's actually a whole continuum of activities.

Lucas Perry: All right. So we touched on this earlier when we were talking about the miniaturization of the hardware and the proliferation of the software. Stuart also mentioned how it would only take him and a few grad students a term to make something that could take people out at the UN. So there's also this democratization and proliferation of the knowledge with which to make these systems. For example, I think that when CRISPR was discovered, it only took something like a year to go from being at the vanguard of science to being something that could be taught in university classrooms.

So I assume we can expect a similar evolution with autonomous weapons, as both the hardware and the algorithms required to make really serious forms of them become cheaper and more widespread. What are your perspectives on the proliferation risks of these very high-risk swarm versions of autonomous weapons systems? Not just the kinds of systems that militaries might be increasingly adopting, but these smaller, cheaper forms, and the associated risks of the diffusion of the human knowledge and talent needed to build these systems.

Stuart Russell: I think I want to separate two things, because many people mix them up and say, okay, if the knowledge becomes publicly available, then the weapons become publicly available, and therefore trying to ban them or control them is completely pointless. You only have to look at chemical weapons to see that that's not the case: the reactions you need to make chemical weapons are things that can be done in a high school chemistry classroom. But because we have a treaty that includes the engagement of industry in monitoring the production and sale of precursor chemicals, and because we have a stigma and very strict national laws, we have not seen wide-scale proliferation of chemical weapons. The same with biological weapons, even though that treaty lacks any verification component, and that's being fixed right now. I think because of things like CRISPR, people will see a more urgent need to get oversight of industry and precursors.

Lucas Perry: Can I ask you a question about that?

Stuart Russell: Sure, yeah. I just wanted to say that we could imagine a situation where we might wish these things were restricted to the US military or other, quote, "serious countries." But if there is large-scale manufacturing, there will be large-scale manufacturing for sale to anybody who has the money to buy them. So the risk there is not that people will find the software on the web and then go out and buy a bunch of toy drones and fix them up in their garage. The risk is that they will just go to the arms markets, the same place they buy Kalashnikovs by the boatload, and buy lethal autonomous weapons by the boatload. So the point is to change the entire system from one of widespread, large-scale manufacturing availability to one where there are serious legal and practical restrictions. That would make it much, much more difficult for large-scale events to take place.

Lucas Perry: Sorry, Zach, I'll bring you in in just a second. I just want to ask Stuart a quick question about that. In terms of chemical weapons, you're saying that they only require high-school-level chemical reactions to manufacture, but that the reason we don't see, for example, more non-state actors like terrorist organizations effectively deploying them in the United States is the strict regulation and law directed at industry for controlling the precursor chemicals that go into manufacturing chemical weapons.

So I feel confused; maybe you can help explain to me why it's different with autonomous weapons. It seems to me that the proliferation of the hardware is just a fact, and you couldn't necessarily regulate that. And as for the algorithmic components that go into building the thing in your garage, there's facial recognition, there's 3D environment mapping, there's moving through environments, and then perhaps getting some kind of payload onto it would be the most difficult part. But it seems to me that it would be easier to make a lethal autonomous weapon in my garage, if I had an undergraduate degree in computer science and mechanical engineering, than it would be to figure out how to make a chemical weapon.

Stuart Russell: Okay, well, first of all, I think I would actually disagree with that as a practical matter, but we're not talking about making one. We're talking about making 100,000.

Lucas Perry: A swarm of them, yeah.

Stuart Russell: And it's the same issue with chemical weapons. You can get small amounts of pool chemicals, which include chlorine and so on, and make a crude chemical weapon. That has happened; some of the things the Syrian government did involved chlorine, which is relatively easy to get hold of. But even there, the Syrian government had to create multiple shell companies and use all kinds of subterfuge to obtain the materials they needed to scale up their chemical weapons program. So I don't think you can prevent people from making weapons on the scale of tens-

Lucas Perry: I see.

Stuart Russell: and using them for terrorist attacks, and I think we will see that as a result. So if you say the typical terrorist attack is a small truck bomb that kills 50 to 100 people in a market in Baghdad or wherever, I think you will see the scale of that go up by a factor of 10. Because it'll be-

Lucas Perry: With autonomous weapons.

Stuart Russell: it'll be an attack with 20 or 30 devices carrying shrapnel bombs that explode above a soccer stadium or something. That would obviously be terrible, but the attack that wipes out an entire city or an entire ethnic group could only happen if you allow large-scale manufacturing. And so you need industry controls, as with chemical weapons: there are things on the list, and anyone who wants to buy things on the list has to be vetted, and there has to be accounting for the quantities, the purpose of use, non-diversion, and so on and so forth. You could do the same thing with physical platforms. I don't think there's much you can do about the software; people will write that software, it'll get there. But I think you look at the physical platforms as the place where you can start to impose some control.

Zachary Kallenborn: Yeah, I want to jump in on that. That shares many of my thoughts, in that regulation is probably going to be a big part of it. But I think it goes a bit beyond that, to broader awareness and training within law enforcement. Because it strikes me that, in some sense, the problems aren't really that different from existing ones. If I'm a terrorist and I want to kill someone, I don't even need to get a bunch of chlorine; I can just go buy a knife from Walmart for 50 bucks or however much it costs and stab someone if I really wanted to. And of course we saw ISIS do exactly that, or simply drive a car through a crowded area, which is horrible, but it's a pretty cheap way to cause pretty significant harm.

And responding to that is, in some sense, the same issue: it's law enforcement awareness, collecting intelligence about these organizations, understanding how they operate, and knowing who the folks are that we need to worry about from an ideological perspective. When I say ideological, I don't necessarily mean whether they're religious or right or left or cults or something like that; I particularly mean groups that are interested in violence and that tend to have really maximalist views. That is, if I'm an organization that wants to overthrow the existing world order and establish something new, in order to do that I necessarily have to take pretty significant action. So identifying those types of organizations, collecting intelligence about them, and monitoring what's going on, as well as some of these sales, I think could be really effective, because at the end of the day, drone-based terrorism, swarm-based terrorism, and LAWS-based terrorism are really just a subset of all terrorist attacks.

If you don't have a terrorist organization to support it, then necessarily you won't be able to support those tactics. And it's important, I think, to recognize that terrorist organizations are organizations of people, and they run into the same types of challenges that people run into. Recall there was an Al-Qaeda member who turned state's witness and testified against Al-Qaeda about a bombing in Kenya because he was really pissed off that Al-Qaeda didn't pay for his wife's C-section. It's ridiculous, but at the end of the day they are human beings and they run into the same challenges. Which means that if you break apart, intercept, and engage with these organizations and the folks who are responsible, then even if the technology is available, you make it much more difficult for them to realize that potential. Because any organization, even a relatively simple one, still needs the resources and the people who are excited and willing to take that action.

Lucas Perry: All right. We've got about 15 minutes here, so let's go into a bit of a rapid-fire portion now and keep answers pretty short. Stuart, if I understand you correctly, when we're talking about swarms, what really unlocks that is industry-level production capability, and that's why regulation and monitoring at the industrial level are going to be important. What seems relevant to me about the proliferation of the hardware, knowledge, and talent for smaller-scale versions is the risk of assassination. So with swarms you have this weapon-of-mass-destruction problem, but with the proliferation of smaller, cheaper weapons there's an increase in the risk of assassination, which is also destabilizing and high-risk.

Do either of you have a few sentences on the increased risk of political assassination? Or you wouldn't even... For example, there was some hacking at a Florida water treatment plant recently. You don't only need the weapons to directly target people; they could deliver other kinds of things to water treatment facilities or other locations that could cause mass harm. And then you wouldn't really need a swarm, just maybe the high school chemistry plus a few dozen autonomous weapons. Do either of you have perspectives on this?

Stuart Russell: I think the assassination risk is very real, and I know that the US government is very concerned. They have, at least since Kennedy if not before, gone to great lengths to protect political figures from assassination by sweeping buildings, checking all the lines of sight, et cetera, for any public appearance. But of course, with an autonomous weapon, you could be five miles away when launching it, and so they're very concerned about what kinds of defenses you could come up with. But certainly I would much rather be defending against a few amateur homemade weapons than against a large number of high-tech, professionally designed and engineered systems.

Zachary Kallenborn: 100% agreed on that. The challenge with drones, and drone swarms in particular, is fundamentally that issue of mass, where you just keep throwing drone upon drone upon drone. If we're talking about an assassination-type event, even if you have 20 drones and the security forces successfully shoot down 19 of them, if even a single one gets through with a small charge and it kills your head of state, that's a complete failure of what they were trying to protect. And I think that's why you've seen increasing focus and interest among all sorts of law enforcement folks across the globe, for much of that reason: there are some pretty significant harms you can do, not only at water treatment plants. The one that really concerns me is chemical facilities. What happens if you blow up a store of some dangerous chemical that then releases and spreads to a local populace? That would create much wider harm using only a relatively small charge.

Lucas Perry: All right. So again, continuing with the rapid fire, let's talk a little bit about attribution and accountability issues. This is the question of who is responsible, for example, for attacks of all sorts, whether military or otherwise. You guys talked about it recently... how do you pronounce that? The Azerbaijan conflict?

Zachary Kallenborn: Azerbaijan. Yeah.

Lucas Perry: That Turkey was piloting some of the drones. So there's increasing difficulty in understanding, especially for large-scale systems or swarm attacks: who should be held responsible for the system? Who is piloting it? If it malfunctions, is it the operator's fault? Is it industry's or the manufacturer's fault? These kinds of things.

Zachary Kallenborn: I'm going to answer primarily from a security perspective, because I don't really know how the regulatory issues play out if there's a mistake in the system. I think the way to deal with this is through intelligence, law enforcement, and folks on the ground, as well as post-attack digital forensics. One of the things about swarms is that if large amounts of drones are attacking you, you're necessarily going to have quite a few that fall. From an attribution perspective, that creates some really great opportunities, because if you have a fallen drone, you can potentially examine all sorts of things about how it works, where it came from, and so on. And I think from an attribution perspective, one of the key challenges for responders is that they need to look beyond the actual drone itself.

If I were a terrorist attacker trying to use a drone, I would launch the drone from the back of a pickup truck, so that as soon as I launch it, and especially if it's autonomous with some sort of waypoint navigation, I could just send it towards wherever I want to blow it up, get in my pickup truck, and drive away. I'm already gone before the attack even happens. That means law enforcement folks need to look beyond just the immediate threat, to the areas surrounding places that could be attacked, to understand what's going on, who's around, and be prepared for whatever comes.

Lucas Perry: Nothing to add, Stuart?

Stuart Russell: No, I think forensic identification and accountability tracing is a really hard problem. It's also a hard problem for other kinds of non-autonomous weapon systems. You can launch mortar weapons that you buy on the open market, and just because the mortar bomb was made in Vietnam doesn't mean the attack was launched by the Vietnamese. It's the same basic issue with autonomous weapons.

Lucas Perry: All right. Technology is becoming increasingly powerful throughout the 21st century, particularly artificial intelligence, and autonomous weapons represent an early global issue in the governance of AI, and in cooperation and coordination on an emerging and evolving technology that will have tremendous power into the future. So what is your perspective on lethal autonomous weapons as an opportunity for norm setting and as a precedent for trust building in the global governance of AI?

Stuart Russell: I think there's increasing awareness about viewing AI as a virtual arms race; here I'm not talking about physical weapons, but about AI as a technology that would confer global dominance on whoever develops the most capable AI. There's an increasing understanding that that view is in many ways outdated and counterproductive. One way of thinking about it is that if we do develop what some people call human-level AI or artificial general intelligence, that technology would be essentially a source of unlimited wealth for the human race, and for one entity to try to control it would be as pointless as trying to control all the copies of a digital newspaper. Because you can just make more copies, and if someone else has a copy, it doesn't mean you don't have a copy. It's not a zero-sum game. And I think it would change the whole nature and feeling of how nations interact with each other.

We've had this zero-sum mindset for thousands of years: whatever we have, you don't have, and vice versa. That is increasingly invalid for all kinds of reasons, climate change among others. But on AI, this is a technology from which we would all benefit to a much greater extent if it was shared. So if we can start by planting a flag, drawing a line in the sand, and saying, "Okay, let's not get into an arms race on the physical technology," I think that would help. And I'm seeing encouraging signs from several major powers that there are beginnings of an understanding of this at the highest levels.

Zachary Kallenborn: Yeah. Stuart looked to the future, so I suppose I'll look to the past. Humanity has been building weapons pretty much as long as humanity has existed. We started with sharpening rocks and strapping them to sticks. We made bows and arrows, and swords of bronze and iron. We made guns, tanks, fighter jets, and eventually nuclear weapons. And pretty much throughout that history, the fundamental rationale has been largely the same: we build weapons to keep ourselves safe, as well as those we love and our broader country. But over the past century or so, we've come to realize that at some point weapons stop making us safer, when they start creating all of these risks to global stability and potentially to the very folks who are using them, as we've discussed around chemical and biological weapons, and to a lesser extent land mines, nuclear weapons, and all sorts of other examples.

So I think to move towards those international norms, there needs to be an understanding that these weapons are risky, and that rather than keeping us safe, they create more problems than they solve. In order to do that, I think there needs to be a focus on the particularly high-risk LAWS, especially drone swarms, as well as the intersection between LAWS and chemical, biological, radiological, and nuclear weapons. Because if we're talking about an autonomous weapon that's controlling a nuclear weapon, the risk of error suddenly concerns not just a handful of folks but potentially the survival of humanity, if that starts a nuclear war.

Lucas Perry: All right. The last risk facet for us to touch on as we're wrapping up here is verification. Some of the proposals and discussion around the governance and regulation of autonomous weapons systems have centered on keeping a human in the loop, so that the decisions of target acquisition, selection, and execution are not solely in the hands of the system, and so that there is responsibility, accountability, and human judgment involved in the decision to take human life. How do you view the problem of verification with regard to fully autonomous and semi-autonomous weapon systems, and the difficulty that verification poses for regulation, versus the importance of norm setting to achieve stability even in the absence of easy verification, keeping in mind that the bioweapons ban has no verification regime but has held through international stigma?

Stuart Russell: So I think it's quite a difficult question. I would point out that the absence of verification in the biological weapons convention meant that the Soviet Union actually expanded its biological weapons program after signing the treaty. And so I completely accept what I take to be the US position that a treaty with no verification and enforcement may not be in the best interest of the United States if other parties are not going to comply with it. As for the idea that you could verify the software, although it's technically possible, I think that's a non-starter, because it's so easy to change software so that what was a human-controlled, human-in-the-loop system becomes autonomous. So I would not be happy with a treaty that allowed people to build weapons that could become autonomous just by changing the software.

So one obvious solution there is to separate the firing circuits from any on-board computation whatsoever. You can have on-board image processing, navigation, and so on, but whatever is on board doing all that computing has no physical connection to the firing circuits; firing has to be done by remote control. There are ways of bypassing that by putting the autonomy back at the home base, taking the human controller who's sitting in his trailer in Nevada and replacing him with an AI system as well, but that certainly makes autonomous control more difficult, slower, and easier to jam. But you also want to have a one-to-one, or one-to-a-fixed-number, correspondence between the number of weapons being built and the number of human control stations. So if someone makes a couple of hundred million autonomous weapons but only has five human control stations, then you know that's not going to be used with a human in the loop. I think those two things would be the absolute minimum; there are probably other requirements as well, but I'd say that's a good starting place.
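As a concrete reading of Russell's second requirement, here is a minimal sketch of the kind of plausibility check an inspector might run on declared inventories. The per-station supervision capacity and the example numbers are hypothetical assumptions of this sketch, not figures from the discussion.

```python
# Toy sketch of the weapons-to-control-stations check Russell describes.
# All numbers below are hypothetical and for illustration only.

def human_in_loop_plausible(num_weapons: int,
                            num_control_stations: int,
                            max_weapons_per_station: int = 10) -> bool:
    """Return True if the declared inventory could plausibly be operated
    with a human in the loop, i.e. each station supervises at most a
    fixed number of weapons (an assumed figure, not an agreed standard)."""
    return num_weapons <= num_control_stations * max_weapons_per_station

# Russell's example: hundreds of millions of weapons, five stations.
print(human_in_loop_plausible(200_000_000, 5))   # False -> red flag
print(human_in_loop_plausible(40, 5))            # True  -> consistent
```

The point is not the specific threshold but the order-of-magnitude mismatch Russell describes: hundreds of millions of weapons cannot be paired with five control stations under any human-in-the-loop account.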

Zachary Kallenborn: I totally agree with that, but I'm a bit more optimistic, at least when it comes to swarms, because I think there are certain aspects of swarming that actually make it more conducive to verification measures, as I think I discussed before. The advantage you get from swarming is throwing large amounts of autonomous systems against defenses, where you don't mind so much if a bunch get knocked down. But what that means is that if you have a whole bunch that are knocked down or disabled, those are all sources of post-attack verification of what exactly happened. You can look at some of those systems and find indicators, hopefully very clear ones, of how the system was operating. Likewise, I think there's a lot of potential around the difficulty of managing swarms, because it's not feasible for a human to control 1,000 or 10,000 drones all flying together.

If you see that happening, you can infer that there must be extremely high levels of autonomy, likely including a lack of human control over targeting when it's being used. In both cases, and I think there was a third that I can't remember, that basically allows you to do some level of post-attack verification and response, which I think is often key to many of these norms. If we look at the use of sarin in Syria, much of the response did not come from investigating the chemical facilities Syria was building; rather, the response came after the United States recognized that Syria was using the agents, based on the secondary effects. And then afterwards there were military strikes, and I don't remember if there were sanctions, but all sorts of things came after.

And to an extent, that's what you need: that level of punishment that happens after. Now, that probably won't work in a situation where you have an ongoing war between two major powers; if we're already in a state of conflict, this probably doesn't matter much. But in some of these smaller cases, I think it could be quite successful in, at the very least, naming and shaming whoever was responsible.

Lucas Perry: All right. So ending here now, starting with you, Stuart: do you have any final words about autonomous weapons, or anything you haven't said that feels unresolved, that you'd like to end on?

Stuart Russell: I feel quite concerned right now that the mindset of the major powers has gone from "Okay, this is interesting, let's think about what we're doing" to "I'm concerned that my enemies are going to do this, so I have to do this now." That mindset is how arms races take place. And once that starts to accelerate and the brakes are off, I think it's very, very hard to put it back in the box. So I think we have a very short time if we're going to make some progress on regulation.

Zachary Kallenborn: Yeah. The challenge I would mention is that if we look historically at arms control bans, they've typically come after the weapon systems have largely been used. I think that's one of the fundamental challenges we see with LAWS: we really haven't seen them used on a large scale. In fact, I very briefly checked my email while we were chatting here and saw a note from an arms control expert I know who was commenting, "Oh, well, we haven't even used drone swarms yet, so is this really much of a concern?" I think the challenge is how you get the global public, as well as governments, to recognize the risk. And I think the solution to that is fear. We often talk about fear as a very negative emotion because we only ever experience it in terrible situations. You're not afraid when you're sitting on your couch, holding hands with your partner or watching TV.

You feel fear when that angry-looking dog growls at you, or when you're sitting alone and there's a weird noise downstairs: "Was that really a robber, or am I just being paranoid?" That fear motivates us to pay closer attention to a threat, to figure out what's actually happening, and then to take positive action to do something about it. And I think in order to get a broader ban on LAWS prior to their being used, there has to be that strong element of fear.

So I think work like Stuart's around Slaughterbots, bringing this to global attention, is extremely valuable for that: to recognize that these types of weapons, while they may be useful in some cases if we're talking about non-autonomous systems, really create very serious and real risks that global governments need to address now. And it's better to do it now, before things spread throughout the globe.

Stuart Russell: Yeah. And there is precedent for preemptive regulation, biological weapons being one example. Although there were scattered uses of biological weapons in warfare, the kinds of weapons people were developing were intended to cause global pandemics: the nation would vaccinate its own population and then wipe out the rest of the world, and that was viewed as a legitimate form of warfare. So fortunately we didn't wait until that was done in practice before regulating it.

Lucas Perry: All right. So I'm taking away the need for fear and the need for thinking about expected value; in expected value terms, engaging in another arms race in the 21st century might not be the best idea. But it's also making me think of COVID and how poor humans are at preparing for high-impact, low-probability events, and I feel slightly pessimistic, because from what Zach said, it seems like we need to see the carnage of the thing before we actually take measures to avoid that risk in the future.

So thanks so much for joining me, Stuart and Zach. It's been a pleasure.

Stuart Russell: Okay. Thank you.

Zachary Kallenborn: Thanks for having me.
