Podcast: Banning Nuclear and Autonomous Weapons with Richard Moyes and Miriam Struyk
How does a weapon go from being one of the most feared to being banned? And what happens once that weapon is finally banned? To discuss these questions, Ariel spoke with Miriam Struyk and Richard Moyes on the podcast this month. Miriam is Programs Director at PAX. She played a leading role in the campaign to ban cluster munitions and developed global campaigns to prohibit financial investments in producers of cluster munitions and nuclear weapons. Richard is the Managing Director of Article 36. He has worked closely with the International Campaign to Abolish Nuclear Weapons, helped found the Campaign to Stop Killer Robots, and coined the phrase “meaningful human control” regarding autonomous weapons.
The following interview has been heavily edited for brevity, but you can listen to it in its entirety here.
This podcast was edited by Tucker Davey.
Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.
Transcript
Why is a ban on nuclear weapons important, even if nuclear weapons states don’t sign?
Richard: This process came out of the humanitarian impact of nuclear weapons: from the use of a single nuclear weapon that would potentially kill hundreds of thousands of people, up to the use of multiple nuclear weapons, which could have devastating impacts for human society and for the environment as a whole. These weapons should be considered illegal because their effects cannot be contained or managed in a way that avoids massive suffering.
At the same time, it’s a process that’s changing the landscape against which those states continue to maintain and assert the validity of their maintenance of nuclear weapons. By changing that legal background, we’re potentially in a position to put much more pressure on those states to move towards disarmament as a long-term agenda.
Miriam: At a time when we see erosion of international norms, it’s quite astonishing that in less than two weeks, we’ll have an international treaty banning nuclear weapons. For too long, nuclear weapons were mythical, symbolic weapons, but we never spoke about what these weapons actually do and whether we think that should be legal.
This treaty brings back the notion of what these weapons do and whether we want that.
It also brings democratization of security policy. This is a process that was brought about by several states and also by NGOs, by the ICRC and other actors. It’s so important that it’s actually citizens speaking about nukes and whether we think they’re acceptable or not.
What is an autonomous weapon system?
Richard: If I might just backtrack a little — an important thing to recognize in all of these contexts is that these weapons don’t prohibit themselves — weapons have been prohibited because a diverse range of actors from civil society and from international organizations and from states have worked together.
Autonomous weapons are really an issue of new and emerging technologies and the challenges that new and emerging technologies present to society, particularly when they’re emerging in the military sphere — a sphere which is essentially about how we’re allowed to kill each other or how we’re allowed to use technologies to kill each other.
Autonomous weapons are a movement in technology to a point where we will see computers and machines making decisions about where to apply force, about who to kill when we’re talking about people, or what objects to destroy when we’re talking about material.
What is the extent of autonomous weapons today versus what do we anticipate will be designed in the future?
Miriam: It depends a lot on your definition, of course. I’m still, in a way, a bit of an optimist in thinking that perhaps we can prevent the emergence of lethal autonomous weapon systems. But I also see similarities with nuclear weapons a few decades ago: lethal autonomous weapon systems can lead to an arms race, to more global insecurity, and to warfare.
The way we’re approaching lethal autonomous weapon systems is to try to ban them before we see horrible humanitarian consequences. How does that change your approach from previous weapons?
Richard: That this is a more future-orientated debate definitely creates different dynamics. But other weapon systems have been prohibited. Blinding laser weapons were prohibited when there was concern that laser systems designed to blind people were going to become a feature of the battlefield.
In terms of autonomous weapons, we already see significant levels of autonomy in certain weapon systems today and again I agree with Miriam in terms of recognition that certain definitional issues are very important in all of this.
One of the ways we’ve sought to orientate to this is by thinking about the concept of meaningful human control. What are the human elements that we feel are important to retain? We are going to see more and more autonomy within military operations. But in certain critical functions around how targets are identified and how force is applied and over what period of time — those are areas where we will potentially see an erosion of a level of human, essentially moral, engagement that is fundamentally important to retain.
Miriam: This is not so much about a weapon system but about how we control warfare and how we maintain human control, in the sense that it’s a human deciding who is a legitimate target and who isn’t.
An argument in favor of autonomous weapons is that they can ideally make decisions better than humans and potentially reduce civilian casualties. How do you address that argument?
Miriam: We’ve had that debate with other weapon systems as well, where the technological possibilities were not what they were promised to be once the weapons were actually used.
It’s an unfair debate because it’s mainly conducted by states with developed industries, which are the ones most likely to use some form of lethal autonomous weapon system first. Flip the question and ask, ‘What if these systems were used against your soldiers or in your country?’ Suddenly you enter a whole different debate. I’m highly skeptical of people who say it could actually be beneficial.
Richard: I feel like there are assertions of “goodies” and “baddies” and of our ability to tell one from the other. To categorize people and things in society in such an accurate way is somewhat illusory and something of a misunderstanding of the reality of conflict.
Any claims that we can somehow perfect violence in a way where it can be distributed by machinery to those who deserve to receive it and that there’s no tension or moral hazard in that — that is extremely dangerous as an underpinning concept because, in the end, we’re talking about embedding categorizations of people and things within a micro bureaucracy of algorithms and labels.
Violence in society is a human problem and it needs to continue to be messy to some extent if we’re going to recognize it as a problem.
What is the process right now for getting lethal autonomous weapons systems banned?
Miriam: We started the Campaign to Stop Killer Robots in 2013 — it immediately gave a push to the international discussion, including in the Human Rights Council and within the Convention on Certain Conventional Weapons (CCW) in Geneva. We saw a lot of debates there in 2013, 2014, and 2015, and the last one was in April.
At the last CCW meeting, it was decided that a group of governmental experts should start within the CCW to look at these types of weapons, a decision that was applauded by many states.
Unfortunately, due to financial issues, the meeting has been canceled, so we’re in a bit of a silent mode right now. But that doesn’t mean there’s no progress. We have 19 states who have called for a ban, and more than 70 states within the CCW framework discussing this issue. We know from other treaties that you need these kinds of building blocks.
Richard: Engaging scientists, roboticists, and AI practitioners around these themes is important. One of the challenges is that issues around weapons and conflict can sometimes be treated as very separate from other parts of society. It is significant that the decisions that get made about the limits of AI-driven decision making about life and death in the context of weapons could well have implications for how expectations and discussions get set elsewhere in the future.
What is most important for people to understand about nuclear and autonomous weapon systems?
Miriam: Both systems go way beyond the discussion about weapon systems: it’s about what kind of world and society do we want to live in. None of these — not killer robots, not nuclear weapons — are an answer to any of the threats that we face right now, be it climate change, be it terrorism. It’s not an answer. It’s only adding more fuel to an already dangerous world.
Richard: Nuclear weapons — they’ve somehow become a very abstract, rather distant issue. Simple recognition of the scale of humanitarian harm from a nuclear weapon is the most substantial thing — hundreds of thousands killed and injured. [Leaders of nuclear states are] essentially talking about incinerating hundreds of thousands of normal people — probably in a foreign country — but recognizable, normal people. The idea that that can be approached in some ways glibly or confidently at all is I think very disturbing. And expecting that at no point will something go wrong — I think it’s a complete illusion.
On autonomous weapons — what sort of society do we want to live in, and how much are we prepared to hand over to computers and machines? I think handing more and more violence over to such processes does not augur well for our societal development.