Is an AI Arms Race Inevitable?

AI Arms Race Principle: An arms race in lethal autonomous weapons should be avoided.*

Perhaps the scariest aspect of the Cold War was the nuclear arms race. At its peak, the US and the Soviet Union held over 70,000 nuclear weapons between them, a fraction of which would have been enough to kill every person on earth. As the race to create increasingly powerful artificial intelligence accelerates, and as governments increasingly test AI capabilities in weapons, many AI experts worry that an equally terrifying AI arms race may already be under way.

In fact, at the end of 2015, the Pentagon requested $12–15 billion in its 2017 budget for AI and autonomous weaponry, and the Deputy Defense Secretary at the time, Robert Work, admitted that he wanted “our competitors to wonder what’s behind the black curtain.” Work also said that the new technologies were “aimed at ensuring a continued military edge over China and Russia.”

But the US does not have a monopoly on this technology, and many fear that countries with lower safety standards could quickly pull ahead. Without adequate safety measures in place, autonomous weapons could be more difficult to control, pose an even greater risk of harm to innocent civilians, and fall more easily into the hands of terrorists, dictators, reckless states, or others with nefarious intentions.

Anca Dragan, an assistant professor at UC Berkeley, described the possibility of such an AI arms race as “the equivalent of very cheap and easily accessible nuclear weapons.”

“And that would not fare well for us,” Dragan added.

Unlike nuclear weapons, this new class of WMD could potentially target people by traits such as race, or even by what they have liked on social media.

Lethal Autonomous Weapons

Toby Walsh, a professor at UNSW Australia, took the lead on the 2015 autonomous weapons open letter, which calls for a ban on lethal autonomous weapons and has been signed by over 20,000 people. With regard to that letter and the AI Arms Race Principle, Walsh explained:

“One reason that I got involved in these discussions is that there are some topics I think are very relevant today, and one of them is the arms race that’s happening amongst militaries around the world already, today. This is going to be very destabilizing. It’s going to upset the current world order when people get their hands on these sorts of technologies. It’s actually stupid AI that they’re going to be fielding in this arms race to begin with and that’s actually quite worrying – that it’s technologies that aren’t going to be able to distinguish between combatants and civilians, and aren’t able to act in accordance with international humanitarian law, and will be used by despots and terrorists and hacked to behave in ways that are completely undesirable. And that’s something that’s happening today.”

When asked about his take on this Principle, University of Montreal professor Yoshua Bengio pointed out that he had signed the autonomous weapons open letter, which basically “says it all” regarding his concerns about a potential AI arms race.

Details and Definitions

In addition to worrying about the risks of a race, Dragan also expressed a concern over “what to do about it and how to avoid it.”

“I assume international treaties would have to occur here,” she said.

Dragan’s not the only one expecting international treaties. The UN recently agreed to begin formal discussions that will likely lead to negotiations on an autonomous weapons ban or restrictions. However, as with so many things, the devil will be in the details.

In reference to an AI arms race, Cornell professor Bart Selman stated, “It should be avoided.” But he also added, “There’s a difference between it ‘should’ be avoided and ‘can’ it be avoided – that may be a much harder question.”

Selman would like to see “the same kinds of discussions as there were around atomic weapons or biological weapons, where people actually start to look at the tradeoffs and the risks of an arms race.”

“That discussion has to be had,” he said, “and it may actually bring people together in a positive way. Countries could get together and say this is not a good development and we should limit it and avoid it. So to bring it out as a principle, I think the main value there is that we need to have the discussion as a society and with other countries.”

Dan Weld, a professor at the University of Washington, also worries that simply saying an arms race should be avoided is insufficient.

“I fervently hope we don’t see an arms race in lethal autonomous weapons,” Weld explained. “That said, this principle bothered me, because it doesn’t seem to have any operational form. Specifically, an arms race is a dynamic phenomenon that happens when you’ve got multiple agents interacting. It takes two people to race. So whose fault is it if there is a race? I’m worried that both participants will point a finger at the other and say, ‘Hey, I’m not racing! Let’s not have a race, but I’m going to make my weapons more accurate and we can avoid a race if you just relax.’ So what force does the principle have?”

General Consensus

Though preventing an AI arms race may be tricky, there seems to be general consensus that a race would be bad and should be avoided.

“Weaponized AI is a weapon of mass destruction and an AI arms race is likely to lead to an existential catastrophe for humanity,” said Roman Yampolskiy, a professor at the University of Louisville.

Kay Firth-Butterfield, the Executive Director of AI-Austin.org, explained, “Any arms race should be avoided but particularly this one where the stakes are so high and the possibility of such weaponry, if developed, being used within domestic policing is so terrifying.”

But Stanford professor Stefano Ermon may have summed it up best when he said, “Even just with the capabilities we have today it’s not hard to imagine how [AI] could be used in very harmful ways. I don’t want my contributions to the field and any kind of techniques that we’re all developing to do harm to other humans or to develop weapons or to start wars or to be even more deadly than what we already have.”

What do you think?

Is an AI arms race inevitable? How can it be prevented? Can we keep autonomous weapons out of the hands of dictators and terrorists? How can companies and governments work together to build beneficial AI without allowing the technology to be used to create what could be the deadliest weapons the world has ever seen?

This article is part of a weekly series on the 23 Asilomar AI Principles. The Principles offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start. … a work in progress.” The Principles represent the beginning of a conversation, and now we need to follow up with broad discussion about each individual principle. You can read the weekly discussions about previous principles here.

*The AI Arms Race Principle specifically addresses lethal autonomous weapons. Later in the series, we’ll discuss the Race Avoidance Principle, which will look at the risks of companies racing to create AI technology.

Replies
  1. Shiloh Hockman says:

    We need to develop CrowdCouncil (think crowdfunding, but with Council from the crowds instead of funding from the crowds), with the goal of building a platform that incentivizes people to openly share and evaluate ideas that can solve Global Grand Challenges such as weaponized autonomous systems / AI. We also need to grow the platform’s user base and engagement. Let’s tap into the creative intelligence of the crowds to address these and other challenges. It’s in the works; email me for details.

  2. Michael Wulfsohn says:

    Whether international arms races can be avoided is a question of the strength of international institutions and level of trust between countries. These have probably improved in the last 100 years. But it might be a long time before global structures are strong enough to keep humanity from shooting itself in the foot. Perhaps the issue of autonomous weapons will help by forcing improvements to international coordination, in advance of even more powerful forms of AI becoming available.

  3. Matthew Gentzel says:

    I am not fully sure an arms race would be bad. The technology for countering lethal autonomous drones would get more funding, and you could end up in situations where no humans fight, so the casualties of non-nuclear war would mostly be drones.

    Terrorists and dictators can use such drones, but so can those who counter terrorists and dictators. In the long run, unless a world power becomes a dictatorship, terrorists and dictators will be at a huge disadvantage except when empowered by world powers.

    But at this rate, how do you even make an agreement in the first place? What is an autonomous weapon? When you fire a heat-seeking missile, the missile does the work; the pilot just chose the heat signature to lock on to. If you update the missile design so it isn’t deceived by flares, then it is more autonomous and more effective. If you redesign the missile so that it can’t lock on to what it perceives to be civilian aircraft, then you have a weird situation where pilots may be more willing to fire and the missile increasingly becomes the decision maker.

    I don’t see where you draw the boundary, because the decision to launch a weapon system is still a human-in-the-loop decision overall, even if a particular strike is not. It seems like a useful thing to figure out and operationalize if banning autonomous weapons is indeed a good idea, because a bad definition can be worked around with technicalities. Having weapons that can be controlled with little input is a huge risk for superintelligent AI in the future, but developing them now, when few countries are good at autonomy, may be better for figuring out safety controls than later, when many countries are capable and it is no longer possible to stop an arms race.

    Reasoning out the details of the globally best strategy seems useful. I’d like to see longer posts on this.

  4. Jason C. Stone says:

    Countries that cooperate with the UN should focus on automating systems that can do at least two things:

    1. Destroy automated weapons that are used for injuring and killing humans.
    2. Provide emergency medical care to all parties affected by a conflict, thus placing fewer human medical specialists at risk.

    These are worthwhile ways to apply AI to warfare. Automated killing should be universally recognized as dehumanizing, unnecessary, and self-destructive.

