
Leaders of Top Robotics and AI Companies Call for Ban on Killer Robots

Published: August 20, 2017
Author: Ariel Conn



Founders of AI/robotics companies, including Elon Musk (Tesla, SpaceX, OpenAI) and Demis Hassabis and Mustafa Suleyman (Google’s DeepMind), call for autonomous weapons ban, as UN delays negotiations.

Leaders from AI and robotics companies around the world have released an open letter calling on the United Nations to ban autonomous weapons, often referred to as killer robots.

Founders and CEOs of nearly 100 companies from 26 countries signed the letter, which warns:

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”

In December, 123 member nations of the UN agreed to move forward with formal discussions about autonomous weapons, with 19 members already calling for an outright ban. However, the next stage of discussions, originally scheduled to begin on August 21 (the release date of the open letter), was postponed because a small number of nations hadn’t paid their fees.

The letter was organized and announced by Toby Walsh, a prominent AI researcher at the University of New South Wales in Sydney, Australia. In an email, he noted that, “sadly, the UN didn’t begin today its formal deliberations around lethal autonomous weapons.”

“There is, however, a real urgency to take action here and prevent a very dangerous arms race,” Walsh added. “This open letter demonstrates clear concern and strong support for this from the Robotics & AI industry.”

The open letter included such signatories as:

Elon Musk, founder of Tesla, SpaceX and OpenAI (USA)
Demis Hassabis, founder and CEO at Google’s DeepMind (UK)
Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)
Esben Østergaard, founder & CTO of Universal Robots (Denmark)
Jerome Monceaux, founder of Aldebaran Robotics, makers of Nao and Pepper robots (France)
Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)

In reference to the signatories, the press release for the letter added, “Their companies employ tens of thousands of researchers, roboticists and engineers, are worth billions of dollars and cover the globe from North to South, East to West: Australia, Canada, China, Czech Republic, Denmark, Estonia, Finland, France, Germany, Iceland, India, Ireland, Italy, Japan, Mexico, Netherlands, Norway, Poland, Russia, Singapore, South Africa, Spain, Switzerland, UK, United Arab Emirates and USA.”

Bengio explained why he signed, saying, “the use of AI in autonomous weapons hurts my sense of ethics.” He added that the development of autonomous weapons “would be likely to lead to a very dangerous escalation,” and that “it would hurt the further development of AI’s good applications.” He concluded his statement to FLI saying that this “is a matter that needs to be handled by the international community, similarly to what has been done in the past for some other morally wrong weapons (biological, chemical, nuclear).”

Stuart Russell, another of the world’s preeminent AI researchers and founder of Bayesian Logic Inc., added:

“Unless people want to see new weapons of mass destruction – in the form of vast swarms of lethal microdrones – spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.”

Ryan Gariepy, founder & CTO of Clearpath Robotics, was the first to sign the letter. For the press release, he noted, “Autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”

The open letter ends with similar concerns. It states:

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.”

The letter was announced in Melbourne, Australia, at the International Joint Conference on Artificial Intelligence (IJCAI), which draws many of the world’s top artificial intelligence researchers. Two years ago, at IJCAI 2015, Walsh released another open letter, which called on countries to avoid engaging in an AI arms race. To date, that previous letter has been signed by over 20,000 people, including over 3,100 AI/robotics researchers.

Read the letter here.

Translations: Chinese

This content was first published at futureoflife.org on August 20, 2017.

