About the LAWS Pledge

July 18, 2018
Ariel Conn


About the Lethal Autonomous Weapons Systems (LAWS) Pledge

LAWS Pledge

Sign the pledge here.

LAWS Frequently Asked Questions

What sort of weapons systems do “LAWS” refer to? Won’t militaries without LAWS be at a disadvantage against adversaries who develop them? Won’t LAWS save lives by having robots die rather than soldiers, and minimizing collateral damage? And more.

Why Did Others Sign the Pledge?

Artificial Intelligence is a complex technology that could fail in grave and subtle ways. Humanity will be better served if this technology is deliberately developed for civilian purposes first, and militaries exhibit restraint in its use until its properties and failure modes are deeply understood.

Those favoring development of autonomous lethal weapons fantasize about precisely targeted strikes against enemy combatants — “bad guys” by their definition — while sparing uninvolved civilians. But once a technology exists, it eventually falls into the hands of “rogue” actors; and indeed the “rogues” may turn out to include those who sponsored the development in the first place.

Lethal autonomous weapons will make it far easier for war criminals to escape prosecution.

The Robotics Council of the Brazilian Computer Society (CE-R SBC) would like to state that we are against all forms of lethal autonomous weapons, A.I. killer robots, or any other form of robotic or autonomous machine where the decision to take a human life is delegated to the machine. Killer robots should be completely banned from our planet.

Autonomous weapons are a threat to every human being and every form of life. In most cases there will be no practical defense against them. We must pledge not to create them and to enact an international treaty prohibiting their development.

It would be reckless for international governments to ignore the need for a binding treaty on the regulation of autonomous lethal weapons. The urgency of this need is increasing quickly.

WeRobotics believes that the future of robotics and artificial intelligence technologies must be driven by a core ethical commitment to improving human and ecological well-being above all. Autonomous weapons systems threaten both human life and the stability of planetary society and ecology by shifting control over the fundamental decisions of life and death to algorithmic processes that may be immune to ethical judgment and human control. As we help to build a future in which robotics and artificial intelligence are applied to building wealth and solving problems for all people, we must insist that autonomous weapons remain off limits to all countries, based on commonly agreed upon global ethical standards.

Lucid believes AI to be one of the world’s greatest assets in solving global problems in all industries. We see the possibilities of AI-for-good everywhere. Using AI for weaponry pits country against country, rather than helping to unite humanity as AI otherwise could and should. Lucid will not allow use of any AI technology it creates for weaponry.

Press Release for LAWS Pledge

AI Companies, Researchers, Engineers, Scientists, Entrepreneurs, and Others Sign Pledge Promising Not to Develop Lethal Autonomous Weapons

Leading AI companies and researchers take concrete action against killer robots, vowing never to develop them.

Stockholm, Sweden (July 18, 2018) – After years of voicing concerns, AI leaders have, for the first time, taken concrete action against lethal autonomous weapons, signing a pledge to neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.

The pledge has been signed to date by over 160 AI-related companies and organizations from 36 countries, and 2,400 individuals from 90 countries. Signatories of the pledge include Google DeepMind, University College London, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, the European Association for AI (EurAI), the Swedish AI Society (SAIS), Demis Hassabis, British MP Alex Sobel, Elon Musk, Stuart Russell, Yoshua Bengio, Anca Dragan, and Toby Walsh.

Max Tegmark, president of the Future of Life Institute (FLI) which organized the effort, announced the pledge on July 18 in Stockholm, Sweden during the annual International Joint Conference on Artificial Intelligence (IJCAI), which draws over 5,000 of the world’s leading AI researchers. SAIS and EurAI were also organizers of this year’s IJCAI.

Said Tegmark, “I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world – if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

Lethal autonomous weapons systems (LAWS) are weapons that can identify, target, and kill a person, without a human “in-the-loop.” That is, no person makes the final decision to authorize lethal force: the decision and authorization about whether or not someone will die is left to the autonomous weapons system. (This does not include today’s drones, which are under human control. It also does not include autonomous systems that merely defend against other weapons, since “lethal” implies killing a human.)

The pledge begins with the statement:

“Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”

Another key organizer of the pledge, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, points out the thorny ethical issues surrounding LAWS. He states:

“We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”

Ryan Gariepy, Founder and CTO of both Clearpath Robotics and OTTO Motors, has long been a strong opponent of lethal autonomous weapons. He says:

“Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful. Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force.”

In addition to the ethical questions associated with LAWS, many advocates of an international ban on LAWS are concerned that these weapons will be difficult to control – easier to hack, more likely to end up on the black market, and easier for bad actors to obtain – which could become destabilizing for all countries, as illustrated in the FLI-released video “Slaughterbots”.

In December 2016, the UN’s Review Conference of the Convention on Conventional Weapons (CCW) began formal discussion regarding LAWS. At the most recent meeting in April, twenty-six countries announced support for some type of ban, including China. And such a ban is not without precedent. Biological weapons, chemical weapons, and space weapons were also banned not only for ethical and humanitarian reasons, but also for the destabilizing threat they posed.

The next UN meeting on LAWS will be held in August, and signatories of the pledge hope this commitment will encourage lawmakers to develop a commitment at the level of an international agreement between countries. As the pledge states:

“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. … We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”

This content was first published on July 18, 2018.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.

Related content

If you enjoyed this content, you might also be interested in:

Future of Life Institute Statement on the Pope’s G7 AI Speech

Max Tegmark provides a response to the Pope's remarks on autonomous weapons to G7 leaders.
18 June, 2024

An introduction to the issue of Lethal Autonomous Weapons

Some of the most advanced national military programs are beginning to implement artificial intelligence (AI) into their weapons, essentially making them 'smart'. This means these weapons will soon be making critical decisions by themselves - perhaps even deciding who lives and who dies.
30 November, 2021

10 Reasons Why Autonomous Weapons Must be Stopped

Lethal autonomous weapons pose a number of severe risks. These risks significantly outweigh any benefits they may provide, even for the world's most advanced military programs.
27 November, 2021

Real-Life Technologies that Prove Autonomous Weapons are Already Here

For years, we have seen the signs that lethal autonomous weapons were coming. Unfortunately, these weapons are no longer just 'in development' - they are starting to be used in real military applications. Slaughterbots are officially here.
22 November, 2021
