2019 Statement to the United Nations in Support of a Ban on LAWS

The following statement was read on the floor of the United Nations during the March 2019 CCW meeting, in which delegates discussed a possible ban on lethal autonomous weapons.

Thank you, Chair, for your leadership.

The Future of Life Institute (FLI) is a research and outreach organization that works with scientists to mitigate existential risks facing humanity. FLI is deeply concerned about imprudent applications of technology in warfare, especially emerging technologies in the field of artificial intelligence.

Let me give you an example. In just the last few months, researchers at various universities have shown how easy it is to trick image recognition software. Researchers at Auburn University, for instance, found that if objects like a school bus or a fire truck were simply shifted into unnatural positions, so that they were upended or turned on their sides in an image, the image classifier would no longer recognize them. And this is just one of many examples of image recognition software failing because it does not understand the context within an image.

This is the same technology that would analyze and interpret data picked up by the sensors of an autonomous weapons system. It’s not hard to see how quickly image recognition software could misinterpret situations on the battlefield if it has to quickly assess everyday objects that have been upended or destroyed.

And challenges with image recognition are only one of many examples of why an increasing number of people in AI research and the tech industry, that is, an increasing number of the people most familiar with how the technology works and how it can go wrong, are saying that this technology cannot be used safely or fairly to select and engage a target. In the last few years, over 4,500 artificial intelligence and robotics researchers have called for a ban on lethal autonomous weapons, over 100 CEOs of prominent AI companies have called for a ban on lethal autonomous weapons, and over 240 companies and nearly 4,000 people have pledged never to develop lethal autonomous weapons.

But as we turn our attention to human-machine teaming, we must also carefully consider research coming from the field of psychology and recognize the limitations there as well. I’m sure everyone in this room has had a beneficial personal experience working with artificial intelligence. But psychologists find that under extreme pressure, as in life-and-death situations, humans become overly reliant on technology. In one study at Georgia Tech, students were taking a test alone in a room when a fire alarm went off. The students had the choice of leaving through a clearly marked exit right next to them or following a robot that was guiding them away from the exit. Almost every student followed the robot, away from the safe exit. In fact, even when the students had been warned in advance that the robot could not be trusted, they still followed it away from the exit.

As the delegate from Costa Rica mentioned yesterday, the New York Times has reported that pilots on the Boeing 737 Max had only 40 seconds to fix the plane’s malfunctioning automated software. The resulting crashes are tragic examples of how difficult it can be for a human to correct an autonomous system at the last minute once something has gone wrong.

Meaningful human control is something we must strive for, but as our colleagues from ICRAC said yesterday, “If states wanted genuine meaningful human control of weapons systems, they would not be using autonomous weapons systems.”

I want to be clear. Artificial intelligence will be incredibly helpful for militaries, and militaries should move to adopt systems that can be implemented safely in areas such as logistics, defense, and improving the situational awareness of the military personnel who would remain in the loop. But we cannot allow algorithms to make the decision to harm a human: they simply cannot be trusted, and we have no reason to believe they will be trustworthy anytime soon. Given the incredible pace at which the technology is advancing, thousands of AI researchers from around the world call with great urgency for a ban on lethal autonomous weapons.

There is a strong sense in the science and technology community that only a binding legal instrument can ensure continued research and development of beneficial civilian applications without the endeavor being tainted by the spectre of lethal algorithms. We thus call on states to take real leadership on this issue! We must move to negotiate a legally binding instrument that will ensure algorithms are not allowed to make the decision, or to unduly influence the decision, to harm or kill a human.

Thank you.

2018 Statement to United Nations on Behalf of LAWS Open Letter Signatories

The following statement was read on the floor of the United Nations during the August 2018 CCW meeting, in which delegates discussed a possible ban on lethal autonomous weapons. No conclusions were reached at this meeting.

Thank you, Mr. Chair, for your excellent leadership during this meeting. I’m grateful for the opportunity to share comments on behalf of the Future of Life Institute.

First, I read the following on behalf of the nearly 4,000 AI and robotics researchers and scientists from around the world who have called on the United Nations to move forward with negotiations on a legally binding instrument on lethal autonomous weapons.

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

Second, on behalf of 137 CEOs of AI and robotics companies around the world, and in light of the rapid progress we’re seeing in artificial intelligence, I add:

We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

Finally, I would add that nearly 240 AI-related organizations and over 3,000 individuals have taken their concerns about LAWS a step further, and they have pledged that they will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.

Thousands of artificial intelligence researchers around the world are calling on states to begin negotiations toward a legally binding instrument regarding LAWS, and we are happy to do all we can to help clarify technical issues surrounding delegates’ concerns about definitions and meaningful human control.

Thank you.

UN Ban on Nuclear Weapons Open Letter

An Open Letter from Scientists in Support of the UN Nuclear Weapons Negotiations

Click here to see this page in other languages: Russian
Nuclear arms are the only weapons of mass destruction not yet prohibited by an international convention, even though they are the most destructive and indiscriminate weapons ever created. We scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them and discovered that their effects are even more horrific than first thought. Individual explosions can obliterate cities, radioactive fallout can contaminate regions, and a high-altitude electromagnetic pulse may cause mayhem by frying electrical grids and electronics across a continent. The most horrible hazard is a nuclear-induced winter, in which the fires and smoke from as few as a thousand detonations might darken the atmosphere enough to trigger a global mini ice age with year-round winter-like conditions. This could cause a complete collapse of the global food system and apocalyptic unrest, potentially killing most people on Earth – even if the nuclear war involved only a small fraction of the roughly 14,000 nuclear weapons that today’s nine nuclear powers control. As Ronald Reagan said: “A nuclear war cannot be won and must never be fought.”

Unfortunately, such a war is more likely than one may hope, because it can start by mistake, miscalculation or terrorist provocation. There is a steady stream of accidents and false alarms that could trigger all-out war, and relying on never-ending luck is not a sustainable strategy. Many nuclear powers have larger nuclear arsenals than needed for deterrence, yet prioritize making them more lethal over reducing them and the risk that they get used.

But there is also cause for optimism. On March 27, 2017, an unprecedented process begins at the United Nations: most of the world’s nations convene to negotiate a ban on nuclear arms, to stigmatize them like biological and chemical weapons, with the ultimate goal of a world free of these weapons of mass destruction. We support this, and urge our national governments to do the same, because nuclear weapons threaten not merely those who have them, but all people on Earth.

 

To express your support, please add your name below:
(please sign if you study or work in a STEM field – including social sciences)


If you have questions about this letter, please contact Max Tegmark.

To date, this letter has been signed by scientists from around the world (this does not imply endorsement by their organizations).



Sources:
* 1979 report by the US Government estimating that nuclear war would kill 28%-88% without including nuclear winter effects
* Electromagnetic pulse: p79 of US Army Report AD-A278230 (unclassified)
* Peer-reviewed 2007 nuclear winter calculation
* Estimate of current nuclear warhead inventory from Federation of American Scientists
* Timeline of nuclear close calls
* UN General Assembly Resolution to launch the above-mentioned negotiations


UN Ban on Nuclear Weapons Open Letter Russian

An Open Letter from Scientists in Support of the UN Nuclear Weapons Negotiations

Click here to see this page in other languages: English

Nuclear weapons are the only weapons of mass destruction not yet prohibited by an international convention, even though they are the most destructive and indiscriminate weapons ever created by man. We scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them and discovered that the consequences of their use are even more horrific than first thought. Individual explosions can destroy entire cities, radioactive fallout can contaminate large territories, and a high-altitude electromagnetic pulse can cause chaos by burning out power lines and electronics across a continent. The most terrible danger is nuclear winter: the fires and smoke from the explosion of as few as a thousand nuclear warheads could darken the atmosphere enough to bring on a mini ice age with no summer and no warmth. This could lead to a complete collapse of the world food system and catastrophic public unrest, in which most of the Earth’s population could perish. And that is if a nuclear war involved only a small fraction of the roughly 14,000 nuclear warheads in the arsenals of the nine nuclear powers. As Ronald Reagan said: “A nuclear war cannot be won and must never be fought.”

Unfortunately, the probability of nuclear war is much higher than we would like. It could start by mistake, through miscalculation, or by terrorist provocation. A steady stream of accidents and false alarms could trigger all-out war, and forever relying on luck is not a rational strategy. Many nuclear powers possess larger nuclear arsenals than are required to deter aggression, yet their priority is to make those arsenals even more lethal rather than to reduce their size and the risks of their use.

But there is also cause for optimism. On March 27, 2017, an unprecedented process begins at the United Nations: most of the world’s nations are convening to negotiate a ban on nuclear weapons and to stigmatize them, as was done with biological and chemical weapons, in order to create a world free of weapons of mass destruction.

We support this initiative and are working to convince the governments of our countries to support it as well, because nuclear weapons threaten not only those who possess them, but all the inhabitants of Earth.

To date, more than 3,754 people have signed this open letter. If you would like to sign the letter or view the list of signatories, please visit the English-language page of the open letter.

An Open Letter to the United Nations Convention on Certain Conventional Weapons (Chinese)

An Open Letter to the United Nations Convention on Certain Conventional Weapons

As companies whose artificial intelligence and robotics technologies could be repurposed to develop autonomous weapons, we feel especially responsible for raising this alarm.

We warmly welcome the UN Convention on Certain Conventional Weapons’ establishment of a Group of Governmental Experts (GGE) on lethal autonomous weapon systems, and many of our researchers and engineers are eager to offer advice for your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the group, and we entreat the High Contracting Parties participating in the GGE to work to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.

We deeply regret that the GGE’s first meeting, originally due to begin today, was cancelled because a small number of states failed to pay their contributions, and we urge the High Contracting Parties to redouble their efforts at the first meeting now scheduled for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they may allow conflict to escalate to a scale greater than ever before, and at speeds humans cannot comprehend. They could become weapons of terror, weapons that despots and terrorists use against innocent populations, or weapons hacked to behave in harmful ways. We do not have long to act: once this Pandora’s box is opened, it will be hard to close.

We therefore implore the High Contracting Parties to find a way to protect all of humanity from these dangers.

Click here to see this page in other languages: English, German, Japanese, Russian


An Open Letter to the United Nations Convention on Certain Conventional Weapons

As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies. We regret that the GGE’s first meeting, which was due to start today (August 21, 2017), has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

Translations: Chinese, German, Japanese, Russian

FULL LIST OF SIGNATORIES TO THE OPEN LETTER

To add your company, please contact Toby Walsh at tw@cse.unsw.edu.au.

Tiberio Caetano, founder & Chief Scientist at Ambiata, Australia.
Mark Chatterton and Leo Gui, founders, MD & of Ingenious AI, Australia.
Charles Gretton, founder of Hivery, Australia.
Brad Lorge, founder & CEO of Premonition.io, Australia
Brenton O’Brien, founder & CEO of Microbric, Australia.
Samir Sinha, founder & CEO of Robonomics AI, Australia.
Ivan Storr, founder & CEO, Blue Ocean Robotics, Australia.
Peter Turner, founder & MD of Tribotix, Australia.
Yoshua Bengio, founder of Element AI & Montreal Institute for Learning Algorithms, Canada.
Ryan Gariepy, founder & CTO of Clearpath Robotics, founder & CTO of OTTO Motors, Canada.
Geoffrey Hinton, founder of DNNResearch Inc, Canada.
James Chow, founder & CEO of UBTECH Robotics, China.
Robert Li, founder & CEO of Sankobot, China.
Marek Rosa, founder & CEO of GoodAI, Czech Republic.
Søren Tranberg Hansen, founder & CEO of Brainbotics, Denmark.
Markus Järve, founder & CEO of Krakul, Estonia.
Harri Valpola, founder & CTO of ZenRobotics, founder & CEO of Curious AI Company, Finland.
Esben Østergaard, founder & CTO of Universal Robotics, Denmark.
Raul Bravo, founder & CEO of DIBOTICS, France.
Ivan Burdun, founder & President of AIXTREE, France.
Raphael Cherrier, founder & CEO of Qucit, France.
Alain Garnier, founder & CEO of ARISEM (acquired by Thales), founder & CEO of Jamespot, France.
Jerome Monceaux, founder & CEO of Spoon.ai, founder & CCO of Aldebaran Robotics, France.
Charles Ollion, founder & Head of Research at Heuritech, France.
Anis Sahbani, founder & CEO of Enova Robotics, France.
Alexandre Vallette, founder of SNIPS & Ants Open Innovation Labs, France.
Marcus Frei, founder & CEO of NEXT.robotics, Germany.
Kristinn Thorisson, founder & Director of Icelandic Institute for Intelligent Machines, Iceland.
Fahad Azad, founder of Robosoft Systems, India.
Debashis Das, Ashish Tupate & Jerwin Prabu, founders (incl. CEO) of Bharati Robotics, India.
Pulkit Gaur, founder & CTO of Gridbots Technologies, India.
Pranay Kishore, founder & CEO of Phi Robotics Research, India.
Shahid Memom, founder & CTO of Vanora Robots, India.
Krishnan Nambiar & Shahid Memon, founders, CEO & CTO of Vanora Robotics, India.
Achu Wilson, founder & CTO of Sastra Robotics, India.
Neill Gernon, founder & MD of Atrovate, founder of Dublin.AI, Ireland.
Parsa Ghaffari, founder & CEO of Aylien, Ireland.
Alan Holland, founder & CEO of Keelvar Systems, Ireland.
Alessandro Prest, founder & CTO of LogoGrab, Ireland.
Frank Reeves, founder & CEO of Avvio, Ireland.
Alessio Bonfietti, founder & CEO of MindIT, Italy.
Angelo Sudano, founder & CTO of ICan Robotics, Italy.
Domenico Talia, founder and R&D Director of DtoK Labs, Italy.
Shigeo Hirose, Michele Guarnieri, Paulo Debenest, & Nah Kitano, founders, CEO & Directors of HiBot Corporation, Japan.
Andrejs Vasiljevs, founder and CEO of Tilde, Latvia.
Luis Samahí García González, founder & CEO of QOLbotics, Mexico.
Koen Hindriks & Joachim de Greeff, founders, CEO & COO at Interactive Robotics, the Netherlands.
Maja Rudinac, founder and CEO of Robot Care Systems, the Netherlands.
Jaap van Leeuwen, founder and CEO Blue Ocean Robotics Benelux, the Netherlands.
Rob Brouwer, founder and Director of Operations, Aeronavics, New Zealand.
Philip Solaris, founder and CEO of X-Craft Enterprises, New Zealand.
Dyrkoren Erik, Martin Ludvigsen & Christine Spiten, founders, CEO, CTO & Head of Marketing at BlueEye Robotics, Norway.
Sergii Kornieiev, founder & CEO of BaltRobotics, Poland.
Igor Kuznetsov, founder & CEO of NaviRobot, Russian Federation.
Aleksey Yuzhakov & Oleg Kivokurtsev, founders, CEO & COO of Promobot, Russian Federation.
Junyang Woon, founder & CEO, Infinium Robotics, former Branch Head & Naval Warfare Operations Officer, Singapore.
Jasper Horrell, founder of DeepData, South Africa.
Onno Huyser and Mark van Wyk, founders of FlyH2 Aerospace, South Africa.
Toni Ferrate, founder & CEO of RO-BOTICS, Spain.
José Manuel del Río, founder & CEO of Aisoy Robotics, Spain.
Victor Martin, founder & CEO of Macco Robotics, Spain.
Angel Lis Montesinos, founder & CTO of Neuronalbite, Spain.
Timothy Llewellynn, founder & CEO of nViso, Switzerland.
Francesco Mondada, founder of K-Team, Switzerland.
Jurgen Schmidhuber, Faustino Gomez, Jan Koutník, Jonathan Masci & Bas Steunebrink, founders, President & CEO of Nnaisense, Switzerland.
Satish Ramachandran, founder of AROBOT, United Arab Emirates.
Silas Adekunle, founder & CEO of Reach Robotics, UK.
Steve Allpress, founder & CTO of FiveAI, UK.
John Bishop, founder and Director of Tungsten Centre for Intelligent Data Analytics, UK.
Joel Gibbard and Samantha Payne, founders, CEO & COO of Open Bionics, UK.
Richard Greenhill & Rich Walker, founders & MD of Shadow Robot Company, UK.
Nic Greenway, founder of React AI Ltd (Aiseedo), UK.
Daniel Hulme, founder & CEO of Satalia, UK.
Bradley Kieser, founder & Director of SMS Speedway, UK.
Charlie Muirhead & Tabitha Goldstaub, founders & CEO of CognitionX, UK.
Geoff Pegman, founder & MD of R U Robots, UK.
Demis Hassabis & Mustafa Suleyman, founders, CEO & Head of Applied AI, DeepMind, UK.
Donald Szeto, Thomas Stone & Kenneth Chan, founders, CTO, COO & Head of Engineering of PredictionIO, UK.
Antoine Blondeau, founder & CEO of Sentient Technologies, USA.
Steve Cousins, founder & CEO of Savioke, USA.
Brian Gerkey, founder & CEO of Open Source Robotics, USA.
Ryan Hickman & Soohyun Bae, founders, CEO & CTO of TickTock.AI, USA.
John Hobart, founder & CEO of Coria, USA.
Henry Hu, founder & CEO of Cafe X Technologies, USA.
Zaib Husain, founder and CEO of Makerarm, Inc.
Alfonso Íñiguez, founder & CEO of Swarm Technology, USA.
Kris Kitchen, founder & Chief Data Scientist at Qieon Research, USA.
Justin Lane, founder of Prospecture Simulation, USA.
Gary Marcus, founder & CEO of Geometric Intelligence (acquired by Uber), USA.
Brian Mingus, founder & CTO of Latently, USA.
Mohammad Musa, founder & CEO at Deepen AI, USA.
Elon Musk, founder, CEO & CTO of SpaceX, co-founder & CEO of Tesla Motors, USA.
Rosanna Myers & Dan Corkum, founders, CEO & CTO of Carbon Robotics, USA.
Erik Nieves, founder & CEO of PlusOne Robotics, USA.
Steve Omohundro, founder & President of Possibility Research, USA.
Jeff Orkin, founder & CEO, Giant Otter Technologies, USA.
Greg Phillips, founder & CEO, ThinkIt Data Solutions, USA.
Dan Reuter, founder & CEO of Electric Movement, USA.
Alberto Rizzoli & Simon Edwardsson, founders & CEO of AIPoly, USA.
Dan Rubins, founder & CEO of Legal Robot, USA.
Stuart Russell, founder & VP of Bayesian Logic Inc., USA.
Andrew Schroeder, founder of WeRobotics, USA.
Stanislav Shalunov, founder & CEO of Clostra, USA.
Gabe Sibley & Alex Flint, founders, CEO & CPO of Zippy.ai, USA.
Martin Spencer, founder & CEO of GeckoSystems, USA.
Peter Stone, Mark Ring & Satinder Singh, founders, President/COO, CEO & CTO of Cogitai, USA.
Michael Stuart, founder & CEO of Lucid Holdings, USA.
Madhuri Trivedi, founder & CEO of OrangeHC, USA.
Massimiliano Versace, founder, CEO & President, Neurala Inc, USA.
Reza Zadeh, founder & CEO of Matroid, USA.


Digital Economy Open Letter

An open letter by a team of economists about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact. (Jun 4, 2015)

AI Open Letter