OECD Principles on AI
On May 22, 2019, the Organisation for Economic Co-operation and Development (OECD) released a set of principles on AI, intended to promote AI “that is innovative and trustworthy and that respects human rights and democratic values.” The OECD principles are the world’s first intergovernmental AI policy guidelines, and they have already been adopted by more than 40 countries.
FLI strongly endorses the OECD principles and applauds their prominent calls for AI robustness, security, and safety. We are pleased to note their many similarities to the Asilomar Principles.
For more on why this matters, read Jessica Cussins Newman’s article, “3 Reasons You Should Pay Attention to the OECD AI Principles.”
Growing military investments in AI appear to mark the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivizes speed over safety and ethics in the development of new technologies, and as these technologies proliferate, it offers no long-term advantage to any one player.
Nevertheless, the development of military AI is accelerating. A new report from PAX outlines the current military AI programs, policies, and positions of seven key players: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea.
We’ve compiled its key points here, but you can also read on for short summaries of each country’s current military approach to AI.
The United States
In 2014, the Department of Defense released its ‘Third Offset Strategy,’ the aim of which, as described in 2016 by the then-Deputy Secretary of Defense, “is to exploit all advances in artificial intelligence and autonomy and insert them into DoD’s battle networks (…).” And in September 2018, the Pentagon committed to spending USD 2 billion over the following five years through the Defense Advanced Research Projects Agency (DARPA) to “develop [the] next wave of AI technologies.”
China
There have been calls from within the Chinese government to avoid an AI arms race. The sentiment is echoed in the private sector, where the chairman of Alibaba has said that new technology, including machine learning and artificial intelligence, could lead to a third world war. Despite these concerns, China’s leadership continues to pursue the use of AI for military purposes.
The United Kingdom
A 2018 Ministry of Defence report underlines that the MoD is pursuing modernization “in areas like artificial intelligence, machine-learning, man-machine teaming, and automation to deliver the disruptive effects we need in this regard.” The MoD has various programs related to AI and autonomy, including the Autonomy program. Activities in this program include algorithm development, artificial intelligence, machine learning, “developing underpinning technologies to enable next generation autonomous military-systems,” and the optimization of human-autonomy teaming.
Israel
The Israeli military already deploys weapons with a considerable degree of autonomy, and Israeli use of AI tools in the military is expected to increase rapidly in the near future. The main technical unit of the Israel Defense Forces (IDF), and the engine behind most of its AI developments, is called C4i. Within C4i is the Sigma branch, whose “purpose is to develop, research, and implement the latest in artificial intelligence and advanced software research in order to keep the IDF up to date.”
South Korea
In December 2018, the South Korean Army announced the launch of a research institute focusing on artificial intelligence, called the AI Research and Development Center. The aim is to capitalize on cutting-edge technologies for future combat operations and “turn it into the military’s next-generation combat control tower.” South Korea is also developing new military units, including the Dronebot Jeontudan (“Warrior”) unit, with the aim of developing and deploying unmanned platforms that incorporate advanced autonomy and other cutting-edge capabilities.
What We’ve Been Up to This Month
Jessica Cussins Newman gave a talk at the “Towards an Inclusive Future in AI” workshop organized by AI Commons, Swissnex, and foraus in San Francisco, CA. She also participated in the annual Center for Human-Compatible AI (CHAI) workshop in Asilomar, CA, and participated in and presented at the United Nations AI for Good Summit in Geneva, Switzerland.
Richard Mallah participated in the Partnership on AI’s Workshop on Positive Futures in San Francisco. He also spoke on a panel discussion on ethics in machine learning at the Deep Learning Summit in Boston and participated in NIST’s Federal Engagement in AI Standards Workshop in Gaithersburg, MD.