FLI May 2019 Newsletter
AI Safety & Ethics: New Podcasts, Principles & More
New Podcast Episodes
with Ashley Llorens and Francesca Rossi
In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let’s build our technology around our visions for the future.
The concept of consciousness is at the forefront of much scientific and philosophical thinking. At the same time, there remains substantial disagreement over what exactly consciousness is and whether it can be fully captured by science or explained away by reductionism. The Qualia Research Institute, a leader in consciousness research, takes consciousness to be something substantial and real, and expects that it can be captured by the language and tools of science and mathematics. In this episode, Lucas spoke with the Qualia Research Institute’s Mike Johnson and Andrés Gómez Emilsson to unpack this viewpoint.
You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. Or listen on SoundCloud, iTunes, Google Play, and Stitcher.
OECD Principles on AI
On May 22, 2019, the Organisation for Economic Co-operation and Development (OECD) released a set of principles on AI, intended to promote AI “that is innovative and trustworthy and that respects human rights and democratic values.” The OECD principles are the world’s first intergovernmental AI policy guidelines, and they have already been adopted by more than 40 countries.
FLI strongly endorses the OECD principles and applauds their prominent calls for AI robustness, security, and safety. We are pleased to note their many similarities to the Asilomar Principles.
For more on why this matters, read Jessica Cussins Newman’s article “3 reasons you should pay attention to the OECD AI principles.”
Recent Articles
by Ariel Conn for the Bulletin of the Atomic Scientists
“Today, there is a very real threat that if the CCW doesn’t act quickly, if these weapons aren’t banned soon, lethal autonomous weapons could become ultra-cheap, easily accessible weapons of mass destruction.”
State of AI: Artificial Intelligence, the Military, and Increasingly Autonomous Weapons
by Kirsten Gronlund
As artificial intelligence works its way into industries like healthcare and finance, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programs and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suit.
These investments appear to mark the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivizes speed over safety and ethics in the development of new technologies, and as these technologies proliferate it offers no long-term advantage to any one player.
Nevertheless, the development of military AI is accelerating. A new report from PAX outlines the current AI arms programs, policies, and positions of seven key players: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea.
We’ve compiled its key points here, but you can also read on for short summaries about the current military approach to AI for each of those countries.
The United States
In 2014, the Department of Defense released its ‘Third Offset Strategy,’ the aim of which, as then-Deputy Secretary of Defense Robert Work described it in 2016, “is to exploit all advances in artificial intelligence and autonomy and insert them into DoD’s battle networks (…).” And in September 2018, the Pentagon committed to spend USD 2 billion over the next five years through the Defense Advanced Research Projects Agency (DARPA) to “develop [the] next wave of AI technologies.”
China
There have been calls from within the Chinese government to avoid an AI arms race. The sentiment is echoed in the private sector, where the chairman of Alibaba has warned that new technology, including machine learning and artificial intelligence, could lead to a third world war. Despite these concerns, China’s leadership is continuing to pursue the use of AI for military purposes.
Russia
The United Kingdom
A 2018 Ministry of Defence report underlines that the MoD is pursuing modernization “in areas like artificial intelligence, machine-learning, man-machine teaming, and automation to deliver the disruptive effects we need in this regard.” The MoD runs various programs related to AI and autonomy, including the Autonomy program, whose activities span algorithm development, artificial intelligence, machine learning, “developing underpinning technologies to enable next generation autonomous military-systems,” and the optimization of human-autonomy teaming.
France
Israel
The Israeli military already deploys weapons with a considerable degree of autonomy, and Israeli use of AI tools in the military is expected to increase rapidly in the near future. The main technical unit of the Israel Defense Forces (IDF), and the engine behind most of its AI developments, is called C4i. Within C4i is the Sigma branch, whose “purpose is to develop, research, and implement the latest in artificial intelligence and advanced software research in order to keep the IDF up to date.”
South Korea
In December 2018, the South Korean Army announced the launch of a research institute focusing on artificial intelligence, called the AI Research and Development Center. The aim is to capitalize on cutting-edge technologies for future combat operations and “turn it into the military’s next-generation combat control tower.” South Korea is also developing new military units, including the Dronebot Jeontudan (“Warrior”) unit, with the aim of developing and deploying unmanned platforms that incorporate advanced autonomy and other cutting-edge capabilities.
What We’ve Been Up to This Month
Jessica Cussins Newman gave a talk at the “Towards an Inclusive Future in AI” workshop organized by AI Commons, Swissnex, and foraus in San Francisco, CA. She also participated in the annual Center for Human-Compatible AI (CHAI) workshop in Asilomar, CA, and participated in and presented at the United Nations AI for Good Summit in Geneva, Switzerland.
Richard Mallah participated in the Partnership on AI’s Workshop on Positive Futures in San Francisco. He also spoke on a panel discussion on ethics in machine learning at the Deep Learning Summit in Boston and participated in NIST’s Federal Engagement in AI Standards Workshop in Gaithersburg, MD.
FLI in the News
FORBES: How To Prevent AI Ethics Councils From Failing