
FLI May 2019 Newsletter

Published: 4 June 2019
Author: Revathi Kumar

AI Safety & Ethics: New Podcasts, Principles & More

New Podcast Episodes

FLI Podcast: Applying AI Safety and Ethics Today
with Ashley Llorens and Francesca Rossi

In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let’s build our technology around our visions for the future.

AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

The concept of consciousness is at the forefront of much scientific and philosophical thinking. At the same time, there remains substantial disagreement over what exactly consciousness is and whether it can be fully captured by science or explained away by reductionism. The Qualia Research Institute, a leader in consciousness research, takes consciousness to be something substantial and real, and expects that it can be captured by the language and tools of science and mathematics. In this episode, Lucas spoke with the Qualia Research Institute’s Mike Johnson and Andrés Gómez Emilsson to unpack this viewpoint.


You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. Or listen on SoundCloud, iTunes, Google Play, and Stitcher.

OECD Principles on AI


On May 22, 2019, the Organisation for Economic Co-operation and Development (OECD) released a set of principles on AI, intended to promote AI “that is innovative and trustworthy and that respects human rights and democratic values.” The OECD principles are the world’s first intergovernmental AI policy guidelines, and they have already been adopted by more than 40 countries.

FLI strongly endorses the OECD principles and applauds their prominent calls for AI robustness, security, and safety. We are pleased to note their many similarities to the Asilomar Principles.

For more on why this matters, read Jessica Cussins Newman’s article, “3 reasons you should pay attention to the OECD AI principles.”

Recent Articles

The United Nations and the Future of Warfare
by Ariel Conn for the Bulletin of the Atomic Scientists

“Today, there is a very real threat that if the CCW doesn’t act quickly, if these weapons aren’t banned soon, lethal autonomous weapons could become ultra-cheap, easily accessible weapons of mass destruction.”

State of AI: Artificial Intelligence, the Military, and Increasingly Autonomous Weapons
by Kirsten Gronlund

As artificial intelligence works its way into industries like healthcare and finance, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programs and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suit.


These investments appear to mark the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivizes speed over safety and ethics in the development of new technologies, and as these technologies proliferate it offers no long-term advantage to any one player.

Nevertheless, the development of military AI is accelerating. A new report from PAX outlines the current AI arms programs, policies, and positions of seven key players: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea.

We’ve compiled its key points here, but you can also read on for a short summary of each country’s current military approach to AI.


The United States

In 2014, the Department of Defense released its ‘Third Offset Strategy,’ the aim of which, as described in 2016 by then-Deputy Secretary of Defense Robert Work, “is to exploit all advances in artificial intelligence and autonomy and insert them into DoD’s battle networks (…).” And in September 2018, the Pentagon committed to spending USD 2 billion over the next five years through the Defense Advanced Research Projects Agency (DARPA) to “develop [the] next wave of AI technologies.”

China

There have been calls from within the Chinese government to avoid an AI arms race. The sentiment is echoed in the private sector, where the chairman of Alibaba has said that new technology, including machine learning and artificial intelligence, could lead to a third world war. Despite these concerns, China’s leadership is continuing to pursue the use of AI for military purposes.

Russia

At a conference on AI in March 2018, Defense Minister Sergei Shoigu pushed for increased cooperation between military and civilian scientists in developing AI technology, which he stated was crucial for countering “possible threats to the technological and economic security of Russia.” In January 2019, reports emerged that Russia was developing an autonomous drone, which “will be able to take off, accomplish its mission, and land without human interference,” though “weapons use will require human approval.”

The United Kingdom

A 2018 Ministry of Defence report underlines that the MoD is pursuing modernization “in areas like artificial intelligence, machine-learning, man-machine teaming, and automation to deliver the disruptive effects we need in this regard.” The MoD runs various programs related to AI and autonomy, including the Autonomy program, whose activities include algorithm development, artificial intelligence, machine learning, “developing underpinning technologies to enable next generation autonomous military-systems,” and optimization of human-autonomy teaming.

France

France’s national AI strategy is detailed in the 2018 Villani Report, which states that “the increasing use of AI in some sensitive areas such as […] in Defense (with the question of autonomous weapons) raises a real society-wide debate and implies an analysis of the issue of human responsibility.” It also states that the use of AI will be a necessity in the future to ensure security missions, to maintain power over potential opponents, and to maintain France’s position relative to its allies.

Israel

The Israeli military already deploys weapons with a considerable degree of autonomy, and Israel’s military use of AI tools is expected to increase rapidly in the near future. The main technical unit of the Israel Defense Forces (IDF) and the engine behind most of its AI developments is called C4i. Within C4i is the Sigma branch, whose “purpose is to develop, research, and implement the latest in artificial intelligence and advanced software research in order to keep the IDF up to date.”

South Korea

In December 2018, the South Korean Army announced the launch of a research institute focusing on artificial intelligence, called the AI Research and Development Center. The Army aims to capitalize on cutting-edge technologies for future combat operations and “turn it into the military’s next-generation combat control tower.” South Korea is also developing new military units, including the Dronebot Jeontudan (“Warrior”) unit, with the aim of developing and deploying unmanned platforms that incorporate advanced autonomy and other cutting-edge capabilities.

What We’ve Been Up to This Month


Jessica Cussins Newman gave a talk at the “Towards an Inclusive Future in AI” workshop organized by AI Commons, Swissnex, and foraus in San Francisco, CA. She also took part in the annual Center for Human-Compatible AI (CHAI) workshop in Asilomar, CA, and participated in and presented at the United Nations AI for Good Summit in Geneva, Switzerland.

Richard Mallah participated in the Partnership on AI’s Workshop on Positive Futures in San Francisco. He also spoke on a panel discussion on ethics in machine learning at the Deep Learning Summit in Boston and participated in NIST’s Federal Engagement in AI Standards Workshop in Gaithersburg, MD.

FLI in the News

FORBES: How To Prevent AI Ethics Councils From Failing

We’re new to Instagram! Please check out our profile and give us a follow: @futureoflifeinstitute
