FLI January, 2018 Newsletter

Top AI Breakthroughs and Challenges of 2017

AlphaZero, progress in “metalearning,” the role of AI in fake news, the difficulty of developing fair machine learning — 2017 was another year of big breakthroughs and big challenges for AI researchers!

To discuss this more, we invited FLI’s Richard Mallah and Chelsea Finn from UC Berkeley to join Ariel for this month’s podcast. They talked about some of the progress they were most excited to see and what they’re looking forward to in the coming year.

Listen to the podcast here.

Check us out on SoundCloud and iTunes!

Extended Deadline for the 2018 Grants Competition

We’re pleased to announce that we’ve extended the deadline for our 2018 AI Safety Grants Competition by one week. The deadline for initial applications is now February 25. Learn more about the grants and how to apply here.

ICYMI: This Month’s Most Popular Articles

Shared Benefit Principle: AI technologies should benefit and empower as many people as possible.

Continuing our discussion of the 23 Asilomar Principles, experts consider how we can ensure that AI doesn't end up benefiting only the rich. Avoiding that outcome may require changing how we approach innovation.

Why You Should Fear ‘Slaughterbots’—A Response
By Stuart Russell, Anthony Aguirre, Ariel Conn, and Max Tegmark

In response to Paul Scharre's IEEE article, "Why You Shouldn't Fear Slaughterbots," Stuart Russell and members of the FLI team wrote this rebuttal. Scharre is an expert in military affairs and an important contributor to discussions on autonomous weapons, but in this case we respectfully disagree with his conclusions.

Citing the growing threats of climate change, increasing tensions between nuclear-armed countries, and a general loss of trust in government institutions, the Bulletin of the Atomic Scientists warned that we are "making the world security situation more dangerous than it was a year ago—and as dangerous as it has been since World War II."

AI safety problems are too important for the discussion to be derailed by status contests like "my issue is better than yours." This kind of false dichotomy is itself a distraction from the shared goal of ensuring AI has a positive impact on the world, both now and in the future. People who care about the safety of current and future AI systems are natural allies – let's support each other on the path towards this common goal.

By Viktoriya Krakovna

Now is a great time for the AI field to think deeply about value alignment. As Pieter Abbeel said at the end of his keynote, “Once you build really good AI contraptions, how do you make sure they align their value system with our value system? Because at some point, they might be smarter than us, and it might be important that they actually care about what we care about.”

What We’ve Been Up to This Month

Max Tegmark gave a talk on AI safety at the American Museum of Natural History this month. His talk centered on themes from his best-seller, Life 3.0, asking questions like: Will AI create conditions that allow life to flourish like never before, or will it open a Pandora's box of unintended consequences?

Ariel Conn spoke on a panel during a side event at the World Economic Forum in Davos, Switzerland. The event was hosted by the Global Challenges Foundation as part of their New Shape Prize, which solicits new models of global governance for addressing catastrophic risks.

Viktoriya Krakovna attended an AI strategy retreat co-run by the Future of Humanity Institute and the Center for Effective Altruism.

Lucas Perry gave a talk at the Cambridge "Don't Bank on the Bomb" Forum this month, where he presented divestment opportunities, tools, and strategies that individuals and institutions can use to divest from companies that produce nuclear weapons.

FLI in the News

“The drones could be used in a similar fashion as depicted in ‘Slaughterbots,’ a short film made by the Future of Life Institute and University of California at Berkeley professor Stuart Russell. The film depicts a dystopian future where killer drone swarms in the hands of unknown criminals terrorize society.”
“The document, drafted by the Future of Life Institute, said scientists should seek to head off risks that could wipe out mankind.”
ELECTRONICS 360: Strong AI Principles Now — Less Dystopia Later
A number of technology think tanks have written up their own rules to govern strong AI. At the Future of Life Institute's (FLI) Beneficial AI conference at Asilomar in 2017, researchers developed 23 principles for the development of responsible AI.
Media Studies Faculty Member Peter Asaro Named a Finalist in 2017 World Technology Awards
“His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones from a perspective that combines media theory with science and technology studies. In 2015, he received a $116,000 grant from the Future of Life Institute.”

If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.