Contents
FLI January 2018 Newsletter
Top AI Breakthroughs and Challenges of 2017
To discuss this more, we invited FLI’s Richard Mallah and Chelsea Finn from UC Berkeley to join Ariel for this month’s podcast. They talked about some of the progress they were most excited to see and what they’re looking forward to in the coming year.
Check us out on SoundCloud and iTunes!
Extended Deadline for the 2018 Grants Competition
We’re pleased to announce that we’ve extended the deadline for our 2018 AI Safety Grants Competition by one week. The deadline for initial applications is now February 25. Learn more about the grants and how to apply here.
ICYMI: This Month’s Most Popular Articles
Shared Benefit Principle: AI technologies should benefit and empower as many people as possible.
Continuing our discussion of the 23 Asilomar Principles, experts consider how to ensure that AI doesn’t end up benefiting only the rich, and why avoiding that outcome may require changing our approach to innovation.
By Stuart Russell, Anthony Aguirre, Ariel Conn and Max Tegmark
Stuart Russell and members of the FLI team crafted this response to Paul Scharre’s IEEE article, Why You Shouldn’t Fear Slaughterbots. Scharre is an expert in military affairs and an important contributor to discussions on autonomous weapons; in this case, however, we respectfully disagree with his conclusions.
Citing the growing threats of climate change, increasing tensions between nuclear-armed countries, and a general loss of trust in government institutions, the Bulletin warned that we are “making the world security situation more dangerous than it was a year ago—and as dangerous as it has been since World War II.”
AI safety problems are too important for the discussion to be derailed by status contests like “my issue is better than yours”. This kind of false dichotomy is itself a distraction from the shared goal of ensuring AI has a positive impact on the world, both now and in the future. People who care about the safety of current and future AI systems are natural allies – let’s support each other on the path towards this common goal.
Now is a great time for the AI field to think deeply about value alignment. As Pieter Abbeel said at the end of his keynote, “Once you build really good AI contraptions, how do you make sure they align their value system with our value system? Because at some point, they might be smarter than us, and it might be important that they actually care about what we care about.”
What We’ve Been Up to This Month
Max Tegmark gave a talk on AI safety at the American Museum of Natural History this month. His talk centered on themes from his best-seller, Life 3.0, asking questions like: Will AI create conditions that allow life to flourish as never before, or will it open a Pandora’s box of unintended consequences?
Ariel Conn spoke on a panel during a side event at the World Economic Forum in Davos, Switzerland. The event was hosted by the Global Challenges Foundation as part of their New Shape Prize, which seeks to create new forms of global governance to address catastrophic risks.
Viktoriya Krakovna attended an AI strategy retreat co-run by the Future of Humanity Institute and the Center for Effective Altruism.
Lucas Perry gave a talk at the Cambridge “Don’t Bank on the Bomb” Forum this month, where he presented divestment opportunities, tools, and strategies for individuals and institutions to divest from nuclear weapons-producing companies.
FLI in the News
“A number of technology think tanks have written up their own rules to govern hard AI. At the Future of Life Institute’s (FLI) Asilomar Beneficial AI Conference in 2017, researchers developed 23 principles for the development of responsible AI.”
“His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones from a perspective that combines media theory with science and technology studies. In 2015, he received a $116,000 grant from the Future of Life Institute.”
If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.