
FLI January 2018 Newsletter

Published: February 3, 2018
Author: Revathi Kumar




Top AI Breakthroughs and Challenges of 2017

AlphaZero, progress in “metalearning,” the role of AI in fake news, the difficulty of developing fair machine learning — 2017 was another year of big breakthroughs and big challenges for AI researchers!

To discuss this more, we invited FLI’s Richard Mallah and Chelsea Finn from UC Berkeley to join Ariel for this month’s podcast. They talked about some of the progress they were most excited to see and what they’re looking forward to in the coming year.

Listen to the podcast here.

Check us out on SoundCloud and iTunes!




Extended Deadline for the 2018 Grants Competition

We’re pleased to announce that we’ve extended the deadline for our 2018 AI Safety Grants Competition by one week. The deadline for initial applications is now February 25. Learn more about the grants and how to apply here.


ICYMI: This Month’s Most Popular Articles




Shared Benefit Principle: AI technologies should benefit and empower as many people as possible.

Continuing our discussion of the 23 Asilomar Principles, experts consider how we can ensure that AI doesn't end up benefiting only the rich, and why avoiding that outcome may require changing our approach to innovation.





Why You Should Fear ‘Slaughterbots’—A Response
By Stuart Russell, Anthony Aguirre, Ariel Conn and Max Tegmark

In response to Paul Scharre’s IEEE article, Why You Shouldn’t Fear Slaughterbots, Stuart Russell and members of the FLI team crafted this response. Scharre is an expert in military affairs and an important contributor to discussions on autonomous weapons. In this case, however, we respectfully disagree with his opinions.





This month, the Bulletin of the Atomic Scientists moved the Doomsday Clock to two minutes to midnight. Citing the growing threats of climate change, increasing tensions between nuclear-armed countries, and a general loss of trust in government institutions, the Bulletin warned that we are “making the world security situation more dangerous than it was a year ago—and as dangerous as it has been since World War II.”





AI safety problems are too important for the discussion to be derailed by status contests like “my issue is better than yours”. This kind of false dichotomy is itself a distraction from the shared goal of ensuring AI has a positive impact on the world, both now and in the future. People who care about the safety of current and future AI systems are natural allies – let’s support each other on the path towards this common goal.





By Viktoriya Krakovna

Now is a great time for the AI field to think deeply about value alignment. As Pieter Abbeel said at the end of his keynote, “Once you build really good AI contraptions, how do you make sure they align their value system with our value system? Because at some point, they might be smarter than us, and it might be important that they actually care about what we care about.”


What We’ve Been Up to This Month


Max Tegmark gave a talk on AI safety at the American Museum of Natural History this month. His talk centered on themes from his best-seller, Life 3.0, asking questions like: Will AI create conditions that allow life to flourish as never before, or open a Pandora’s box of unintended consequences?

Ariel Conn spoke on a panel during a side event at the World Economic Forum in Davos, Switzerland. The event was hosted by the Global Challenges Foundation as part of their New Shape Prize, which seeks to create new forms of global governance to address catastrophic risks.

Viktoriya Krakovna attended an AI strategy retreat co-run by the Future of Humanity Institute and the Center for Effective Altruism.

Lucas Perry gave a talk at the Cambridge “Don’t Bank on the Bomb” Forum this month, where he presented divestment opportunities, tools, and strategies for individuals and institutions to divest from nuclear weapons-producing companies.

FLI in the News

“The drones could be used in a similar fashion as depicted in ‘Slaughterbots,’ a short film made by the Future of Life Institute and University of California at Berkeley professor Stuart Russell. The film depicts a dystopian future where killer drone swarms in the hands of unknown criminals terrorize society.”

“The document, drafted by the Future of Life Institute, said scientists should seek to head off risks that could wipe out mankind.”
ELECTRONICS 360: Strong AI Principles Now — Less Dystopia Later
“A number of technology think tanks have written up their own rules to govern hard AI. At the Future of Life Institute’s (FLI) Asilomar Beneficial AI Conference in 2017, researchers developed 23 principles for the development of responsible AI.”
Media Studies Faculty Member Peter Asaro Named a Finalist in 2017 World Technology Awards
“His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones from a perspective that combines media theory with science and technology studies. In 2015, he received a $116,000 grant from the Future of Life Institute.”


If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.
