
FLI January, 2017 Newsletter

Published: 2 February, 2017
Author: admin


The Next Step to Ensuring Beneficial AI

The Asilomar Beneficial AI Principles

Two years ago, after an exciting conference in Puerto Rico that included many of the top minds in AI, we produced two open letters — one on beneficial AI and one on autonomous weapons — which were signed and supported by tens of thousands of people. But that was just one step along the path to creating artificial intelligence that will benefit us all.

This month, we brought together even more AI researchers, entrepreneurs, and thought leaders for our second Beneficial AI Conference, held in Asilomar, California (see videos below). Speakers and panelists discussed the future of AI, economic impacts, legal issues, ethics, and more. And during breakout sessions, groups gathered to discuss what basic principles we could all agree on that could help shape a future of beneficial AI.

As we expressed in a recent post about the process involved in creating the Asilomar Principles:

“We, the organizers, found it extraordinarily inspiring to be a part of the BAI 2017 conference, the Future of Life Institute’s second conference on the future of artificial intelligence. Along with being a gathering of endlessly accomplished and interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best.

“We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years.”

If you haven’t had a chance to review the Principles yet, we encourage you to do so now, and consider joining the thousands of other researchers and concerned citizens who have already signed.

For more in-depth discussion about the Principles, we interviewed Anca Dragan, Yoshua Bengio, Kay Firth-Butterfield, Guruduth Banavar, Francesca Rossi, Toby Walsh, Stefano Ermon, Dan Weld, and Roman Yampolskiy.


Sampling of Asilomar videos on YouTube so far




Superintelligence: Science or Fiction?

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss what likely outcomes might be if we succeed in building human-level AI. Moderated by Max Tegmark.




Interactions between the AI Control Problem and the Governance Problem

Nick Bostrom explores the likely outcomes of human-level AI and the challenges of governing it.



Creating Human-Level AI

AI pioneer Yoshua Bengio explores paths forward to human-level artificial intelligence.



AI and the Economy

Economist Erik Brynjolfsson explores how we can grow our prosperity through automation without leaving people lacking income and meaning.



Public Risk Management for AI: The Path Forward

Law scholar Matt Scherer explores ways to mitigate the risks AI poses to the public.


This is just a sampling of videos from the conference; many more will be uploaded in the coming days. Please visit and follow our YouTube channel for updates as we add more.

Don’t forget to follow us on SoundCloud and iTunes!

Podcast: Top AI Breakthroughs of 2016, with Ian Goodfellow and Richard Mallah

2016 saw some significant AI developments. To talk about the AI progress of the last year, we turned to Richard Mallah and Ian Goodfellow. Richard is the Director of AI Projects at FLI, a senior advisor to multiple AI companies, and the creator of the highest-rated enterprise text analytics platform. Ian is a research scientist at OpenAI, the lead author of a deep learning textbook, and the inventor of Generative Adversarial Networks. Listen to the podcast here or review the transcript here.
