
Russell, Horvitz, and Tegmark on Science Friday: Is AI Safety a Concern?

Published:
April 11, 2015
Author:
Jesse Galef


To anyone reading only certain news articles, it might seem that the top minds in artificial intelligence disagree about whether AI safety is a concern worth studying.

But on Science Friday yesterday, guests Stuart Russell, Eric Horvitz, and Max Tegmark all emphasized how much agreement there was.

Horvitz, head of research at Microsoft, has sometimes been held up as a foil to Bill Gates’ worries about superintelligence. But on the show he made a point of saying that the reported disagreements are overblown:

“Let me say that Bill and I are close, and we recently sat together for quite a while talking about this topic. We came away from that meeting and we both said: You know, given the various stories in the press that put us at different poles of this argument (which is really over-interpretation and amplifications of some words), we both felt like we were in agreement that there needs to be attention focused on these issues. We shouldn’t just march ahead in a carefree manner. These are real interesting and challenging concerns about potential pitfalls. Yet I come away from these discussions being – as people know me – largely optimistic about the outcomes of what machine intelligence – AI – will do for humanity in the end.”

It’s good to see the public conversation moving so quickly past “Are these concerns legitimate?” and shifting toward “How should we handle these legitimate concerns?”

Click here to listen to the full episode of Science Friday.




