
What AI Researchers Say About Risks from AI

Published:
May 24, 2015
Author:
Viktoriya Krakovna


As the media relentlessly focuses on the concerns of public figures like Elon Musk, Stephen Hawking and Bill Gates, you may wonder – what do AI researchers themselves think about the risks from AI? In his informative article, Scott Alexander comprehensively reviews the opinions of prominent AI researchers on these risks. He describes his criteria for selecting researchers to profile as follows:

The criteria for my list: I’m only mentioning the most prestigious researchers, either full professors at good schools with lots of highly-cited papers, or else very-well respected scientists in industry working at big companies with good track records. They have to be involved in AI and machine learning. They have to have multiple strong statements supporting some kind of view about a near-term singularity and/or extreme risk from superintelligent AI. Some will have written papers or books about it; others will have just gone on the record saying they think it’s important and worthy of further study.

Scott’s review turns up some interesting parallels between the views of the concerned researchers and the skeptics:

When I read the articles about skeptics, I see them making two points over and over again. First, we are nowhere near human-level intelligence right now, let alone superintelligence, and there’s no obvious path to get there from here. Second, if you start demanding bans on AI research then you are an idiot.

I agree whole-heartedly with both points. So do the leaders of the AI risk movement.

[…]

The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.

The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

It’s encouraging that there is less controversy than one might expect – in a nutshell, AI researchers agree that hype is bad and research into potential risks is good.


