What AI Researchers Say About Risks from AI

As the media relentlessly focuses on the concerns of public figures like Elon Musk, Stephen Hawking, and Bill Gates, you may wonder – what do AI researchers themselves think about the risks from AI? In an informative article, Scott Alexander comprehensively reviews the opinions of prominent AI researchers on these risks. He describes his criteria for selecting which researchers to profile as follows:

The criteria for my list: I’m only mentioning the most prestigious researchers, either full professors at good schools with lots of highly-cited papers, or else very-well respected scientists in industry working at big companies with good track records. They have to be involved in AI and machine learning. They have to have multiple strong statements supporting some kind of view about a near-term singularity and/or extreme risk from superintelligent AI. Some will have written papers or books about it; others will have just gone on the record saying they think it’s important and worthy of further study.

Scott’s review turns up some interesting parallels between the views of the concerned researchers and the skeptics:

When I read the articles about skeptics, I see them making two points over and over again. First, we are nowhere near human-level intelligence right now, let alone superintelligence, and there’s no obvious path to get there from here. Second, if you start demanding bans on AI research then you are an idiot.

I agree whole-heartedly with both points. So do the leaders of the AI risk movement.

[…]

The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.

The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

It is encouraging that there is less controversy than one might expect – in a nutshell, AI researchers on both sides agree that hype is unhelpful and that research into potential risks is worthwhile.