
Sam Harris TED Talk: Can We Build AI Without Losing Control Over It?

Published:
October 7, 2016
Author:
Tucker Davey


The threat of uncontrolled artificial intelligence, Sam Harris argues in a recently released TED Talk, is one of the most pressing issues of our time. Yet most people “seem unable to marshal an appropriate emotional response to the dangers that lie ahead.”

Harris, a neuroscientist, philosopher, and best-selling author, has thought a lot about this issue. In the talk, he clarifies that it is unlikely that armies of malicious robots will wreak havoc on civilization, as many movies and caricatures portray. He likens the machine-human relationship to the way humans treat ants. “We don’t hate them,” he explains, “but whenever their presence seriously conflicts with one of our goals … we annihilate them without a qualm. The concern is that we will one day build machines that, whether they are conscious or not, could treat us with similar disregard.”

Harris explains that one only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI:

  1. Intelligence is a product of information processing in physical systems.
  2. We will continue to improve our intelligent machines.
  3. We do not stand on the peak of intelligence or anywhere near it.

Humans have already created systems with narrow intelligence that exceeds human performance at specific tasks (such as computers). And since mere matter can give rise to general intelligence (as in the human brain), there is nothing, in principle, preventing advanced general intelligence in machines, which are also made of matter.

But Harris says the third assumption is “the crucial insight” that “makes our situation so precarious.” If machines surpass human intelligence and can improve themselves, they will be more capable than even the smartest humans—in unimaginable ways.

Even if a machine is no smarter than a team of researchers at MIT, “electronic circuits function about a million times faster than biochemical ones,” Harris explains. “So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.”

Harris wonders, “How could we even understand, much less constrain, a mind making this sort of progress?”

Harris also worries that the power of superintelligent AI will be abused, furthering wealth inequality and increasing the risk of war. “This is a winner-take-all scenario,” he explains. Given the speed at which these machines can process information, “to be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.”
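Both of Harris’s round numbers follow from a single multiplication. As a minimal illustration, the Python sketch below applies his assumed million-fold speed advantage (an assumption stated in the talk, not a measured figure) to one week and to six months of machine time.

```python
# Back-of-the-envelope check of the speed-up figures Harris cites.
# Assumption (from the talk): electronic circuits run roughly a
# million times faster than biochemical ones.
SPEEDUP = 1_000_000
WEEKS_PER_YEAR = 52.18  # average number of weeks in a year

def human_equivalent_years(machine_weeks: float) -> float:
    """Years of human-level intellectual work done in `machine_weeks`
    of machine time, under the million-fold speed-up assumption."""
    return machine_weeks * SPEEDUP / WEEKS_PER_YEAR

print(f"One week of machine time   ~ {human_equivalent_years(1):,.0f} human years")
print(f"Six months of machine time ~ {human_equivalent_years(26):,.0f} human years")
```

One week of machine time works out to roughly 19,000 human-years of work, which Harris rounds to 20,000, and six months works out to roughly 500,000 human-years.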

If governments and companies perceive themselves to be in an arms race against one another, they could develop strong incentives to create superintelligent AI first—or attack whoever is on the brink of creating it.

Though some researchers argue that superintelligent AI will not be created for another 50-100 years, Harris points out, “Fifty years is not that much time to meet one of the greatest challenges our species will ever face.”

Harris warns that if his three basic assumptions are correct, “then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.”

 

 Photo credit to Bret Hartman from TED. Illustration credit to Paul Lachine. You can see more of Paul’s illustrations at http://www.paullachine.com/index.php.

This content was first published at futureoflife.org on October 7, 2016.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. Since its founding in 2014, FLI has worked to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks. Find out more about our mission or explore our work.


