
The AI Debate Must Stay Grounded in Reality

Published: March 21, 2017
Author: a guest blogger


The following article was written by Vincent Conitzer and originally published in Prospect Magazine.

Progress in artificial intelligence has been rapid in recent years. Computer programs are dethroning humans in games ranging from Jeopardy to Go to poker. Self-driving cars are appearing on roads. AI is starting to outperform humans in image and speech recognition.

With all this progress, a host of concerns about AI’s impact on human societies have come to the forefront. How should we design and regulate self-driving cars and similar technologies? Will AI leave large segments of the population unemployed? Will AI have unintended sociological consequences? (Think about algorithms that accurately predict which news articles a person will like, resulting in highly polarised societies, or algorithms that predict whether someone will default on a loan or commit another crime, becoming racially biased due to the input data they are given.)

Will AI be abused by oppressive governments to sniff out and stifle any budding dissent? Should we develop weapons that can act autonomously? And should we perhaps even be concerned that AI will eventually become “superintelligent”—intellectually more capable than human beings in every important way—making us obsolete or even extinct? While this last concern was once purely in the realm of science fiction, notable figures including Elon Musk, Bill Gates, and Stephen Hawking, inspired by the Oxford philosopher Nick Bostrom’s book Superintelligence, have recently argued that it needs to be taken seriously.

These concerns are mostly quite distinct from each other, but they all rely on the premise of technical advances in AI. Actually, in all cases but the last one, even currently demonstrated AI capabilities justify the concern to some extent, but further progress will rapidly exacerbate it. And further progress seems inevitable, both because there do not seem to be any fundamental obstacles to it and because large amounts of resources are being poured into AI research and development. The concerns feed off each other, and a community of people studying the risks of AI is starting to take shape. This includes traditional AI researchers—primarily computer scientists—as well as people from other disciplines: economists studying AI-driven unemployment, legal scholars debating how best to regulate self-driving cars, and so on.

A conference on “Beneficial AI” held in California in January brought a sizeable part of this community together. The topics covered reflected the diversity of concerns and interests. One moment, the discussion centered on which communities are disproportionately affected by their jobs being automated; the next moment, the topic was whether we should make sure that super-intelligent AI has conscious experiences. The mixing of such short- and long-term concerns does not sit well with everyone. Most traditional AI researchers are reluctant to speculate about whether and when we will attain truly human-level AI: current techniques still seem a long way off this, and it is not clear what new insights would be able to close the gap. Most of them would also rather focus on making concrete technical progress than get mired in philosophical debates about the nature of consciousness. At the same time, most of these researchers are willing to take seriously the other concerns, which have a concrete basis in current capabilities.

Is there a risk that speculation about super-intelligence, often sounding more like science fiction than science, will discredit the larger project of focusing on the societally responsible development of real AI? And if so, is it perhaps better to put aside any discussion of super-intelligence for now? While I am quite sceptical of the idea that truly human-level AI will be developed anytime soon, overall I think that the people worried about this deserve a place at the table in these discussions. For one thing, some of the most surprisingly impressive recent technical accomplishments have come from people who are very bullish on what AI can achieve. Even if it turns out that we are still nowhere close to human-level AI, those who imagine that we are could contribute useful insights into what might happen in the medium term.

I think there is value even in thinking about some of the very hard philosophical questions, such as whether AI could ever have subjective experiences: whether there is something it would be like to be a highly advanced AI system. (See also my earlier Prospect article.) Besides casting an interesting new light on some ancient questions, the exercise is likely to inform future societal debates. For example, we may imagine that in the future people will become attached to the highly personalised and anthropomorphised robots that care for them in old age, and demand certain rights for these robots after they pass away. Should such rights be granted? Should such sentiments be avoided?

At the same time, the debate should obviously not exclude or turn off people who genuinely care about the short-term concerns while being averse to speculation about the long term, especially because most real AI researchers fall into this category. Besides contributing solutions to the short-term concerns, their participation is essential to ensure that the longer-term debate stays grounded in reality. Research communities work best when they include people with different views and different sub-interests. And it is hard to imagine a topic for which this is truer than the impact of AI on human societies.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we post op-eds that we believe will help spur discussion within our community. Op-eds do not necessarily represent FLI’s opinions or views.

This content was first published at futureoflife.org on March 21, 2017.

