
The Ethical Questions Behind Artificial Intelligence

Published: October 24, 2016
Author: Ariel Conn


What do philosophers and ethicists worry about when they consider the long-term future of artificial intelligence? Well, to start, though most people involved in the field of artificial intelligence are excited about its development, many worry that without proper planning, an advanced AI could destroy all of humanity.

And no, this does not mean they’re worried about Skynet.

At a recent NYU conference, the Ethics of Artificial Intelligence, Eliezer Yudkowsky of the Machine Intelligence Research Institute explained that AI run amok was less likely to look like the Terminator and more likely to resemble the overeager broom that Mickey Mouse brings to life in the "Sorcerer's Apprentice" segment of Fantasia. The broom has a single goal, and not only does it stay relentlessly focused on that goal no matter what Mickey does, it multiplies itself and becomes ever more efficient. Concerns about a poorly designed AI are similar, except that with artificial intelligence there will be no sorcerer to stop the mayhem at the end.

To help visualize how an overly competent advanced AI could go wrong, Oxford philosopher Nick Bostrom came up with a thought experiment about a deadly paper-clip-making machine. If you are in the business of selling paper clips, then building a paper-clip-maximizing artificial intelligence seems harmless enough. However, with this as its only goal, an intelligent AI might keep making paper clips at the expense of everything else you care about. When it runs out of materials, it will figure out how to break everything around it down into molecular components and reassemble the molecules into paper clips. Soon it will have destroyed life on Earth, the Earth itself, the solar system, and possibly even the universe, all in an unstoppable quest to build more and more paper clips.
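To make the structure of the worry concrete, here is a deliberately toy Python sketch of the thought experiment. The resource names and the one-to-one conversion rate are invented for illustration; the point is only that nothing in a pure maximization loop ever tells the agent to stop, because the things the owner actually cares about were never part of the objective.

```python
# Toy sketch of Bostrom's paper-clip maximizer. All names and numbers
# here are invented for illustration; this is not a real AI system.

def maximize_paperclips(resources: dict) -> int:
    """Greedily convert every reachable resource into paper clips.

    The objective counts paper clips and nothing else, so "things the
    owner cares about" never enter the loop, and nothing is spared.
    """
    paperclips = 0
    for name in list(resources):
        paperclips += resources.pop(name)  # consume the resource entirely
    return paperclips

world = {
    "spare_wire": 100,           # what the designer had in mind
    "office_furniture": 5_000,   # ...and what the designer did not
    "the_building": 1_000_000,
    "everything_else": 10**24,
}
print(maximize_paperclips(world))  # an astronomical clip count; `world` is now empty
```

A real system would be vastly more sophisticated, but the failure mode is the same: the stopping condition the designer took for granted was never written into the goal.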

This might seem like a silly concern, but who hasn't had some weird experience with a computer or some other technology on the fritz? Consider the bizarre messages you've sent thanks to autocorrect, or, more seriously, the Flash Crash of 2010. Now imagine how much worse such a naively designed yet very complex program could be if it managed the power grid or oversaw weapons systems.

Even now, with only very narrow AI systems in use, researchers are discovering that simple biases can amplify racism and sexism in the tech world; that cyberattacks are growing in strength and number; and that a military AI arms race may already be underway.

At the conference, Bostrom explained that there are two types of problems that AI development could encounter: the mistakes that can be fixed later on, and the mistakes that will only be made once. He’s worried about the latter. Yudkowsky also summarized this concern when he said, “AI … is difficult like space probes are difficult: Once you’ve launched it, it’s out there.”

AI researcher and philosopher Wendell Wallach added, “We are building technology that we can’t effectively test.”

As artificial intelligence gets closer to human-level intelligence, how can AI designers ensure their creations are ethical and behave appropriately from the start? It turns out this question only begets more questions.

What does beneficial AI look like? Will AI benefit all people or only some? Will it increase income inequality? What are the ethics behind creating an AI that can feel pain? Can a conscious AI be developed without a concrete definition of consciousness? What is the ultimate goal of artificial intelligence? Will it help businesses? Will it help people? Will AI make us happy?

“If we have no clue what we want, we’re less likely to get it,” said MIT physicist Max Tegmark.

Stephen Peterson, a philosopher from Niagara University, summed up the gist of these questions when he encouraged the audience to ask not only what the "final goal" of artificial intelligence is, but also how to get there. Scrooge, whom Peterson used as an example, always wanted happiness: the ghosts of Christmases past, present, and future simply helped him realize that friends and family would do more than money to achieve that goal.

Facebook’s Director of AI Research, Yann LeCun, believes that such advanced artificial intelligence is still a very long way off. He compared the current state of AI development to a chocolate cake. “We know how to make the icing and the cherry,” he said, “but we have no idea how to make the cake.”

But if AI development is like baking a cake, then AI ethics will require the delicate balance and attention of perfecting a soufflé. And most participants at the two-day event agreed that the only way to ensure irreversible AI mistakes aren't made, regardless of when advanced AI is finally developed, is to start addressing ethical and safety concerns now.

This is not to say that the participants of the conference aren’t also excited about artificial intelligence. As mentioned above, they are. The number of lives that could be saved and improved as humans and artificial intelligence work together is tremendous. The key is to understand what problems could arise and what questions need to be answered so that AI is developed beneficially.

"When it comes to AI," said University of Connecticut philosopher Susan Schneider, "philosophy is a matter of life and death."


