
Westworld Op-Ed: Are Conscious AI Dangerous?

Published: December 6, 2016
Author: Ariel Conn
Evan Rachel Wood in Westworld. Photo Credit: John P. Johnson.



“These violent delights have violent ends.”

With the help of Shakespeare and Michael Crichton, HBO’s Westworld has brought to light some of the concerns about creating advanced artificial intelligence.

If you haven’t seen it already, Westworld is a show in which human-like AIs populate a park designed to look like the American Wild West. Visitors spend huge amounts of money to visit the park and live out Old West adventures, in which they can fight, rape, and kill the AIs. Each time one of the robots “dies,” its body is cleaned up, its memory is wiped, and it starts a new iteration of its script.

The show’s season finale aired Sunday evening, and it certainly went out with a bang – but not to worry, there are no spoilers in this article.

AI Safety Issues in Westworld

Westworld was inspired by Michael Crichton’s 1973 film of the same name, and leave it to Crichton, the writer of Jurassic Park, to create a storyline that would have us questioning how much control we’ll be able to maintain over advanced scientific endeavors. But unlike the original movie, in which the robot is the bad guy, the TV show depicts the robots as the most sympathetic, and even the most human, characters.

Not surprisingly, concerns about the safety of the park show up almost immediately. The park is overseen by one man who can push whatever program updates he wants without running them by anyone for a safety check. The robots show signs of remembering their mistreatment. One of the characters mentions that only a single line of code keeps the robots from being able to harm humans.

These are just some of the problems the show touches on that present real AI safety concerns: a single “bad agent” who uses advanced AI to intentionally harm people; small glitches in the software that turn deadly; and a lack of redundancy and robustness in the code that is meant to keep people safe.

But to really get your brain working, many of the safety and ethics issues that crop up during the show hinge on whether or not the robots are conscious. In fact, the show wholeheartedly delves into one of the hardest questions of all: what is consciousness? On top of that, can humans create a conscious being? If so, can we control it? Do we want to find out?

To consider these questions, I turned to Georgia Tech AI researcher Mark Riedl, whose research focuses on creative AI, and NYU philosopher David Chalmers, who is most famous for his formulation of the “hard problem of consciousness.”

Can AI Feel Pain?

I spoke with Riedl first, asking him about the extent to which a robot would feel pain if it were programmed to do so. “First,” he said, “I do not condone violence against humans, animals, or anthropomorphized robots or AI.” He then explained that humans and animals feel pain as a warning signal to “avoid a particular stimulus.”

For robots, however, “the closest analogy might be what happens in reinforcement learning agents, which engage in trial-and-error learning.” The AI would receive a positive or negative reward for some action and it would adjust its future behavior accordingly. Rather than feeling like pain, Riedl suggests that the negative reward would be more “akin to losing points in a computer game.”
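As a rough sketch of that mechanism (purely illustrative, with invented states, actions, and rewards, not anything from the show or from Riedl): in tabular Q-learning, a negative reward simply lowers the value the agent has stored for the action that produced it, which is exactly the point-keeping Riedl describes rather than anything like pain.

```python
# Minimal tabular Q-learning sketch (hypothetical states/actions/rewards).
# A negative reward lowers the stored value of the action that produced it,
# nudging future choices away from it: bookkeeping, not pain.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
Q = defaultdict(float)                  # Q[(state, action)] -> estimated value

def choose_action(state, actions):
    """Mostly pick the highest-valued action; occasionally explore at random."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    """Standard temporal-difference update toward reward plus discounted future value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```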

“Robots and AI can be programmed to ‘express’ pain in a human-like fashion,” says Riedl, “but it would be an illusion. There is one reason for creating this illusion: for the robot to communicate its internal state to humans in a way that is instantly understandable and invokes empathy.”

Riedl isn’t worried that the AI would feel real pain; if the robot’s memory is completely erased each night, he suggests, it would be as though nothing had happened. However, he does see one possible safety issue here. For reinforcement learning to work properly, the AI needs to take actions that optimize for positive reward. If the robot’s memory isn’t completely erased, if it starts to remember the bad things that happened to it, then it could try to avoid the actions or people that trigger the negative reward.

“In theory,” says Riedl, “these agents can learn to plan ahead to reduce the possibility of receiving negative reward in the most cost-effective way possible. … If robots don’t understand the implications of their actions in terms other than reward gain or loss, this can also mean acting in advance to stop humans from harming them.”
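A toy version of that dynamic (again hypothetical; the two “guests” and their reward values below are invented for illustration) shows why the park’s nightly memory wipe matters: if the agent’s learned values persist across episodes, it quickly learns to avoid whatever triggers the negative reward, whereas resetting them each time means that avoidance never accumulates.

```python
# Hypothetical toy world: one guest rewards the agent, one punishes it.
# With a persistent value table the agent learns to avoid the hostile guest;
# wiping the table each episode (the park's nightly reset) erases that lesson.
import random

REWARDS = {"friendly_guest": +1.0, "hostile_guest": -1.0}  # invented values

def pick_action(Q):
    """Greedy choice with random tie-breaking, plus 10% random exploration."""
    if random.random() < 0.1:
        return random.choice(list(REWARDS))
    best = max(Q.values())
    return random.choice([a for a in REWARDS if Q[a] == best])

def run_episodes(n, wipe_memory):
    Q = dict.fromkeys(REWARDS, 0.0)
    hostile_approaches = 0
    for _ in range(n):
        if wipe_memory:
            Q = dict.fromkeys(REWARDS, 0.0)       # nightly reset: forget everything
        action = pick_action(Q)
        if action == "hostile_guest":
            hostile_approaches += 1
        Q[action] += 0.5 * (REWARDS[action] - Q[action])  # simple value update
    return hostile_approaches

print("hostile-guest approaches, memory wiped:", run_episodes(1000, True))
print("hostile-guest approaches, memory kept :", run_episodes(1000, False))
```

Run repeatedly, the wiped agent blunders into the punishing interaction about half the time, while the persistent one avoids it after its first few bad experiences, which is exactly the avoidance behavior Riedl describes.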

Riedl points out, though, that robots won’t have sufficient capabilities to pose an immediate concern for the foreseeable future. But assuming such robots do arrive, problems with negative rewards could be dangerous for humans. (Possibly even more dangerous, as the show depicts, is if the robots do understand the implications of their actions against humans who have been mistreating them for decades.)

Can AI Be Conscious?

Chalmers sees things a bit differently. “The way I think about consciousness,” says Chalmers, “the way most people think about consciousness – there just doesn’t seem to be any question that these beings are conscious. … They’re presented as having fairly rich emotional lives – that’s presented as feeling pain and thinking thoughts. … They’re not just exhibiting reflexive behavior. They’re thinking about their situations. They’re reasoning.”

“Obviously, they’re sentient,” he adds.

Chalmers suggests that instead of trying to define what makes the robots conscious, we should consider what it is they’re lacking. Most notably, says Chalmers, they lack free will and memory. However, many of us live in routines that we’re unable to break out of. And there have been numerous cases of people with extreme memory problems, but no one thinks that makes it okay to rape or kill them.

“If it is regarded as okay to mistreat the AIs on this show, is it because of some deficit they have or because of something else?” Chalmers asks.

The specific scenarios portrayed in Westworld may not be realistic, because Chalmers believes the bicameral-mind theory is unlikely to lead to consciousness, even in robots. “I think it’s hopeless as a theory,” he says, “even of robot consciousness — or of robot self-consciousness, which seems more what’s intended. It would be so much easier just to program the robots to monitor their own thoughts directly.”

But this still presents risks. “If you had a situation that was as complex and as brain-like as these, would it also be so easily controllable?” asks Chalmers.

In any case, treating robots badly could easily pose a risk to human safety. We risk creating unconscious robots that learn the wrong lessons from negative feedback, or we risk inadvertently (or intentionally, as in the case of Westworld) creating conscious entities who will eventually fight back against their abuse and oppression.

When a host in episode two is asked if she’s “real,” she responds, “If you can’t tell, does it matter?”

These seem like the safest words to live by.

This content was first published at futureoflife.org on December 6, 2016.

