When AI Journalism Goes Bad

Published: April 26, 2016
Author: Ariel Conn
A rebuttal to bad AI journalism

Slate is currently running a feature called “Future Tense,” which claims to be the “citizen’s guide to the future.” Two of their recent articles, however, are full of inaccuracies about AI safety and the researchers studying it. While this is disappointing, it also represents a good opportunity to clear up some misconceptions about why AI safety research is necessary.

The first contested article was “Let Artificial Intelligence Evolve,” by Michael Chorost, which displays a poor understanding of the issues surrounding the evolution of artificial intelligence. The second, “How to be Good,” by Adam Elkus, got some of the concerns about developing safe AI correct, but, in the process, did a great disservice to one of today’s most prominent AI safety researchers, as well as to scientific research in general.

We do not know if AI will evolve safely

In his article, Chorost defends the idea of simply letting artificial intelligence evolve, without interference from researchers worried about AI safety. Chorost first considers an example from Nick Bostrom’s book, Superintelligence, in which a superintelligent system might tile the Earth with some undesirable product, thus eliminating all biological life. Chorost argues this is impossible because “a superintelligent mind would need time and resources to invent humanity-destroying technologies.” Of course it would. The concern is that a superintelligent system, being smarter than us, would be able to achieve such goals without us realizing what it was up to. How? We don’t know. This is one of the reasons it’s so important to study AI safety now.

It’s quite probable that a superintelligent system would not attempt such a feat, but at the moment, no one can guarantee that. We don’t know yet how a superintelligent AI will behave. There’s no reason to expect a superintelligent system to “think” like humans do, yet somehow we need to try to anticipate what an advanced AI will do. We can’t just hope that advanced AI systems will evolve compatibly with human life: we need to do research now to try to ensure compatibility.

Chorost then goes on to claim that a superintelligent AI won’t tile the Earth with some undesirable object because it won’t want to. He says, “Until an A.I. has feelings, it’s going to be unable to want to do anything at all, let alone act counter to humanity’s interests and fight off human resistance. Wanting is essential to any kind of independent action.” This represents misplaced anthropomorphization and a misunderstanding of programmed goals. What an AI wants to do depends on what it is programmed to do. Microsoft Office doesn’t want me to spell properly, yet it marks every misspelled word because that’s what it was programmed to do. And that’s just ordinary software, not an advanced, superintelligent system, which would be vastly more complex.
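As a trivial, purely illustrative sketch (not how any real spell checker works, and the tiny dictionary below is made up), a few lines of code can “mark misspelled words” without wanting anything at all:

```python
# A toy spell flagger: it "marks misspelled words" only because that is
# what it was written to do -- no feelings or desires are involved.
KNOWN_WORDS = {"the", "robot", "follows", "its", "goal"}  # hypothetical tiny dictionary

def flag_misspellings(text):
    """Return the words in `text` that are not in the dictionary."""
    return [word for word in text.lower().split() if word not in KNOWN_WORDS]

print(flag_misspellings("The robot folows its goal"))  # ['folows']
```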

If a robot is given the task of following a path to reach some destination, but is programmed to recognize that reaching the destination is more important than sticking to the path, then when it encounters an obstacle, it will find another route in order to achieve its primary objective. This isn’t because it has an emotional attachment to reaching its destination; rather, that’s what it was programmed to do. AlphaGo doesn’t want to beat the world’s top Go player: it has simply been programmed to win at Go. The list of examples of systems pursuing goals goes on and on, and none of it has anything to do with how (or whether) the system feels.
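Here is a minimal sketch of that kind of goal-directed behavior (the grid, start, and goal are invented for this example, not anyone’s actual robot controller): a simple breadth-first search that routes around whatever obstacles happen to be in the way.

```python
from collections import deque

def find_route(grid, start, goal):
    """Breadth-first search on a grid, where 0 is a free cell and 1 is an obstacle.
    The "robot" reroutes around obstacles not because it cares about the destination,
    but because searching for a path is all it is programmed to do."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in visited:
                visited.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None  # no route exists

# An obstacle blocks the direct path, so the search finds a detour.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 1, 0]]
print(find_route(grid, start=(0, 0), goal=(2, 2)))
```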

Chorost continues this argument by claiming: “And the minute an A.I. wants anything, it will live in a universe with rewards and punishments—including punishments from us for behaving badly. In order to survive in a world dominated by humans, a nascent A.I. will have to develop a human-like moral sense that certain things are right and others are wrong.” Unless it’s smart enough to trick us into thinking it’s doing what we want while doing something completely different without our realizing it. Any child knows that one of the best ways to not get in trouble is to not get caught. Why would we think a superintelligent system couldn’t learn the same lesson? A punishment might simply antagonize it or teach it to deceive us. There’s also the chance that a superintelligent agent will take actions whose ramifications are too complex for us to understand; we can’t punish an agent if we don’t realize that what it’s doing is harmful.

The article then considers that for a superintelligent system to want something in the way that biological entities want something, it can’t be made purely with electronics. The reasoning is that since humans are biochemical in nature, if we want to create a superintelligent system with human wants and needs, that system must be made of similar stuff. Specifically, Chorost says, “To get a system that has sensations, you would have to let it recapitulate the evolutionary process in which sensations became valuable.”

First, it’s not clear why we need a superintelligent system that exhibits sensations, nor is there any reason that should be a goal of advanced AI. Chorost argues that we need this because it’s the only way a system can evolve to be moral, but his arguments seem limited to the idea that for a system to be superintelligent, it must be human-like.

Yet consider the analogy of planes to birds. Planes are essentially electronics and metal – none of the biochemistry of a bird – yet they can fly higher, faster, longer, and farther than any bird. And while collisions between birds and planes can damage a plane, they’re a lot more damaging to the bird. Though planes sit at the “dumber” end of the AI spectrum, compared to birds they could be considered “superflying” systems. There’s no reason to expect a superintelligent system to be any more similar to humans than planes are to birds.

Finally, Chorost concludes the article by arguing that history has shown that as humanity has evolved, it has become less and less violent. He argues, “A.I.s will have to step on the escalator of reason just like humans have, because they will need to bargain for goods in a human-dominated economy and they will face human resistance to bad behavior.” However, even if this is a completely accurate prediction, he doesn’t explain how we survive a superintelligent system as it transitions from its early violent stages to the more advanced social understanding we have today.

Again, it’s important to keep in mind that perhaps as AI evolves, everything truly will go smoothly, but we don’t know for certain that’s the case. As long as there are unknowns about the future of AI, we need beneficial AI research.

This leads to the problematic second article by Elkus. The premise of his article is reasonable: he believes it will be difficult to teach human values to an AI, given that human values aren’t consistent across all societies. However, his shoddy research and poor understanding of the field turn this article into an example of a dangerous and damaging type of science journalism, harmful both to AI and to science in general.

Bad AI journalism can ruin the science

Elkus looks at a single interview that AI researcher Stuart Russell gave to Quanta Magazine. He then uses snippets of that interview, taken out of context, as his basis for arguing that AI researchers are not properly addressing concerns about developing AI with human-aligned values. He criticizes Russell for only focusing on the technical side of robotics values, saying, “The question is not whether machines can be made to obey human values but which humans ought to decide those values.” On the contrary, both are important questions that must be asked, and Russell asks both questions in all of his published talks. The values a robot takes on will have to be decided by societies, government officials, policy makers, the robot’s owners, etc. Russell argues that the learning process should involve the entire human race, to the extent possible, both now and throughout history. In this talk he gave at CERN in January of this year, Russell clearly enunciates that the “obvious difficulties” of value alignment include the fact that “values differ across individuals and cultures.” Elkus essentially fabricates a position that Russell does not take in order to provide a line of attack.

Elkus also argues that Russell needs to “brush up on his A.I. History” and learn from failed research in the past, without realizing that those lessons are already incorporated into Russell’s research (and apparently without realizing that Russell is co-author of the seminal textbook on Artificial Intelligence, which, more than 20 years later, remains the most influential and fundamental text on AI, and is viewed by AI history experts such as Nils Nilsson as perhaps the authoritative source on much of the field’s history). He also misunderstands the objective of having a robot learn about human values from something like movies or books. Elkus inaccurately suggests that the AI would learn only from one movie, which is obviously problematic if the AI only “watches” the silent, racist movie Birth of a Nation. Instead, the AI could look at all movies. Then it could look at all criticisms and reviews of each movie, as well as how public reactions to the movies change over the years. This is just one example of how an AI could learn values, but certainly not the only one.
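To make the idea concrete, here is a purely illustrative sketch (the films, scores, and weighting scheme below are all invented for this example; actual value-learning research is far more sophisticated) of how an AI might combine signals from many movies, their reviews, and shifting public reactions, rather than relying on any single film:

```python
from statistics import mean

# Hypothetical data: each film carries a crude "approval" score for some depicted
# behavior, plus critic reviews and public-reaction scores gathered over the years.
# All numbers are invented for illustration.
films = [
    {"title": "Film A (1915)", "depicted_approval": 0.9,
     "reviews": [0.2, 0.1], "public_reaction_by_year": {1915: 0.8, 2015: 0.05}},
    {"title": "Film B (1993)", "depicted_approval": 0.1,
     "reviews": [0.7, 0.8], "public_reaction_by_year": {1993: 0.6, 2015: 0.7}},
]

def aggregate_value_signal(films, current_year=2015):
    """Blend what films depict with how critics and the public have judged them,
    weighting recent public reactions more heavily than older ones."""
    signals = []
    for film in films:
        review_score = mean(film["reviews"])
        # Weight each year's public reaction by how recent it is.
        weighted = [(score, 1.0 / (1 + current_year - year))
                    for year, score in film["public_reaction_by_year"].items()]
        total_weight = sum(w for _, w in weighted)
        reaction_score = sum(s * w for s, w in weighted) / total_weight
        # The depiction itself counts far less than how it has been received.
        signals.append(0.2 * film["depicted_approval"]
                       + 0.4 * review_score + 0.4 * reaction_score)
    return mean(signals)

print(aggregate_value_signal(films))
```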

Finally, Elkus suggests that Russell, as a “Western, well-off, white male cisgender scientist,” has no right to be working on the problem of ensuring that machines respect human values. For the sake of civil discourse, we will ignore the ad hominem nature of this argument and assume that it is merely a recommendation to draw on the expertise of multiple disciplines and viewpoints. Yet a simple Google search would reveal that not only is Russell one of the fiercest advocates for ensuring we keep AI safe and beneficial, but he is an equally strong advocate for bringing together a broad coalition of researchers and the broadest possible range of people to tackle the question of human values. In this talk at the World Economic Forum in 2015, Russell predicted that “in the future, moral philosophy will be a key industry sector,” and he suggests that machines will need to “engage in an extended conversation with the human race” to learn about human values.

Two days after Elkus’s article went live, Slate published an interview with Russell, written by another author, that does a reasonable job of explaining Russell’s research and his concerns about AI safety. Such follow-ups are rare, however: scientists seldom have a chance to defend themselves, and even when they can rebut an article, the seeds of doubt have already been planted in the public’s mind.

From the perspective of beneficial AI research, articles like Elkus’s do more harm than good. Elkus describes an important problem that must be solved to achieve safe AI, but portrays one of the top AI safety researchers as someone who doesn’t know what he’s doing. This unnecessarily increases fears about the development of artificial intelligence, making researchers’ jobs that much more difficult. More generally, this type of journalism can be damaging not only to the researcher in question, but also to the overall field. If the general public develops a distaste for some scientific pursuit, then raising the money necessary to perform the research becomes that much more difficult.

For the sake of good science, journalists must maintain a higher standard and do their own due diligence when researching a particular topic or scientist: when it comes to science, there is most definitely such a thing as bad press.

This content was first published at futureoflife.org on April 26, 2016.
