
Too smart for our own good?

Published:
April 28, 2016
Author:
Matt Scherer


Source: Dilbert comic strip by Scott Adams, February 11, 1992


Two stories this past week caught my eye.  The first is Nvidia’s unveiling of its new, AI-focused Tesla P100 computer chip.  Introduced at April’s annual GPU Technology Conference, the P100 is the largest computer chip in history in terms of the number of transistors, “the product of around $2.5 billion worth of research and development at the hands of thousands of computer engineers.”  Nvidia CEO Jen-Hsun Huang said that the chip was designed and dedicated “to accelerating AI; dedicated to accelerating deep learning.”  But the revolutionary potential of the P100 depends on AI engineers coming up with new algorithms that can leverage the full range of the chip’s capabilities.  Absent such advances, Huang says that the P100 would end up being the “world’s most expensive brick.”

The development of the P100 demonstrates, in case we needed a reminder, the immense technical advances that have been made in computing power in recent years and highlights the possibilities those developments raise for AI systems that can be designed to perform (and even learn to perform) an ever-increasing variety of human tasks.  But an essay by Adam Elkus that appeared this week in Slate questions whether we have the ability–or for that matter, will ever have the ability–to program an AI system with human values.

I’ll open with a necessary criticism: much of Elkus’s essay seems like an extended effort to annoy Stuart Russell.  (The most amusing moment in the essay comes when Elkus suggests that Russell, who literally wrote the book on AI, needs to bone up on his AI history.)  Elkus devotes much of his virtual ink to cobbling together out-of-context snippets of a year-old interview that Russell gave to Quanta Magazine and using those snippets to form strawman arguments that he then attributes to Russell.  But despite the strawmen and snide comments, Elkus inadvertently makes some good points on the vexing issue of how to program ethics and morality into AI systems.

Elkus argues that there are few human values that are truly universal, which means that encoding values into an AI system prompts the “question of whose values ought to determine the values of the machine.”  He criticizes the view, which he attributes to Russell, that programmers can “sidestep these social questions” by coming up with algorithms that instruct machines to learn human values by observing human behavior.  Elkus rhetorically asks whether such programming means that “a machine could learn about American race relations by watching the canonical pro-Ku Klux Klan and pro-Confederacy film The Birth of a Nation?”
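To make the stakes of that rhetorical question concrete, here is a deliberately simplistic sketch (my own illustration, not drawn from Russell’s proposals or Elkus’s essay) of a machine that “learns values” by tallying the behavior it observes.  The names and data are hypothetical; the point of the toy is only that whatever the machine ends up valuing is entirely a function of whose behavior it was shown.

from collections import Counter

def learn_preferences(observed_choices):
    # Estimate a crude "value function" as the relative frequency
    # with which the observed humans chose each action.
    counts = Counter(observed_choices)
    total = sum(counts.values())
    return {action: count / total for action, count in counts.items()}

# Two hypothetical observation sets drawn from different communities.
observations_a = ["share", "share", "help", "share", "hoard"]
observations_b = ["hoard", "hoard", "hoard", "share", "help"]

print("Learned from group A:", learn_preferences(observations_a))  # 'share' dominates
print("Learned from group B:", learn_preferences(observations_b))  # 'hoard' dominates

Real value-learning proposals are far more sophisticated than frequency counting, but the dependence on the observed population does not go away; it only becomes harder to see.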

Indeed, Elkus points out that concepts many AI experts take for granted contain implicit ethical choices that many people would dispute.  He notes that “[w]hen Russell talks about ‘tradeoffs’ and ‘value functions,’ he assumes that a machine ought to be an artificial utilitarian.”

The problem, of course, is that not all people agree that utilitarianism–focusing on “the greatest good for the greatest number” by maximizing aggregate utility–is a proper organizing ethical principle.  Firm individualists such as Ayn Rand find the collectivist tinge of utilitarianism repugnant, while religious figures such as Pope John Paul II have criticized utilitarianism for ignoring the will of God.  Harry Truman’s comments on the power of the atomic bomb, the last technological development that led to widespread concerns about existential risk, reveal how prominent religious concerns are even in industrialized societies.  In his post-Nagasaki statement, Truman did not express hope that nuclear technology would be used for the benefit of humanity; instead, he prayed that God would “guide us to use it in His ways and for His purposes.”

AI engineers might scoff at the notion that such factors should be taken into consideration when figuring out how to encode ethics into AI systems, but billions of people would likely disagree.  Indeed, the entire “rationality”-based view of intelligence that pervades the current academic AI literature would likely be questioned by people whose worldviews give primacy to religious or individualistic considerations.

Unfortunately, those people are largely absent from the conferences and symposia where AI safety concerns are aired.  Many people concerned with AI safety–including and perhaps especially Stuart Russell–have expressed dismay that the AI safety ‘world’ is split into various groups that don’t seem to listen to each other very well.  The academic AI people have their conferences, the tech industry people interested in AI have other conferences, the AI and law/ethics/society people have their conferences, and the twain (thrain?) rarely meet.  But Elkus suggests an even deeper problem–even those three groups are largely composed of, to paraphrase Elkus, “Western, well-off, white male cisgender scientists” and professionals.

As a result, even when all three groups come together in one place (which is not often enough), they hardly form a representative cross-section of human values and concerns.  Elkus questions whether such a comparatively privileged group should have “the right to determine how the machine encodes and develops human values, and whether or not everyone ought to have a say in determining the way that AI systems” make ethical decisions.


To end on a more positive note, however, my impression is that Russell and Elkus probably do not disagree on the problems of AI safety as much as Elkus thinks they do–a fact that Elkus himself would have discovered if he had bothered to review some of Russell’s other speeches and writings before writing his essay.  Russell has often made the point in his books and speeches that human programmers face significant hurdles in getting AI systems to both (a) understand human values and (b) “care” about human values.  The fact that Russell spends more time focusing on the latter does not mean he does not recognize the former.  Instead, Russell and Elkus share the fundamental concern of most people who have expressed concerns about AI safety: that we will make AI systems that have immense power but that lack the sense of ethics and morality necessary to know how to use it properly.   In the future, I hope that Elkus will find more constructive and thoughtful ways to address those shared concerns.

This article was originally posted on Matt Scherer’s blog, Law and AI.

This content was first published at futureoflife.org on April 28, 2016.
