
Think-tank dismisses leading AI researchers as Luddites

Published:
December 24, 2015
Author:
Max Tegmark


By Stuart Russell and Max Tegmark

2015 has seen major growth in funding, research, and discussion of issues related to ensuring that future AI systems are safe and beneficial for humanity. In a surprisingly polemical report, ITIF think-tank president Robert Atkinson misinterprets this growing altruistic focus of AI researchers as innovation-stifling “Luddite-induced paranoia.” This stance contrasts with the filmed expert testimony from a panel that he himself chaired last summer. The ITIF report makes three main points regarding AI:

1) The people promoting this beneficial-AI agenda are Luddites and “AI detractors.”

This is a rather bizarre assertion given that the agenda has been endorsed by thousands of AI researchers, including many of the world’s leading experts in industry and academia, in two open letters supporting beneficial AI and opposing offensive autonomous weapons. ITIF even calls out Bill Gates and Elon Musk by name, despite them being widely celebrated as drivers of innovation, and despite Musk having landed a rocket just days earlier. By implication, ITIF also labels as Luddites two of the twentieth century’s most iconic technology pioneers – Alan Turing, the father of computer science, and Norbert Wiener, the father of control theory – both of whom pointed out that super-human AI systems could be problematic for humanity. If Alan Turing, Norbert Wiener, Bill Gates, and Elon Musk are Luddites, then the word has lost its meaning.

Contrary to ITIF’s assertion, the goal of the beneficial-AI movement is not to slow down AI research, but to ensure its continuation by guaranteeing that AI remains beneficial. This goal is supported by the recent $10M investment from Musk in such research and the subsequent $15M investment by the Leverhulme Trust.

2) An arms race in offensive autonomous weapons beyond meaningful human control is nothing to worry about, and attempting to stop it would harm the AI field and national security.

The thousands of AI researchers who disagree with ITIF’s assessment in their open letter are in a situation similar to that of the biologists and chemists who supported the successful bans on biological and chemical weapons. These bans did not prevent the fields of biology and chemistry from flourishing, nor did they harm US national security – as President Richard Nixon emphasized when he proposed the Biological Weapons Convention. As in last summer’s panel discussion, Atkinson once again appears to suggest that AI researchers should hide potential risks to humanity rather than incur any risk of reduced funding.

3) Studying how AI can be kept safe in the long term is counterproductive: it is unnecessary and may reduce AI funding.

Although ITIF claims that such research is unnecessary, it never gives a supporting argument, merely providing a brief misrepresentation of what Nick Bostrom has written about the advent of super-human AI (raising, in particular, the red herring of self-awareness) and baldly stating that “what should not be debatable is that this possible future is a long, long way off.” Scientific questions should by definition be debatable, and recent surveys of AI researchers indicate a healthy debate, with a broad range of arrival estimates ranging from never to not very far off. Research on how to keep AI beneficial is worthwhile today even if it will only be needed many decades from now: the toughest and most crucial questions may take decades to answer, so it is prudent to start tackling them now to ensure that we have the answers by the time we need them. In the absence of such answers, AI research may indeed be slowed down in the future in the event of localized control failures – like the so-called “Flash Crash” on the stock market – that dent public confidence in AI systems.

ITIF argues that the AI researchers behind these open letters have unfounded worries. The truly unfounded worries are those that ITIF harbors about AI funding being jeopardized: since the beneficial-AI debate heated up during the past two years, the AI field has enjoyed more investment than ever before, including OpenAI’s billion-dollar investment in beneficial AI research – arguably the largest AI funding initiative in history, with a large share invested by one of ITIF’s alleged Luddites.

Under Robert Atkinson’s leadership, the Information Technology and Innovation Foundation has a distinguished record of arguing against misguided policies arising from ignorance of technology. We hope ITIF returns to this tradition and refrains from further attacks on expert scientists and engineers who make reasoned technical arguments about the importance of managing the impacts of increasingly powerful technologies. This is not Luddism, but common sense.

Stuart Russell, UC Berkeley, Professor of Computer Science, Director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”

Max Tegmark, MIT, Professor of Physics, President of Future of Life Institute

This content was first published at futureoflife.org on December 24, 2015.

