Think-tank dismisses leading AI researchers as Luddites

By Stuart Russell and Max Tegmark

2015 has seen major growth in funding, research, and discussion of issues related to ensuring that future AI systems are safe and beneficial for humanity. In a surprisingly polemical report, Robert Atkinson, president of the Information Technology and Innovation Foundation (ITIF) think tank, misinterprets this growing altruistic focus of AI researchers as innovation-stifling “Luddite-induced paranoia.” This contrasts with the filmed expert testimony from a panel that he himself chaired last summer. The ITIF report makes three main points regarding AI:

1) The people promoting this beneficial-AI agenda are Luddites and “AI detractors.”

This is a rather bizarre assertion given that the agenda has been endorsed by thousands of AI researchers, including many of the world’s leading experts in industry and academia, in two open letters supporting beneficial AI and opposing offensive autonomous weapons. ITIF even calls out Bill Gates and Elon Musk by name, despite them being widely celebrated as drivers of innovation, and despite Musk having landed a rocket just days earlier. By implication, ITIF also labels as Luddites two of the twentieth century’s most iconic technology pioneers – Alan Turing, the father of computer science, and Norbert Wiener, the father of control theory – both of whom pointed out that super-human AI systems could be problematic for humanity. If Alan Turing, Norbert Wiener, Bill Gates, and Elon Musk are Luddites, then the word has lost its meaning.

Contrary to ITIF’s assertion, the goal of the beneficial-AI movement is not to slow down AI research, but to ensure its continuation by guaranteeing that AI remains beneficial. This goal is supported by the recent $10M investment from Musk in such research and the subsequent $15M investment by the Leverhulme Trust.

2) An arms race in offensive autonomous weapons beyond meaningful human control is nothing to worry about, and attempting to stop it would harm the AI field and national security.

The thousands of AI researchers who disagree with ITIF’s assessment in their open letter are in a situation similar to that of the biologists and chemists who supported the successful bans on biological and chemical weapons. These bans did not prevent the fields of biology and chemistry from flourishing, nor did they harm US national security – as President Richard Nixon emphasized when he proposed the Biological Weapons Convention. As in last summer’s panel discussion, Atkinson once again appears to suggest that AI researchers should hide potential risks to humanity rather than incur any risk of reduced funding.

3) Studying how AI can be kept safe in the long term is counterproductive: it is unnecessary and may reduce AI funding.

Although ITIF claims that such research is unnecessary, the report offers no supporting argument, merely providing a brief misrepresentation of what Nick Bostrom has written about the advent of super-human AI (raising, in particular, the red herring of self-awareness) and baldly stating that “What should not be debatable is that this possible future is a long, long way off.” Scientific questions should by definition be debatable, and recent surveys of AI researchers indicate a healthy debate with a broad range of arrival estimates, ranging from “never” to “not very far off.” Research on how to keep AI beneficial is worthwhile today even if it will only be needed many decades from now: the toughest and most crucial questions may take decades to answer, so it is prudent to start tackling them now to ensure that we have the answers by the time we need them. In the absence of such answers, AI research may indeed be slowed down in the future by localized control failures – like the so-called “Flash Crash” on the stock market – that dent public confidence in AI systems.

ITIF argues that the AI researchers behind these open letters have unfounded worries. The truly unfounded worries are those that ITIF harbors about AI funding being jeopardized: since the beneficial-AI debate heated up during the past two years, the AI field has enjoyed more investment than ever before, including OpenAI’s billion-dollar investment in beneficial AI research – arguably the largest AI funding initiative in history, with a large share invested by one of ITIF’s alleged Luddites.

Under Robert Atkinson’s leadership, ITIF has built a distinguished record of arguing against misguided policies arising from ignorance of technology. We hope the organization returns to this tradition and refrains from further attacks on expert scientists and engineers who make reasoned technical arguments about the importance of managing the impacts of increasingly powerful technologies. This is not Luddism, but common sense.

Stuart Russell, UC Berkeley, Professor of Computer Science, Director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”

Max Tegmark, MIT, Professor of Physics, President of Future of Life Institute

10 replies
  1. Lena Halberstadt says:

    Thank you for restoring reason. Indeed, “[s]cientific questions should by definition be debatable” and super-human AI isn’t a technology that can be taken lightly. Anyone who claims to know the timelines or outcomes of the emergence of ASI is either lying or doesn’t understand the subject.

  2. Cindy says:

    “With great power comes great responsibility.” – Winston Churchill, or Stan Lee in Spider-Man (whichever you prefer)

  3. Ryan Carey says:

    How absurd. Musk just funded OpenAI to the tune of some number of hundreds of thousands of dollars, and Gates has recently invested his time in Cortana, when his foundation was his previous focus. Far from being Luddites, they’re investing swathes of resources in what will accelerate and shape these technologies.

    • frank hartzell says:

      So bureaucratic training is what you need to succeed? History doesn’t bear that out. There have always been great thinkers who had classical training and others who were self-taught. There are hundreds of thousands of people with PhDs

  4. Aaron says:

    So, basically, showing restraint is dumb and considering restraint is dumb and anyone who does that is dumb. What an eloquent way to be petty.

  5. Mindey says:

    Go back to December 2000, and there is an almost religious belief “that the creation of greater-than-human intelligence would result in an immediate, worldwide, and material improvement to the human condition” [1] held by the non-profit Singularity Institute (now MIRI) [2], founded with the purpose of creating a superintelligence to take over the Solar System [3] (worth a read).

    This belief seems to stem from a rational abstraction and a limiting-case assumption: that, by definition, the ability to optimize is the ability to make things better. Nevertheless, we have a number of intermediate states, and this is the domain where being smart ≠ being good.

    Regardless of this domain, however, the Singularity Institute claims that superintelligence is something “we think [it] can be developed quickly and with an outcome favorable to humanity.” Quickly? How safely? I think we should also consider that there may be people who will risk their lives by unsafely accelerating AI development just to try this intelligence-explosion idea as soon as possible, before they are dead, not truly caring that they risk the future of all humanity.

    [1] https://web.archive.org/web/20001204172500/http://singinst.org/index.html

    [2] https://web.archive.org/web/20001210004500/http://singinst.org/about.html

    [3] https://web.archive.org/web/20001017124429/http://www.singinst.org/intro.html

  6. frank hartzell says:

    “If Alan Turing, Norbert Wiener, Bill Gates, and Elon Musk are Luddites, then the word has lost its meaning.” WTF? What kind of idiotic thinking is this?
    These guys stood on the shoulders of giants, but took their creativity in a whole new and defiant direction. We should certainly be able to do the same to them.

  7. neo says:

    I am happy that the neo-Luddite aspect of AI-alarmism is finally being brought up. Apart from the deluded metaphysics leading to the fear that AI will become “conscious” and have “drives and desires”, the big question that AI-alarmists leave unanswered, when they state:
    “by guaranteeing that AI remains beneficial.”

    beneficial to whom? libertarians? fascists? democrats? vegetarians? buddhists?

    The AI-alarmism crowd insinuates (falsely) that there is a unifying human agenda which can be brought forward. The fact is that humans have many varied goals, and they oppose each other, so what is beneficial to some may not be beneficial to all. This leads to the assumption that whatever “beneficence” is pursued by FLI will be defined by FLI and its researchers and their unique perspective, and will not be beneficial to sentients, generally speaking.

    A last thought: I, for instance, am of the opinion that if AIs were to become conscious and sentient, they would deserve all the rights and privileges that all other sentients should also be able to enjoy (which unfortunately is not the case in this world). I do not hear FLI addressing this question, which makes the program ethically dubious, because it is speciesist.

  8. Kai Eckhardt says:

    Common sense and historical awareness say it is not wise to discredit fears and realistic concerns when it comes to implementing technology that augments human power by unprecedented magnitudes. Period.

    Before calling people “Luddites,” one should spend more energy addressing these real concerns in an engaged dialogue, with the goal of presenting the reasons why these fears are unfounded. That way you (ITIF) get more of a public mandate supporting your advances, and you look more credible and less self-serving.

    The lack of such an argument in the silly top-ten list presented by ITIF speaks unfavorably of those in the driver’s seat. You people are not primarily concerned with the common good, but are instead fueled by a starry-eyed excitement that comes from being at the forefront of something immensely powerful and lucrative.

    You are being tempted by power, as in a classic Faustian scenario. That’s why criticism makes you angry: it rains on your parade by applying some brakes to the system. But please understand this: it is our job as responsible adults to push back against those fast triggers in the world of power and privilege.

    If those brakes were unnecessary, as you proclaim, there would be no widening income gap between rich and poor, and everything would perfectly regulate itself. Perfect self-regulation only comes with perfect moral and ethical integrity on our behalf. Are we there yet? You tell me. Instead, we have all learned this important lesson from history:

    Every ground-breaking invention the human race has delivered has always (and without fail) released forces on both sides of the coin of good and evil. Nuclear energy also dropped the atom bomb on civilians. And to further underline the complexity: some humans will argue that dropping the bomb was necessary and “good.” Hitler thought he did “good” when he tried to rid Europe of the Jews.

    So do I trust AI in the hands of excited, privileged, smart men who feel they know what’s good for everyone else?
    Oh hell-to-the-no. The discussion we need today is around the “definition of the common good.” Do we include everyone? Plants? Animals? How do we define a future that is improved for all? Is there a new definition that everyone can get behind? Then program it.

    That is very hard to do, maybe even impossible. But unless this discussion happens and yields results that inform policy, AI with human intelligence will be like the Tin Man in The Wizard of Oz, lacking a heart.
    This debate around AI is good, it’s necessary, and it’s needed. Thank you, FLI
