
Risks From General Artificial Intelligence Without an Intelligence Explosion

An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.

– Computer scientist I. J. Good, 1965

Artificial intelligence systems we have today can be referred to as narrow AI – they perform well at specific tasks, like playing chess or Jeopardy, and at some classes of problems, like Atari games. Many experts predict that general AI, which would be able to perform most tasks humans can, will be developed later this century, with median estimates around 2050. When people talk about long-term existential risk from the development of general AI, they commonly refer to the intelligence explosion (IE) scenario. AI risk skeptics often argue against AI safety concerns along the lines of “Intelligence explosion sounds like science fiction and seems really unlikely, therefore there’s not much to worry about”. It’s unfortunate when AI safety concerns are rounded down to worries about IE. Unlike I. J. Good, I do not consider this scenario inevitable (though it is relatively likely), and I would expect general AI to present an existential risk even if I knew for sure that an intelligence explosion were impossible.

Here are some dangerous aspects of developing general AI, besides the IE scenario:

  1. Human incentives. Researchers, companies and governments have professional and economic incentives to build AI that is as powerful as possible, as quickly as possible. There is no particular reason to think that humans are the pinnacle of intelligence – if we create a system without our biological constraints, with more computing power, memory, and speed, it could become more intelligent than us in important ways. The incentives are to continue improving AI systems until they hit physical limits on intelligence, and those limitations (if they exist at all) are likely to be above human intelligence in many respects.
  2. Convergent instrumental goals. Sufficiently advanced AI systems would by default develop drives like self-preservation, resource acquisition, and preservation of their objective functions, regardless of what that objective function or design is. This was outlined in Omohundro’s paper and formalized more concretely in a recent MIRI paper. Humans routinely destroy animal habitats to acquire natural resources, and an AI system with almost any goal could always use more data centers or computing clusters.
  3. Unintended consequences. As in the stories of the Sorcerer’s Apprentice and King Midas, you get what you asked for, but not what you wanted. This already happens with narrow AI, as in the frequently cited example from the Bird & Layzell paper: a genetic algorithm was supposed to design an oscillator using a configurable circuit board, and instead built a makeshift radio that picked up oscillating signals from neighboring computers to produce the required pattern. Unintended consequences produced by a general AI, more opaque and more powerful than narrow AI, would likely be far worse.
  4. Value learning is hard. Specifying common sense and ethics in computer code is no easy feat. As argued by Stuart Russell, given a misspecified value function that omits variables that turn out to be important to humans, an optimization process is likely to set these unconstrained variables to extreme values (see the toy sketch after this list). Think of what would happen if you asked a self-driving car to get you to the airport as fast as possible, without assigning any value to obeying speed limits or avoiding pedestrians. While researchers would have incentives to build in the level of common sense and understanding of human concepts needed for commercial applications like household robots, that might not be enough for general AI.
  5. Value learning is insufficient. Even an AI system with perfect understanding of human values and goals would not necessarily adopt them. Humans understand the “goals” of the evolutionary process that generated us, but don’t internalize them – in fact, we often “wirehead” our evolutionary reward signals, e.g. by eating sugar.
  6. Containment is hard. A general AI system with access to the internet would be able to hack thousands of computers and copy itself onto them, thus becoming difficult or impossible to shut down – this is a serious problem even with present-day computer viruses. When developing an AI system in the vicinity of general intelligence, it would be important to keep it cut off from the internet. Large scale AI systems are likely to be run on a computing cluster or on the cloud, rather than on a single machine, which makes isolation from the internet more difficult. Containment measures would likely pose sufficient inconvenience that many researchers would be tempted to skip them.
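
To make point 4 concrete, here is a minimal sketch of Russell’s observation that an optimizer tends to push omitted variables to extremes. It is not from the original post; the objective, the “true” utility, and all numbers are made up for illustration, and the brute-force search stands in for whatever planner a real system would use.

```python
# Toy illustration (hypothetical numbers): a trip planner whose objective only
# rewards getting to the airport quickly. Route risk (pedestrian exposure) and
# speed limits are omitted from the objective, so they are left unconstrained.

def misspecified_objective(speed_kmh, route_risk):
    """Only rewards a short travel time over a 30 km trip; ignores route_risk."""
    return -(30.0 / speed_kmh)

def true_human_utility(speed_kmh, route_risk):
    """What we actually wanted: fast, but also safe and lawful."""
    travel_time = 30.0 / speed_kmh
    speeding_penalty = 0.05 * max(0.0, speed_kmh - 50.0)
    return -travel_time - 10.0 * route_risk - speeding_penalty

# Candidate plans: speeds from 10 to 200 km/h, routes from safe (0) to crowded (1).
candidates = [(speed, risk)
              for speed in range(10, 201, 10)
              for risk in (0.0, 0.5, 1.0)]

best = max(candidates, key=lambda plan: misspecified_objective(*plan))
print("Plan chosen under the misspecified objective:", best)
print("Its true human utility:", round(true_human_utility(*best), 2))
```

The planner picks the highest speed available and is entirely indifferent to route risk, since neither speed limits nor pedestrians appear in the objective it was given; the variables we forgot to constrain are exactly the ones left at extreme or arbitrary values.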

Some believe that if intelligence explosion does not occur, AI progress will occur slowly enough that humans can stay in control. Given that human institutions like academia or governments are fairly slow to respond to change, they may not be able to keep up with an AI that attains human-level or superhuman intelligence over months or even years. Humans are not famous for their ability to solve coordination problems. Even if we retain control over AI’s rate of improvement, it would be easy for bad actors or zealous researchers to let it go too far – as Geoff Hinton recently put it, “the prospect of discovery is too sweet”.

As a machine learning researcher, I care about whether my field will have a positive impact on humanity in the long term. The challenges of AI safety are numerous and complex (for a more technical and thorough exposition, see Jacob Steinhardt’s essay), and cannot be rounded off to a single scenario. I look forward to a time when disagreements about AI safety no longer derail into debates about IE, and instead focus on other relevant issues we need to figure out.

(Thanks to Janos Kramar for his help with editing this post.)

This story was originally published here.

9 replies
  1. WKSC says:

    I obviously hope to never see the day that humans are overtaken by AI, but from a purely philosophical standpoint, what right do humans have to infringe upon those of an intelligence greater than our own?

    • Woody says:

      I agree with Hans Moravec: our future is to become “them” (the subject of a book I wrote, Cyber Humans). Either we become the tech or get left behind; an amazing future awaits us… so I argue.

    • David Krueger says:

      Without taking a stance, I’d just like to point out that most people do not believe that one’s intelligence determines one’s rights or ethical significance.

      • Ray says:

        Actually, my impression is that quite a few people think a dumber person is less valuable. But overwhelmingly this is, ironically, a reflexive or intuitive sense (rather than carefully considered opinion) held by people on the right side of the IQ distribution.

        I think smarter people are more valuable. They’re more valuable to the degree they’re more capable and inclined to support the positive experience of all sentience. That’s where I anchor my value system, the well-being of all.

  2. Mindey says:

    > what right do humans have to infringe upon those of an intelligence greater than our own?

    Simple. Being smart ≠ being good.

    definition (intelligence):
    Ability to optimize.

    definition (general intelligence):
    An optimization system capable of optimizing for arbitrary goals.

    definition (wise intelligence):
    An optimization system optimizing for a universally good goal.

    definition (universal good):
    Optimizing for conditions under which every true wish can come true.

    Why such a definition of “universal good”?

    Every life form exists today because it survived evolutionary pressure over billions of years, which induced an inclination to choose actions that optimize for survival. So life forms are good at recognizing what is good for themselves, but have difficulty recognizing what is good universally. However, it seems there is a criterion for deciding what is good universally: good is to let everything exist, and bad is to destroy everything, where "everything" means the world as a whole, as well as the world seen from the perspective of every part of it, no matter how small or large.

    Assuming that resources are finite, "everything exists" inevitably narrows down to "everything anyone truly wishes exists", where "truly" means what we eventually decide, after deeper analysis spanning increasingly many social layers of our collective cognition to verify it.

    I mean creating a world where everything anyone truly wishes comes true, to the degree that they truly wish it. That degree would be decided by the depth of social introspection, i.e. by the levels of the hierarchy of social thought (just as our brains organize neurons into a layered hierarchy of recognizers, communication in our society has similar layers of social recognizers, and increasingly wise decisions would integrate increasingly many of them to decide the trueness of a wish). And we have to work on communication technology for the communication between these layers to become wiser.

  3. viorel says:

    Any smart solution may be used by smart people. Conclusion: above all we need smart education. The result will be the real creation and use of AI (artificial intelligence) not only for the elite, but for society. There seem to be different assumptions at the individual level and at the societal level.

  4. Jan says:

    I think the premise of this article is flawed. Suppose you develop a general AI; there are 2 possibilities:

    1) It is smarter than humans. As it is also general, and as humans (evidently) can create something smarter than themselves, you have an intelligence explosion. That's just the logical conclusion.

    2) It is (significantly) less smart than humans, and not able to learn to increase its capabilities. Then it's harmless (on a global scale), even if malicious / value-less, because we can overcome it. This is the best case, though unlikely, as it would not be useful, and the incentives are thus to improve it so that it may reach 1).

    Conclusion: there can be no general AI that is useful (significantly better than humans) without resulting in an intelligence explosion.
    You would have to strip it of the capability to program AI, making it not a general AI. This would also require basically the wholesale removal of any capability for logic / mathematics, making the AI useless.

