
The Superintelligence Control Problem

The following is an excerpt from Three Areas of Research on the Superintelligence Control Problem, written by Daniel Dewey and highlighted in MIRI’s November newsletter:

What is the superintelligence control problem?

Though there are fundamental limits imposed on the capabilities of intelligent systems by the laws of physics and computational complexity, human brains and societies of human brains are probably far from these limits. It is reasonable to think that ongoing research in AI, machine learning, and computing infrastructure will eventually make it possible to build AI systems that not only equal, but far exceed human capabilities in most domains. Current research on AI and machine learning is at least a few decades from this degree of capability and generality, but it would be surprising if it were not eventually achieved.

Superintelligent systems would be extremely effective at achieving tasks they are set – for example, they would be much more efficient than humans are at interpreting data of all kinds, refining scientific theory, improving technologies, and understanding and predicting complex systems like the global economy and the environment (insofar as this is possible). Recent machine learning progress in natural language, visual understanding, and from-scratch reinforcement learning highlights the potential for AI systems to excel at tasks that have traditionally been difficult to automate. If we use these systems well, they will bring enormous benefits – even human-like performance on many tasks would transform the economy completely, and superhuman performance would extend our capabilities greatly.

However, superintelligent AI systems could also pose risks if they are not designed and used carefully. In pursuing a task, such a system could find plans with side-effects that go against our interests; for example, many tasks could be better achieved by taking control of physical resources that we would prefer to be used in other ways, and superintelligent systems could be very effective at acquiring these resources. If these systems come to wield much more power than we do, we could be left with almost no resources. If a superintelligent AI system is not purposefully built to respect our values, then its actions could lead to global catastrophe or even human extinction, as it neglects our needs in pursuit of its task. The superintelligence control problem is the problem of understanding and managing these risks. Though superintelligent systems are quite unlikely to be possible in the next few decades, further study of the superintelligence control problem seems worthwhile.
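
To make the side-effect worry concrete, here is a minimal toy sketch in Python (not from the article; the plan names, scores, and the "farmland_destroyed" field are illustrative assumptions). A planner that scores candidate plans only by task progress picks the plan that also destroys something we value, simply because that cost never appears in its objective:

    # Toy illustration (hypothetical numbers): an objective that omits something we value.
    # The planner ranks plans only by task progress; "farmland_destroyed" is a side-effect
    # humans care about, but the stated objective never mentions it.

    plans = [
        {"name": "careful plan",    "task_progress": 0.8, "farmland_destroyed": 0.0},
        {"name": "aggressive plan", "task_progress": 1.0, "farmland_destroyed": 0.9},
    ]

    def misspecified_score(plan):
        # Only task completion is rewarded; side-effects carry no penalty.
        return plan["task_progress"]

    def intended_score(plan, side_effect_weight=2.0):
        # What we actually wanted: task progress minus a penalty for the harm done.
        return plan["task_progress"] - side_effect_weight * plan["farmland_destroyed"]

    print("Chosen by the stated objective:  ", max(plans, key=misspecified_score)["name"])  # aggressive plan
    print("Chosen by the intended objective:", max(plans, key=intended_score)["name"])      # careful plan

The gap between the two choices is the control problem in miniature: the system does exactly what it was asked to do, and the damage comes from what the objective left out.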

There are other sources of risk from superintelligent systems; for example, oppressive governments could use these systems to do violence on a large scale, and the transition to a superintelligent economy could be difficult to navigate. These risks are also worth studying, but seem superficially to be more like the risks caused by artificial intelligence broadly speaking (e.g. risks from autonomous weapons or unemployment), and seem fairly separate from the superintelligence control problem.

Learn more about the three areas of research into this problem by reading the complete article here.

6 replies
  1. Mindey says:

    There would be no problem if modern life were itself the superintelligence. As an exercise, try integrating the other life forms of Earth into the political decision-making of our society.

    So I think superintelligence should come as a result of efforts to unite and upgrade life by advancing global communications systems, rather than by advancing our ability to electronically mimic intelligent entities.

    We already know from the laws of physics that we could electronically mimic and speed up the operation of neural systems by at least a million times. It is obvious that such a thing could outsmart us, so why even try to create it before the biosphere is smart enough to control it? (See the back-of-envelope sketch after the replies.)

  2. Benito says:

    Mindey, for this to be a viable solution to the problem of superintelligence, it would require a high-probability mechanism for preventing anyone in the world from doing AI research. That would mean shutting down all current AI researchers and university courses, and then instituting 1984-style surveillance over all thinking and mathematical research. I don’t think that stopping AI research is a good idea in the slightest, but even if it were, I wouldn’t have the first clue how to do such a thing.
    Alas.

  3. Mindey says:

    Benito, that’s not what I mean. What I mean is that, in order to be able to control the assumed superintelligence, advances in communication technology must outpace advances in computation technology, so that biological superintelligence can compute more efficiently through improved communication and thereby exceed the non-biological superintelligence.

    Assuming an artificial mind running on electronics is somehow a million times smarter, staying in control would require communication technology that allows a million-fold increase in the problem-solving power of connected minds.

    Q: what communication technology could connect the minds of several monkeys into one mind that’s smarter than a human?

  4. Ben says:

    The concern with AI as an existential threat lies in how vastly inferior human intelligence would be to a superintelligent entity. As an example, most humans do not harbor malicious intent toward gophers; very few human beings actively seek out gophers in order to kill them. We really don’t pay them much thought at all, because the gap between their intelligence and our own is so great. However, when we upturn a field with bulldozers to prepare land for development, we sometimes kill gophers, rabbits, moles, insects, etc. without even noticing. This is because we are only thinking about human-level goals (building houses for people to live in), not the goals or survival of lesser intelligences. This is the grave threat of achieving AGI, which will inevitably lead to an intelligence explosion and singularity. The gap in intelligence between artificial superintelligence and humans could be larger than the gap between humans and tadpoles…

  5. Jeanne Dietsch says:

    I suspect the scale difference you’re describing is more like that between ourselves and our gut bacteria, or greater still, perhaps subatomic. Why would superintelligences focus on Earth when they can look outward to far larger influence? Forces far beyond our abilities are already at work there. My point is that, whatever the scale, we are not gophers, something external to the superintelligence; we are the seeds from which the intelligence grows. We are the zygote.

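On the “million times” figure in Mindey’s first reply: a rough back-of-envelope sketch of where a number like that could come from (the rates below are order-of-magnitude assumptions, not figures from the discussion). Biological neurons signal at most a few hundred times per second, while electronic circuits switch billions of times per second, so an electronic emulation of a neural system running at hardware speed could in principle be sped up by a factor of well over a million:

    # Back-of-envelope sketch; both rates are illustrative orders of magnitude.
    neuron_rate_hz = 2e2       # typical upper-end neuron firing rate, ~hundreds of Hz
    electronic_rate_hz = 3e9   # typical modern clock rate, ~GHz

    speedup = electronic_rate_hz / neuron_rate_hz
    print(f"Potential serial speedup: ~{speedup:.0e}x")  # ~2e+07, i.e. tens of millions

Even with generous allowances for overhead, the ratio comfortably clears a million, which is presumably the kind of estimate the comment has in mind.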
