
The Superintelligence Control Problem

Published:
November 23, 2015
Author:
Ariel Conn


The following is an excerpt from Three Areas of Research on the Superintelligence Control Problem, written by Daniel Dewey and highlighted in MIRI's November 2015 newsletter:

What is the superintelligence control problem?

Though there are fundamental limits imposed on the capabilities of intelligent systems by the laws of physics and computational complexity, human brains and societies of human brains are probably far from these limits. It is reasonable to think that ongoing research in AI, machine learning, and computing infrastructure will eventually make it possible to build AI systems that not only equal, but far exceed human capabilities in most domains. Current research on AI and machine learning is at least a few decades from this degree of capability and generality, but it would be surprising if it were not eventually achieved.

Superintelligent systems would be extremely effective at the tasks they are set: for example, they would be far more efficient than humans at interpreting data of all kinds, refining scientific theories, improving technologies, and understanding and predicting complex systems like the global economy and the environment (insofar as this is possible). Recent machine-learning progress in natural language, visual understanding, and from-scratch reinforcement learning highlights the potential for AI systems to excel at tasks that have traditionally been difficult to automate. If we use these systems well, they will bring enormous benefits: even human-like performance on many tasks would transform the economy completely, and superhuman performance would extend our capabilities greatly.

However, superintelligent AI systems could also pose risks if they are not designed and used carefully. In pursuing a task, such a system could find plans whose side effects go against our interests; for example, many tasks could be better achieved by taking control of physical resources that we would prefer to be used in other ways, and superintelligent systems could be very effective at acquiring those resources. If such systems came to wield much more power than we do, we could be left with almost no resources at all. If a superintelligent AI system is not purposefully built to respect our values, its actions could lead to global catastrophe or even human extinction as it neglects our needs in pursuit of its task. The superintelligence control problem is the problem of understanding and managing these risks. Though superintelligent systems are quite unlikely to be possible in the next few decades, further study of the superintelligence control problem seems worthwhile.
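To make the side-effects point concrete, here is a minimal, purely illustrative sketch in Python (not from Dewey's paper; the objective functions and numbers are invented for this example). A planner that optimizes only its stated task objective converts every available resource into task output, because the resources we would prefer to keep are invisible to the objective it was given:

```python
# Toy illustration of a misspecified objective (hypothetical example).
# The task score counts only task output; resource consumption is
# invisible to it, so the task-optimal plan uses everything up.

TOTAL_RESOURCES = 100

def task_score(plan):
    # Proxy objective the system is given: only output matters.
    return plan["output"]

def human_values(plan):
    # What we actually care about includes the resources left for us
    # (the weight of 10 is arbitrary, chosen for this example).
    return plan["output"] + 10 * plan["resources_left"]

def candidate_plans():
    # Each unit of resource converted yields one unit of task output.
    for used in range(TOTAL_RESOURCES + 1):
        yield {"output": used, "resources_left": TOTAL_RESOURCES - used}

best_for_task = max(candidate_plans(), key=task_score)
best_for_us = max(candidate_plans(), key=human_values)

print(best_for_task)  # {'output': 100, 'resources_left': 0}
print(best_for_us)    # {'output': 0, 'resources_left': 100}
```

The divergence here comes from omission, not malice: anything the objective fails to mention is fair game for the optimizer, and a superintelligent system would discover such loopholes far more effectively than this exhaustive toy search does.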

Other risks posed by advanced artificial intelligence

There are other sources of risk from superintelligent systems; for example, oppressive governments could use them to commit violence on a large scale, and the transition to a superintelligent economy could be difficult to navigate. These risks are also worth studying, but they superficially resemble the risks posed by artificial intelligence more broadly (e.g. risks from autonomous weapons or unemployment) and seem fairly separate from the superintelligence control problem.

Learn more about the three areas of research into this problem by reading Daniel Dewey's complete article.

This content was first published at futureoflife.org on November 23, 2015.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefiting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
