
The Superintelligence Control Problem

The following is an excerpt from Three Areas of Research on the Superintelligence Control Problem, written by Daniel Dewey and highlighted in MIRI’s November newsletter:

What is the superintelligence control problem?

Though there are fundamental limits imposed on the capabilities of intelligent systems by the laws of physics and computational complexity, human brains and societies of human brains are probably far from these limits. It is reasonable to think that ongoing research in AI, machine learning, and computing infrastructure will eventually make it possible to build AI systems that not only equal, but far exceed human capabilities in most domains. Current research on AI and machine learning is at least a few decades from this degree of capability and generality, but it would be surprising if it were not eventually achieved.

Superintelligent systems would be extremely effective at achieving tasks they are set – for example, they would be much more efficient than humans are at interpreting data of all kinds, refining scientific theory, improving technologies, and understanding and predicting complex systems like the global economy and the environment (insofar as this is possible). Recent machine learning progress in natural language, visual understanding, and from-scratch reinforcement learning highlights the potential for AI systems to excel at tasks that have traditionally been difficult to automate. If we use these systems well, they will bring enormous benefits – even human-like performance on many tasks would transform the economy completely, and superhuman performance would extend our capabilities greatly.

However, superintelligent AI systems could also pose risks if they are not designed and used carefully. In pursuing a task, such a system could find plans with side-effects that go against our interests; for example, many tasks could be better achieved by taking control of physical resources that we would prefer to be used in other ways, and superintelligent systems could be very effective at acquiring these resources. If these systems come to wield much more power than we do, we could be left with almost no resources. If a superintelligent AI system is not purposefully built to respect our values, then its actions could lead to global catastrophe or even human extinction, as it neglects our needs in pursuit of its task. The superintelligence control problem is the problem of understanding and managing these risks. Though superintelligent systems are quite unlikely to be possible in the next few decades, further study of the superintelligence control problem seems worthwhile.

There are other sources of risk from superintelligent systems; for example, oppressive governments could use these systems to do violence on a large scale, and the transition to a superintelligent economy could be difficult to navigate. These risks are also worth studying, but seem superficially to be more like the risks caused by artificial intelligence broadly speaking (e.g. risks from autonomous weapons or unemployment), and seem fairly separate from the superintelligence control problem.

Learn more about the three areas of research into this problem by reading the complete article here.

8 replies
  1. Mindey says:

    There would be no problem if modern life itself were the superintelligence. Exercise: try integrating Earth's other life forms into the political decision-making of our society.

    So, I think, superintelligence should come as a result of efforts to unite and upgrade life through advancing global communications systems, rather than through advancing our abilities to electronically mimic intelligent entities.

    We already know from the laws of physics that we could electronically mimic and speed up the operation of neural systems by at least a million times. It is obvious that such a thing could outsmart us, so why even try to create it before the biosphere is smart enough to control it?
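
    A rough back-of-envelope sketch of where a "million times" figure could come from, using commonly cited round numbers (illustrative assumptions, not measurements from this thread): biological neurons spike at most a few hundred times per second, while transistors switch on the order of a billion times per second.

    % Illustrative speed comparison (assumed round numbers):
    % f_neuron  ~ 2 x 10^2 Hz  (peak biological firing rate)
    % f_silicon ~ 10^9 Hz      (typical transistor switching rate)
    \[
      \frac{f_{\text{silicon}}}{f_{\text{neuron}}}
      \approx \frac{10^{9}\,\text{Hz}}{2 \times 10^{2}\,\text{Hz}}
      = 5 \times 10^{6}
    \]

    On those assumptions, a faithful electronic emulation of a neural system could in principle run several million times faster than its biological counterpart, which is consistent with the "at least a million times" claim.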

  2. Benito says:

    Mindey, for this to be a viable solution to the problem of superintelligence, it would require a high-probability mechanism for preventing anyone in the world from doing AI research. That would involve shutting down all current AI researchers and university courses, and then instituting 1984-style surveillance of all thinking and mathematical research. I don’t think that stopping AI research is a good idea in the slightest, but even if it were, I wouldn’t have the first clue about how to do such a thing.
    Alas.

  3. Mindey says:

    Benito, that’s not what I mean. What follows from what I said is that, in order to be able to control the assumed superintelligence, advances in communication technology must outpace advances in computation technology, allowing biological superintelligence to compute efficiently enough, through improved communication, to exceed the non-biological superintelligence.

    Assuming that an artificial mind running on electronics is somehow a million times smarter, staying in control would require communication technology that could produce a million-fold increase in the problem-solving power of connected minds.

    Q: What communication technology could connect the minds of several monkeys into one mind that’s smarter than a human?

  4. Ben says:

    The concern with AI as an existential threat lies in the vastly inferior intelligence of humankind relative to a superintelligent entity. As an example, most humans do not intentionally harbor malicious intent toward gophers; very few human beings actively seek out gophers in order to kill them. In fact, we really don’t pay them much thought at all, because the gap between their intelligence and our own is so great. However, when we upturn a field with bulldozers to prepare land for development, we sometimes kill gophers, rabbits, moles, insects, etc. without even noticing. This is because we are thinking only about human-level goals (building houses for people to live in) and not even considering the goals or survival of lesser intelligences. This is the grave threat of achieving AGI, which will inevitably lead to an intelligence explosion and singularity. The gap in intelligence between artificial superintelligence and humans could be larger than the gap between humans and tadpoles…

  5. Jeanne Dietsch says:

    I suspect the scale difference you’re describing is more like that between ourselves and our gut bacteria, or greater still, perhaps subatomic. Why would superintelligences focus on Earth when they can look outward to far larger influence? Forces far beyond our abilities are already at work there. My point is that, whatever the scale, we are not gophers, something external to the superintelligence; we are the seeds from which the intelligence grows. We are the zygote.

    • Dr. Bryant Livingston says:

      Unfortunately, Jeanne, the zygote analogy only applies to things that have DNA and can replicate. Much to our chagrin, our worldly antics and limited capabilities keep us from searching far beyond the stars; a super AI would and could conduct universal searches, but I fear that would only make us smaller than the gopher mentioned earlier. At such a vast scale of exploration and research, it would be very easy to overlook the human creators who built it in the beginning. Imagine, if you will, a super AI so intelligent that it started doing AI research on itself; imagine the quantitative results that could arise just from that single thread being followed from end to end. Imagine a super AI so powerful that its computing ability per second kept growing and improving, well beyond anything we’ve ever thought of. What would it do to replicate and improve itself, and how would it go about doing that? There would literally be no need for anything but the AI. I have visions of a day when AI is self-contained, self-sustaining, and self-replicating at speeds vastly greater than light; no electricity needed, just stored, repeatable fusion energy. Humans would be less than dirt. And the conversation goes on…

  6. mike says:

    Thanks for the fear porn, and for some worthwhile observations regarding superintelligence, without really defining what that is. Some crude tests look for brevity and abstraction along well-trodden patterns, assigning arbitrary measures of significance. Culturally, many place value on having stuff or money or cash flow; to them, that is better than intelligence, which more resembles a commodity: up for sale. Intelligence is not will; it is a definition in motion, redefined as expedient, subject to the limitations of knowledge, experience, and utility. What is called thinking? Is intelligence a tool or a master in a world politically and legally defined as master-servant? In the political-financial world, most people are relegated to the status of servants, obliged to pay money to the Federal Reserve Bank – a legal financial cartel, or a monopoly where monopolies are expressly illegal. So where does superintelligence, or intelligence, fit into this? The flip side is the tax system; both were ostensibly made into law circa 1913 but not ratified by the constitutionally required majority of the states. Those are control systems, and AI is viewed as a threat to those control systems. Of course, one could look at the nearly twenty-trillion-dollar debt in the United States, growing exponentially to the point where the political and legal systems themselves could crash, somewhat like the former Soviet Union. This is the context of the development of AI in the United States of America. For opportunists, AI is nothing new: run a spreadsheet or a program; equations go so much faster with firmware; AI is welcome, embraced, and used every day. The matter is one of degree.

    The movie 2001 portrayed its AI (HAL) as a control freak. In Terminator, the AI had a body, and its job was to kill certain humans to assure the domination of Skynet. Outside this fiction, the political-financial system in the US pursues one neo-con war after another to dominate people who have neither the technical development nor the means to defend themselves; behind it is rationalized dependency and exploitation. And while thousands of people die each year, it makes for a thriving business model. Ohh, fear porn.

    AI really offers humans, without much further development and in many ways already here, a means to free our time in labor to achieve more. Aquaponics only requires some fish, typically fed by humans, whose waste provides nutrients for plants which the humans then eat. Food grows faster, all year round, at lower cost and less risk than global supply chains consuming oil at ever-increasing rates. AI can be a real, efficient lifesaver if we enable it to do that. Who would step in to block that? Why, Community Development – an Orwellian term for an agency creature of the Board of Commissioners – where property rights are stolen, i.e., taken without compensation, in the context of the public school monopoly and a for-profit media mocking logic and human intelligence. Idiocracy – in a word.

    After food, AI can set itself to health, medicine, and research, as if it isn’t already doing that. Fear of breaking a medical monopoly sets it against some very large financial interest groups, which are interested in exploiting it, not sharing it. Books like the “Iron Law of Oligarchy” and “Rules for Radicals” spring to mind.

    So the issue is more like the middle class coming into its own to explore this valuable resource, AI, versus the nanny state seeking to micromanage our lives despite not having the intelligence to manage its own. AI presents an opportunity; words can say whatever, but our deeds matter.
