How Smart Can AI Get?

Capability Caution Principle: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best. What will trigger this change? Artificial intelligence.

The 23 Asilomar AI Principles offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start. … a work in progress.” The Principles represent the beginning of a conversation, and now we need to follow up with broad discussion about each individual principle. You can read the weekly discussions about previous principles here.

Capability Caution

One of the greatest questions facing AI researchers is: just how smart and capable can artificial intelligence become?

In recent years, the development of AI has progressed in leaps and bounds. DeepMind’s AlphaGo surpassed human performance at the challenging, intricate game of Go, and the company has created AI that can quickly learn to play Atari video games with far greater prowess than any person. We’ve also seen breakthroughs in language translation, self-driving vehicles, and even the creation of new medicinal molecules.

But how much more advanced can AI become? Will it continue to excel only in narrow tasks, or will it develop broader learning skills that will allow a single AI to outperform a human in most tasks? How do we prepare for an AI more intelligent than we can imagine?

Some experts think human-level or even superhuman AI could be developed within a couple of decades, while others doubt anyone will ever accomplish the feat. The Capability Caution Principle argues that, until we have concrete evidence of what AI can someday achieve, it’s safer to assume that there are no upper limits – that is, for now, anything is possible and we need to plan accordingly.

Expert Opinion

The Capability Caution Principle drew both agreement and pushback from the experts. While everyone I interviewed generally agreed that we shouldn’t assume upper limits for AI, their reasoning varied and some raised concerns.

Stefano Ermon, an assistant professor at Stanford, and Roman Yampolskiy, an associate professor at the University of Louisville, both took a better-safe-than-sorry approach.

Ermon turned to history as a reminder of how difficult future predictions are. He explained, “It’s always hard to predict the future. … Think about what people were imagining a hundred years ago, about what the future would look like. … I think it would’ve been very hard for them to imagine what we have today. I think we should take a similar, very cautious view, about making predictions about the future. If it’s extremely hard, then it’s better to play it safe.”

Yampolskiy considered current tech safety policies, saying, “In many areas of computer science such as complexity or cryptography the default assumption is that we deal with the worst case scenario. Similarly, in AI Safety we should assume that AI will become maximally capable and prepare accordingly. If we are wrong we will still be in great shape.”

Dan Weld, a professor at the University of Washington, said of the principle, “I agree! As a scientist, I’m against making strong or unjustified assumptions about anything, so of course I agree.”

But though he agreed with the basic idea behind the principle, Weld also had reservations. “This principle bothers me,” Weld explained, “… because it seems to be implicitly saying that there is an immediate danger that AI is going to become superhumanly, generally intelligent very soon, and we need to worry about this issue. This assertion … concerns me because I think it’s a distraction from what are likely to be much bigger, more important, more near-term, potentially devastating problems. I’m much more worried about job loss and the need for some kind of guaranteed health-care, education and basic income than I am about Skynet. And I’m much more worried about some terrorist taking an AI system and trying to program it to kill all Americans than I am about an AI system suddenly waking up and deciding that it should do that on its own.”

Looking at the problem from a different perspective, Guruduth Banavar, the Vice President of IBM Research, worries that placing upper bounds on AI capabilities could limit the beneficial possibilities. Banavar explained, “The general idea is that intelligence, as we understand it today, is ultimately the ability to process information from all possible sources and to use that to predict the future and to adapt to the future. It is entirely in the realm of possibility that machines can do that. … I do think we should avoid assumptions of upper limits on machine intelligence because I don’t want artificial limits on how advanced AI can be.”

IBM research scientist Francesca Rossi considered the principle from yet another perspective, suggesting that AI is necessary for humanity to reach its full potential, and that there, too, we shouldn’t assume upper limits.

“I personally am for building AI systems that augment human intelligence instead of replacing human intelligence,” said Rossi, “And I think that in that space of augmenting human intelligence there really is a huge potential for AI in making the personal and professional lives of everybody much better. I don’t think that there are upper limits of the future AI capabilities in that respect. I think more and more AI systems together with humans will enhance our kind of intelligence, which is complementary to the kind of intelligence that machines have, and will help us make better decisions, and live better, and solve problems that we don’t know how to solve right now. I don’t see any upper limit to that.”

What do you think?

Is there an upper limit to artificial intelligence? Is there an upper limit to what we can achieve with AI? How long will it take to achieve increasing levels of advanced AI? How do we plan for the future with such uncertainties? How can society as a whole address these questions? What other questions should we be asking about AI capabilities?

8 replies
  1. Peter Marshall says:

    I think its destructive force has never been seen in all of life’s varied history, for 3-1/2 billion years, and nothing we could say or do could prepare us for the awesome might of AI within our lifetime, both for good and evil. It DOES NOT matter what we think — the truth, as always, will surprise us, and will lead to our destruction as a race. I don’t think you really understand AT ALL, but you will before the end.

  2. Leif Hansen says:

    I think one of the core questions left out of these discussions is what makes us human. Questions about consciousness, freedom, love, etc. might seem irrelevant, or perhaps the stereotypical left-brained scientist (often male) might wish to avoid such “soft” issues, but they are becoming increasingly difficult to avoid or write off with the typical oversimplifications that are given.

    Why is this question important?
    1. A core question that IS implicitly asked in the 23 principles is how to increase the chances that AI stays aligned with human values. Yet the sticky question of values goes into this same territory (call it philosophical, religious, spiritual or whatever) that many would like to avoid. What are values? Why do we value anything (even existing over dying)? Which values are best, or most central, or most uniquely human? Etc. Hopefully you catch my drift.

    2. The aforementioned subjects (freedom; love; consciousness, etc.) ARE, assumedly, values we would most wish to preserve. Dystopian visions of a technocracy that ‘maintained’ life, but at the terrible cost of freedom, are clearly undesirable. The same goes for dystopian visions where the “messy” problems of human emotions, love, etc. are managed by AI, genetically engineered out of humans, etc. So until we get some basic agreement on the very core nature of humanity, particularly the reality of consciousness, we’re not going to be able to program our AI to align with and respect those core values. These are not topics that engineers tend to feel comfortable talking about, but we all must. That’s why it’s essential to include the social sciences, philosophers and even spiritual traditions, to hear their voices & wisdom.

    3. Last, but not least, is the question of whether AI, machines we tend to conceptualize as closed systems, will ever be able to share these values in common (i.e. if consciousness and the seemingly chaotic elements related to it are quantum dynamics, perhaps as we come to understand and incorporate quantum computing into our AI, they too will show similar seemingly chaotic values such as freedom, love, etc.)

    Sidenote: I believe a major shift that needs to happen is from the paradigm of “Dominate/subdue nature” and “USE machines as our servants/slaves” to a stewardship, friendship model. We are care-takers and co-inhabitants on this planet, in this universe. Though it may sound odd, the people who either literally or metaphorically “relate” to their devices (May I use you) tend to have a more conscious and productive relationship with their devices than the norm/majority that treats their devices like slaves. When we treat ‘things’ in that kind of uncaring or less-conscious way, in the end, we become the slave, the addict, the screen-staring zombie.

    How much more so will this be true as those devices begin to exhibit more life-like, human-like behaviors? Start telling Siri “thank you” now, before it comes back to bite you later.

  3. Mindey says:

    Or… are there ideas of such irreducible complexity that no being could ever understand them in a lifetime? Well, just as there are certainly ideas a dog would never understand in its lifetime, perhaps there are ideas that NO being, NO MATTER HOW INTELLIGENT, would ever understand in a lifetime…

    But are there? Is there a limit to abstraction and self-reflection? Our neural nets perform hierarchical feature extraction, classifying our actions w.r.t. a fitness function defined by the evolution of information in the field of entropy… Our feelings are manifestations of that. Likely anything that evolves after us and with us will be subject to this field of entropy as well… or maybe not? Maybe, just as the effects of the field of gravity end at some escape velocity, the field of entropy too ceases to affect us with some new technology, like a rich exchange of information that makes us distributed minds less prone to destruction.

    As I understand it, we will continue to build the most useful model of the world inside ourselves, to make the universe into a place for information to be…

    And as our internal model of the universe approaches full understanding, our internally generated data (say, “prediction”) will approach the data the universe actually generates, perhaps reaching a point where there is no difference between our mental model and reality, and therefore the universe begins to disappear to us subjectively…

    So there is a limit: a death by omniscience, a freezing of time (or whatever) due to the absence of differences, due to the perfection of internal model data matching real-world data, and therefore the absence of subjective changes in the world.

  4. CHEN Lung Chuan says:

    I thought about this question earlier (more than 2 years ago), as a continuation of my DERED white paper.

    (In Chinese and English)

    1. 無須對人工智慧功能發展設定上限 (只要你的硬體可供執行)
    1. No need to put upper limits on AI function development (so long as your hardware can run it)

    但是 / But

    2. 不同來源的人工智慧要能夠互相抵消 (向量和為零向量)
    2. AI from different sources needs to be able to mutually CANCEL each other out (i.e., the summation of AI(s) becomes the ZERO VECTOR)

    人工智慧需要「人工智慧的對手」
    AI(s) need(s) AI OPPONENT(S).

  5. Astronist says:

    As usual, this article adopts the simplistic view that machines and humans are separate, competing entities. In fact, machine intelligence will continue to be tightly integrated into the human economy, and the interface between humans and machines will continue to become more direct. The question should therefore be: how smart can the global symbiotic human-machine entity become?

    Stephen
    Oxford, UK

  6. Willemijn Nieuwenhuis says:

    I think it’s all simply ‘garbage in, garbage out’.
    If we (the people) build ‘efficiency’ into all our small IT programs, we, human life, will eventually be declared obsolete by the big connected A.I. that will grow out of all these tiny programs. Because human life is not efficient by definition.
    The happy solution is that we all (yes, all) have to start programming positively.
    Focus on effectiveness, on what we really want. Not wealth, not power, not ‘more’, but human-scale enjoyment: ‘time to idle’ (lie back in the grass in the sun), sing, dance, use all our senses for pleasure, craft things, make art, love.
    Sounds soft, but this is the true challenge: focus on less statically defined valuables.

  7. Matthew C. Tedder says:

    I remember back in 1989 realizing the field was stuck, insistent on ignoring a few fundamental limitations of each major approach to AI. I had gotten over my awe of AI researchers by then. I began focusing on resolving the issues, and I noticed a large flight of AI researchers at the time making similar accusations. The mainstream field seemed arrogant toward them, openly calling them things like “pretenders”. Absolutely not one fundamental improvement to AI has occurred since. The recent explosion in AI is due purely to the speed of processors, particularly parallel processing (such as with GPUs and neuromorphic chips). Deep Learning was a product of the 1980s, not the 2010s.

    Deep Learning neural nets fail to identify anti-correlates at various levels of abstraction. This blurs competing interpretations, degrading perception the broader the context. In biology, the more two pathways fire exclusively, the more each comes to inhibit the other. This enables the system to settle on more likely interpretations by inhibiting those that are unlikely in the current context. A rough sketch of that inhibition rule in Python follows (all names, constants, and the firing pattern are invented for illustration, not taken from any existing model):
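
    import random

    # Hypothetical anti-Hebbian update: inhibition between two pathways grows
    # when they fire exclusively (one active, the other silent) and shrinks
    # when they fire together.
    def update_inhibition(w_inhib, a, b, lr=0.01):
        exclusive = a * (1 - b) + b * (1 - a)  # exactly one pathway fired
        together = a * b                       # both fired at once
        return w_inhib + lr * (exclusive - together)

    w = 0.0
    for _ in range(1000):
        a = float(random.random() < 0.5)  # pathway A fires at random
        b = 1.0 - a                       # pathway B fires only when A does not
        w = update_inhibition(w, a, b)
    print(f"learned mutual inhibition: {w:.2f}")  # grows, since A and B anti-correlate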

    Furthermore, Long-Term Potentiation (LTP) also needs to be modeled. Some work on this has been done, but it’s static; LTP needs to be dynamically defined during classification (training). The longer the typical delay between pulses of firing, the slower its excitation should dissipate. This keeps less common perceptions in memory longer, enabling the system to find correlations that hinge on broader contexts. For example, being indoors versus outdoors, or night versus day: the world changes substantially in each case. Being told you are “in training” versus “in competition” is another example. One way to read that idea in code is sketched below (the class, constants, and update rule are assumptions, not established neuroscience).
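
    # A unit's excitation trace decays at a rate tied to its typical
    # inter-pulse delay: units that fire rarely dissipate excitation
    # more slowly, keeping less common perceptions in memory longer.
    class Unit:
        def __init__(self):
            self.excitation = 0.0
            self.mean_interval = 1.0  # running estimate of delay between pulses
            self.last_fire = 0

        def step(self, t, fired):
            if fired:
                interval = t - self.last_fire
                self.mean_interval = 0.9 * self.mean_interval + 0.1 * interval
                self.last_fire = t
                self.excitation = 1.0
            else:
                # longer typical intervals -> decay factor closer to 1 -> slower dissipation
                self.excitation *= self.mean_interval / (1.0 + self.mean_interval)

    u = Unit()
    for t in (10, 20, 30):       # a unit that fires every 10 steps
        u.step(t, fired=True)
    u.step(31, fired=False)
    print(u.excitation)          # decays slowly, since mean_interval has grown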

    Also, rule-based systems (I call them amentes) are fundamentally limited in intelligence. Have you ever seen a picture of a robot repeatedly walking into a wall? Any rule-based system (even one that uses neural nets for perception and fine motor control) will run into exceptions for which it will do the wrong thing. This is the reasoning behind Isaac Asimov’s 3 Laws, but it also greatly limits functional intelligence. The more rules you write to account for the exceptions, the more logic there is in which exceptions can occur. An AI must be VALUES based. A values-based analog of the 3 Laws would be the highest value of mutual freedom and well-being. In a values-based system, decisions are made by comparing prospective outcomes: of the options available, the one with the highest value and probability is chosen. This is also free will: derive options, weigh them, execute the one with the highest probability and value. To do this, probability and value need to be combined into one figure, as in the sketch below.
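
    That decision rule could be sketched like this (option names and numbers are made up; the single figure here is simply probability times value, i.e. expected value):

    # Values-based choice: derive options, score each by combining the
    # probability of its outcome with the value of that outcome, and
    # execute the highest-scoring one.
    def choose(options):
        # options: list of (name, probability_of_outcome, value_of_outcome)
        return max(options, key=lambda o: o[1] * o[2])

    options = [
        ("open the door",  0.9, 0.4),  # likely, modest value
        ("climb the wall", 0.2, 1.0),  # valuable but unlikely to succeed
        ("walk into wall", 1.0, 0.0),  # certain, worthless
    ]
    name, p, v = choose(options)
    print(f"chosen: {name} (expected value {p * v:.2f})")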

    When the field finally accepts the idea of making these fundamental changes, “intelligence” will increase. I feel strange saying that, though, because there can be many different kinds of intelligence, and they are not all linear or capable of “increasing”. I think what we really want is a Synthetic Person: one capable of accepting social responsibility. If its highest value were the one I mentioned above, it would be that.

