
AI Researcher Bart Selman

Published:
September 30, 2016
Author:
Revathi Kumar


AI Safety Research




Bart Selman

Professor, Department of Computer Science

Cornell University

bart.selman@gmail.com

Project: Scaling-up AI Systems: Insights From Computational Complexity

Amount Recommended: $24,950




Project Summary

There is general consensus within the AI research community that progress in the field is accelerating: it is believed that human-level AI will be reached within the next one or two decades. A key question is whether these advances will accelerate further after general human-level AI is achieved, and, if so, how rapidly the next level of AI systems (‘super-human’) will be achieved.

Since the mid-1970s, computer scientists have developed a rich theory about the computational resources that are needed to solve a wide range of problems. We will use these methods to make predictions about the feasibility of super-human levels of cognition.

Technical Abstract

There is general consensus within the AI research community that progress in the field is accelerating: it is believed that human-level AI will be reached within the next one or two decades on a range of cognitive tasks. A key question is whether these advances will accelerate further after general human-level AI is achieved, and, if so, how rapidly the next level of AI systems (‘super-human’) will be achieved. Having a better understanding of how rapidly we may reach this next phase will be useful in preparing for the advent of such systems.

Computational complexity theory provides key insights into the scalability of computational systems. We will use methods from complexity theory to analyze the possibility of the scale-up to super-human intelligence and the speed of such scale-up for different categories of cognition.


Machine Reasoning and the Rise of Artificial General Intelligences: An Interview With Bart Selman

From Uber’s advanced computer vision system to Netflix’s innovative recommendation algorithm, machine learning technologies are nearly omnipresent in our society. They filter our emails, personalize our newsfeeds, update our GPS systems, and drive our personal assistants. However, despite the fact that such technologies are leading a revolution in artificial intelligence, some would contend that these machine learning systems aren’t truly intelligent.

The argument, in its most basic sense, centers on the fact that machine learning evolved from theories of pattern recognition and, as such, the capabilities of such systems generally extend to just one task and are centered on making predictions from existing data sets. AI researchers like Rodney Brooks, a former professor of Robotics at MIT, argue that true reasoning, and true intelligence, is several steps beyond these kinds of learning systems.

But if we already have machines that are proficient at learning through pattern recognition, how long will it be until we have machines that are capable of true reasoning, and how will AI evolve once it reaches this point?

Understanding the pace and path that artificial reasoning will follow over the coming decades is an important part of ensuring that AI is safe and does not pose a threat to humanity. But before we can assess the feasibility of machine reasoning across different categories of cognition, or the path artificial intelligences are likely to follow as they evolve, we must first define exactly what is meant by the term “reasoning.”

Understanding Intellect

Bart Selman is a professor of Computer Science at Cornell University. His research is dedicated to understanding the evolution of machine reasoning. He describes reasoning as taking pieces of information, combining them, and using the fragments to draw logical conclusions or derive new information.

Sports provide a ready example of what machine reasoning is all about. When humans see soccer players on a field kicking a ball about, they can, with very little difficulty, ascertain that these individuals are soccer players. Today’s AI can also make this determination. However, humans can also see a person in a soccer outfit riding a bike down a city street and still infer that the person is a soccer player. Today’s AIs probably wouldn’t be able to make this connection.

This process of taking known information, uniting it with background knowledge, and making inferences about what is unknown or uncertain is a reasoning process. To this end, Selman notes that machine reasoning is not about making predictions; it’s about using logical techniques (like the abductive inference in the soccer example above) to answer a question or form an inference.
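
To make this concrete, here is a minimal sketch in Python of the kind of forward-chaining inference described above (illustrative only; the facts and rules for the soccer example are invented for this article, not drawn from Selman’s systems). It repeatedly combines known facts with background if-then rules until nothing new can be derived:

```python
# Minimal illustrative sketch of forward chaining: combine known facts
# with background if-then rules until no new conclusions can be drawn.
# (Facts and rules are invented for the soccer example in this article.)

facts = {"wears_soccer_outfit", "rides_bike_on_street"}

# Background knowledge: each rule maps a set of premises to a conclusion.
rules = [
    ({"wears_soccer_outfit"}, "is_soccer_player"),
    ({"is_soccer_player", "rides_bike_on_street"}, "is_commuting_to_practice"),
]

changed = True
while changed:                      # iterate to a fixed point
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # a new piece of derived information
            changed = True

print(sorted(facts))
# ['is_commuting_to_practice', 'is_soccer_player',
#  'rides_bike_on_street', 'wears_soccer_outfit']
```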

Since humans typically reason not through pattern recognition and synthesis but through logical processes like induction, deduction, and abduction, Selman asserts that machine reasoning is a form of intelligence that is more like human intelligence. He continues by noting that building machines endowed with more human-like reasoning processes, and breaking away from traditional pattern-recognition approaches, is the key to making systems that not only predict outcomes but also understand and explain their solutions. However, Selman notes that making human-level AI is also the first step toward attaining super-human levels of cognition.

And due to the existential threat this could pose to humanity, it is necessary to understand exactly how this evolution will unfold.

The Making of a (super)Mind

It may seem like truly intelligent AI is a problem for future generations. Yet the consensus among AI experts is that rapid progress is already being made in machine reasoning. In fact, many researchers assert that human-level cognition will be achieved across a number of metrics in the next few decades. Questions remain, however, about how AI systems will advance once artificial general intelligence is realized. A key question is whether these advances can accelerate further and scale up to super-human intelligence.

This process is something that Selman has devoted his life to studying. Specifically, he researches the pace of AI scalability across different categories of cognition and the feasibility of super-human levels of cognition in machines.

Selman cautions that making blanket statements about when and how machines will surpass humans is difficult, as machine cognition is disjointed and does not draw a perfect parallel with human cognition. “In some ways, machines are far beyond what humans can do,” Selman explains, “for example, when it comes to certain areas in mathematics, machines can take billions of reasoning steps and see the truth of a statement in a fraction of a second. The human has no ability to do that kind of reasoning.”

However, when it comes to the kind of reasoning mentioned above, where meaning is derived from deductive or inductive processes that are based on the integration of new data, Selman says that computers are somewhat lacking. “In terms of the standard reasoning that humans are good at, they are not there yet,” he explains. Today’s systems are very good at some tasks, sometimes far better than humans, but only in a very narrow range of applications.

Given these variances, how can we determine how AI will evolve in different areas, and how it will accelerate after general human-level AI is achieved?

For his work, Selman relies on computational complexity theory, which has two primary functions. First, it can be used to characterize the efficiency of an algorithm used for solving instances of a problem. As Johns Hopkins’ Leslie Hall notes, “broadly stated, the computational complexity of an algorithm is a measure of how many steps the algorithm will require in the worst case for an instance of a given size.” Second, it is a method of classifying tasks (computational problems) according to their inherent difficulty. Together, these two features offer a formal way of anticipating how artificial intelligences will likely evolve, by identifying the easiest, and therefore most probable, areas of advancement. They also provide key insights into the speed of this scalability.
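
As a rough illustration of Hall’s point, consider a toy Python sketch (the growth rates and input sizes are chosen purely for illustration) showing why worst-case step counts govern scalability: an algorithm whose worst case grows as 2^n becomes unusable long before one that grows as n log n:

```python
# Illustrative only: how worst-case step counts translate into practical
# limits as the input size n grows. Growth rates chosen for illustration.
import math

def worst_case_steps(n):
    return {
        "O(n)":       n,
        "O(n log n)": round(n * math.log2(n)),
        "O(n^2)":     n ** 2,
        "O(2^n)":     2 ** n,
    }

for n in (10, 20, 40):
    print(f"n={n}: {worst_case_steps(n)}")
# At n=40, the 2^n algorithm already needs ~10^12 steps, while the
# n log n algorithm needs ~213: the exponential one has stopped scaling.
```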

Ultimately, this work is important, as the abilities of our machines are fast-changing. As Selman notes, “The way that we measure the capabilities of programs that do reasoning is by looking at the number of facts that they can combine quickly. About 25 years ago, the best reasoning engines could combine approximately 200 or 300 facts and deduce new information from that. The current reasoning engines can combine millions of facts.” This exponential growth has great significance when it comes to the scale-up to human levels of machine reasoning.
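
A quick back-of-the-envelope calculation shows what this growth implies. Assuming, for illustration, 300 facts then and 3 million facts now (Selman says only “millions”), the 25-year figure corresponds to a doubling time of roughly two years:

```python
# Back-of-the-envelope check of the implied growth rate. Assumptions
# (not from the article, which says only "millions"): 300 facts 25
# years ago, 3 million facts today.
import math

facts_then, facts_now, years = 300, 3_000_000, 25
growth = facts_now / facts_then          # 10,000x overall
doublings = math.log2(growth)            # ~13.3 doublings
print(f"doubling time ~= {years / doublings:.1f} years")  # ~1.9 years
```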

As Selman explains, given the present abilities of our AI systems, it may seem like machines with true reasoning capabilities are still some ways off; however, given the rapid rate of technological progress, we will likely start to see machines whose intellectual abilities vastly outpace our own in rather short order. “Ten years from now, we’ll still find them very much lacking in understanding, but twenty or thirty years from now, machines will have likely built up the same knowledge that a young adult has,” Selman notes. Anticipating exactly when this transition will occur will help us better understand the actions we should take, and the research the current generation must invest in, to be prepared for this advancement.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Workshops

  1. The Future of Artificial Intelligence: New York University, NY.
  2. Control and Responsible Innovation in the Development of Autonomous Systems Workshop: April 24-26, 2016. The Hastings Center, Garrison, NY.
    • The four co-chairs (Gary Marchant, Stuart Russell, Bart Selman, and Wendell Wallach) and The Hastings Center staff (particularly Mildred Solomon and Greg Kaebnick) designed this first workshop.
    • This workshop was focused on exposing participants to relevant research progressing in an array of fields, stimulating extended reflection upon key issues and beginning a process of dismantling intellectual silos and loosely knitting the represented disciplines into a transdisciplinary community. Twenty-five participants gathered at The Hastings Center in Garrison, NY from April 24th – 26th, 2016.
    • The workshop included representatives from key institutions that have entered this space, including IEEE, the Office of Naval Research, the World Economic Forum, and of course AAAI.
    • They are planning a second workshop, scheduled for October 30-November 1, 2016.
  3. Colloquium Series on Robust and Beneficial AI (CSRBAI): May 27-June 17, 2016. MIRI, Berkeley, CA.
    • Selman participated in this 22-day colloquium series (https://intelligence.org/colloquium-series/), held with the Future of Humanity Institute, which included four additional workshops.
    • Specific Workshop: “Robustness and Error-Tolerance.” June 4-5.
      • How can humans ensure that when AI systems fail, they fail gracefully and detectably? This is difficult for systems that must adapt to new or changing environments; standard PAC guarantees for machine learning systems fail to hold when the distribution of test data does not match the distribution of training data (a toy illustration of this failure mode follows below). Moreover, systems capable of means-end reasoning may have incentives to conceal failures that would result in their being shut down. Researchers would much prefer to have methods of developing and validating AI systems such that any mistakes can be quickly noticed and corrected.
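
The distribution-shift failure described above can be made concrete with a small sketch (an invented setup, not taken from the workshop): a learner that latches onto a spuriously correlated feature looks reliable on its training distribution, then performs worse than chance once the test distribution shifts:

```python
# Illustrative sketch (invented setup, not from the workshop): a learner
# that latched onto a spuriously correlated feature looks reliable on
# training data, then fails badly when the test distribution shifts.
import random
random.seed(0)

def sample(spurious_agreement):
    """Return (causal_feature, spurious_feature, label)."""
    y = random.randint(0, 1)
    causal = random.gauss(2 * y - 1, 1.0)      # noisily predicts y everywhere
    sign = 1 if random.random() < spurious_agreement else -1
    spurious = sign * (2 * y - 1)              # agreement rate varies by regime
    return causal, spurious, y

train   = [sample(0.95) for _ in range(2000)]  # spurious feature agrees 95%
shifted = [sample(0.05) for _ in range(2000)]  # after the shift it mostly flips

# A learner that picked the spurious feature (it looked nearly perfect):
predict = lambda causal, spurious: int(spurious > 0)

def accuracy(data):
    return sum(predict(c, s) == y for c, s, y in data) / len(data)

print(f"train: {accuracy(train):.2f}, shifted: {accuracy(shifted):.2f}")
# train: ~0.95, shifted: ~0.05 -- worse than chance after the shift
```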


This content was first published at futureoflife.org on September 30, 2016.

