The Future of AI: Quotes and highlights from Monday’s NYU symposium

A veritable who’s who in artificial intelligence spent today discussing the future of their field and how to ensure it will be a good one. This exciting conference was organized by Yann LeCun, head of Facebook’s AI Research, together with a team of his colleagues at New York University. We plan to post a more detailed report once the conference is over, but in the mean time, here are some highlights from today.

One recurrent theme has been optimism, both about the pace at which AI is progressing and about its ultimate potential for making the world a better place. IBM’s Senior VP John Kelly said, “Nothing I have ever seen matches the potential of AI and cognitive computing to change the world,” while Bernhard Schölkopf, Director of the Max Planck Institute for Intelligent Systems, argued that we are now in the cybernetic revolution. Eric Horvitz, Director of Microsoft Research, recounted how 25 years ago, he’d been inspired to join the company by Bill Gates saying “I want to build computers that can see, hear and understand,” and he described how we are now making great progress toward getting there. NVIDIA founder Jen-Hsun Huang said, “AI is the final frontier […] I’ve watched it hyped so many times, and yet, this time, it looks very, very different to me.”

In contrast, there was much less agreement about if or when we’d get human-level AI, which Demis Hassabis from DeepMind defined as “general AI – one system or one set of systems that can do all these different things humans can do, better.” Whereas Demis hoped for major progress within decades, AAAI President Tom Dietterich spoke extensively about the many remaining obstacles, and Eric Horvitz cautioned that this may be quite far off, saying, “we know so little about the magic of the human mind.” On the other hand, Bart Selman, AI Professor at Cornell, said, “within the AI community […] there are a good number of AI researchers that can see systems that cover, let’s say, 90% of human intelligence within a few decades.”

Murray Shanahan, AI professor at Imperial College, appeared to capture the consensus about what we know and don’t know about the timeline, arguing that there are two common mistakes made “particularly by the media and the public.” The first, he explained, “is that human level AI, general AI, is just around the corner, that it’s just […] a couple of years away,” while the second mistake is “to think that it will never happen, or that it will happen on a timescale that is so far away that we don’t need to think about it very much.”

Amidst all the enthusiasm about the benefits of AI technology, many speakers also spoke about the importance of planning ahead to ensure that AI becomes a force for good. Eric Schmidt, former CEO of Google and now Chairman of its parent company Alphabet, urged the AI community to rally around three goals, which were also echoed by Demis Hassabis from DeepMind:

   1. AI should benefit the many, not the few (a point also argued by Emma Brunskill, AI professor at Carnegie Mellon).

   2. AI R&D should be open, responsible and socially engaged.  

   3. Developers of AI should establish best practices to minimize risks and maximize the beneficial impact.

6 replies
  1. Jeffrey
    Jeffrey says:

    How do we leverage AI to enable the accelerating pace of change and requisite adaptation for humans as well as other species?

    Reply
  2. Steve Ericson
    Steve Ericson says:

    Even very small animals with tiny brains have consciousness (albeit nonhuman consciousness). Once we accomplish the circuitry and programming that enables computers to have computer consciousness, it will only be a matter of data storage and additional computing power, which is already readily available via the Internet. A simple computer consciousness could grow in complexity and computing power very, very fast.

    Reply
  3. Tony Cusano
    Tony Cusano says:

    How easily do we expect to extricate the elements of human intelligence that rely on the benefit of the individual, so we can have a new intelligence that focuses on the benefit of the many? Those two processes seem inextricably intertwined in the intelligence of the human species.

    Reply
  4. Mindey
    Mindey says:

    I hope the R&D is truly socially engaged, that it does cross cultural and language barriers. If not, we run the risk of leaving part of mankind to secretly create something that risks the future of all.

    Wikipedia is a good example of doing a truly multilingual project, except, the comments in “Talk” pages on Wikipedia are not common to all languages, and the words in discussions do not resolve to WikiData concepts (while with a simple IME [input method editor] they could)… There is a lot of space for improvement of translingual communication…

    Overall, really happy to see this agreement among the leading minds in information technologies. Looking forward to future updates.

    Reply
  5. Reuven Kofman
    Reuven Kofman says:

    It is now the spring of applied AI, with a rapid flow of events and publications, but the Thinking Machine (TM) doesn’t yet exist. This is a second era of AI, based on machine learning; like the first, which was based on programming, it faces the challenge of complexity. The more popular questions now concern the future impact of AI, AI security, and applications of AI, rather than why we need Thinking Machines; but work on AI continues, and I hope that a cybernetic systems approach to AI can contribute to its solution.

    Reply
  6. Jay Wilson
    Jay Wilson says:

    I appreciate the three goals of AI, but what world agency is going to enforce them, especially with the military taking the lead?

    Reply
