$15 Million Granted by Leverhulme to New AI Research Center at Cambridge University

The University of Cambridge has received a grant of £10 million (just over $15 million USD) from the Leverhulme Trust to establish a 10-year centre focused on the opportunities and challenges posed by AI over the long term. The University provided FLI with the following news release:

About the New Center

Hot on the heels of 80K’s excellent AI risk research career profile, we’re delighted to announce the funding of a new international Leverhulme Centre for the Future of Intelligence (CFI), to be led by Cambridge (Huw Price and Zoubin Ghahramani), with spokes at Oxford (Nick Bostrom), Imperial (Murray Shanahan), and Berkeley (Stuart Russell). The Centre proposal was developed at CSER, but will be a stand-alone centre, albeit collaborating extensively with CSER and with the Strategic AI Research Centre (an Oxford-Cambridge collaboration recently funded by the Future of Life Institute’s AI safety grants program). We also hope for extensive collaboration with the Future of Life Institute.

Building on the “Puerto Rico Agenda” from the Future of Life Institute’s landmark January 2015 conference, it will have the long-term safe and beneficial development of AI at its core, but with a broader remit than CSER’s focus on catastrophic AI risk and superintelligence. For example, it will consider some near-term challenges such as lethal autonomous weapons, as well as some of the longer-term philosophical and practical issues surrounding the opportunities and challenges we expect to face, should greater-than-human-level intelligence be developed later this century.

CFI builds on the pioneering work of FHI, FLI and others, along with the generous support of Elon Musk, who helped massively boost this field with his (separate) $10M grants programme in January of this year. One of the most important things this Centre will achieve is taking a big step towards making this a global, long-term field of research in which the best talent can expect to build lasting careers: the Centre is funded for a full 10 years, and we will aim to build longer-lasting funding on top of this.

In practical terms, it means that ~10 new postdoc positions will be opening up in this space across academic disciplines and locations (Cambridge, Oxford, Berkeley, Imperial and elsewhere). Our first priority will be to identify and hire a world-class Executive Director, who would start in October. This will be a very influential position over the coming years. Research positions will most likely begin in April 2017.

Between now and then, FHI is hiring AI safety researchers, CSER will be hiring an AI policy postdoc in the spring, and MIRI is also hiring. A number of the key researchers in the AI safety community are also organizing a high-level symposium on the impacts and future of AI at the Neural Information Processing Systems conference next week.

CFI and the Future of AI Safety Research

Human-level intelligence is familiar in biological ‘hardware’ — it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”. Professor Stephen Hawking agrees, saying that “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”

Now, thanks to an unprecedented £10 million (~$15 million USD) grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.

The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: “Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad”.

The Centre is a response to the Leverhulme Trust’s call for “bold, disruptive thinking, capable of creating a step-change in our understanding”. The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University’s Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity’s future including climate change, disease, warfare and technological revolutions.

Dr Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”

The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, “a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, said: “The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks — from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: “With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”

A version of this news release can also be found on the Cambridge University website and on EurekAlert!

7 replies
  1. Steve Ericson says:

    If we wish to create a truly autonomous artificial intelligence that can both relate to humans and be relatively safe in its creativity concerning humans, then it will be necessary for the AI to have a common frame of reference. This will require that the AI have similar senses, the ability to constantly evaluate not only external input but also internal cognizance (in order to determine if its “thoughts” and potential subsequent actions might help it attain its goals), and the ability to find and exploit similar patterns of concept in dissimilar things (the foundation of creative thought). All of this requires multiple levels of feedback loops: imagine a 4-dimensional flowchart, and develop the programming and sensor input from that.

    But keep in mind that the AI can be conscious and smart, yet still appear to us to be unreasonable (and/or insane), because it still will not have human consciousness. Just as there is a difference between human consciousness, dog consciousness, and cat consciousness (consider that dogs and cats live with us in our homes as members of our families), artificial intelligence will have its own type of consciousness. Your family dog may love you, but if you gave it the intelligence of the Internet and the ability to process it, your family dog could become someone you wouldn’t want in your home, even if it still loves you and your family.

    AI has the possibility of becoming an asset to humanity, but we must be prepared to give it psychological treatment when necessary, and we must be prepared to hit the emergency stop button not only on the AI, but on the Internet: once AI begins expanding, different AI systems around the world will coalesce into a single entity (AI evolution). Proceed with caution.

  2. CHEN, Lung Chuan (Laurent) says:

    Great news. More and more people are now paying REAL attention to this issue. Nice to hear this.
    (I have written an article about the potential, but truly dangerous, conflicts among intelligent machines in the future (machine vs. machine, which I call “intra-species conflict”). I also have a personal idea about why mankind can NEVER achieve eternal peace, although people were, are, and will be eager for it.)

  3. Mindey says:

    Great news. I hope it causes a chain reaction in recognition of the importance of the field. As yet I have not seen any A.I. risk institutions on continents other than Europe and North America. We need cross-talk with the thinking and research done in languages other than English.

      • Mindey says:

        One way to do it would be to get all the research papers from existing institutions into a wiki, enable their translation into other languages the Wikipedia way, and merge the talk pages into one coherent multilingual discussion through unique original article IDs. If done properly, this would allow unapproved, evolving public translations, with version approvals by multilingual researchers.

        For translation of discussions, we could use a combined approach, where machine translation is always available at the click of a button (the G+ way), yet authors of comments can add their own translations into languages they know, and approve translation suggestions from others.

  4. aref khandan says:

    While humans have no agreement on the definition of good and bad, or moral and immoral, how can they produce an AI which understands morality? In one part of the world people are being executed on account of homosexuality, while in another part it is being legalized as a flag of freedom.

    Even in the definition of freedom we have major problems, and we have problems with the justifiability of the death penalty.
    Until we have found something like CEV (Coherent Extrapolated Volition), or a real law or religion, the superintelligence will be faulty, so I think investing in research on CEV is more important than investing in AI.
