
$15 Million Granted by Leverhulme to New AI Research Center at Cambridge University

Published:
December 3, 2015
Author:
Seán Ó hÉigeartaigh


The University of Cambridge has received a grant of just over $15 million USD from the Leverhulme Trust to establish a 10-year centre focused on the long-term opportunities and challenges posed by AI. The University provided FLI with the following news release:

About the New Center

Hot on the heels of 80K’s excellent AI risk research career profile, we’re delighted to announce the funding of a new international Leverhulme Centre for the Future of Intelligence (CFI), to be led by Cambridge (Huw Price and Zoubin Ghahramani), with spokes at Oxford (Nick Bostrom), Imperial (Murray Shanahan), and Berkeley (Stuart Russell). The Centre proposal was developed at CSER, but will be a stand-alone centre, albeit collaborating extensively with CSER and with the Strategic AI Research Centre (an Oxford-Cambridge collaboration recently funded by the Future of Life Institute’s AI safety grants program). We also hope for extensive collaboration with the Future of Life Institute.

Building on the “Puerto Rico Agenda” from the Future of Life Institute’s landmark January 2015 conference, it will have the long-term safe and beneficial development of AI at its core, but with a broader remit than CSER’s focus on catastrophic AI risk and superintelligence. For example, it will consider some near-term challenges such as lethal autonomous weapons, as well as some of the longer-term philosophical and practical issues surrounding the opportunities and challenges we expect to face, should greater-than-human-level intelligence be developed later this century.

CFI builds on the pioneering work of FHI, FLI and others, along with the generous support of Elon Musk, who helped massively boost this field with his (separate) $10M grants programme in January of this year. One of the most important things this Centre will achieve is taking a big step towards making this global area of research one in which the best talents can expect to have lasting careers – the Centre is funded for a full 10 years, and we aim to build longer-lasting funding on top of this.

In practical terms, it means that ~10 new postdoc positions will be opening up in this space across academic disciplines and locations (Cambridge, Oxford, Berkeley, Imperial and elsewhere). Our first priority will be to identify and hire a world-class Executive Director, who would start in October. This will be a very influential position over the coming years. Research positions will most likely begin in April 2017.

Between now and then, FHI is hiring for AI safety researchers, CSER will be hiring for an AI policy postdoc in the spring, and MIRI is also hiring. A number of the key researchers in the AI safety community are also organizing a high-level symposium on the impacts and future of AI at the Neural Information Processing Systems conference next week.


CFI and the Future of AI Safety Research

Human-level intelligence is familiar in biological ‘hardware’ — it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”. Professor Stephen Hawking agrees, saying that “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”

Now, thanks to an unprecedented £10 million (~$15 million USD) grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.

The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: “Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad”.

The Centre is a response to the Leverhulme Trust’s call for “bold, disruptive thinking, capable of creating a step-change in our understanding”. The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University’s Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity’s future including climate change, disease, warfare and technological revolutions.

Dr Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”

The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, “a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, said: “The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks — from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: “With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”

A version of this news release can also be found on the University of Cambridge website and at EurekAlert!.

This content was first published at futureoflife.org on December 3, 2015.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
