
$11M AI safety research program launched

Published: October 12, 2015
Author: Max Tegmark


Elon Musk-backed program signals growing interest in new branch of artificial intelligence research.

A new international grants program jump-starts research to ensure AI remains beneficial.


July 1, 2015
Amid rapid industry investment in developing smarter artificial intelligence, a new branch of research has begun to take off, aimed at ensuring that society can reap the benefits of AI while avoiding potential pitfalls.

The Boston-based Future of Life Institute (FLI) announced the selection of 37 research teams around the world to which it plans to award about $7 million from Elon Musk and the Open Philanthropy Project as part of a first-of-its-kind grant program dedicated to “keeping AI robust and beneficial”. The program launches as an increasing number of high-profile figures including Bill Gates, Elon Musk and Stephen Hawking voice concerns about the possibility of powerful AI systems having unintended, or even potentially disastrous, consequences. The winning teams, chosen from nearly 300 applicants worldwide, will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.

The 37 projects being funded include:

  • Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
  • A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
  • A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
  • A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
  • A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
  • A new Oxford-Cambridge research center for studying AI-relevant policy

As Skype founder Jaan Tallinn, one of FLI’s founders, has described this new research direction, “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”

When the Future of Life Institute issued an open letter in January calling for research on how to keep AI both robust and beneficial, it was signed by a long list of AI researchers from academia, nonprofits and industry, including AI research leaders from Facebook, IBM, and Microsoft and the founders of Google’s DeepMind Technologies. That widespread agreement is what moved Elon Musk to seed the research program that has now begun.

“Here are all these leading AI researchers saying that AI safety is important”, said Musk at the time. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”

“I am glad to have an opportunity to carry this research focused on increasing the transparency of AI robotic systems,” said Manuela Veloso, past president of the Association for the Advancement of Artificial Intelligence (AAAI) and winner of one of the grants.

“This grant program was much needed: because of its emphasis on safe AI and multidisciplinarity, it fills a gap in the overall scenario of international funding programs,” added Prof. Francesca Rossi, president of the International Joint Conference on Artificial Intelligence (IJCAI), also a grant awardee.

Tom Dietterich, president of the AAAI, described how his grant — a project studying methods for AI learning systems to self-diagnose when failing to cope with a new situation — breaks the mold of traditional research:

“In its early days, AI research focused on the ‘known knowns’ by working on problems such as chess and blocks world planning, where everything about the world was known exactly. Starting in the 1980s, AI research began studying the ‘known unknowns’ by using probability distributions to represent and quantify the likelihood of alternative possible worlds. The FLI grant will launch work on the ‘unknown unknowns’: How can an AI system behave carefully and conservatively in a world populated by unknown unknowns — aspects that the designers of the AI system have not anticipated at all?”

As Terminator Genisys debuts this week, organizers stressed the importance of separating fact from fiction. “The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI”, said FLI president Max Tegmark. “We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.”

The full list of research grant winners can be found here. The plan is to fund these teams for up to three years, with most of the research projects starting by September 2015, and to focus the remaining $4M of the Musk-backed program on the areas that emerge as most promising.

FLI has a mission to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.

Contacts at the Future of Life Institute:

  • Max Tegmark: tegmark@mit.edu
  • Meia Chita-Tegmark: meia@bu.edu
  • Jaan Tallinn: jaan@futureoflife.org
  • Anthony Aguirre: aguirre@scipp.ucsc.edu
  • Viktoriya Krakovna: vika@futureoflife.org
  • Jesse Galef: jesse@futureoflife.org
This content was first published at futureoflife.org on October 12, 2015.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
