
Grants RFP Overview

December 6, 2015




For many years, Artificial Intelligence (AI) research has been appropriately focused on the challenge of making AI effective, with significant success. In an open letter in January 2015, a large international group of leading AI researchers from academia and industry argued that this success makes it important and timely to also research how to make AI systems robust and beneficial, and that this includes concrete research directions that can be pursued today. The aim of this request for proposals is to support such research.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. However, like any powerful technology, AI has also raised new concerns, such as humans being replaced on the job market and perhaps altogether. Success in creating general-purpose human- or superhuman-level AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. A crucial question is therefore what can be done now to maximize the future benefits of AI while avoiding pitfalls.

This research priorities document gives many examples of research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself. The focus is on delivering AI that is beneficial to society and robust in the sense that the benefits are guaranteed: our AI systems must do what we want them to do. This is a significant expansion in the definition of the field, which up to now has focused on techniques that are neutral with respect to purpose.


This 2015 grants competition is the first wave of the $10M program announced this month, and will give grants totaling about $6M to researchers in academic and other non-profit institutions for projects up to three years in duration, beginning September 1, 2015. Future competitions are anticipated to focus on the areas that prove most successful. Grant applications will be subject to a competitive process of confidential expert peer review similar to that employed by all major U.S. scientific funding agencies, with reviewers being recognized experts in the relevant fields.

Grants will be made in two categories: Project Grants and Center Grants. Project Grants (approx. $100K-$500K) will fund a small group of collaborators at one or more research institutions for a focused research project of up to three years' duration. Center Grants (approx. $500K-$1.5M) will fund the establishment of a (possibly multi-institution) research center that organizes, directs and funds research via subawards.

Proposals for both grant types will be evaluated according to how topical and impactful they are:

TOPICAL: This RFP is limited to research that aims to help maximize the societal benefit of AI, explicitly focusing not on the standard goal of making AI more capable, but on making AI more robust and/or beneficial.
Funding priority will be given to research aimed at keeping AI robust and beneficial even if it comes to greatly supersede current capabilities, either by explicitly focusing on issues related to advanced future AI or by focusing on near-term problems, the solutions of which are likely to be important first steps toward long-term solutions.

Appropriate research topics for Project Grants span multiple fields; as a general rule of thumb, any project that focuses on making AI more robust and/or beneficial is eligible, even if the project’s topic is not specifically named here. For our most comprehensive list of example research questions, please refer to A survey of research questions for robust and beneficial artificial intelligence, but bear in mind that this list is not intended to be complete.

For the sake of convenience, a very incomplete list of example research topics is given here:

  1. Computer Science:
    • Verification: how to prove that a system satisfies certain desired formal properties. (“Did I build the system right?”)
    • Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences. (“Did I build the right system?”)
    • Security: how to prevent intentional manipulation by unauthorized parties.
    • Control: how to enable meaningful human control over an AI system after it begins to operate.
  2. Law and ethics:
    • How should the law handle liability for autonomous systems? Must some autonomous systems remain under meaningful human control?
    • Should some categories of autonomous weapons be banned?
    • Machine ethics: How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost? Should such trade-offs be the subject of national standards?
    • To what extent can/should privacy be safeguarded as AI gets better at interpreting the data obtained from surveillance cameras, phone lines, emails, shopping habits, etc.?
  3. Economics:
    • Labor market forecasting
    • Labor market policy
    • How can a low-employment society flourish?
  4. Education and outreach:
    • Summer/winter schools on AI and its relation to society, targeted at AI graduate students and postdocs
    • Non-technical mini-schools/symposia on AI targeted at journalists, policymakers, philanthropists and other opinion leaders.

This RFP solicits Center Grants on the topic of AI policy, including forecasting. Proposed centers should address questions spanning (but not limited to) the following:

  • What is the space of AI policies worth studying? Possible dimensions include implementation level (global, national, organizational, etc.), strictness (mandatory regulations, industry guidelines, etc.) and type (policies/monitoring focused on software, hardware, projects, individuals, etc.).
  • Which criteria should be used to determine the merits of a policy? Candidates include verifiability of compliance, enforceability, ability to reduce risk, ability to avoid stifling desirable technology development, adoptability, and ability to adapt over time to changing circumstances.
  • Which policies are best when evaluated against these criteria of merit? Addressing this question (which is anticipated to involve the lion’s share of the proposed work) would include detailed forecasting of how AI development will unfold under different policy options.

The relative amount of funding for different areas is not predetermined, but will be optimized to reflect the number and quality of applications received. Very roughly, the expectation is ~50% computer science, ~20% policy, ~15% law, ethics & economics, and ~15% education.

IMPACTFUL: Proposals will be rated according to their expected positive impact per dollar, taking all relevant factors into account, such as:

  1. Intrinsic intellectual merit, scientific rigor and originality
  2. A high product of likelihood for success and importance if successful (i.e., high-risk research can be supported as long as the potential payoff is also very high)
  3. The likelihood of the research opening fruitful new lines of scientific inquiry
  4. The feasibility of the research in the given time frame
  5. The qualifications of the Principal Investigator and team with respect to the proposed topic
  6. The part a grant may play in career development
  7. Cost effectiveness: Tight budgeting is encouraged in order to maximize the research impact of the project as a whole, with emphasis on scientific return per dollar rather than per proposal
  8. Potential to impact the greater community as well as the general public via effective outreach and dissemination of the research results

Strong proposals will make it easy for FLI to evaluate their impact by explicitly stating what they aim to produce (publications, algorithms, software, events, etc.) and when (after the 1st, 2nd and 3rd year, say). Preference will be given to proposals whose deliverables are made freely available (open access publications, open source software, etc.).

To maximize its impact per dollar, this RFP is intended to complement, not supplement, conventional funding. We wish to enable research that, because of its long-term focus or its non-commercial, speculative or non-mainstream nature, would otherwise go unperformed for lack of available resources. Thus, although there will be inevitable overlaps, an otherwise scientifically rigorous proposal that is a good candidate for an FLI grant will generally not be a good candidate for funding by the NSF, DARPA, corporate R&D, etc., and vice versa. To be eligible, research must focus on making AI more robust/beneficial, as opposed to the standard goal of making AI more capable.

To aid prospective applicants in determining whether a project is appropriate for FLI, we have provided lists of questions and topics that make suitable targets for research funded under this program in the research priorities document.

Acceptable uses of grant funds for Project Grants include:

  • Student/postdoc/researcher salary and benefits
  • Summer salary and teaching buyout for academics
  • Support for specific projects during sabbaticals
  • Assistance in writing or publishing books or journal articles, including page charges
  • Modest allowance for justifiable lab equipment, computers, and other research supplies
  • Modest travel allowance
  • Development of workshops, conferences, or lecture series for professionals in the relevant fields
  • Overhead of at most 15%. (If this is an issue with your institution, or if your organization is not a non-profit, you can contact FLI to learn about other organizations that can help administer an FLI grant for you.)

Subawards are discouraged in the case of Project Grants, but perfectly acceptable for Center Grants.


Applications will be accepted electronically through a standard form on our website and evaluated in a two-part process, as follows:

  1. INITIAL PROPOSAL (due March 1, 2015, 11:59 PM Eastern Time). Must include:
    • A summary of the project, explicitly addressing why it is topical and impactful. This should be 300-500 words for Project Grants and 500-1000 words for Center Grants.
    • A draft budget description not exceeding 200 words, including an approximate total cost over the life of the award and explanation of how funds would be spent
    • A Curriculum Vitae for the Principal Investigator, which MUST be in PDF format, including:
      • Education and employment history
      • A list of up to five representative publications. Optional: if the PI has any previous publications relevant to the proposed research, they may list up to five of these as well, for a total of up to 10 representative and relevant publications. We do wish to encourage PIs to enter relevant research areas where they may not have had opportunities before, so prior relevant publications are not required.
      • Full publication list
    • For Center Grants only: listing and brief bio of Center Co-Investigators, including if applicable the lead investigator at each institution that is part of the center.

    A review panel assembled by FLI will screen each Initial Proposal according to the criteria in Section II. Based on their assessment, the Principal Investigator (PI) may be invited, on or about March 21, 2015, to submit a Full Proposal, perhaps with feedback from FLI on improving the proposal. Please keep in mind that however positive FLI may be about a proposal at any stage, it may still be turned down for funding after full peer review.

  2. FULL PROPOSAL (due May 17, 2015). Must include:
    • Cover sheet
    • A 200-word project abstract, suitable for publication in an academic journal
    • A project summary not exceeding 200 words, explaining the work and its significance to laypeople
    • A detailed description of the proposed research, not to exceed 15 single-spaced 11-point pages (20 pages for Center Grants), including a short statement of how the application fits into the applicant’s present research program, and a description of how the results might be communicated to the wider scientific community and general public
    • A detailed budget over the life of the award, with justification and utilization distribution (preferably drafted by your institution’s grant officer or equivalent)
    • A list, for all project senior personnel, of all present and pending financial support, including project name, funding source, dates, amount, and status (current or pending)
    • Evidence of tax-exempt status of grantee institution, if other than a US university.
    • Names of three recommended referees
    • Curricula Vitae for all project senior personnel, including:
      • Education and employment history
      • A list of references of up to five previous publications relevant to the proposed research, and up to five additional representative publications
      • Full publication list
    • Additional material may be requested in the case of Center Grants, as specified in the invitation and feedback phase.

Completed Full Proposals will undergo a competitive process of external and confidential expert peer review, evaluated according to the criteria described in Section III. A review panel of scientists in the relevant fields will be convened to produce a final rank ordering of the proposals, which will determine the grant winners, and to make budgetary adjustments if necessary. Public award recommendations will be made on or about July 1, 2015.


The peer review and administration of this grants program will be managed by the Future of Life Institute (FLI). FLI is an independent, philanthropically funded non-profit organization whose mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges.

FLI will direct these grants through a Donor Advised Fund (DAF) at the Silicon Valley Community Foundation. FLI will solicit grant applications and have them peer reviewed, and on the basis of these reviews, FLI will advise the DAF on which grants to make. After grants have been made by the DAF, FLI will work with the DAF to monitor grantees’ performance via grant reports. In this way, researchers will continue to interact with FLI, while the DAF interacts mostly with their institutions’ administrative or grants management offices.

