All Grant Programs

2023 Grants

Status:
Funds allocated
Applications closed

Grants archive

An archive of all grants provided under this program:
Project title

The University of Chicago

Amount recommended
$380,000.00

Project Summary

Support for the University of Chicago's Existential Risk Laboratory (XLab). At this stage, XLab is focused on building student and faculty interest in existential risk through a postdoctoral fellowship, undergraduate courses, and a summer research fellowship. These activities are aimed at ultimately being able to offer a minor in Existential Risk Studies through an entity with active faculty research groups.

Project title

Ought, Inc.

Amount recommended
$358,000.00

Project Summary

General support. Ought is a product-driven research lab that develops mechanisms for delegating high-quality reasoning to advanced machine learning systems. Ought is building Elicit, a research assistant that uses language models to scale up high-quality reasoning in the world. Ought also conducts research advocating for supervising the process of machine learning systems, not just their outcomes, in order to avoid alignment risks from goal misspecification and opaque AI systems.

Project title

FAR AI, Inc.

Amount recommended
$1,858,000.00

Project Summary

General support. FAR accelerates neglected but high-potential AI safety research agendas. It supports projects that are too large for academia to lead or that the commercial sector overlooks because they are unprofitable. FAR AI’s mission is to ensure AI systems are trustworthy and beneficial to society.

Project title

Center for AI Safety, Inc.

Amount recommended
$22,000.00

Project Summary

General support. The Center for AI Safety (CAIS) exists to ensure the safe development and deployment of AI. AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to manage the risks from AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.

Project title

Berkeley Existential Risk Initiative

Amount recommended
$481,000.00

Project Summary

Support for the Berkeley Existential Risk Initiative's (BERI) collaboration with The Center for Human-Compatible Artificial Intelligence (CHAI). BERI’s mission is to improve human civilization’s long-term prospects for survival and flourishing. Currently, its main strategy is collaborating with university research groups working to reduce existential risk by providing them with free services and support. CHAI’s mission is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.

Project title

Redwood Research Group, Inc.

Amount recommended
$500,000.00

Project Summary

General support. Redwood Research Group’s main work is theoretical and applied AI alignment research. They are especially interested in practical projects motivated by theoretical arguments for how the techniques they develop might successfully scale to the superhuman regime. They also run Constellation, a co-working space with members from ARC, MIRI, Open Philanthropy, Redwood, and several other organizations.

Project title

Alignment Research Center

Amount recommended
$1,401,000.00

Project Summary

Support for the Alignment Research Center (ARC) Evaluation (Evals) Team. Evals is a new team at ARC building capability evaluations (and in the future, alignment evaluations) for advanced ML models. The goals of the project are to improve our understanding of what alignment danger is going to look like, understand how far away we are from dangerous AI, and create metrics that labs can make commitments around.

Our other grant programs

2022 Grants

Completed

Nuclear War Research

Funds allocated

PhD Fellowships

Open for submissions

Sign up for the Future of Life Institute newsletter
