2023 Grants
The University of Chicago
Project Summary
Support for the University of Chicago's Existential Risk Laboratory (XLab). At this stage, XLab is focused on building student and faculty interest in existential risk through a postdoctoral fellowship, undergraduate courses, and a summer research fellowship. The ultimate aim of these activities is to offer a minor in Existential Risk Studies through an entity with active faculty research groups.
Ought, Inc.
Project Summary
General support. Ought is a product-driven research lab that develops mechanisms for delegating high-quality reasoning to advanced machine learning systems. Ought is building Elicit, a research assistant that uses language models to scale up high-quality reasoning in the world. Ought also conducts research advocating for supervising the processes of machine learning systems, not just their outcomes, in order to avoid alignment risks from goal misspecification and opaque AI systems.
FAR AI, Inc.
Project Summary
General support. FAR accelerates neglected but high-potential AI safety research agendas. It supports projects that are too large for academia to lead or that the commercial sector overlooks because they are unprofitable. FAR AI's mission is to ensure AI systems are trustworthy and beneficial to society.
Center for AI Safety, Inc.
Project Summary
General support. The Center for AI Safety (CAIS) exists to ensure the safe development and deployment of AI. AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to manage the risks from AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.
Berkeley Existential Risk Initiative
Project Summary
Support for the Berkeley Existential Risk Initiative's (BERI) collaboration with The Center for Human-Compatible Artificial Intelligence (CHAI). BERI’s mission is to improve human civilization’s long-term prospects for survival and flourishing. Currently, its main strategy is collaborating with university research groups working to reduce existential risk by providing them with free services and support. CHAI’s mission is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.
Redwood Research Group, Inc.
Project Summary
General support. Redwood Research Group's main work is theoretical and applied AI alignment research. It is especially interested in practical projects motivated by theoretical arguments for how the techniques it develops might successfully scale to the superhuman regime. It also runs Constellation, a co-working space with members from ARC, MIRI, Open Philanthropy, Redwood, and several other organizations.
Alignment Research Center
Project Summary
Support for the Alignment Research Center (ARC) Evaluations (Evals) Team. Evals is a new team at ARC building capability evaluations (and, in the future, alignment evaluations) for advanced ML models. The goals of the project are to improve our understanding of what alignment danger will look like, gauge how far we are from dangerous AI, and create metrics that labs can make commitments around.