
2023 Grants

This page highlights some of the institutional grants that the Future of Life Institute awarded in 2023.
Status: Funds allocated

Grants archive

An archive of all grants provided within this grant program:

Project Summary: AI Impacts

General support. AI Impacts performs research related to the future of AI, aiming to answer decision-relevant questions in the most neglected areas of AI strategy and forecasting. The intended audience includes researchers working on artificial intelligence, philanthropists funding AI research, and policy-makers whose decisions may be influenced by their expectations about artificial intelligence.

Project Summary: AI Objectives Institute

Support for the Talk to the City and Moral Mirror projects. The AI Objectives Institute (AOI) is a non-profit research lab of leading builders and researchers. AOI brings together volunteer researchers from top AI labs with product builders and experts in psychology, political theory, and economics. Together, they take a sociotechnical-systems perspective on AI dynamics and post-singularity outcomes, improving the odds that human values thrive in a world of rapidly deployed, extremely capable AI systems evolving alongside existing institutions and incentives.

Project Summary: Alignment Research Center (ARC)

Support for the Alignment Research Center (ARC) Evaluation (Evals) Team. Evals is a new team at ARC building capability evaluations (and, in the future, alignment evaluations) for advanced ML models. The project aims to improve our understanding of what alignment danger will look like, to gauge how far we are from dangerous AI, and to create metrics that labs can make commitments around.

Project Summary: Berkeley Existential Risk Initiative (BERI)

Support for the Berkeley Existential Risk Initiative’s (BERI) collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI). BERI’s mission is to improve human civilization’s long-term prospects for survival and flourishing. Currently, its main strategy is collaborating with university research groups working to reduce existential risk by providing them with free services and support. CHAI’s mission is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.

Project Summary: Center for AI Safety

General support. The Center for AI Safety (CAIS) exists to ensure the safe development and deployment of AI. AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to manage the risks from AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.

Project Summary: Center for Humane Technology

Support for AI-related policy work and messaging cohesion within the AI x-risk community. The Center for Humane Technology works to align technology with humanity’s best interests. They envision a world with technology that respects our attention, improves our well-being, and strengthens communities.

Project Summary: Centre for Long-Term Resilience

General support. The Centre for Long-Term Resilience (CLTR) is an independent UK-based think tank with a mission to transform global resilience to extreme risks. It does this by working with governments and other institutions to improve relevant governance, processes, and decision-making. Believing that extreme risks pose some of the biggest challenges of the 21st century, CLTR aims to collaborate with a broad coalition of policymakers, academics, think tanks, and private-sector risk experts who share its goals.

Project Summary: Collective Intelligence Project

Support for the Alignment Assemblies for Legitimate AI Governance project. The Collective Intelligence Project is an experimental R&D organization that advances collective intelligence capabilities, meaning tools and systems for better constructing and cooperating towards shared goals, for the effective collective governance of AI and other transformative technologies. Its mission is to direct technological development towards the collective good by making it possible to elicit and act on collective values, and in doing so to avoid disproportionate risks and harness benefits.

Project Summary: FAR AI

General support. FAR AI accelerates neglected but high-potential AI safety research agendas. It supports projects that are either too large for academia to lead or overlooked by the commercial sector because they are unprofitable. FAR AI’s mission is to ensure AI systems are trustworthy and beneficial to society.

Project Summary: Foresight Institute

Support for the Existential Hope and Tech Tree programs. Foresight Institute supports the development of beneficial high-impact technology to make great futures for life more likely. It does this mainly by selectively advancing beneficial uses of high-impact science and technology via five technical interest groups: 1) Molecular Machines: atomically precise control of matter; 2) Biotech: reversing aging and improving cognition; 3) Computer Science: secure, decentralized human-AI cooperation; 4) Neurotech: improving cognition; 5) Space: expanding outward.

Project Summary: Ought

General support. Ought is a product-driven research lab that develops mechanisms for delegating high-quality reasoning to advanced machine learning systems. Ought is building Elicit, a research assistant that uses language models to scale up high-quality reasoning in the world. Ought also conducts research advocating for supervising the process of machine learning systems, not just their outcomes, to avoid alignment risks from goal misspecification and opaque AI systems.

Project Summary: Oxford China Policy Lab

General support. The Oxford China Policy Lab (OCPL) is a non-partisan group of researchers based at the University of Oxford. Its overarching goal is to mitigate the possibility of global risk associated with US-China great power competition (GPC), with a particular focus on risks stemming from artificial intelligence and other emerging technologies.

Project Summary: Redwood Research

General support. Redwood Research Group’s main work is theoretical and applied AI alignment research. They are especially interested in practical projects motivated by theoretical arguments for how the techniques they develop might successfully scale to the superhuman regime. They also run Constellation, a co-working space with members from ARC, MIRI, Open Philanthropy, Redwood, and other organizations.

Project Summary: The Future Society

General support. The Future Society (TFS) is an independent nonprofit organization based in the US and Europe with a mission to align AI through better governance.

Project Summary: University of Chicago Existential Risk Laboratory (XLab)

Support for the University of Chicago’s Existential Risk Laboratory (XLab). At this stage, XLab is focused on building student and faculty interest in existential risk through a postdoctoral fellowship, undergraduate courses, and a summer research fellowship. These activities are ultimately aimed at offering a minor in Existential Risk Studies through an entity with active faculty research groups.

Project Summary: UC Berkeley AI Policy Hub

The AI Policy Hub is housed at the AI Security Initiative within UC Berkeley’s Center for Long-Term Cybersecurity (CLTC) and at the CITRIS Policy Lab within the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS). The Hub is training a new generation of AI policy talent to shape the future of artificial intelligence in the public interest. This fellowship program strengthens interdisciplinary research approaches to AI policy while expanding the inclusion of diverse perspectives. The Hub supports annual cohorts of graduate student researchers who conduct innovative research and make meaningful contributions to the AI policy landscape. Fellows receive faculty and staff mentorship, access to world-renowned experts and hands-on training sessions, connections with policymakers and other decision-makers, and opportunities to share their work at a public symposium.
