2018 AGI Safety Grant Program
FLI's second grant program focused on AI safety.
Status: Completed
Grants archive
An archive of all grants awarded under this program:
Ad hoc Teamwork and Moral Feedback as a Framework for Safe Robot Behavior
$200,000.00
Peter Stone, University of Texas
Factored Cognition: Amplifying Human Cognition for Safely Scalable AGI
$225,000.00
Owain Evans, Oxford University
Governance of AI Programme
$276,000.00
Allan Dafoe, Yale University
Incentives for Safety Agreement Compliance in AI Race
$224,747.00
The Anh Han, Teesside University
Paradigms of Artificial General Intelligence and Their Associated Risks
$220,000.00
Jose Hernandez-Orallo, University of Cambridge
Reverse Engineering Fair Cooperation
$150,000.00
Josh Tenenbaum, MIT
Safe Learning and Verification of Human-AI Systems
$250,000.00
Dorsa Sadigh, Stanford University
The Control Problem for Universal AI: A Formal Investigation
$276,000.00
Marcus Hutter, Australian National University
Utility Functions: A Guide for Artificial General Intelligence Theorists
$78,289.00
James Miller, Smith College
Value Alignment and Multi-agent Inverse Reinforcement Learning
$100,000.00
Stefano Ermon, Stanford University