Fellowship Winners 2022
These are the winners of our 2022 grant programs for research concerning the safe development and deployment of AI.
Vitalik Buterin PhD Fellows
The Vitalik Buterin PhD Fellowship in AI Existential Safety is for students starting PhD programs in 2022 who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research. It funds students for five years of their PhD, with extension funding possible. At universities in the US, UK, or Canada, annual funding covers tuition, fees, and the stipend of the student’s PhD program up to $40,000, plus a $10,000 fund for research-related expenses such as travel and computing. At universities outside the US, UK, or Canada, the stipend amount will be adjusted to match local conditions. Fellows will also be invited to workshops where they can interact with other researchers in the field. You can read more about the program here.
Anwar, Usman
Casper, Stephen
Chen, Xin Cynthia
Jenner, Erik
Jin, Zhijing
Jones, Erik
Pan, Alexander
Treutlein, Johannes
Vitalik Buterin Postdoctoral Fellows
The Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety supports promising researchers for postdoctoral appointments, starting in the fall semester of 2022, to work on AI existential safety research. Funding is for three years, subject to annual renewals based on satisfactory progress reports. For host institutions in the US, UK, or Canada, the Fellowship includes an annual $80,000 stipend and a fund of up to $10,000 for research-related expenses such as travel and computing. At institutions outside the US, UK, or Canada, the fellowship amount will be adjusted to match local conditions. You can read more about the program here.
Stiennon, Nisan
AI Existential Safety Community
We believe research today will help us better prepare for and prevent potentially negative consequences from advanced AI, letting us enjoy the benefits of AI while avoiding its pitfalls. Click here to view our growing community of AI existential safety researchers.