AI Existential Safety Community
We are building a community of AI researchers who want AI to be safe, ethical, and beneficial.
On this page you'll find a growing group of AI researchers keen to ensure that AI remains safe and beneficial, even if it eventually supersedes human ability on essentially all tasks.
Faculty members
Click on a profile to view more information:
Alessandro Abate
Anca Dragan
Anqi Liu
Bart Selman
Clark Barrett
David Krueger
Dylan Hadfield-Menell
Florian Tramer
He He
Hong Zhu
Jacob Noah Steinhardt
Jaime Fernandez Fisac
José Hernández-Orallo
Max Tegmark
Michael Hippke
Michael Osborne
Olle Häggström
Patrick Shafto
Paul Matthew Salmon
Peter Vamplew
Richard Dazeley
Roger Grosse
Roman Yampolskiy
Samuel Albanie
Samuel Bowman
Scott Niekum
Sharon Li
Steve Petersen
Stuart Russell
Tegan Maharaj
The Anh Han
Victor Veitch
Vincent Conitzer
Yixin Wang
AI Researchers
Click on a profile to view more information:
Adrià Garriga Alonso
Aidan Kierans
Aishwarya Gurung
Alan Chan
Alex Chan
Alex Turner
Allan Suresh
Amir-Hossein Karimi
Andy Zou
Anna Katariina Wisakanto
Annachiara Ruospo
Benjamin Smith
Brian Green
Chad DeChant
Charlie Steiner
David Lindner
Eleonora Giunchiglia
Ethan Perez
Fazl Barez
Francis Rhys Ward
Hanlin Zhang
Harriet Farlow
John Burden
Jonathan Cefalu
Joseph Kwon
Kaylene Stocking
Kendrea Beers
Lewis Hammond
Linas Marius Nasvytis
Lorenz Kuhn
Matt MacDermott
Michael Cohen
Michele Campolo
Montaser Mohammedalamen
Mykyta Baliesnyi
Neil Crawford
Nell Watson
Nikolaus Howe
Pablo Antonio Moreno Casares
Paolo Bova
Paul de Font-Reaulx
Pingchuan Ma
Ryan Carey
Scott Emmons
Shoaib Ahmed Siddiqui
Sumeet Motwani
Vincent Lê
Wout Schellaert
Xiaohu Zhu
Yaodong Yu
How to join
Join the community
Vitalik Buterin Fellowships
If you're considering applying for the Vitalik Buterin postdoctoral fellowships or PhD fellowships, please use this page as a resource for finding a faculty mentor. All awarded fellows receive automatic community membership.
AI Professors
If you're a professor interested in free funding for grad students or postdocs working on AI existential safety, you can apply for community membership here.
AI Researchers
If you're a researcher working on AI existential safety, you're also welcome to apply for membership here to showcase your research areas, apply for travel support, and receive invitations to our workshops and networking events.
Related pages
Were you looking for something else?
Here are two other pages you might have been looking for:
Past Volunteers
In the past, we have had the support of a team of dedicated volunteers.
FLI Open Letters
We believe that scientists need to make their voices heard when it comes to matters of emerging technologies and their risks. The Future of Life Institute has facilitated this dialogue in the form of many open letters throughout the years.