AI Existential Safety Community
We are building a community of AI researchers who want AI to be safe, ethical, and beneficial.
On this page you'll find a growing group of AI researchers keen to ensure that AI remains safe and beneficial, even if it eventually supersedes human ability on essentially all tasks.
Faculty members
Alessandro Abate
Anca Dragan
Anqi Liu
Arnob Ghosh
Bart Selman
Brad Knox
Clark Barrett
David Krueger
Dylan Hadfield-Menell
Elad Hazan
Federico Faroldi
Florian Tramer
AI Researchers
Adrià Garriga Alonso
Aidan Kierans
Aishwarya Gurung
Alan Chan
Alex Chan
Alex Turner
Allan Suresh
Amir-Hossein Karimi
Andy Zou
Anna Katariina Wisakanto
Annachiara Ruospo
Benjamin Smith
Posts from the community
Here are some guest posts from members of our AI Existential Safety Community on the topics of their work. If you are a member of our community and wish to submit a guest post, please contact us.
Can AI agents learn to be good?
AI agents are different from AI assistants because they can initiate actions independently. Here we discuss the safety concerns involved with AI agents and what we are doing to mitigate them.
29 August 2024
How to join
Vitalik Buterin Fellowships
If you're considering applying for the Vitalik Buterin postdoctoral fellowships or PhD fellowships, please use this page as a resource for finding a faculty mentor. All awarded fellows receive automatic community membership.
AI Professors
If you're a professor interested in funding for grad students or postdocs working on AI existential safety, you can apply for community membership here.
AI Researchers
If you're a researcher working on AI existential safety, you're also welcome to apply for membership here, to showcase your research areas, apply for travel support, and get invited to our workshops and networking events.
Related pages
Were you looking for something else?
Here are a couple of other pages that may be relevant:
Our Position on AI
We oppose developing AI that poses large-scale risks to humanity, including via power concentration, and favor AI built to solve real human problems. We believe frontier AI is currently being developed in an unsafe and unaccountable manner.
Past Volunteers
In the past, we have had the support of a team of dedicated volunteers.