
AI Existential Safety Community

We are building a community of AI researchers who want AI to be safe, ethical, and beneficial.

On this page you'll find a growing group of AI researchers keen to ensure that AI remains safe and beneficial, even if it eventually supersedes human ability on essentially all tasks.

Faculty members

Click on a profile to view more information:

Alessandro Abate

Professor
University of Oxford

Anca Dragan

Associate Professor
UC Berkeley

Anqi Liu

Assistant Professor
Johns Hopkins University

Arnob Ghosh

Assistant Professor
NJIT

Bart Selman

Professor
Cornell University

Brad Knox

Research Associate Professor
University of Texas at Austin

Clark Barrett

Professor (Research)
Stanford University

David Krueger

Assistant Professor
University of Cambridge

Dylan Hadfield-Menell

Assistant Professor
Massachusetts Institute of Technology

Elad Hazan

Professor
Princeton University

Federico Faroldi

Professor of Ethics, Law and AI
University of Pavia

Florian Tramer

Assistant Professor
ETH Zurich


AI Researchers

Click on a profile to view more information:

Adrià Garriga Alonso

FAR AI

Aidan Kierans

University of Connecticut

Aishwarya Gurung

University of Bath

Alan Chan

Mila, Université de Montréal

Alex Chan

University of Cambridge

Alex Turner

Google DeepMind

Allan Suresh

National Institute of Technology Karnataka

Amir-Hossein Karimi

University of Waterloo

Andy Zou

CMU

Anna Katariina Wisakanto

Chalmers University

Annachiara Ruospo

Politecnico di Torino

Benjamin Smith

University of Oregon


Posts from the community

If you are a member of our community and wish to submit a guest post, please contact us.
Here are some guest posts from members of our AI Existential Safety Community on the topics of their work:

Can AI agents learn to be good?

AI agents are different from AI assistants because they can initiate actions independently. Here we discuss the safety concerns involved with AI agents and what we are doing to mitigate them.
29 August, 2024

How to join

Join the community

Vitalik Buterin Fellowships

If you're considering applying for the Vitalik Buterin postdoctoral fellowships or PhD fellowships, please use this page as a resource for finding a faculty mentor. All awarded fellows receive automatic community membership.

AI Professors

If you're a professor interested in free funding for grad students or postdocs working on AI existential safety, you can apply for community membership here.

AI Researchers

If you're a researcher working on AI existential safety, you're also welcome to apply for membership here, to showcase your research areas, apply for travel support, and get invited to our workshops and networking events.

Related pages

Were you looking for something else?

Here are a few other pages you might have been looking for:

Our position on AI

We oppose the development of AI that poses large-scale risks to humanity, including risks from the concentration of power, and we support AI developed to solve real global problems. We believe that AI is currently being developed in an unsafe and unaccountable manner.
View page

Past Volunteers

In the past, we have had the support of a team of dedicated volunteers.
View page

Subscribe to the Future of Life Institute newsletter.

Join 40,000+ others who receive regular updates on our work and cause areas.