Karim Abdel Sadek

Position
(Incoming) PhD Student
Organisation
UC Berkeley
Biography

Why do you care about AI Existential Safety?

Recent years have seen rapid growth in AI capabilities across a wide range of tasks and domains. AI systems that generalize effectively and operate reliably at scale will significantly impact human society. However, the direction of this transformative impact is uncertain, and many technical gaps in AI alignment remain to be solved, both theoretically and empirically.

Please give at least one example of your research interests related to AI existential safety:

I am interested in topics that bridge theory and practice in AI alignment, broadly at the intersection of Reinforcement Learning, Preference Learning, and Cooperative AI. My research focuses on understanding and developing adaptive, robust, and safe goal-directed AI systems that collaborate effectively with humans and with one another.
