
Karim Abdel Sadek
Why do you care about AI Existential Safety?
Recent years have seen rapid growth in AI capabilities across a wide range of tasks and domains. AI systems that can generalize effectively and operate reliably at scale will significantly impact human society. However, the direction of this transformative impact is uncertain, and many technical gaps in AI alignment remain to be solved, both from a theoretical and an empirical perspective.
Please give at least one example of your research interests related to AI existential safety:
My interests lie at the intersection of theory and practice for AI alignment. I work broadly across Reinforcement Learning, Preference Learning, and Cooperative AI. My research focuses on understanding and developing adaptive, robust, and safe goal-directed AI systems that collaborate effectively with humans and with each other.