![](https://futureoflife.org/wp-content/uploads/2022/05/Sumeet-scaled-1.jpeg)
Sumeet Motwani
University of California, Berkeley
Why do you care about AI Existential Safety?
AI is likely to exceed humans across most narrow domains in the short term. This could lead to important failure cases with significant negative impacts on certain groups, and these must be studied carefully.
Please give one or more examples of research interests relevant to AI existential safety:
I’m working on topics such as collusion and safety for agentic systems trained with reinforcement learning.