![](https://futureoflife.org/wp-content/uploads/2022/05/Emmons_Scott-scaled-1.jpeg)
Scott Emmons
University of California, Berkeley
**Why do you care about AI existential safety?**
COVID-19 shows how important it is to plan ahead for catastrophic risk.
**Please give one or more examples of research interests relevant to AI existential safety:**
I’ve done work on the game theory of value alignment and on the robustness of reinforcement learning.