Allan Suresh
Why do you care about AI Existential Safety?
I’ve always wanted to do work that positively affects our future. Up until a year ago, my goal was to apply AI in research that could help address climate change. It was then that I came across the effective altruism forum. As I learned more about longtermism beyond climate change, I began to realise that my skills could be developed and put to better use in the field of AI Safety, especially because the field itself is talent-limited. I also feel that AI Safety is the most pressing problem among long-term risks.
Please give one or more examples of research interests relevant to AI existential safety:
I am currently part of GCRI’s Research Collaboration and Mentorship Program, working on a project under Seth Baum. My interests mostly lie in Value Learning and Inverse Reinforcement Learning, as well as Deep Reinforcement Learning.