Kaylene Stocking

Organisation
UC Berkeley
Biography

Why do you care about AI Existential Safety?

I believe AI technology is likely to be one of the (if not the) biggest factors driving the shape of humanity’s long-term future. I’m not sure if existential risk from AI will be a problem in my lifetime or even several lifetimes from now, but given how uncertain we are about the rate of progress towards AGI, I think it’s a good idea to think seriously about what kind of future we want and how AI will play a role in it as soon as possible. Also, I think AI (as opposed to other important existential risks) is well-aligned with my skills and interests, making it the most likely place I can have a positive impact with my research.

Please give one or more examples of research interests relevant to AI existential safety:

I am interested in how we might give AI systems the ability to reason with explicit causal hypotheses. This should make it easier for humans to audit AI-based decisions and reduce the risk of mistakes arising from problems such as causal confusion or a failure to account for the impact of the system's own decisions on its dynamic environment.
