
Ryan Carey

Organisation
Oxford University
Biography

Why do you care about AI Existential Safety?

The chances of human-level AI in the next few decades are high enough to be concerning, and it seems worthwhile to investigate how AI systems could be better aligned to human values.

Please give one or more examples of research interests relevant to AI existential safety:

I am interested in understanding the incentives of AI systems. This has included using causal models of AI-environment interactions. Using graphs, we can study which variables an AI system might be incentivised to influence or respond to. This in turn helps us understand whether an optimal system will behave safely.
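As a toy sketch of this graphical approach (the graph and variable names below are illustrative, not taken from any specific paper): represent the AI-environment interaction as a directed graph with a decision node D and a utility node U, then read off candidate incentives from reachability. A variable can be an influence target only if it lies on a directed path from D to U, and a variable the decision can respond to must be an ancestor of D that is also relevant to U.

```python
# A minimal, hypothetical causal graph as an adjacency list:
# S = latent state, O = observation, D = decision, X = intermediate, U = utility.
graph = {
    "S": ["O", "U"],  # state influences the observation and utility
    "O": ["D"],       # the agent observes O before deciding
    "D": ["X"],       # the decision influences intermediate X
    "X": ["U"],       # X influences the agent's utility
    "U": [],
}

def descendants(g, node):
    """All nodes reachable from `node` along directed edges."""
    seen, stack = set(), list(g[node])
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(g[n])
    return seen

def ancestors(g, node):
    """All nodes with a directed path into `node` (via the reversed graph)."""
    reversed_g = {n: [] for n in g}
    for n, children in g.items():
        for c in children:
            reversed_g[c].append(n)
    return descendants(reversed_g, node)

# Variables the agent could be incentivised to influence:
# those on a directed path from its decision to its utility.
influence = descendants(graph, "D") & ancestors(graph, "U")  # == {"X"}

# Variables the decision can respond to: ancestors of the
# decision that are also relevant to the utility.
respond = ancestors(graph, "D") & ancestors(graph, "U")  # == {"S", "O"}
```

Here U is a descendant of D but not an influence *target* in the graphical sense, while X is; S and O are the only variables the decision can usefully respond to. Whether an optimal policy actually exploits these paths then depends on the model's probabilities and utilities, which this purely graphical check deliberately ignores.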
