Dezhi Luo

Organisation
University of Michigan
Biography

Why do you care about AI Existential Safety?

I believe there is a significant possibility that near-future AI systems could develop abilities that pose potentially catastrophic risks. I also believe that emerging methods make it possible to study such risks scientifically, and that with greater attention they could be mitigated to a manageable extent.

Please give at least one example of your research interests related to AI existential safety:

My research broadly concerns the foundations of cognitive science and AI. Within this area, I focus particularly on consciousness, agency, and development. I believe that a better understanding of the computational basis of these cognitive capabilities will help us develop stronger theoretical frameworks for evaluating and mitigating risks arising from the extent to which current and future AI systems possess such abilities.
