
Dezhi Luo
Why do you care about AI Existential Safety?
I believe there is a significant possibility that near-future AI systems could develop capabilities that pose potentially catastrophic risks. I also believe that methods for studying such risks scientifically are emerging, and that with greater attention, these risks could be mitigated to a manageable degree.
Please give at least one example of your research interests related to AI existential safety:
My research broadly concerns the foundations of cognitive science and AI. Within this area, I focus particularly on consciousness, agency, and development. I believe that a better understanding of the computational basis of these cognitive capabilities will help us develop stronger theoretical frameworks for evaluating the extent to which current and future AI systems possess such abilities, and for mitigating the associated risks.
