Anqi Liu

Position
Assistant Professor
Organisation
Johns Hopkins University
Biography

Why do you care about AI Existential Safety?

AI technology is already influencing humans and our society profoundly. Yet it is not robust to changing data and environments, cannot provide accurate and honest uncertainty estimates, and does not account for human preferences and values during interaction. How to make sure AI benefits us rather than harms us is a question relevant to everyone. I therefore aim to help answer this question through fundamental, multidisciplinary research.

Please give at least one example of your research interests related to AI existential safety:

One established line of my work is distributionally robust learning under covariate shift. We developed algorithms that extrapolate into new domains conservatively and provide interpretable uncertainty estimates. This approach has shown promise for building safe interactive learning and robust domain adaptation systems. My recent work centers on invariance learning, model calibration, and safety assessment of predictions via conformal prediction. I have also been working with social scientists on trustworthy social media and intelligent tutoring systems.
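For readers unfamiliar with conformal prediction, below is a minimal sketch of split conformal prediction for regression, the generic technique referenced above rather than Liu's specific method. The names predict, X_cal, y_cal, and X_test are hypothetical placeholders for any fitted model's prediction function and held-out calibration and test data.

    import numpy as np

    def split_conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
        """Prediction intervals with finite-sample marginal coverage >= 1 - alpha."""
        # Nonconformity scores on the held-out calibration split: absolute residuals.
        scores = np.abs(y_cal - predict(X_cal))
        n = len(scores)
        # Finite-sample-corrected quantile level, capped at 1.0
        # (the guarantee needs n >= 1/alpha - 1 calibration points).
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        q = np.quantile(scores, level, method="higher")
        preds = predict(X_test)
        return preds - q, preds + q

Given any trained regressor, the returned band [preds - q, preds + q] covers the true label with probability at least 1 - alpha, regardless of the model or data distribution, assuming the calibration and test points are exchangeable.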
