Ching Lam Choi

Organisation
MIT
Biography

Why do you care about AI Existential Safety?

AI existential safety is one of the most promising causes for bringing together academia, industry and beyond in service of a benevolent common goal. It is consistent with both principled scientific approaches and practical incentives: it confronts the worst-case possibilities of emergent AI systems and capabilities, which it seeks to predict and preempt; furthermore, its programmes and projects are well architected to incorporate contributions from different sectors (e.g. governance/law, the natural sciences, computer science, cryptography). It is an invitation to research AI’s ultimately most crucial problems, and a coveted opportunity to drive real-world impact.

Please give at least one example of your research interests related to AI existential safety:

I am working on projects that examine the interaction dynamics, convergence and emergent properties of multi-agent systems. Settings of interest include reinforcement learning from human feedback (RLHF) with an explicit reward model, and knowledge distillation.
