Joseph Kwon

Massachusetts Institute of Technology

Why do you care about AI Existential Safety?

AI capabilities are advancing rapidly, and I expect this trend to continue. I believe the default trajectory of AI development poses serious risks, and I want to reduce those risks to increase the likelihood of a flourishing future.

Please give at least one example of your research interests related to AI existential safety:

I’m exploring a wide range of research directions, but I’m currently interested in understanding social and moral cognition, how human values form, and how we can get AI systems to learn human values robustly. I’m also interested in how knowledge is represented in humans and machines, and I’m excited about ideas related to ELK (Eliciting Latent Knowledge).
