Jindong Gu

Organisation
University of Oxford & Google
Biography

Why do you care about AI Existential Safety?

I care about AI existential safety because ensuring the responsible development and deployment of artificial intelligence is crucial for safeguarding humanity’s future. As AI technologies become increasingly integrated into society and impact various aspects of our lives, it’s essential to address the potential risks and ensure that AI systems are designed and deployed in ways that prioritize safety, reliability, and ethical considerations. By advocating for AI existential safety, we can foster public trust in AI, promote transparency and accountability in AI development and deployment, and ultimately ensure that AI serves as a force for good in the world.

Please give at least one example of your research interests related to AI existential safety:

One example of my research interests related to AI existential safety is responsible generative AI, an area I have focused on recently. In particular, I have developed techniques to enable responsible text-to-image generation.

My long-term research goal is to build Responsible AI. Specifically, I am interested in the interpretability, robustness, privacy, and safety of visual perception, foundation model-based understanding and reasoning, and robotic policy and planning, as well as their fusion towards general intelligence systems.
