Jindong Wang

Position
Assistant Professor
Organisation
William & Mary
Biography

Why do you care about AI Existential Safety?

My research is closely related to AI existential safety because I believe we should achieve a better understanding and control of AI's potential risks and pitfalls before deploying it for everyone. As someone working in machine learning, I thought I understood current AI models. However, I was wrong, especially when I found it extremely difficult to really know why large language models can generalize (or not). This is very different from traditional machine learning or deep learning, where we can offer some form of interpretability. Therefore, I shifted my focus to AI understanding and evaluation, and luckily, we are not alone. I have been collaborating with Jose for quite a long time; it is really nice to find someone who shares the same interests. Then I learned that the Future of Life Institute focuses even more on AI safety, which is great! I believe that with the efforts of many others, we can build better models with better control. Through this, we can truly say we are doing AI for everyone!

Please give at least one example of your research interests related to AI existential safety:

DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks, ICLR 2024 spotlight.
In this paper, we propose a holistic framework for understanding AI capabilities in order to better assess their risks. The paper has received considerable attention in the safety community.
