
Yaodong Yu

Organisation
University of California, Berkeley
Biography

Why do you care about AI Existential Safety?

Because AI systems are likely to outperform humans in many intellectual domains within the next few decades, I am highly motivated to investigate how to avoid undesired outcomes and to understand the failure modes of powerful AI systems, in order to ensure that such technologies remain safe.

Please give one or more examples of research interests relevant to AI existential safety:

To build trustworthy machine learning systems, we first need to understand when and why machine learning models perform well or poorly. However, we still lack a fundamental understanding of the principles underlying generalization, optimization, and neural network architecture design for reliable and robust machine learning. My research interests lie in understanding the theoretical foundations of “fragile” machine learning systems and in developing principled approaches for robust machine learning. Topics I am currently working on include: (1) a theoretical framework for out-of-distribution generalization; (2) min-max optimization for robust machine learning; and (3) uncertainty quantification with formal guarantees.
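As a rough illustration of the second topic (a standard formulation from the adversarial robustness literature, not taken from this profile), min-max optimization for robust learning is often written as an outer minimization over model parameters and an inner maximization over bounded perturbations, where \(f_\theta\) is the model, \(\ell\) a loss function, and \(\epsilon\) the perturbation budget:

\[
\min_{\theta} \; \mathbb{E}_{(x,y)\sim \mathcal{D}} \left[ \max_{\|\delta\| \le \epsilon} \ell\big(f_\theta(x+\delta),\, y\big) \right]
\]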
