Lorenz Kuhn

Organisation
University of Oxford
Biography

Why do you care about AI Existential Safety?

It seems plausible that we will develop highly capable AI systems in the near future. While AI could have a strongly positive impact on the world, it could also cause significant harm if not developed responsibly. Even under relatively weak assumptions about future AI systems, they are likely to be more powerful than humans in some ways. If those systems are not sufficiently aligned with human values, this could lead to dangerous and unpredictable outcomes.

Please give at least one example of your research interests related to AI existential safety:

  • Scalable oversight, in particular the automatic evaluation of large language models.
  • Uncertainty estimation in large language models.
  • Generalization in deep learning.
