
Lorenz Kuhn
Organisation
University of Oxford
Biography
Why do you care about AI Existential Safety?
It seems plausible that we will develop highly capable AI systems in the near future. While AI could have a strongly positive impact on the world, it could also cause significant harm if it is not developed responsibly. Even under relatively weak assumptions about future AI systems, they are likely to be more powerful than humans in some respects. If those systems are not sufficiently aligned with humans, this could lead to dangerous and unpredictable outcomes.
Please give at least one example of your research interests related to AI existential safety:
- Scalable oversight, in particular the automatic evaluation of large language models.
- Uncertainty estimation in large language models (see the sketch after this list).
- Generalization in deep learning.
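
To give a concrete flavour of the second interest, the sketch below shows one simple way to gauge an LLM's uncertainty about a question: sample several answers at non-zero temperature, group identical answers, and compute the entropy of the resulting empirical answer distribution. This is only a minimal illustration, not a description of any particular method from the author's work; the `answer_entropy` function and the sampled answers are hypothetical placeholders, and in practice the samples would come from repeated calls to a language model.

```python
import math
from collections import Counter


def answer_entropy(sampled_answers: list[str]) -> float:
    """Shannon entropy (in nats) of the empirical distribution over sampled answers."""
    # Normalise trivially so that surface variants like "Paris " and "paris" match.
    counts = Counter(a.strip().lower() for a in sampled_answers)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probs)


# Hypothetical samples for the same prompt, drawn at non-zero temperature.
confident = ["Paris", "Paris", "Paris", "Paris", "Paris"]
uncertain = ["Lyon", "Paris", "Marseille", "Paris", "Toulouse"]

print(answer_entropy(confident))  # 0.0   -> answers agree, low uncertainty
print(answer_entropy(uncertain))  # ~1.33 -> answers disagree, higher uncertainty
```

A low entropy means the model keeps giving the same answer, which is weak evidence of confidence; a high entropy means the samples disagree and the answer should be treated with more caution.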
Content from this author
- Halogen-catalyzed reactions on smoke destroy the ozone layer after regional nuclear war (grant, May 22, 2023)
- Improving the representation of crop production losses due to nuclear conflict (CODEC) (grant, May 22, 2023)
- The cascading impacts of postnuclear ultraviolet radiation on photosynthesizers in the Earth system (grant, May 22, 2023)
- WUDAPT-based framework for numerical simulations of nuclear urban fires and pyroconvective plumes (grant, May 22, 2023)
- Advanced ensemble projections for indirect impacts of nuclear war in global food systems (grant, May 22, 2023)
- Nathan Labenz on the Cognitive Revolution, Red Teaming GPT-4, and Potential Dangers of AI (podcast, May 4, 2023)