Nikolaus Howe

Organisation
Université de Montréal, Mila
Biography

Why do you care about AI Existential Safety?

There is a meaningful chance that AGI will be developed within the next 50 years. If it is, it will undoubtedly lead to transformative economic and societal change. There is no guarantee that this transformation will be a positive one, and it could even lead to the extinction of all life on Earth (or worse). As such, I believe the careful study of AGI safety is of fundamental importance for the future of humanity.

Please give at least one example of your research interests related to AI existential safety:

My focus is on RL safety, as I believe many of the greatest dangers of AGI arise from the agentic nature of models trained in an RL setting. Previously, I worked on developing deep learning-based alternatives to RL (Howe et al., 2022) and on understanding the phenomenon of reward hacking in RL (Skalse et al., 2022). In the near future, I will be working on adversarial attacks on, and the robustness of, superhuman RL systems.
