Hans Gundlach

Organisation
MIT
Biography

Why do you care about AI Existential Safety?

I care about the future of humanity and I want to help build a flourishing future.

Please give at least one example of your research interests related to AI existential safety:

I have several current priorities in my work. First, I am trying to understand how scale affects the safety of AI systems: do larger models carry greater safety risks? Are they harder to align? Do they have more vulnerabilities, and are those vulnerabilities harder to remove?

I am also interested in predictions about AI timelines: is the ability to run and train powerful models proliferating, and will new technologies such as quantum computing affect AGI timelines?

As part of Oxford's Future Impact Group, I work on the moral welfare and sentience of future AI systems. I have also done work on the interpretability of AI models, including mechanistic interpretability of Go models and the scaling properties of dictionary learning.
