Olle Häggström

Position
Professor
Organisation
Chalmers University of Technology
Biography

Why do you care about AI Existential Safety?

The world urgently needs advances in AI existential safety: the problem must be solved by the time an AGI breakthrough happens, and the timeline for that is very much unknown. I feel that the best I can do to help ensure a blissful future for humanity (rather than premature destruction) is to try to contribute to such a solution.

Please give one or more examples of research interests relevant to AI existential safety:

Omohundro-Bostrom theory of instrumental vs. final AI goals. Broader issues concerning emerging technologies and the long-term future of humanity, such as those treated in my 2016 book Here Be Dragons.
