Olle Häggström
Why do you care about AI Existential Safety?
The world urgently needs advances in AI existential safety, since the problem must be solved by the time an AGI breakthrough happens, on a timeline that is very much unknown. I feel that the best I can do to help ensure a blissful future for humanity (rather than its premature destruction) is to try to contribute to such a solution.
Please give one or more examples of research interests relevant to AI existential safety:
Omohundro-Bostrom theory of instrumental vs. final AI goals. Broader issues concerning emerging technologies and the long-term future of humanity, such as those discussed in my 2016 book Here Be Dragons.