Michael Hippke

Position: Lecturer
Organisation: Sonneberg Observatory
Biography

Why do you care about AI Existential Safety?

Artificial Intelligence may pose existential risks to humanity if it cannot be confined, aligned, or have its progress controlled over time. We should therefore research these and other potential options for reducing the risk.

Please give one or more examples of research interests relevant to AI existential safety:

AI box confinement; A new AI winter due to Landauer’s limit?; AI takeoff speed measured with hardware overhang.
