Michael Hippke
Why do you care about AI Existential Safety?
Artificial Intelligence may pose existential risks to humanity if it cannot be confined, aligned, or have its progress controlled over time. Thus, we should research these and other potential approaches to reducing the risk.
Please give one or more examples of research interests relevant to AI existential safety:
AI box confinement; A new AI winter due to Landauer’s limit?; AI takeoff speed measured with hardware overhang.