Michael Cohen
University of Oxford
Why do you care about AI Existential Safety?
Without special design choices, advanced artificial agents that plan long-term actions in an unknown environment are likely to intervene in any physical system we set up to produce observations that the agent is programmed to treat as informative about its goal. Such tampering would likely lead to the extinction of biological life.
Please give one or more examples of research interests relevant to AI existential safety:
I am interested in designing safe advanced artificial agents in theory, and then in constructing tractable versions of those designs.