Michael Cohen

Organisation
University of Oxford
Biography

Why do you care about AI Existential Safety?

Without special design choices, advanced artificial agents that plan actions over the long term in an unknown environment are likely to intervene in any physical system we set up to produce the observations the agent is programmed to treat as informative about its goal. Such tampering would likely lead to the extinction of biological life.

Please give one or more examples of research interests relevant to AI existential safety:

I am interested in how to design safe advanced artificial agents in theory, and then in how to construct tractable versions of them.
