
Alex Turner

Organisation
Oregon State University
Biography

Why do you care about AI Existential Safety?

AI Existential Safety seems like a fork in the road for humanity’s future. AI is a powerful technology, and I think it will go very wrong by default. I think that we are on a “hinge of history”—that in retrospect, this century may be considered the most important century in human history. We still have time on the clock to make AGI go right. Let’s use it to the fullest.

Please give one or more examples of research interests relevant to AI existential safety:

I’m currently most interested in the statistical behavioral tendencies of different kinds of AI reasoning and training regimes. For example, when will most trained agent policies be power-seeking? What actions do expected-utility-maximizing agents tend to take? I have developed and published a formal theory that has begun to answer these questions.
