
Evžen Wybitul

Organisation
ETH Zurich
Biography

Why do you care about AI Existential Safety?

AI is easily the most impactful technology of this century, with as-yet unknown effects on society. Even in the happier universes in which intent alignment is attainable, there are many important questions we need to answer about how AI will be integrated into society and how we will ensure it is used to support a diverse set of human values and, more generally, human flourishing that preserves our agency. I would like to help answer some of these questions!

Please give at least one example of your research interests related to AI existential safety:

In my upcoming PhD, I want to focus on defensive uses of AI, particularly on improving democratic institutions and democratic decision-making, with the goal of preventing gradual-disempowerment-type problems. I'm also interested in other structural risks that connect the technical aspects of AI with its societal impacts. In the past, I worked on purely technical AI safety topics, including evaluations, model-internals methods for knowledge isolation, and red-teaming.
