
David Lindner

Organisation
ETH Zurich
Biography

Why do you care about AI Existential Safety?

I think there is a non-negligible chance that we will develop highly capable AI systems in the coming decades that could pose an existential risk, and I believe there is research we can do today to reduce this risk significantly. Such research should be a high priority, because even a small reduction in existential risk has enormous expected value.

Please give one or more examples of research interests relevant to AI existential safety:

I am currently interested in AI alignment research, specifically in the context of reinforcement learning. Most of my work focuses on improving the sample efficiency of reward learning methods, which allow us to design reinforcement learning agents that learn from human feedback instead of a manually specified reward function. I think this research is relevant to AI existential safety because much of the risk stems from the difficulty of specifying objectives for very capable systems. However, if we want systems to learn from human preferences, it is crucial that such systems remain scalable and competitive, which is why making learning from human preferences more sample efficient matters.

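To make "reward learning from human feedback" concrete, below is a minimal sketch of preference-based reward learning in the style of Bradley-Terry comparison models (as in deep RL from human preferences). It is a generic illustration, not David Lindner's specific method; the RewardModel architecture, segment shapes, and the random data standing in for human comparison labels are all hypothetical choices made for this example.

```python
# Minimal sketch of preference-based reward learning (Bradley-Terry model).
# Illustrative only; the architecture and toy data are assumptions.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a per-step observation feature vector to a scalar reward estimate."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def preference_loss(model, seg_a, seg_b, prefs):
    """Bradley-Terry loss: the human prefers segment A over B with
    probability sigmoid(R(A) - R(B)); prefs holds 1.0 where A was preferred."""
    r_a = model(seg_a).sum(dim=1)  # sum predicted per-step rewards over segment A
    r_b = model(seg_b).sum(dim=1)  # same for segment B
    return nn.functional.binary_cross_entropy_with_logits(r_a - r_b, prefs)

# Toy usage with random tensors standing in for human-labelled comparisons.
obs_dim, seg_len, batch = 8, 20, 32
model = RewardModel(obs_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
seg_a = torch.randn(batch, seg_len, obs_dim)
seg_b = torch.randn(batch, seg_len, obs_dim)
prefs = torch.randint(0, 2, (batch,)).float()

loss = preference_loss(model, seg_a, seg_b, prefs)
loss.backward()
optimizer.step()
```

Because each human comparison is expensive to collect, the sample-efficiency research described above amounts to getting a useful reward model from far fewer such preference labels.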