
Charlie Steiner

Organisation
Independent
Biography

Why do you care about AI Existential Safety?

It’s a rich vein of interesting philosophical and technical problems that also happens to be urgently vital for realizing the long-term potential of the human race.

Please give one or more examples of research interests relevant to AI existential safety:

I’m interested in how to make conceptual progress on the problem of value learning, and in how to translate that progress into experiments that can be carried out today using language models or model-based reinforcement learning. One example of a conceptual question is how to translate values and policies between different learned ontologies.
