Lewis Hammond

Organisation
University of Oxford
Biography

Why do you care about AI Existential Safety?

I care most of all about having the greatest possible positive impact on humanity (and life more generally), including its future generations. I also believe that if we succeed in developing sophisticated AI systems that are broadly more capable than humans, this may pose an existential risk warranting our immediate and careful attention. As a result, I work to ensure that AI and other powerful technologies are developed and governed safely and responsibly, both now and in the future.

Please give one or more examples of research interests relevant to AI existential safety:

My research concerns safety, control, and incentives in multi-agent systems, and spans game theory, formal methods, and machine learning. Currently my efforts are focused on developing techniques to rigorously identify or induce particular properties of multi-agent systems under their game-theoretic equilibria, especially systems that operate in uncertain (partially known, partially observable, stochastic, etc.) environments. Examples of my recent, ongoing, and planned work include:

– Reasoning about causality in games, which in turn can be used to help define agent incentives.
– Automatically verifying or synthesising equilibria of multi-agent systems that induce particular properties.
– Coordination and commitment mechanisms that encourage cooperation among self-interested agents.
– Representing and learning preferences and constraints in both single- and multi-agent settings.
– Studying and shaping the learning dynamics of multiple agents, as modelled by evolutionary games (see the sketch below).
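As one concrete illustration of the last item, the sketch below simulates the replicator dynamics, a standard evolutionary-game model of multi-agent learning dynamics. The Hawk-Dove payoff matrix and step size are illustrative assumptions, not taken from the profile; this is a minimal example of the technique, not an implementation of the research described above.

```python
import numpy as np

# Illustrative payoff matrix for a symmetric Hawk-Dove game
# (V = 4, C = 6); rows/columns index the strategies [Hawk, Dove].
A = np.array([[-1.0, 4.0],
              [ 0.0, 2.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics:
    dx_i/dt = x_i * ((A x)_i - x^T A x)."""
    fitness = A @ x      # expected payoff of each pure strategy
    avg = x @ fitness    # mean payoff across the population
    return x + dt * x * (fitness - avg)

x = np.array([0.9, 0.1])  # initial population: mostly Hawks
for _ in range(5000):
    x = replicator_step(x, A)

print(x)  # approaches the mixed equilibrium [2/3, 1/3]
```

Here the population share of each strategy grows in proportion to how much its payoff exceeds the population average, so the dynamics converge to the game's mixed equilibrium, where Hawks and Doves earn equal payoffs.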
