Francis Rhys Ward

Organisation
Imperial College London
Biography

Why do you care about AI Existential Safety?

I consider myself an effective altruist and long-termist. That is, I believe that the future of humanity is incredibly valuable and that AI is (probably) the most important influence on how the long-term future goes. I also think that we can make progress on both technical and societal problems related to AI in order to reduce existential risk and, more generally, increase the likelihood of positive futures.

Please give one or more examples of research interests relevant to AI existential safety:

The current focus of my PhD is the incentives that AI agents have to manipulate humans, especially in the multi-agent reward learning setting. I recently had a paper on this topic accepted to the Cooperative AI workshop and as an extended abstract at AAMAS.
