Philip Torr

Position
Professor
Organisation
University of Oxford
Biography

Why do you care about AI Existential Safety?

I am deeply involved in applications of AI: start-ups, and advising big tech companies such as Google, Apple, and Microsoft. I also teach AI to graduate students at Oxford, a world leader in this area, where I run a large machine learning group. In all of these activities I would like to raise the profile of AI safety.

Please give at least one example of your research interests related to AI existential safety:

I currently have a number of papers at the top-tier conferences (NeurIPS, ICLR, ICML) on aspects of robustness and certification of AI systems. I also published a paper with Fazl Barez identifying the existential risk that could be posed in the future by non-robust systems. A full list of these and many other papers can be found here.
