Philip Torr
Why do you care about AI Existential Safety?
I am heavily involved in applications of AI, through start-ups and through advising big tech companies such as Google, Apple, and Microsoft. I also teach AI to graduate students at Oxford, a world leader in this area, where I run a large machine learning group. In all these activities I would like to raise the profile of AI safety.
Please give at least one example of your research interests related to AI existential safety:
As noted above, I am heavily involved in applications of AI, through start-ups and collaborations with big tech companies such as Google, Apple, and Microsoft, as well as through teaching and running a large machine learning group at Oxford. I have always been interested in social equality and social justice. The coming AI revolution, combined with a revolution in robotics, will radically change the balance of power in society, with the danger that we might slip into a totalitarian state or a big-tech oligarchy. I would like to work towards a future in which the benefits of AI are fully realised for all of society.