
Harriet Farlow

Organisation
UNSW Canberra
Biography

Why do you care about AI Existential Safety?

My professional background spans consulting, academia, a tech start-up and Defence. These roles have differed in many ways, but they all had one thing in common: each was grappling with how to respond to new technologies, whether from the selling side, the buying side, the research side or the implementation side. Having now seen Artificial Intelligence from many of these sides, I am excited but apprehensive. AI presents many opportunities and is already being adopted rapidly, but we still have a long way to go to ensure we implement it accurately, safely and securely. I am currently a PhD candidate in Cyber Security researching adversarial machine learning – the ability to ‘hack’ machine learning models – and I hope that my research will advance the field of AI Safety so that the inevitable rise of AI benefits, rather than harms, our communities.

Please give one or more examples of research interests relevant to AI existential safety:

Adversarial machine learning, Model evasion, Model optimization, Game theory, Cyber security, and AI bias and fairness.
