
Rosie Campbell

Position
Assistant Director
Organisation
Center for Human-Compatible AI (CHAI)
Biography

Field: AI Safety

Position & Organization: Assistant Director of the Center for Human-Compatible AI (CHAI) at UC Berkeley

How did you get started in this field? I was previously working as a Research Engineer at BBC R&D on novel broadcast technology. One of my projects involved using Machine Learning to automate the process of covering live events. As I learnt more about ML and AI, I was struck by both the incredible opportunities and the risks. Around the same time, I discovered the Effective Altruism movement and encountered arguments for working on AI safety. I’d studied Physics, Philosophy and Computer Science at university, so it seemed like I might be a good fit for the field. I had a coaching session with 80,000 Hours, who pointed me towards the role at CHAI. It was quite different from my previous work (moving from the technical side to the operations/managerial side), but I took a risk and luckily it worked out!

What do you like about your work? I am surrounded by mission-driven, highly intelligent people and regularly get to take part in fascinating intellectual conversations. I feel like I’m making a significant contribution to a new and exciting field which has the potential to be really important for humanity’s future flourishing.

What do you not like about your work? Although there are lots of benefits to being based within an academic institution, I’m sometimes envious of my friends who work in lean startups and don’t have to handle so much bureaucracy!

Do you have any advice for women who want to enter this field? I’m a big advocate for diversity. We’re trying to solve big, important problems, and it’s worrying to think we could be missing out on important perspectives. I’d love to see more women in AI safety! The good news is there are a number of ways to contribute. If you have technical skills (or the potential to develop them), I’d highly recommend studying ML and going into either technical research or research engineering; the great thing about this is that you’ll never be short of intellectually challenging, well-paid work. If you don’t have technical skills, we often find we need to draw on the expertise of social scientists and philosophers, so that’s a great option for those who enjoy academic research. Finally, another increasingly important area is AI policy work, which is likely to be key for ensuring any technical solutions we come up with get implemented and adhered to.

What makes you hopeful for the future? When thinking about existential risk, I tend to veer wildly between feeling pretty hopeless and feeling cautiously optimistic. As I learn more about the nuances of different approaches to solving technical AI safety problems, I’m constantly impressed by their novelty, and it feels like the speed of progress is picking up (although we still have a long way to go). I’m encouraged that so many thoughtful, passionate people are starting to take seriously the existential challenges faced by our society today, and are working towards building a positive future for humanity.
