
Marie-Therese Png

Biography

Field: AI Safety, Critical Data Studies & Decolonial Theory

Position & Organization: PhD Student, Oxford Internet Institute

How did you get started in this field? Mine is one of the many non-linear trajectories into this space. Like many people in AI ethics and safety, I came in via Effective Altruism, being invested in poverty alleviation. I did my undergraduate degree in biological & social sciences and was part of an existential risk student group at the Future of Humanity Institute, where I started thinking about the stratified distribution of benefits and risks in emerging tech. I did my Master’s at Harvard looking at intergroup neurocognition, and homed in on the idea that social dominance and group identity play a part in the inequitable design, development, deployment, and governance of emerging tech.

I then spent a year between Harvard and the MIT Media Lab as an AI policy research associate, becoming familiar with issues across Fairness, Accountability and Transparency and AI safety. The critical gap between the two is where my PhD lives: investigating geographic and epistemic representation in AI governance and ethics development. One recent project has been working with DeepMind on geographic representation in AGI value alignment.

What do you like about your work? I’m thinking a lot about how risk is stratified along existing social and global hierarchies, and how to ensure the benefits of advanced AI systems are widely distributed so as to avoid ‘locking in’ existing global power imbalances.

I like my particular line of work because it addresses a potential blindspot within a space that is important, neglected, and operates at large scales across time and space. I am convinced that AI safety is an incredibly consequential field, so I feel grateful to be contributing to an ever-growing area that I’m passionate and curious about.

Because it is a fast-growing area and the questions are still being disentangled, there is a lot of cross-pollination of ideas and space to find your niche. Meeting people and co-developing ideas is core to testing my thinking and adjusting my approach, which I really enjoy. The community shows a genuine and profound investment in improving our collective future, so there is a real incentive to update pre-existing perspectives based on evidence. This intellectual humility is necessary when working at this level of uncertainty.

What do you not like about your work? Because AI safety is an unprecedented area of inquiry (at least at these scales), it is inherently difficult to see the counterfactual: what would have happened if a given initiative or research project hadn’t existed. As is sometimes the case in research, the work can feel disconnected from being operationalised or from directly addressing harms. For example, extreme natural disasters or genocides currently happening in the ‘Global South’ are directly relevant to, but feel overlooked by, the x-risk discourse.

Given that AI safety and ethics are to a certain extent in their initial research stages, I would argue that we need to:

  • Bridge the dichotomy between long-term AI safety and near-term societal issues in AI ethics. There is emerging work bridging the intellectual and social gaps between the Fairness, Accountability & Transparency and AI Safety communities, exploring how the future builds on the present.
  • Promote a better shared definition or taxonomy for AI ethics and AI safety and their sub-categories; they overlap, but we often conflate them.
  • Critique notions of neutrality and the possibility of ‘universal values’, and engage in self-reflection about community norms, worldviews, and assumptions that will otherwise lead to blindspots.
  • Develop a better understanding of how harms are stratified locally and globally, and how this affects the goal of developing globally beneficial AI (i.e. beneficial for whom? Alignment based on whose values and norms?).

Finally, the AI Safety community is concerned with questions of civilisation’s trajectory and world-building. It may be problematic that such a small and unrepresentative community should be answering these questions.

Do you have any advice for women who want to enter this field?

  • Underrepresented perspectives (women, people of colour, and other intersectional identities) are highly valuable at this point for uncovering blindspots. Your concerns may not currently be represented in the research community, but that doesn’t mean they shouldn’t be. There is low replaceability: if you weren’t there, your concerns wouldn’t be any single person’s main focus. When you’re a minority in the room it’s even more important to overcome audience inhibition and speak up, or a blindspot may persist.
  • Build your arguments and skillset, and test and develop them by talking to as many people as you can. If you don’t have access to the research community, check out resources such as the 80,000 Hours podcast, the FLI podcast, MIT Tech Review, Jeffrey Ding’s AI newsletter, FATML conference videos, and FHI and AI Now reports.
  • Discern between doubt and humility, and apply for opportunities even if you don’t fulfil the full brief; women tend to undervalue their skillsets.
  • If you’re in a technical field, encourage the use of feminist frameworks such as ‘design for the margins’, as well as multidisciplinarity.
  • Highlight the work of other women and minoritised individuals: be a signpost and an amplifier.
  • Find a mentor and be a mentor to others — become a librarian of sources, advice and recommendations.
  • Remember you are a human being and not a human doing.

What makes you hopeful for the future? Increasing awareness within AI safety and ethics research of the need to draw on a wider scope of perspectives and disciplines. Also, increased bridging and community building between near-term and long-term issues.

I’m hopeful that more connections are being made to global justice and critical scholarship, and that more work is emerging from underrepresented perspectives.
