
Tammy Mackenzie

Position
Research Director
Organisation
The Aula Fellowship / NTNU
Biography

Why do you care about AI Existential Safety?

I care about AI existential safety for three reasons:
1: Self-Preservation and Science. I like life, and I want human understanding and universal exploration to continue to grow into the future.
2: Utility. As a risk analyst, it is clear to me that building systems that can outthink us carries potentially severe risks, especially as we give them more tools. I come from climate change and human rights advocacy, and there are parallels we must either embrace or avoid. It is a very big problem, among our biggest ever, and I have some understanding of how we can proceed. I think I can be of use.
3: Love. As a mom and a philosopher, I want the world to get stronger from our tools, and to thrive. I believe we can do better, and that AI, as a new space, is a place where we can build systems that reflect learnings from the past and inspire actions across society to address similar problem sets.

Please give at least one example of your research interests related to AI existential safety:

I am the founder, director, and research lead for the Aula Fellowship on AI Research and Policy. Our mission is to help with the responsible proliferation of AI. Our main tool is a permanent symposium, now in the making, to bring all of humanity to the table to work on the hard questions in AI: the things that policymakers and specialists can't (or won't) answer on their own, and for which we need a wide consensus. We do research in support of our work, and as Fellows we help each other with our projects, which span fields from engineering to rhetoric, by way of media, gender, democracy, robotics, data science, philosophy, governance, ecology, management, and public security.

I was recruited by MILA to be trained in Responsible AI because they perceive a need in this space for people who can cross disciplines and who bring advocacy skills.

I am also independently funded. This gives me a great deal of freedom to question, speak, and act. I use this judiciously at international conferences: dispelling fallacies, raising and sharing hard questions, and seeking out the missing voices. I am fearless, and I want to engage on the taboo topics, like killer robots and potential misalignment problems. I am actively doing advocacy on this.

This role with the Aula Fellowship, and the independence it affords, is the most direct way I know of to build political legitimacy from public will. I think that if we don't get that, we don't change the systems, and if we don't change the systems, they will pollute, AI, plague, or war us so close to extinction that it won't matter what we ever thought we knew about how the universe works. I love humans, and life. I have to try to help. So I am also researching the levers of power, and practicing social strategy.
