Michael Noetel

Position: Associate Professor from 2025
Organisation: The University of Queensland
Biography

Why do you care about AI Existential Safety?

I’m chair and director of Effective Altruism Australia, so I lead one of the key communities focusing on this problem down under. I have facilitated BlueDot Impact’s AI Safety Fundamentals course twice and helped their team train other facilitators. Like many subject-matter experts (Karger et al., 2023), I think AI is the most likely existential risk over the coming century. I fear it is neglected and unlikely to be solved by default. I think I have useful skills that can contribute to reducing existential risks from artificial intelligence (see the research examples below). On a personal level, I have children, and I think there is a realistic probability that they will not have long, flourishing lives because humanity loses control of AI.

Please give at least one example of your research interests related to AI existential safety:

I am an author on the AI Risk Repository. My colleagues at MIT presented the work at a United Nations meeting, and it has received attention in the field and in the media (e.g., this article). I was the senior researcher at UQ (one of the two university partners), leading Alexander and Jessica, who did the majority of the work.

I am also the senior author of the Survey Assessing Risks from Artificial Intelligence (SARA). As senior author, I led and funded the project, supporting Alexander and Jessica, who again did most of the technical work. This work was the second citation in the Australian Government’s report outlining its approach to AI safety (page 3).
