Moritz von Knebel

Position
Project Manager
Organisation
FAR AI
Biography

Why do you care about AI Existential Safety?

My first contact with this topic came from a philosophical perspective, as I hold a Master's degree in Political Science, Philosophy and Education. I found the case for strong longtermism compelling, and I see advanced artificial intelligence as one of the key threats to humanity's flourishing and to our realizing the human potential. I am also bewildered that this issue – in spite of recent publicity and rising public awareness – remains so underrepresented in the discourse around existential threats, and that so little money is allocated to it (again, this has changed somewhat recently, but it still holds true if you consider the seriousness of the threat). Hence, I committed to pivoting from work on global health and development (GHD) and community-building for the EA movement to working on AI Safety full-time. I am driven by a strong urge to help and protect the weakest and those most in need, and currently those are the unborn future generations whom we expose to significant risk by failing to address safety issues with AI (on top of any casualties resulting from near-term catastrophes). In summary: I care about AI Safety because it is the most important challenge humanity has ever faced.

Please give at least one example of your research interests related to AI existential safety:

I am particularly interested in the role of standard-setting as a preventive or mitigative measure; some of my previous work on this (four case studies for Holden Karnofsky) explored precedents for it. I have also worked on concerns around corporate governance and the possible securitization of Artificial Intelligence, as well as the potential (and pitfalls) of international institutions for advanced AI. More recently, in my capacity as a Project Manager at FAR AI, I have focused on field-building initiatives to create a common language around AI, together with a newly incubated project led by Fynn Heide. Earlier this summer, I spent two months in Taiwan at a military think tank, investigating the national security implications of the semiconductor supply chain from a Taiwanese perspective, with a special focus on its effects on the development of AGI. My work for the OECD focuses on AI Futures, the threats posed by foundation models, and mapping out the solution space, including analogues in other emerging technologies and previous attempts to regulate them.
My newest project for Holden Karnofsky will focus on responsible scaling policies and risk management practices more broadly (forthcoming).
