
Moritz von Knebel

Organisation
FAR AI
Biography

Why do you care about AI Existential Safety?

Even without fast takeoffs or Artificial General Intelligence within the next few years, advanced AI systems are likely to have transformative effects on our societies, economies and political systems. These transformations harbour enormous potential for scientific discovery, economic growth and human empowerment, but they also bring significant risks, from misuse to loss-of-control scenarios to structural risks arising from race dynamics. Stewarding this transformation – harnessing the benefits while mitigating the risks – is a crucial challenge for humanity in the 21st century. To succeed, we will need solid and sustainable governance mechanisms and institutions, built through joint international and cross-sectoral collaboration.

Please give at least one example of your research interests related to AI existential safety:

I have previously published on safety standards and risk management practices for AI development and deployment, informed by case studies of other high-risk and dual-use technologies. During my time in Taiwan, I investigated the national and international security implications of bottlenecks in the semiconductor supply chain. These days, I think a lot about the direct and indirect effects that advanced AI systems could have on democracy, about the other challenges that could emerge during a potential intelligence explosion (including rapid economic growth), and about how to manage those challenges, including by developing robust collective decision-making procedures.
