Joel Christoph

Organisation
European University Institute
Biography

Why do you care about AI Existential Safety?

I believe the development of advanced AI systems is one of the most important challenges facing humanity in the coming decades. While AI has immense potential to help solve global problems, it also poses existential risks if not developed thoughtfully with robust safety considerations. My research background in economics, energy, and global governance has underscored for me the complexity of steering technological progress to benefit society. I’m committed to dedicating my career to research and policy efforts to ensure advanced AI remains under human control and is deployed in service of broadly shared ethical principles. Our generation has a responsibility to proactively address AI risks.

Please give at least one example of your research interests related to AI existential safety:

One of my key research interests is the global governance of AI development, including what international agreements and institutions may be needed to coordinate AI policy and manage AI risks across nations. My background in global economic governance, including research on reform of the Bretton Woods institutions with the Atlantic Council, has heightened my awareness of the challenges of international cooperation on emerging technologies.
I’m particularly interested in exploring global public-private partnerships and multi-stakeholder governance models that could help steer AI progress, for example through joint research initiatives, standard-setting, monitoring, and enforcement. Economic incentives and intellectual property regimes will also be crucial levers. I believe applying an international political economy lens, informed by international relations theory and economics, can yield valuable policy insights.
Additional areas of interest include: the impact of AI on global catastrophic and existential risks; the intersection of AI safety with nuclear and other emerging risks; scenario planning for transformative AI; and strengthening societal resilience to possible AI-related disruptions. I’m also keen to explore the economics of AI development, including R&D incentives, labor market impacts, inequality, and growth theory with transformative AI. I believe AI safety research must be grounded in a holistic understanding of socio-economic realities.
