
Josephine Liu
Why do you care about AI Existential Safety?
Having lived and worked across Asia, Europe, and the U.S., I have seen firsthand the ugliness of reckless arms races, from conventional weapons to nuclear proliferation. I’ve worked on international security and technology policy, engaging with governments and policymakers to mitigate these risks. If nuclear weapons can wipe out lives in seconds, irresponsible AI can erode human well-being continuously and often unnoticed, until we find ourselves in an irreversible crisis.
In my work on AI governance and policy, I’ve observed how unregulated competition and corporate influence can drive unsafe deployment. Without proactive safety measures, AI could accelerate cyber threats, economic instability, and geopolitical conflict. My research projects, “Big Tech vs. Government” and “Technology in Great Power Competition,” highlight these dangers. AI existential safety isn’t just an abstract concern: it’s about ensuring that AI remains a tool for progress, not an unchecked force that undermines human autonomy and security.
Please give at least one example of your research interests related to AI existential safety:
One of my core research interests in AI existential safety focuses on the intersection of AI governance and geopolitical competition—particularly how reckless AI development and deployment could lead to uncontrollable risks at a global scale.
In my research project, “Big Tech vs. Government”, I analyze how large AI firms and state actors compete for dominance, often prioritizing speed over safety. The AI arms race mirrors past nuclear competition—nations rushed to develop advanced weapons without fully considering long-term consequences. Today, AI development follows a similar trajectory, with little global coordination, fragmented regulations, and minimal accountability. The result? AI systems deployed before robust safety measures exist, increasing risks of misuse, cyber threats, and destabilization of global security.
In my other research project, “Technology in Great Power Competition,” I explore AI’s role in asymmetric warfare and autonomous decision-making. AI-driven military and surveillance technologies are already being integrated into defense systems and intelligence operations, raising concerns about loss of human oversight, accidental escalation, and AI-driven misinformation campaigns. Unlike nuclear weapons, which require explicit activation, AI systems could undermine global stability more quietly, through economic disruption and cyber warfare.
Ultimately, my research aims to prevent AI from becoming a destabilizing force and to ensure that AI remains aligned with human values by promoting global cooperation, robust safety protocols, international coordination, and ethical deployment. If nuclear weapons can destroy cities in seconds, reckless AI policies may erode human autonomy, economic stability, and security gradually, without our immediate realization.
Beyond research, I actively work to bridge technical and policy communities. Through my podcast, Bridging, I’ve engaged with AI researchers, policymakers, and industry leaders to discuss AI safety, governance, and existential risks. These conversations reinforce my belief that without responsible AI oversight, we risk unintended societal and geopolitical crises.
AI existential safety is not just an academic interest for me—it’s a policy imperative. My work aims to identify the risks, propose governance frameworks, and advocate for international cooperation to ensure AI development serves humanity, rather than threatening it.