
Avyay Casheekar
Why do you care about AI Existential Safety?
Current AI governance is fundamentally unprepared for advanced AI systems. We’re developing AI that can deceive, evade safety measures, and potentially cause irreversible harm, yet our legal frameworks treat AI like traditional software. The mismatch between AI capabilities and regulatory capacity creates existential risk – not as a distant possibility, but as an immediate governance failure. Understanding both the technical risks and legal constraints has convinced me that building effective AI governance is humanity’s most urgent challenge.
Please give at least one example of your research interests related to AI existential safety:
I’m particularly interested in the legal challenges of governing transformative AGI. Having worked on both sides – building systems to prevent jailbreaks and deception, and evaluating how governments try to regulate these risks – I see a critical translation gap. Technical researchers know what makes AI systems dangerous (the ability to deceive, evade controls, and pursue misaligned goals), but that knowledge rarely translates into effective law. I want to focus on creating legal frameworks that actually capture these technical risks.

For example, when we describe an AI system as “deceptive” in technical terms, we mean specific capabilities: hidden reasoning, strategic information disclosure, and adversarial behavior against oversight. Current laws, by contrast, use vague terms like “transparency” without defining what that means for a system that can selectively reveal information to pass audits while hiding dangerous capabilities. I want to help develop more precise legal definitions and testing requirements that courts can actually enforce. That means drafting statutory language that specifies exactly which safety properties AI systems must demonstrate, how those properties are to be verified, and what happens when systems evolve past their initial assessments.
