
Heramb Podar
Why do you care about AI Existential Safety?
AI has the potential to be transformative for human society if it exceeds human capabilities and becomes adept at meta-learning and/or a generalized form of power-seeking behavior. This might lead to AI agents optimizing for their own agency, becoming incorrigible, and taking actions that are harmful to humans or simply beyond our comprehension.
Human values are messy and difficult to instill in AI systems, which leads to misaligned behavior. Together, these factors might create a global catastrophic risk, fuelled by a race to the bottom as developers compete to build the most capable AI system.
I also want more youth voices to be given a seat at the table, as my generation is ultimately the one that will have to grapple with the consequences of how AI turns out.
Please give at least one example of your research interests related to AI existential safety:
My research is on the governance of frontier AI systems, focusing on preventing the misuse of AI models, work that will set a precedent for future x-risk-related legislation. Recently, I have focused on the governance of Lethal Autonomous Weapon Systems and on global AI governance with the Center for AI and Digital Policy, where I write statements on draft AI legislation and policy. I also lead the India chapter of Encode Justice, the world’s largest youth movement focused on risks from AI.
- Here is a report I wrote with Encode Justice on Lethal Autonomous Weapon Systems in response to the rolling text of the Group of Governmental Experts at the UN CCW.
- Here’s a report I co-authored with the IGF’s Policy Network on AI on AI governance interoperability and best practices.