
Aishwarya Gurung

Organisation
University of Bath
Biography

Why do you care about AI Existential Safety?

I think we should do everything we can to minimize the burden for future generations.

Please give one or more examples of research interests relevant to AI existential safety:

I am broadly interested in Artificial General Intelligence safety, AI governance, and forecasting future tech-security issues relevant to policymaking.
