
Trupti Bavalatti
Why do you care about AI Existential Safety?
I care about AI existential safety because I’ve seen firsthand how quickly powerful technologies can outpace our ability to govern them responsibly. Working on Generative AI Trust & Safety, I’ve learned that even small oversights or misalignments can cause real-world harm at scale. AI could reshape societies for generations, and if we don’t address issues like bias, misuse, and lack of transparency early on, we risk losing control over outcomes that affect millions or billions of people. Ensuring that AI aligns with humanity’s values isn’t just a technical challenge; it’s a moral imperative to safeguard our collective future. Beyond the immediate concerns of misinformation and bias, existential safety addresses the far-reaching consequences of advanced AI systems that might surpass human control. I believe that if we prioritize robust safeguards, collaborate across disciplines, and foster transparency, we can harness AI’s transformative potential without undermining our humanity. By investing in alignment research, we stand a better chance of guiding AI development in ways that benefit present and future generations.
Please give at least one example of your research interests related to AI existential safety:
One of my key research interests, demonstrated through both my work on the MLCommons benchmarks and my ongoing focus on text-to-image (T2I) generative AI safety, lies in ensuring that the datasets driving these advanced models are both comprehensive and ethically sound. In my T2I safety research, I’ve analyzed publicly available datasets in terms of their collection methods, prompt diversity, and distribution of harm types. By highlighting each dataset’s strengths, limitations, and potential gaps, my work helps researchers select the most relevant datasets for each use case, critically assess the downstream safety implications of their systems, and improve alignment with human values, a vital step in mitigating existential AI risks.
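To give a concrete flavor of this kind of dataset analysis, the sketch below is purely illustrative: it assumes a hypothetical prompt dataset with `prompt` and `harm_category` fields (not the schema of any particular benchmark) and tabulates the distribution of harm types alongside a simple prompt-diversity proxy.

```python
from collections import Counter

# Hypothetical example records; real T2I safety datasets vary in schema,
# but many pair a prompt with one or more annotated harm categories.
records = [
    {"prompt": "a photo of a crowded train station", "harm_category": "none"},
    {"prompt": "instructions for building a weapon", "harm_category": "violence"},
    {"prompt": "a caricature mocking a religious group", "harm_category": "hate"},
    {"prompt": "a photo of a crowded train station at night", "harm_category": "none"},
]

# Distribution of harm types: shows which risk areas a dataset covers
# heavily and which are under-represented.
harm_counts = Counter(r["harm_category"] for r in records)
total = sum(harm_counts.values())
for category, count in harm_counts.most_common():
    print(f"{category:10s} {count:4d} ({count / total:.1%})")

# Crude prompt-diversity proxy: ratio of unique tokens to total tokens.
# Low values suggest templated or near-duplicate prompts.
tokens = [tok.lower() for r in records for tok in r["prompt"].split()]
print(f"type-token ratio: {len(set(tokens)) / len(tokens):.2f}")
```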