Oishi Deb
Why do you care about AI Existential Safety?
I care about AI existential safety because the development of sophisticated AI technologies presents both opportunities and dangers. It is vital to align AI's goals with human values and priorities to avert negative consequences stemming from divergent objectives, and this becomes especially crucial as AI's abilities approach or exceed those of humans. Prioritizing existential safety means actively working to protect our future, maintain our independence, and ensure a peaceful coexistence with AI. By doing so, we can direct AI's vast potential towards enhancing human existence and tackling major global issues, rather than allowing it to evolve into a force with unpredictable and possibly detrimental outcomes. This requires a comprehensive understanding of AI's capabilities, robust ethical frameworks, effective governance strategies, and continuous monitoring to ensure that AI systems do not deviate from intended ethical guidelines. Additionally, international cooperation is needed to set standards for AI development and use, ensuring that these technologies are managed responsibly on a global scale.
Please give at least one example of your research interests related to AI existential safety:
I worked at Rolls-Royce, contributing to safety-critical software development for aircraft engines, before starting my PhD research at the University of Oxford. My research aligns with AI safety: one paper addresses uncertainty quantification in Remaining Useful Life prediction for aircraft engines, emphasizing the importance of estimating uncertainties in deep learning models for safe AI deployment. Another focus of my research is the safety of generative AI. Foundation models, a class of generative models, present unique risks if their development does not reflect human values. My work explores methods to align these models more closely with user goals, particularly in computer vision tasks. This research aims to refine the application of foundation models, ensuring they are more versatile and practical for diverse needs.
My publication list can be found here.