Eleonora Giunchiglia

Organisation
University of Oxford
Biography

Why do you care about AI Existential Safety?

AI is becoming increasingly ubiquitous, and it is likely to be applied in almost every aspect of our lives in the next few decades. However, the careless application of AI-based models in the real world can have (and, to some extent, has already had!) disastrous consequences. As AI researchers, I believe it is our responsibility to develop novel AI models that can be deemed safe and trustworthy, and hence can be deployed reliably in the real world.

Please give one or more examples of research interests relevant to AI existential safety:

My research focuses on how to create safer and more trustworthy deep learning models via the exploitation of logical constraints. The goal of my research is to develop models:

- that are guaranteed by construction to always be compliant with a given set of requirements, expressed as logical constraints, and
- that will have a human-like understanding of the world due to the exploitation of the background knowledge expressed by the constraints.
