
Victor Veitch

Position: Assistant Professor
Organisation: University of Chicago

Why do you care about AI Existential Safety?

I’m generally concerned with doing work that has the greatest impact on human wellbeing. I think it’s plausible that we will achieve strong AI in the near future. This would have a major impact on the rest of human history, so we should get it right. As a pleasant bonus, I find that working on AI Safety leads to problems of fundamental importance to our understanding of machine learning and AI generally.

Please give one or more examples of research interests relevant to AI existential safety:

My main current interest in this area is the application of causality to trustworthy machine learning. Informally, the causal structure of the world seems key to making sound decisions, so causal reasoning must be a core component of any future AI system. Accordingly, it is important to determine exactly how causal understanding can be baked into such systems, and in particular how this affects their trustworthiness. This research programme also offers insight into near-term trustworthiness problems, which can provide concrete directions for development. For example, the tools of causal inference play a key role in understanding domain shift (the failure of machine-learning models under apparently benign perturbations of input data) and in explaining, and enforcing, the rationale for decisions made by machine-learning systems. For a concrete example of this type of work, see here.
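To make the domain-shift point concrete, here is a minimal sketch (not drawn from this profile; the data-generating process and all variable names are invented for illustration): a classifier that exploits a spurious, non-causal correlation degrades sharply when that correlation flips at test time, while a model restricted to the causal feature stays stable.

```python
# Hypothetical illustration of domain shift via spurious correlation.
# Labels are caused by x_causal; x_spurious merely correlates with the label
# in the training domain and anti-correlates in the shifted test domain.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, spurious_corr):
    """Draw n examples; x_spurious agrees with the label with
    probability `spurious_corr` and is flipped otherwise."""
    y = rng.integers(0, 2, size=n)
    x_causal = y + rng.normal(0, 0.5, size=n)      # genuine causal signal
    agree = rng.random(n) < spurious_corr          # spurious channel
    x_spurious = np.where(agree, y, 1 - y) + rng.normal(0, 0.1, size=n)
    return np.column_stack([x_causal, x_spurious]), y

# Training domain: spurious feature agrees with the label 95% of the time.
X_train, y_train = sample(5000, spurious_corr=0.95)
# Shifted test domain: the spurious correlation is reversed.
X_test, y_test = sample(5000, spurious_corr=0.05)

both = LogisticRegression().fit(X_train, y_train)
causal_only = LogisticRegression().fit(X_train[:, :1], y_train)

print("both features, shifted domain :", both.score(X_test, y_test))
print("causal feature only, shifted  :", causal_only.score(X_test[:, :1], y_test))
```

In this toy setup the spurious feature is less noisy than the causal one, so an unconstrained model leans on it and fails under the shift; the causally restricted model is unaffected, which is the intuition behind using causal structure to diagnose and prevent such failures.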
