
Amir-Hossein Karimi

Organisation
University of Waterloo
Biography

Why do you care about AI Existential Safety?

The values of this community align with my research:
A long-standing goal of artificial intelligence (AI) is to build machines that intelligently augment humans across a variety of tasks. The deployment of such automated systems in ever more areas of societal life has had unforeseen ramifications pertaining to robustness, fairness, security, and privacy concerns that call for human oversight. It remains unclear how humans and machines should interact in ways that promote transparency and trustworthiness between them. Hence, there is a need for systems that make use of the best of both human and machine capabilities. My research agenda aims to build trustworthy systems for human- machine collaboration.

Please give at least one example of your research interests related to AI existential safety:

My doctoral research focused on the intersection of causal inference and explainable AI, a domain of rapidly growing societal and legal importance given the increasing use of often opaque ("black-box") ML models for consequential decision-making. In particular, I consider the task of fostering trust in AI by enabling and facilitating algorithmic recourse, which aims to provide individuals with explanations and recommendations on how best (i.e., efficiently and ideally at low cost) to recover from unfavorable decisions made by an automated system. To address this task, my work draws on the philosophy of science and how explanations are sought and exchanged between human agents, and builds on the framework of causal modeling, which offers a principled and mathematically rigorous way to reason about the downstream effects of causal interventions. In this regard, I contributed novel formulations of the counterfactual explanation and consequential recommendation problems, and delivered open-source solutions built with tools from gradient-based and combinatorial optimization, probabilistic modeling, and formal verification.
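To make the idea of a counterfactual explanation concrete, the sketch below shows one generic, simplified formulation: given a fixed scoring model and an individual who received an unfavorable decision, search (by gradient descent) for a nearby input whose score crosses the decision threshold, trading off the change in outcome against the cost of the suggested changes. This is only an illustrative toy, not the specific formulations or open-source tools described above; the model, features, and parameters are hypothetical.

```python
# Illustrative sketch (not the published method): a generic gradient-based
# search for a counterfactual explanation of a simple "black-box" scorer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def find_counterfactual(x, w, b, lam=0.1, lr=0.05, steps=2000):
    """Search for x_cf with sigmoid(w @ x_cf + b) >= 0.5, staying close to x.

    Minimizes (1 - score(x_cf)) + lam * ||x_cf - x||^2 by gradient descent,
    stopping as soon as the decision flips to the favorable class.
    """
    x_cf = x.copy()
    for _ in range(steps):
        score = sigmoid(w @ x_cf + b)
        if score >= 0.5:                         # favorable decision reached
            break
        grad_pred = -score * (1.0 - score) * w   # gradient of (1 - score)
        grad_cost = 2.0 * lam * (x_cf - x)       # gradient of the proximity penalty
        x_cf -= lr * (grad_pred + grad_cost)
    return x_cf

# Toy example: two standardized features (hypothetical "income" and "savings").
w = np.array([1.5, 0.8])
b = -1.0
x = np.array([0.2, 0.1])                         # currently denied
x_cf = find_counterfactual(x, w, b)
print("original:", x, "counterfactual:", x_cf)
print("new score:", sigmoid(w @ x_cf + b))
```

The difference x_cf - x can be read as a recommendation ("increase these features by this much"); consequential recommendation additionally asks which real-world interventions would bring those feature changes about, which is where the causal modeling mentioned above comes in.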
