AI Safety Research

Stefano Ermon

Assistant Professor of Computer Science

Fellow of the Woods Institute for the Environment

Stanford University

ermon@cs.stanford.edu

Project: Robust probabilistic inference engines for autonomous agents

Amount Recommended: $250,000

Project Summary:

As we close the loop between sensing, reasoning, and acting, autonomous agents such as self-driving cars must act intelligently and adaptively in increasingly complex and uncertain real-world environments. To make sensible decisions under uncertainty, agents need to reason probabilistically about their environments, e.g., estimate the probability that a pedestrian will cross or that a car will change lanes. Over the past decades, AI research has made tremendous progress in automated reasoning, and existing technology achieves superhuman performance in numerous domains, including chess playing and crossword solving. Unfortunately, current approaches do not provide worst-case guarantees on the quality of the results they produce: it is not possible to rule out completely unexpected behaviors or catastrophic failures. We therefore propose to develop novel reasoning technology that focuses on soundness and robustness. This research will greatly improve the reliability and safety of next-generation autonomous agents.
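
To make this kind of probabilistic reasoning concrete, here is a minimal sketch of a Bayesian update for the pedestrian-crossing query mentioned above. All probabilities and names are illustrative assumptions, not values from the project.

    # Toy Bayesian update for a self-driving-car query: how likely is a
    # pedestrian to cross, given a noisy sensor detection near the curb?
    # All probabilities below are made-up illustrative values.

    p_cross = 0.1                  # prior: P(pedestrian will cross)
    p_detect_given_cross = 0.9     # sensor fires when a crossing is imminent
    p_detect_given_no_cross = 0.2  # sensor false-positive rate

    # Bayes' rule: P(cross | detect) = P(detect | cross) P(cross) / P(detect)
    p_detect = (p_detect_given_cross * p_cross
                + p_detect_given_no_cross * (1 - p_cross))
    p_cross_given_detect = p_detect_given_cross * p_cross / p_detect

    print(f"P(cross | detection) = {p_cross_given_detect:.3f}")  # ~0.333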

Technical Abstract: 

To cope with the uncertainty and ambiguity of real-world domains, modern AI systems rely heavily on statistical approaches and probabilistic modeling. Intelligent autonomous agents need to solve numerous probabilistic reasoning tasks, ranging from probabilistic inference to stochastic planning, and their safety and reliability depend crucially on having both accurate models and sound reasoning techniques. To date, there are two main paradigms for probabilistic reasoning: exact decomposition-based techniques and approximate methods such as variational inference and MCMC sampling. Neither is suitable for supporting autonomous agents that interact with complex environments safely and reliably: decomposition-based techniques are accurate but do not scale, while approximate techniques scale better but in most cases provide no formal guarantees on accuracy. We therefore propose to develop probabilistic reasoning technology that is both scalable and provides formal guarantees, i.e., “certificates” of accuracy, as in formal verification. This research will bridge probabilistic and deterministic reasoning, drawing on their respective strengths, and has the potential to greatly improve the reliability and safety of AI and cyber-physical systems.
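
One concrete instance of such certificate-producing inference is hashing-based model counting: adding random parity (XOR) constraints halves the solution set in expectation, so the number of constraints a formula survives yields an estimate of its model count with provable accuracy guarantees. The following is a minimal, self-contained sketch of that idea under stated assumptions: brute-force enumeration stands in for the SAT/ILP oracle used in practice, and the toy formula and all names are illustrative.

    import itertools
    import random

    def satisfies(clauses, assignment):
        """CNF check: each clause is a list of signed 1-based literals."""
        return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                   for clause in clauses)

    def random_xor(n):
        """Random parity constraint: a variable subset and a target bit."""
        subset = [i for i in range(n) if random.random() < 0.5]
        return subset, random.randrange(2)

    def xor_holds(xor, assignment):
        subset, bit = xor
        return sum(assignment[i] for i in subset) % 2 == bit

    def estimate_log2_count(clauses, n, trials=20):
        """Estimate log2(#solutions): keep adding random XORs until the
        constrained formula becomes unsatisfiable (checked here by brute
        force as a stand-in for a SAT/ILP oracle). Each surviving XOR
        halves the solution count in expectation."""
        estimates = []
        for _ in range(trials):
            xors = []
            while True:
                xors.append(random_xor(n))
                feasible = any(
                    satisfies(clauses, a) and all(xor_holds(x, a) for x in xors)
                    for a in itertools.product([False, True], repeat=n))
                if not feasible:
                    estimates.append(len(xors) - 1)
                    break
        return sorted(estimates)[len(estimates) // 2]  # median concentrates w.h.p.

    # Toy formula over 6 variables: (x1 or x2) and (not x1 or x3); it has
    # 32 satisfying assignments, so the estimate should be close to 5.
    clauses = [[1, 2], [-1, 3]]
    print("estimated log2 model count:", estimate_log2_count(clauses, 6))

The accuracy guarantee comes from the properties of the hash family, not from the heuristic behavior of any particular solver, which is what makes the resulting bounds worst-case certificates rather than empirical observations.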

Publications

  1. Achim, T., et al. Beyond parity constraints: Fourier analysis of hash functions for inference. Proceedings of the 33rd International Conference on Machine Learning, pages 2254–2262, 2016.
    • A new Fourier-analytic study of hash function families for inference.
  2. Hsu, L.K., et al. Tight variational bounds via random projections and I-projections. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 1087–1095, 2016.
    • This paper develops techniques that equip traditional variational inference algorithms with provable guarantees, using random projections and hash functions.
  3. Kim, C., et al. Exact sampling with integer linear programs and random perturbations. Proceedings of the 30th AAAI Conference on Artificial Intelligence, 2016.
    • This paper develops a new exact sampling algorithm for discrete probabilistic models that leverages the reasoning power (and proof certificates) of integer linear programming solvers; see the Gumbel-max sketch after this list.
  4. Zhao, S., et al. Closing the gap between short and long XORs for model counting. Proceedings of the 30th AAAI Conference on Artificial Intelligence, 2016.
    • A new analysis of error-correcting codes for inference, showing that low-density parity-check codes provide the same accuracy guarantees as traditional (dense) parity constraints while being much more efficient in practice.
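
To illustrate the flavor of publication 3, the sketch below shows the Gumbel-max trick that underlies perturb-and-MAP sampling: perturbing each configuration's unnormalized log-probability with independent Gumbel noise and taking the argmax produces an exact sample from the model. Everything here is a toy illustration with made-up numbers; the paper's contribution is making the argmax tractable on large discrete models by delegating it to an integer linear programming solver, which this sketch replaces with direct enumeration.

    import math
    import random
    from collections import Counter

    def gumbel():
        """Draw from a standard Gumbel distribution."""
        return -math.log(-math.log(random.random()))

    # Unnormalized log-probabilities of a tiny discrete model (illustrative).
    theta = {"stop": 2.0, "slow": 1.0, "go": 0.0}

    def gumbel_max_sample(theta):
        """Gumbel-max trick: the argmax over Gumbel-perturbed scores is an
        exact sample from p(x) proportional to exp(theta[x]). At scale,
        computing this argmax is the job of a MAP/ILP solver."""
        return max(theta, key=lambda x: theta[x] + gumbel())

    # Empirical check: sample frequencies should approach softmax(theta).
    counts = Counter(gumbel_max_sample(theta) for _ in range(10000))
    z = sum(math.exp(v) for v in theta.values())
    for x in theta:
        print(x, counts[x] / 10000, round(math.exp(theta[x]) / z, 3))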

Ongoing Projects/Recent Progress

This team will proceed as planned in year 2, expanding the scope of probabilistic inference algorithms that carry provable accuracy guarantees. In particular, they plan to explore how to apply these ideas to obtain provable worst-case guarantees in frameworks for decision-making under uncertainty.