AI Safety Research

Jacob Steinhardt

Graduate Student

Stanford University

jacob.steinhardt@gmail.com

Project: Summer Program in Applied Rationality and Cognition

Amount Recommended:    $88,050

Project Summary

The impact of AI on society depends not only on the technical state of AI research, but also on its sociological state. Thus, in addition to current AI safety research, we must also ensure that the next generation of AI researchers is composed of thoughtful, intelligent, safety-conscious individuals. The more the AI community as a whole consists of such skilled, broad-minded reasoners, the more likely AI is to be developed in a safe and beneficial manner.

Therefore, we propose running a summer program for extraordinarily gifted high school students (such as competitors from the International Mathematics Olympiad), with an emphasis on artificial intelligence, cognitive debiasing, and choosing a high-positive-impact career path, with AI safety research presented as a primary consideration. Many of our classes will be about AI and related technical areas, and two classes will focus specifically on the impacts of AI on society.

Publications

  1. Steinhardt, J. and Liang, P. Unsupervised Risk Estimation with only Conditional Independence Structure. Neural Information Processing Systems (NIPS), 2016.
    • This paper addresses the problem of estimating the error (risk) of a model that has been deployed in the wild, where the researchers have access only to unlabeled data. Steinhardt and Liang are interested in settings in which the test distribution can differ completely from the training distribution. To obtain leverage, they show that by assuming only that the test distribution has a certain conditional independence structure (the three-view assumption), they can estimate the risk accurately enough even to perform unsupervised learning. This was quite a surprising result.
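The core idea behind the three-view assumption can be illustrated with a simplified toy case (this is a minimal sketch, not the paper's actual estimator, which handles far more general settings): if three predictors are conditionally independent given a hidden balanced binary label, their pairwise agreement rates alone determine each predictor's accuracy, and hence its risk, without any labels. All names and parameters below are illustrative.

```python
# Toy illustration of label-free risk estimation from three conditionally
# independent "views" of a hidden binary label (balanced classes assumed).
# This is a simplified sketch, not the estimator from Steinhardt & Liang 2016.
import math
import random

random.seed(0)
n = 200_000
true_acc = [0.9, 0.8, 0.7]  # accuracy of each view; assumed > 0.5

# Simulate hidden labels and three conditionally independent noisy views.
labels = [random.choice([0, 1]) for _ in range(n)]
views = [[y if random.random() < p else 1 - y for y in labels]
         for p in true_acc]

def agreement(a, b):
    """Fraction of examples on which two views agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Under conditional independence with balanced classes:
#   c_ij = 2 * P(view_i == view_j) - 1 = (2 p_i - 1)(2 p_j - 1)
# so (2 p_1 - 1)^2 = c_12 * c_13 / c_23, solvable without labels.
c12 = 2 * agreement(views[0], views[1]) - 1
c13 = 2 * agreement(views[0], views[2]) - 1
c23 = 2 * agreement(views[1], views[2]) - 1

est_acc_1 = (1 + math.sqrt(c12 * c13 / c23)) / 2
est_risk_1 = 1 - est_acc_1  # unsupervised estimate of view 1's error rate
```

The sketch recovers view 1's accuracy (here 0.9, so risk 0.1) purely from agreement statistics among the unlabeled views, which is the flavor of leverage the conditional independence structure provides.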

Course Materials

Course Names:

  1. “Inference in graphical models” – Summer Program in Applied Rationality and Cognition (SPARC)