
Stephen Casper

Massachusetts Institute of Technology

Advisor: Dylan Hadfield-Menell
Research on interpretable and robust AI

Stephen “Cas” Casper is a Ph.D. student in Computer Science (EECS) at MIT, working in the Algorithmic Alignment Group under the supervision of Dylan Hadfield-Menell. He has previously worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI. His main focus is on developing tools for more interpretable and robust AI. His research interests include interpretability, adversarial attacks, robust reinforcement learning, and decision theory. He is particularly interested in largely automated ways of finding and fixing flaws in how deep neural networks handle human-interpretable concepts. He is also an Effective Altruist trying to do the most good he can. You can visit his website here.
