Stephen Casper
Advisor: Dylan Hadfield-Menell
Research on interpretable and robust AI
Stephen “Cas” Casper is a Ph.D. student at MIT in Computer Science (EECS) in the Algorithmic Alignment Group, advised by Dylan Hadfield-Menell. He has previously worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI. His main focus is on developing tools for more interpretable and robust AI. His research interests include interpretability, adversarial attacks, robust reinforcement learning, and decision theory. He is particularly interested in (mostly) automated methods for finding and fixing flaws in how deep neural networks handle human-interpretable concepts. He is also an Effective Altruist trying to do the most good he can. You can visit his website here.