Samuel Bowman

Position
Assistant Professor of Data Science, Linguistics, and Computer Science
Organisation
New York University
Biography

Why do you care about AI Existential Safety?

I find it likely that state-of-the-art machine learning systems will be deployed in increasingly high-stakes settings as their capabilities improve, and that this trend will persist even if these systems are not conclusively shown to be robust, potentially leading to catastrophic accidents. I also find it plausible that more powerful future systems could share building blocks with current technology, making it especially worthwhile to identify potentially dangerous or surprising failure modes in today's systems and to develop scalable ways of mitigating them.

Please give one or more examples of research interests relevant to AI existential safety:

My group generally works with neural network models for language (and potentially similar multimodal models), with a focus on benchmarking, data collection, human feedback, and empirical analysis rather than model design, theory, or systems research. Within these constraints, I’m broadly interested in work that helps document and mitigate potential negative impacts from these systems, especially impacts that may become more serious as models become more capable. I’m also open to co-advising students who are interested in these risks but are looking to pursue a wider range of methods.
