Andrea Lincoln

Position: Assistant Professor
Organisation: Boston University
Biography

Why do you care about AI Existential Safety?

The AI alignment problem is fundamental, and I believe solutions are necessary for human flourishing. Approaches that seek provable guarantees of alignment, like those of the Alignment Project, are well suited to circumstances where a solution must be robust to optimization pressure. I support AISI’s efforts to produce alignment protocols with provable guarantees.

Please give at least one example of your research interests related to AI existential safety:

I am interested in approaches to AI safety that would yield provable guarantees.
