
Andrea Lincoln
Why do you care about AI Existential Safety?
The AI alignment problem is fundamental, and I believe solving it is necessary for human flourishing. Approaches that seek provable guarantees of alignment, like those of the Alignment Project, are well suited to settings where a solution must remain robust under optimization pressure. I support AISI’s efforts to produce alignment protocols with provable guarantees.
Please give at least one example of your research interests related to AI existential safety:
I am interested in approaches to AI safety that yield provable guarantees.
