
Juan Carlos Rocamonde Quintela
Organisation
Biography
Why do you care about AI Existential Safety?
Continued progress in AI has the potential to radically transform society and bring about profound benefits, including major scientific breakthroughs and reductions in illness, poverty, and suffering. However, AI systems also pose substantial risks: the misuse or accidental misalignment of advanced AI systems could have catastrophic consequences for the future of humanity. I would like to help ensure that AI benefits all of humanity.
Please give at least one example of your research interests related to AI existential safety:
I am especially interested in the interpretability of language models and the science of deep learning, and in how these relate to human cognition, learning, and intelligence.