Pablo Antonio Moreno Casares

Organisation
Universidad Complutense de Madrid
Biography

Why do you care about AI Existential Safety?

I think creating and understanding AGI is one of the most important scientific endeavors of our time. It is a fascinating challenge, with the potential to dramatically improve everyone's lives. At the same time, we must make sure that this transition goes well and that we are capable of making advanced AI systems understand what we want. Just as we cannot manually specify the behavior of advanced AI systems, we should not expect to be able to write down the specific objectives they should pursue. For that reason, we need to research how to make AI systems safer.

Please give one or more examples of research interests relevant to AI existential safety:

At the moment I am most enthusiastic about using causality to ensure AI systems have the correct instrumental incentives; the work at causalincentives.com is very relevant here. I am also interested in causal representation learning of human preferences, and in whether it can make AI systems more robust and interpretable.
