Wout Schellaert

Organisation
Univ. Politècnica de València
Biography

Why do you care about AI Existential Safety?

Like many others, I am adamant: artificial intelligence will radically transform the world. Whether for better or worse, however, remains to be seen. While I personally hope for “better”, I am concerned about the apparent lack of societal control over the development and direction of AI. As someone close to this technology, I also feel a sense of responsibility. I want to help create the controls and safeguards we need, so that the question isn’t left to chance but becomes a deliberate opportunity to benefit humanity.

Please give at least one example of your research interests related to AI existential safety:

Starting from a perspective of AI evaluation, I investigate how we can anticipate the performance of AI systems on new problems, or on new instances of those problems. A requirement for the safe deployment of any kind of machine is a well-founded expectation that it will perform well in the scenarios it will encounter. From a safety perspective, one must either be confident that the system won’t fail catastrophically, or otherwise assume that it will.
