Neil Crawford

Organisation: UC Irvine
Biography

Why do you care about AI Existential Safety?

I think there are real near-term AI risks, ranging from bad to catastrophic. I don’t want to live in a world where the very technology we created to make our lives easier and better ends up causing irreparable harm to humanity. Beyond these practical considerations, I care about AI existential safety because I care about the future of humanity. I believe our species has the potential to achieve incredible things, but we are also vulnerable to making catastrophic mistakes. AI could be one of the most powerful tools we have for solving some of the biggest problems facing our world today, but only if we can ensure its safe and responsible development. Ultimately, I care about AI existential safety because I care about the well-being of all sentient beings. We have a responsibility to create a future that is both technologically advanced and morally just, and ensuring the safety of AI is a critical part of that mission.

Please give at least one example of your research interests related to AI existential safety:

One thing I’ve been concerned about is how undesirable traits can be selected for in a series of repeated games. Certain classes of games can reinforce such traits: for example, in repeated games where the players compete with one another, spitefulness can be selected for if it proves advantageous. Spiteful behaviour involves intentionally harming others even at a cost to oneself; in some games it yields a relative advantage and is therefore reinforced, as the sketch below illustrates. Another example is in social networks, where algorithms may prioritise content likely to elicit strong emotional responses from users, such as anger or outrage. This can reinforce negative and polarising views, deepening division and conflict in society. The concern is that if AI systems are designed to optimise for certain outcomes without considering the broader impact on society, they may inadvertently reinforce undesirable traits or behaviours. This could have serious consequences, particularly for autonomous systems operating without human oversight.
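
As a minimal sketch of the first dynamic (the population size, payoffs, and update rule below are illustrative assumptions, not a specific published model), consider a small round-robin population in which a “spiteful” strategy pays a cost to inflict harm on every partner. Because selection acts on relative payoff, spite can spread whenever the harm inflicted on each partner exceeds the total self-cost, i.e. HARM > COST * (N - 1):

```python
import random

# Hypothetical parameters for illustration only.
N = 4            # small population; spite only pays off when N is small
COST = 1.0       # cost a spiteful agent pays per interaction
HARM = 4.0       # harm inflicted on each partner; note HARM > COST * (N - 1)
BASELINE = 10.0  # payoff every agent receives from each interaction
ROUNDS = 200

def payoffs(pop):
    """Round-robin payoffs: 'S' = spiteful, 'N' = neutral."""
    scores = [0.0] * len(pop)
    for i in range(len(pop)):
        for j in range(len(pop)):
            if i == j:
                continue
            scores[i] += BASELINE
            if pop[i] == 'S':        # spite: pay COST so that j loses HARM
                scores[i] -= COST
                scores[j] -= HARM
    return scores

def step(pop):
    """Imitate-the-best dynamics: one random agent copies the top scorer."""
    scores = payoffs(pop)
    best = max(range(len(pop)), key=scores.__getitem__)
    learner = random.randrange(len(pop))
    pop[learner] = pop[best]

random.seed(0)
pop = ['S'] + ['N'] * (N - 1)   # a single spiteful mutant among neutrals
for _ in range(ROUNDS):
    step(pop)
print(pop)   # spite fixes: ['S', 'S', 'S', 'S']
```

With N = 4, COST = 1 and HARM = 4 the condition holds, so a lone spiteful agent strictly outscores every neutral one (its payoff advantage is HARM - COST * (N - 1) = 1) and imitation drives the whole population to spite; with a larger population the condition fails and the trait dies out.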
