Muhammad Chaudhry

Organisation
UCL
Biography

Why do you care about AI Existential Safety?

I believe that even if there is less than a 0.01% chance of AI wiping out humanity, it is worth every effort to avoid it. The recent advances in generative AI, and its integration with technology in every sphere of human life, are a testament to the dangers it poses. The least we can do is raise awareness.

I was recently at a panel at an ed-tech summit in Birmingham where every panelist admitted that they cannot prevent students from using LLMs, yet none of them were willing to admit that AI has already penetrated our education system in ways we cannot control. There was a clear contradiction between these two positions, and I was surprised at how casually the panelists ignored the dangers of AI.

Please give at least one example of your research interests related to AI existential safety:

I am passionate about two main research interests:
Firstly, the power of AI to manipulate humans into taking actions that might be harmful. My PhD thesis on the transparency of AI revealed that our limited understanding of AI systems can lead us to trust these systems and ignore their drawbacks.
Secondly, AI going rogue and taking actions that could lead to human extinction. This could be intentional, driven by bad actors, or a mistake in which an AI optimises for its end goal in a way that leads to human extinction.
