
Matt MacDermott

Organisation: Imperial College London
Biography

Why do you care about AI Existential Safety?

It seems likely that the AI we develop in the next few decades could have a transformative effect on the world, and whether that effect is good or bad depends on the particulars of how it works. The AI safety research we do in the meantime could therefore be hugely beneficial.

Please give at least one example of your research interests related to AI existential safety:

I’m interested in understanding the foundations of goal-directed agency: what are goals, how do agents get them, and what do they do about them? To that end, I’m currently investigating reinforcement learning algorithms through the lens of decision theory.
