Stuart Russell

UC Berkeley

Why do you care about AI Existential Safety?

It is increasingly important to ask, “What if we succeed?” Our intelligence gives us power over the world and over other species; we will eventually build systems with superhuman intelligence; therefore, we face the problem of retaining power, forever, over entities that are far more powerful than ourselves.

Please give one or more examples of research interests relevant to AI existential safety:

Rebuilding AI on a new and broader foundation, with the goal of creating AI systems that are provably beneficial to humans.
