
Roger Grosse

Position: Assistant Professor
Organisation: University of Toronto

Why do you care about AI Existential Safety?

Humanity has produced some powerful and dangerous technologies, but so far none that deliberately pursued long-term goals potentially at odds with our own. If, as seems likely in the next few decades, we succeed in building machines smarter than ourselves, our only hope for a good outcome is to prepare well in advance.

Please give one or more examples of research interests relevant to AI existential safety:

So far, my research has primarily focused on understanding and improving neural networks, and my research style can be described as theory-driven empiricism. I intend to focus on safety as much as I can while maintaining the quality of the research. Here are some of my group’s current and planned AI safety research directions, which build on our expertise in deep learning:

  • Incentivizing neural networks to give answers that are easily checkable. We are doing this using prover-verifier games whose equilibrium requires finding a proof system (a toy illustration of checkable answers follows this list).
  • Understanding (in terms of neural net architectures) when mesa-optimizers are likely to arise, their patterns of generalization, and how this should inform the design of a learning algorithm.
  • Better tools for understanding neural networks.
  • Better understanding of neural net scaling laws, which are an important input to AI forecasting (a curve-fitting sketch also follows this list).
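
To make the first bullet concrete: the intended incentive is that a strong prover only gets credit for answers that come with a certificate a much weaker verifier can check. The toy below uses subset-sum as a stand-in problem; the prover, verifier, and certificate format are illustrative assumptions, not taken from the actual prover-verifier-games work (which trains neural networks rather than running an exhaustive search). It only shows the asymmetry the bullet relies on: producing the answer is expensive, checking the returned certificate is cheap.

```python
# Toy sketch of "easily checkable answers" (illustrative only; not the
# prover-verifier-games training setup itself).
from itertools import combinations
from collections import Counter
from typing import List, Optional, Sequence

def prover(numbers: Sequence[int], target: int) -> Optional[List[int]]:
    """Expensive search: find a subset summing to target and return it as a certificate."""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

def verifier(numbers: Sequence[int], target: int, certificate: Optional[List[int]]) -> bool:
    """Cheap check: accept only if the certificate is drawn from `numbers` and sums to target."""
    if certificate is None:
        return False
    available, claimed = Counter(numbers), Counter(certificate)
    return all(claimed[x] <= available[x] for x in claimed) and sum(certificate) == target

numbers, target = [3, 9, 8, 4, 5, 7], 15
cert = prover(numbers, target)
print(cert, verifier(numbers, target, cert))   # prints a valid subset, e.g. [8, 7], and True
print(verifier(numbers, target, [15]))         # a dishonest certificate is rejected: False
```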

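For the scaling-laws bullet, here is a minimal sketch of the kind of fit such work involves, assuming a saturating power law L(N) = a * N^(-b) + c (one common parameterization; the functional form, constants, and data below are synthetic and purely illustrative, not results from my group):

```python
# Fit a saturating power law to synthetic (model size, loss) data and extrapolate.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    """Loss as a function of parameter count N: irreducible loss c plus a power-law term."""
    return a * n ** (-b) + c

# Synthetic "observations": model sizes (parameter counts) and measured validation losses.
rng = np.random.default_rng(0)
sizes = np.array([1e6, 3e6, 1e7, 3e7, 1e8, 3e8, 1e9])
losses = 2.5 * sizes ** (-0.076) + 1.7 + rng.normal(0.0, 0.005, size=sizes.shape)

# p0 gives curve_fit a reasonable starting point for (a, b, c).
params, _ = curve_fit(power_law, sizes, losses, p0=[1.0, 0.1, 1.0], maxfev=10000)
a, b, c = params
print(f"fit: L(N) ~ {a:.3f} * N^(-{b:.3f}) + {c:.3f}")

# Extrapolate to a larger model size: the kind of forecast scaling laws enable.
print(f"predicted loss at N = 1e10: {power_law(1e10, a, b, c):.4f}")
```

A fit like this is what makes scaling laws useful for forecasting: once the exponent and the irreducible term are estimated from smaller models, the curve can be extrapolated to sizes that have not yet been trained.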