Max Tegmark

Position
Professor
Organisation
Massachusetts Institute of Technology
Biography

Why do you care about AI Existential Safety?

I’m convinced that AI will become the most powerful technology in human history, and end up being either the best or worst thing ever to happen to humanity. I therefore feel highly motivated to work on research that can tip the balance toward the former outcome.

Please give one or more examples of research interests relevant to AI existential safety:

I believe that our best shot at beneficial AGI involves replacing black-box neural networks with intelligible intelligence. The only way I’ll trust a superintelligence to be beneficial is if I can prove that it is, since no matter how smart it is, it can’t do the impossible. My MIT research group therefore focuses on using tools from physics and information theory to transform black-box neural networks into more understandable systems. Recent applications have included auto-discovery of symbolic formulas and invariants, as well as hidden symmetries and modularities.
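
A minimal sketch of this general idea, not the group’s actual methods: train a small neural network as the black box, then distill its behaviour into a sparse symbolic formula drawn from a hand-chosen library of candidate terms. The ground-truth formula, the candidate library, and the use of scikit-learn’s MLPRegressor and Lasso are illustrative assumptions, not anything stated in the profile above.

```python
# Sketch: distill a black-box neural network into a sparse symbolic formula.
# Assumes numpy and scikit-learn; all specifics here are illustrative choices.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic data from a hidden ground-truth law: y = x1 * x2 + sin(x1).
X = rng.uniform(-2, 2, size=(2000, 2))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 0])

# Step 1: the "black box" -- a small MLP fit to the data.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)

# Step 2: query the network on fresh inputs, then regress its outputs onto a
# library of candidate symbolic terms, keeping only a sparse subset via Lasso.
X_probe = rng.uniform(-2, 2, size=(2000, 2))
y_net = net.predict(X_probe)

terms = {
    "x1": X_probe[:, 0],
    "x2": X_probe[:, 1],
    "x1*x2": X_probe[:, 0] * X_probe[:, 1],
    "x1^2": X_probe[:, 0] ** 2,
    "sin(x1)": np.sin(X_probe[:, 0]),
    "cos(x2)": np.cos(X_probe[:, 1]),
}
library = np.column_stack(list(terms.values()))

sparse = Lasso(alpha=0.01).fit(library, y_net)
formula = " + ".join(
    f"{coef:.2f}*{name}"
    for name, coef in zip(terms, sparse.coef_)
    if abs(coef) > 0.05
)
print("Distilled formula ~", formula)  # expect roughly 1.00*x1*x2 + 1.00*sin(x1)
```

The point of the sketch is only that a network’s behaviour, once learned, can sometimes be re-expressed as a short human-readable formula that can be inspected and checked, which is the kind of intelligibility the answer above is arguing for.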
