
Ram Rachum

Position: Researcher
Organisation: Bar-Ilan University
Biography

Why do you care about AI Existential Safety?

Artificial Intelligence is the most exciting technology to be developed in our lifetime. Artificial General Intelligence, if it is achievable, would be the culmination of thousands of years of human technological development. It has the potential to vastly improve our lives, vastly degrade them, or possibly end human civilization.

We have a short window of time to prepare and strategize for the emergence of AGI. If AGI does emerge, we will reach a point of no return in our history. Our actions in the next few years could determine whether our species will thrive or flounder.

A major source of difficulty in the effort to ensure AI existential safety is our lack of information. We’re preparing for something that has never happened before. Part of our work is to consider a variety of different AGI scenarios and prepare for as many of them as possible.

Please give at least one example of your research interests related to AI existential safety:

I’m interested in applying Multi-Agent Reinforcement Learning to AI existential safety, specifically to the problems of AI Interpretability and AI Corrigibility.

I believe that a group of agents working together on a task can provide interpretability and corrigibility, in the same way that a team of humans works more transparently than a single human. To that end, I am working to provide a solution for AI Interpretability and AI Corrigibility by using the social behavior of RL agents. The major challenge I’m tackling is replicating the intricate social dynamics that humans use, so that RL agents can use them in simulated environments.

I’m currently most excited about using the LOLA family of algorithms, and especially the M-FOS algorithm, in order to elicit emergent reciprocity and team formation in agents operating in a general-sum environment.
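As an illustration of the opponent-shaping idea behind LOLA, here is a minimal sketch on a one-shot Prisoner's Dilemma (written in PyTorch; the payoff values, learning rates, and variable names are illustrative assumptions, and this is not the M-FOS algorithm or any particular published implementation):

import torch

# Payoff matrix for the row player (agent 1) in a one-shot Prisoner's Dilemma,
# indexed by (agent 1's action, agent 2's action), with 0 = cooperate, 1 = defect.
# These payoff values are standard but illustrative.
R1 = torch.tensor([[-1., -3.],
                   [ 0., -2.]])
R2 = R1.t()  # symmetric game: agent 2's payoff matrix is the transpose

theta1 = torch.zeros((), requires_grad=True)  # logit of P(cooperate) for agent 1
theta2 = torch.zeros((), requires_grad=True)  # logit of P(cooperate) for agent 2

def expected_return(t1, t2, payoff):
    # Expected payoff under independent Bernoulli cooperation probabilities.
    p1, p2 = torch.sigmoid(t1), torch.sigmoid(t2)
    probs = torch.stack([p1 * p2, p1 * (1 - p2),
                         (1 - p1) * p2, (1 - p1) * (1 - p2)])
    return (probs * payoff.flatten()).sum()

lr, opp_lr = 0.1, 0.3  # illustrative learning rates
for step in range(200):
    # Anticipate the opponent's naive gradient-ascent step on its own return.
    v2 = expected_return(theta1, theta2, R2)
    grad2 = torch.autograd.grad(v2, theta2, create_graph=True)[0]
    theta2_lookahead = theta2 + opp_lr * grad2

    # LOLA-style update: agent 1 maximizes its return *after* the opponent's
    # anticipated learning step, differentiating through that step.
    v1 = expected_return(theta1, theta2_lookahead, R1)
    grad1 = torch.autograd.grad(v1, theta1)[0]

    with torch.no_grad():
        theta1 += lr * grad1               # opponent-shaping update
        theta2 += opp_lr * grad2.detach()  # opponent learns naively

Differentiating through the opponent's learning step is what lets reciprocity emerge in this family of methods, in contrast to naive independent learners.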

I serve on the scientific council of The Israeli Association for Ethics in Artificial Intelligence.
