
Roman Yampolskiy

Position
Associate Professor
Organisation
University of Louisville
Biography

Why do you care about AI Existential Safety?

I care about AI existential safety for very selfish reasons: I don’t want future advanced AI to cause harm to me, my family, my friends, my community, my country, my planet, my descendants, my universe, or the multiverse. I want to avoid existential catastrophe and suffering risks for my species and for the biosphere on this planet and beyond. A superintelligence aligned with human values would be the greatest invention ever made; it would allow us to greatly improve quality of life for all people and to mitigate many other dangers, both natural and man-made. I have dedicated my life to pursuing the goal of making future advanced AI globally beneficial, safe, and secure.

Please give one or more examples of research interests relevant to AI existential safety:

I am an experienced AI safety and security researcher, with over 10 years of research leadership in the domain of transformational AI. I have been a Fellow (2010) and a Research Advisor (2012) of the Machine Intelligence Research Institute (MIRI), an AI Safety Fellow (2019) of the Foresight Institute, and a Research Associate (2018) of the Global Catastrophic Risk Institute (GCRI). I am currently a tenured faculty member in the Department of Computer Science and Engineering at an R1 university in Louisville, USA, and the director of our Cybersecurity Laboratory. My work has been funded by NSF, NSA, DHS, EA Ventures, and FLI. I have published hundreds of peer-reviewed papers and multiple books on AI safety, including “Artificial Superintelligence” and, more recently, “AI Safety and Security”.

My early work on AI Safety Engineering, AI Containment, and AI Accidents has become seminal in the field and is very well cited. I have given over 100 public talks, served on the program committees of multiple AI safety conferences and on journal editorial boards, and have received awards for my teaching and service to the community. I have given hundreds of interviews on AI safety, including multiple appearances on the FLI podcast. My current research focus is on the theoretical limits to explainability, predictability, and controllability of advanced intelligent systems. With collaborators, I continue my project on the analysis, handling, and prediction/avoidance of AI accidents and failures. New projects related to monitorability and forensic analysis of AI are currently in the pipeline. You can learn more about my ongoing and completed research from my publications: https://scholar.google.com/citations?user=0_Rq68cAAAAJ&hl=en
