
Roman Yampolskiy Interview

Published: January 19, 2017
Author: Ariel Conn


The following is an interview with Roman Yampolskiy about the Beneficial AI 2017 conference and the Asilomar Principles that it produced. Yampolskiy is an Associate Professor of Computer Engineering & Computer Science at the University of Louisville and the Founding Director of the Cyber Security Lab.

Q. From your perspective, what were the highlights of the conference?

“The conference brought together an unprecedented number of thought leaders from industry, academia, non-profits, NGOs, the US government, the UN, and charities. It was wonderful to see so much cognitive diversity, with dozens of different domains represented – philosophy, computer science, economics, security, policy, physics, law, and political science, to name just a few. Participants were able to share their ideas in an atmosphere of freedom guaranteed by the Chatham House Rule.

“I personally benefited most from the insider information I got from AI industry leaders, which will certainly help me guide and improve my future work. It was also great to see that AI Safety is no longer a fringe domain of computer science but a major area of AI research, now recognized as important by people who have the potential to shape the future of AI as a transformative technology and, by extension, the future of humanity.

“I suspect that in a decade this Asilomar conference will be considered as important as the 1975 Asilomar Conference on Recombinant DNA.”

Q. Why did you choose to sign the AI principles that emerged from discussions at the conference?

“The principles address issues of fundamental importance, and while we are far from offering any settled scientific solutions to most of them, it is important to show the world that they are taken seriously by the leaders in the field. The suggested principles also do a great job of articulating important directions for future research. It was an honor for me to endorse them along with many distinguished colleagues.”

Q. Why do you think that AI researchers should weigh in on such issues as opposed to simply doing technical work?

“In my opinion, the principles outlined at Asilomar contain the most important unfinished technical work. For example, “Value Alignment” is just a fancy way of saying that we should actually retain control of our robots and digital assistants, instead of them deciding what to do with us. Similarly, maintaining “Personal Privacy” requires sophisticated filtering software to work alongside our data-mining algorithms.”

Q. Explain what you think of the following principles:

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

“Given that all the jobs (physical and mental) will be gone, it is the only chance we have to be provided for.”

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

“Weaponized AI is a weapon of mass destruction, and an AI arms race is likely to lead to an existential catastrophe for humanity.”

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

“In many areas of computer science, such as complexity theory or cryptography, the default assumption is that we deal with the worst-case scenario. Similarly, in AI Safety we should assume that AI will become maximally capable and prepare accordingly. If we are wrong, we will still be in great shape.”

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

“The design of human-level AI will be the most impactful event in the history of humankind. It is impossible to over-prepare for it.”

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

“Even a small probability of existential risk becomes very significant once multiplied by all the people it will affect. Nothing could be more important than avoiding the extermination of humanity.”
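The arithmetic behind this argument is a simple expected-value calculation. As an illustrative sketch (the figures here are assumptions for exposition, not Yampolskiy's): even a 1% probability of an event that kills all 8 billion people carries an expected toll of

    E[lives lost] = p × N = 0.01 × (8 × 10^9) = 8 × 10^7,

that is, 80 million lives in expectation, before counting the future generations that would never exist.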

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

“The world’s dictatorships are looking forward to opportunities to target their citizenry with extreme levels of precision. The tech we develop will most certainly become available throughout the world, and so we have a responsibility to make privacy a fundamental cornerstone of any data analysis.”

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

“It is very difficult to encode human values in a programming language, and the problem is made even harder by the fact that we as humanity do not agree on common values, and even the parts we do agree on change over time.”

Q. Assuming all goes well, what do you think a world with advanced beneficial AI would look like? What are you striving for with your AI work?

“I am skeptical about our chances of creating beneficial AI. We may succeed in the short term with narrow-domain systems, but intelligence and control are inversely related, and as superintelligent systems appear, we will lose all control over them. The future may not have a meaningful place for us. In my work I am trying to determine whether the control problem is solvable, what obstacles are in our way, and perhaps buy us a bit more time to look for a solution if one exists. In my book Artificial Superintelligence: A Futuristic Approach, I discuss a number of important problems we would have to solve to have any chance of success, such as wire-heading, boxing, and utility-function security.”

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles

This content was first published at futureoflife.org on January 19, 2017.

