Kay Firth-Butterfield Interview
The following is an interview with Kay Firth-Butterfield about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Firth-Butterfield is the Executive Director of AI-Austin.org, and an adjunct Professor of Law at the University of Texas at Austin.
Q. From your perspective what were the highlights of the conference?
“The opportunity to meet old friends and colleagues but also to meet and hear the views of new people who are making important contributions to our work. It was a super interdisciplinary gathering. Also, it was a very interesting and valuable choice of speaking and panel topics.”
Q. Why did you choose to sign the AI principles that emerged from discussions at the conference?
“As Vice-Chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and Executive Director of AI-Austin I am looking at ways of using the important discussions we are having in practical ways, for example creating standards through our work at IEEE or, at AI-Austin, practical use cases in our community. Thus, signing the principles is very important to me. It is vital that this interdisciplinary group representing academia, business and society starts setting out principles and showing how much we are doing, and need to do, to create safe, beneficial AI.”
Q. Why do you think that AI researchers should weigh in on such issues as opposed to simply doing technical work?
“AI will change everything and will do so at a fast pace. Those who research in the discipline of AI are well suited to inform the discussion and help shape it so that responsible beneficial design is at the forefront of policy decisions at international, national, business and community levels.”
Q. Explain what you think of the following principles:
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
“My background is in law and international relations. AI is a technology with such great capacity to benefit all of humanity but also the chance of simply exacerbating the divides between the developed and developing world, and the haves and have nots in our society. To my mind that is unacceptable and so we need to ensure, as Elon Musk said, that AI is truly democratic and its benefits are available to all.”
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“I believe that any arms race should be avoided but particularly this one where the stakes are so high and the possibility of such weaponry, if developed, being used within domestic policing is so terrifying.”
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
“I believe that AI will create profound change even before it is ‘advanced’ and thus we need to plan and manage growth of the technology. As humans we are not good at long-term planning because our civil systems don’t encourage it; however, this is an area in which we must develop our abilities to ensure a responsible and beneficial partnership between man and machine.”
12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
“I would re-write this principle as ‘Given AI systems’ power to analyze and utilize data, people should have the right to access, manage and control the data they generate.’ I agree with this principle on a number of levels:
- a. For the reasons expounded by the IEEE Initiative’s privacy committee;
- b. As AI becomes more powerful, we need to take steps to ensure that it cannot use our personal data against us if it falls into the wrong hands;
- c. Data is worth money, and as individuals we should be able to choose when and how to monetize our own data whilst being encouraged to share data for public health and other benefits.”
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
“Yes!”
Q. Assuming all goes well, what do you think a world with advanced beneficial AI would look like? What are you striving for with your AI work?
“I would like to see a world in which all can equally benefit from the use of beneficial, ethically and responsibly designed AI. I am working to help make that a reality and create an environment where AI enables humans to get the best out of ourselves, create the greatest good for humanity as a whole and achieve excellent outcomes for the flora and fauna with which we share our planet.”
About the Future of Life Institute
The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.