AI Safety Research

Michael Wooldridge

Head of Department of Computer Science, Professor of Computer Science

University of Oxford

Senior Research Fellow, Hertford College

mjw@cs.ox.ac.uk

Project: Towards a Code of Ethics for AI Research

Amount Recommended: $125,000

Project Summary

Codes of ethics play an important role in many sciences. Such codes aim to provide a framework within which researchers can understand and anticipate the possible ethical issues that their research might raise, and to provide guidelines about what is, and is not, regarded as ethical behaviour. In the medical sciences, for example, codes of ethics are fundamentally embedded within the research culture of the discipline, and explicit consideration of ethical issues is a standard expectation when research projects are planned and undertaken. In this project, we aim to start developing a code of ethics for AI research by learning from this interdisciplinary experience and extending its lessons into new areas. The project will bring together three Oxford researchers with expertise in artificial intelligence, philosophy, and applied ethics.

Technical Abstract

Codes of ethics play an important role in many sciences. Such codes aim to provide a framework within which researchers can understand and anticipate the possible ethical issues that their research might raise, and to provide guidelines about what is, and is not, regarded as ethical behaviour. In the medical sciences, especially, codes of ethics are fundamentally embedded within the research culture, and explicit consideration of ethical issues is a standard expectation when research projects are planned and undertaken. The aim of this project is to develop a solid basis for a code of artificial intelligence (AI) research ethics, learning from the scientific and medical community’s experience with existing ethical codes, and extending its lessons into three important and representative areas where artificial intelligence comes into contact with ethical concerns: AI in medicine and biomedical technology, autonomous vehicles, and automated trading agents. We will also explore whether the design of ethical research codes might usefully anticipate, and potentially ameliorate, the risks of future research into superintelligence. The project brings together three Oxford researchers with highly relevant expertise in artificial intelligence, philosophy, and applied ethics, and will also draw strongly on other research activity within the University of Oxford.

Publications

  1. Boddington, Paula. “EPSRC Principles of Robotics: Commentary on Safety, Robots as Products, and Responsibility.” Connection Science, special issue on Ethical Principles of Robotics, 2016.
  2. Boddington, Paula. “The Distinctiveness of AI Ethics, and Implications for Ethical Codes.” Presented at the IJCAI-16 Workshop on Ethics for Artificial Intelligence, New York, July 2016.

Workshops

  1. A day of ethical AI at Oxford: June 8, 2016. Oxford Martin School.
    • The goal of the workshop was collaborative discussion between those working in AI, ethics, and related areas at geographically close and linked centres. Participants were invited from the Oxford Martin School, the Future of Humanity Institute, the Cambridge Centre for the Study of Existential Risk, and the Leverhulme Centre for the Future of Intelligence, among others, and included FLI grantholders. The workshop brought together participants from diverse disciplines, including computing, philosophy, and psychology, to facilitate cross-disciplinary conversation and understanding.
  2. EPSRC Systems-Net Grand Challenge Workshop, “Ethics in Autonomous Systems”: November 25, 2015. Sheffield University.
    • Paula Boddington attended and contributed to discussions.
  3. AISB workshop on Principles of Robotics: April 4, 2016. Sheffield University.
    • Workshop examined the EPSRC (Engineering and Physical Sciences Research Council) Principles of Robotics. Boddington presented a paper, “Commentary on responsibility, product design and notions of safety”, and contributed to discussion.
    • Outcome of the workshop: a paper for the Connection Science special issue on Ethical Principles of Robotics, “EPSRC Principles of Robotics: Commentary on Safety, Robots as Products, and Responsibility” (Paula Boddington).
  4. Ethics for Artificial Intelligence: July 9, 2016. New York, NY.
    • The project researchers organized this IJCAI-16 workshop, which focused on issues of law and autonomous vehicles, the ethics of autonomous trading systems, and superintelligence.

Ongoing Projects

  1. IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems:
    • Paula Boddington has been working with this initiative. She served on the LAWS (Lethal Autonomous Weapons) sub-committee, and is a member of the Ecosystem Mapping Committee.
  2. The project researchers were invited to guest-edit a special issue of Minds and Machines on issues of law and autonomous vehicles, the ethics of autonomous trading systems, and superintelligence, to be published in 2017.