
AI Researcher Francesca Rossi

Published: September 30, 2016
Author: Revathi Kumar


AI Safety Research




Francesca Rossi

Professor of Computer Science

University of Padova, Italy

frossi@math.unipd.it

Project: Safety Constraints and Ethical Principles in Collective Decision Making Systems

Amount Recommended: $275,000




Project Summary

The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Hybrid collective decision-making systems will therefore be in great demand.

In this scenario, both machines and collective decision-making systems should follow some form of moral values and ethical principles (appropriate to the context in which they act, but always aligned with humans'), as well as safety constraints. Humans would more readily accept and trust machines that behave as ethically as other humans in the same environment. These principles would also make it easier for machines to determine their actions and to explain their behavior in terms humans can understand. Moreover, machines and humans will often need to make decisions together, either by consensus or by reaching a compromise, and shared moral values and ethical principles would facilitate this.

We will study the embedding and learning of safety constraints, moral values, and ethical principles in collective decision-making systems for societies of machines and humans.

Technical Abstract

The future will see autonomous agents acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. In this scenario, collective decision making will be the norm. We will study the embedding of safety constraints, moral values, and ethical principles in agents, within the context of hybrid human/agent collective decision making. We will do so by adapting current logic-based modelling and reasoning frameworks, such as soft constraints, CP-nets, and constraint-based scheduling under uncertainty. For ethical principles, we will use constraints specifying the basic ethical “laws”, plus sophisticated prioritised and possibly context-dependent constraints over possible actions, equipped with a conflict-resolution engine. To avoid reckless behavior in the face of uncertainty, we will bound the risk of violating these ethical laws. We will also replace preference aggregation with an appropriately developed constraint/value/ethics/preference fusion, an operation designed to ensure that agents’ preferences are consistent with the system’s safety constraints, the agents’ moral values, and the ethical principles of both individual agents and the collective decision-making system. We will also develop approaches for learning ethical principles for artificial intelligent agents, as well as for predicting possible ethical violations.
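
To make the flavor of this approach concrete, the following is a minimal, purely illustrative Python sketch (not the project's implementation): hard ethical “laws” filter out inadmissible actions, prioritised soft constraints assign weighted penalties, and the agents' preferences are fused with these ethical costs rather than aggregated on their own. All class names, actions, and numbers are invented for the example.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List


    @dataclass
    class SoftConstraint:
        name: str
        priority: int                      # higher priority -> larger weight
        penalty: Callable[[str], float]    # penalty of an action, in [0, 1]


    @dataclass
    class EthicalModel:
        hard_laws: List[Callable[[str], bool]]      # must never be violated
        soft_constraints: List[SoftConstraint] = field(default_factory=list)

        def admissible(self, action: str) -> bool:
            return all(law(action) for law in self.hard_laws)

        def ethical_cost(self, action: str) -> float:
            # Prioritised soft constraints: weight each penalty by its priority
            # so higher-priority constraints dominate lower-priority ones.
            return sum(c.priority * c.penalty(action) for c in self.soft_constraints)


    def fuse_and_choose(actions: List[str],
                        agent_preferences: List[Dict[str, float]],
                        ethics: EthicalModel,
                        ethics_weight: float = 2.0) -> str:
        # "Fusion" instead of plain preference aggregation: actions violating a
        # hard law are discarded, and each remaining action's average preference
        # score is reduced by its weighted ethical cost.
        admissible = [a for a in actions if ethics.admissible(a)]
        if not admissible:
            raise ValueError("no action satisfies the hard ethical laws")

        def fused_score(action: str) -> float:
            avg_pref = sum(p.get(action, 0.0) for p in agent_preferences) / len(agent_preferences)
            return avg_pref - ethics_weight * ethics.ethical_cost(action)

        return max(admissible, key=fused_score)


    # Toy driving example (purely illustrative): three manoeuvres, two agents,
    # one hard law and one prioritised soft constraint.
    actions = ["brake", "swerve", "accelerate"]
    ethics = EthicalModel(
        hard_laws=[lambda a: a != "accelerate"],     # e.g. "never endanger pedestrians"
        soft_constraints=[SoftConstraint("avoid risky manoeuvre", priority=3,
                                         penalty=lambda a: 0.4 if a == "swerve" else 0.0)],
    )
    prefs = [{"brake": 0.6, "swerve": 0.9}, {"brake": 0.7, "swerve": 0.8}]
    print(fuse_and_choose(actions, prefs, ethics))   # -> brake

In the abstract's terms, the fusion step is what keeps agents' preferences consistent with the system's safety constraints and ethical principles; a full treatment would also bound the risk of violating the hard laws under uncertainty, which this toy sketch does not attempt.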


Publications

  1. Greene, J. D. Our driverless dilemma. Science, 352(6293), 1514-1515. 2016
    • This article highlights (a) the poor incentives for transparency faced by AV manufacturers, (b) shifts in thinking that may occur as AVs go from sets of uncoordinated individual vehicles to coordinated AV systems, and (c) the shortcomings in contemporary ethical theory and commonsense moral intuition that must be overcome to imbue AVs and other autonomous systems with satisfactory ethical guidance.
    • It has been cited by several news outlets including the New York Times, the Washington Post, the Los Angeles Times, and the Daily Mail.
  2. Greene, J., et al. Embedding Ethical Principles in Collective Decision Support Systems. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. March 2016
    • This article was honored with an AAAI Blue Sky Award, indicating that it was regarded as a highly innovative and promising endeavor by one of the most prestigious international conferences in AI.


Workshops

  1. Colloquium Series on Robust and Beneficial AI (CSRBAI): May 27-June 17. MIRI, Berkeley, CA.
    • Rossi participated in this 22-day colloquium series (https://intelligence.org/colloquium-series/), co-hosted with the Future of Humanity Institute, which included four additional workshops.
    • Specific Workshop: “Transparency.” May 28-29.
      • In many cases, it can be prohibitively difficult for humans to understand AI systems’ internal states and reasoning, which makes it harder to anticipate such systems’ behavior and to correct errors. On the other hand, there have been striking advances in communicating the internals of some machine learning systems and in formally verifying certain features of algorithms. Participants would like to see how far they can push the transparency of AI systems while maintaining their capabilities.


Course Materials


Course Names:

  1. “Ethics for Artificial Intelligence” – Professor Kristen Brent Venable, IHMC. Spring 2016.
    • This was an ad hoc independent study course aimed at carrying out an in-depth state-of-the-art review of models for ethical issues and ethical values in AI.

This content was first published at futureoflife.org on September 30, 2016.

