
From the WP: How do you teach a machine to be moral?

Published:
November 18, 2015
Author:
Ariel Conn

In case you missed it…

Francesca Rossi, a member of the FLI scientific advisory board and one of the 37 recipients of grants from the AI safety research program, recently wrote an article for the Washington Post in which she describes the challenges of building an artificial intelligence with the same ethics and morals as people. In the article, she highlights her work with a team that includes not only AI researchers but also philosophers and psychologists, who are working together to teach AI to be both trustworthy and trusted by the people it will work with.

Learn more about Rossi’s work here.

This content was first published at futureoflife.org on November 18, 2015.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


Related content


If you enjoyed this content, you might also be interested in:

The Pause Letter: One year later

It has been one year since our 'Pause AI' open letter sparked a global debate on whether we should temporarily halt giant AI experiments.
March 22, 2024

Catastrophic AI Scenarios

Concrete examples of how AI could go wrong
February 1, 2024

Gradual AI Disempowerment

Could an AI takeover happen gradually?
February 1, 2024
