
Who’s In Control?

Published:
December 27, 2015
Author:
Ariel Conn

The Washington Post just asked one of the most important questions in the field of artificial intelligence: “Are we fully in control of our technology?”

There are plenty of other questions about artificial intelligence that are currently attracting media attention, such as: Is superintelligence imminent and will it kill us all? As necessary as it is to consider those questions now, there are others that are equally relevant and timely, but often overlooked by the press:

How much destruction could be caused today — or within the next few years — by something as simple as an error in the algorithm?

Is the development of autonomous weapons worth the risk of an AI arms race and all the other risks it creates?

And…

How much better could life get if we design the right artificial intelligence?

Joel Achenbach, the author of the Washington Post article, considers these questions and more as he writes about his interviews with Nick Bostrom, Stuart Russell, Marvin Minsky, our very own Max Tegmark, and many other leading AI researchers. Achenbach provides a balanced look at artificial intelligence, discussing Bostrom's hopes and concerns, the current state of AI research, what the future might hold, and FLI's many accomplishments this year.

Read the full story here.


This content was first published at futureoflife.org on December 27, 2015.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.

Related content

If you enjoyed this content, you might also be interested in:

The Pause Letter: One year later

It has been one year since our 'Pause AI' open letter sparked a global debate on whether we should temporarily halt giant AI experiments.
March 22, 2024

Catastrophic AI Scenarios

Concrete examples of how AI could go wrong
February 1, 2024

Gradual AI Disempowerment

Could an AI takeover happen gradually?
February 1, 2024
