
Assorted Sunday Links #3

Published: April 13, 2015
Author: a guest blogger


1. In the latest issue of Joint Force Quarterly, Randy Eshelman and Douglas Derrick call for the U.S. Department of Defense to conduct research on how “to temper goal-driven, autonomous agents with ethics.” They discuss AGI and superintelligence explicitly, citing Nick Bostrom, Eliezer Yudkowsky, and others. Eshelman is Deputy of the International Affairs and Policy Branch at U.S. Strategic Command, and Derrick is an Assistant Professor at the University of Nebraska at Omaha.

2. Seth Baum’s article ‘Winter-safe Deterrence: The Risk of Nuclear Winter and Its Challenge to Deterrence’ appears in the April issue of Contemporary Security Policy. “This paper develops the concept of winter-safe deterrence, defined as military force capable of meeting the deterrence goals of today’s nuclear weapon states without risking catastrophic nuclear winter.”

3. James Barrat, author of Our Final Invention, posts a new piece on AI risk in the Huffington Post.

4. Robert de Neufville of the Global Catastrophic Risk Institute summarizes March’s developments in the world of catastrophic risks.

5. Take part in the Huffington Post’s vote on whether we should fear AI, where you can side with Musk and Hawking, Neil deGrasse Tyson, or one of FLI’s very own founders, Max Tegmark!

