
Assorted Sunday Links #3

Published:
April 13, 2015
Author:
a guest blogger


1. In the latest issue of Joint Force Quarterly, Randy Eshelman and Douglas Derrick call for the U.S. Department of Defense to conduct research on how “to temper goal-driven, autonomous agents with ethics.” They discuss AGI and superintelligence explicitly, citing Nick Bostrom, Eliezer Yudkowsky, and others. Eshelman is Deputy of the International Affairs and Policy Branch at U.S. Strategic Command, and Derrick is an Assistant Professor at the University of Nebraska at Omaha.

2. Seth Baum’s article ‘Winter-safe Deterrence: The Risk of Nuclear Winter and Its Challenge to Deterrence’ appears in the April issue of Contemporary Security Policy. “[T]his paper develops the concept of winter-safe deterrence, defined as military force capable of meeting the deterrence goals of today’s nuclear weapon states without risking catastrophic nuclear winter.”

3. James Barrat, author of Our Final Invention, posts a new piece on AI risk in the Huffington Post.

4. Robert de Neufville of the Global Catastrophic Risk Institute summarizes March’s developments in the world of catastrophic risks.

5. Take part in the Huffington Post’s vote on whether we should fear AI, where you can side with Musk and Hawking, Neil deGrasse Tyson, or one of FLI’s very own founders, Max Tegmark!


