Assorted Sunday Links #3
1. In the latest issue of Joint Force Quarterly, Randy Eshelman and Douglas Derrick call for the U.S. Department of Defense to conduct research on how “to temper goal-driven, autonomous agents with ethics.” They discuss AGI and superintelligence explicitly, citing Nick Bostrom, Eliezer Yudkowsky, and others. Eshelman is Deputy of the International Affairs and Policy Branch at U.S. Strategic Command, and Derrick is an Assistant Professor at the University of Nebraska at Omaha.
2. Seth Baum’s article ‘Winter-safe Deterrence: The Risk of Nuclear Winter and Its Challenge to Deterrence’ appears in the April issue of Contemporary Security Policy. “This paper develops the concept of winter-safe deterrence, defined as military force capable of meeting the deterrence goals of today’s nuclear weapon states without risking catastrophic nuclear winter.”
3. James Barrat, author of Our Final Invention, posts a new piece on AI risk in the Huffington Post.
4. Robert de Neufville of the Global Catastrophic Risk Institute summarizes March’s developments in the world of catastrophic risks.
5. Take part in the vote on whether we should fear AI on the Huffington Post website, where you can side with Musk and Hawking, Neil deGrasse Tyson, or one of FLI’s very own founders, Max Tegmark!
About the Future of Life Institute
The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.