
X-risk News of the Week: Nuclear Winter and a Government Risk Report

Published: February 13, 2016
Author: Ariel Conn


X-risk = Existential Risk. The risk that we could accidentally (hopefully accidentally) wipe out all of humanity.
X-hope = Existential Hope. The hope that we will all flourish and live happily ever after.

The big news this week landed squarely in the x-risk end of the spectrum.

First up was a New York Times op-ed, “Let’s End the Peril of a Nuclear Winter,” written by climate scientists Drs. Alan Robock and Owen Brian Toon. In it, they describe the horrors of nuclear winter — the frigid temperatures, the starvation, and the mass deaths — that could terrorize the entire world if even a small nuclear war broke out in one tiny corner of the globe.

Fear of nuclear winter was one of the driving forces that finally led the leaders of Russia and the US to agree to reduce their nuclear arsenals, and concerns about nuclear war subsided once the Cold War ended. Recently, however, leaders of both countries have sought to strengthen their arsenals, and the threat of nuclear winter is growing again. While much of the world struggles to combat climate change, the greater risk could actually be plummeting temperatures if a nuclear war were to break out.

In an email to FLI, Robock said:

“Nuclear weapons are the greatest threat that humans pose to humanity. The current nuclear arsenal can still produce nuclear winter, with temperatures in the summer plummeting below freezing and the entire world facing famine. Even a ‘small’ nuclear war, using less than 1% of the current arsenal, can produce starvation of a billion people. We have to solve this problem so that we have the luxury of addressing global warming.”


Also this week, Director of National Intelligence James Clapper presented the Worldwide Threat Assessment of the US Intelligence Community for 2016 to the Senate Armed Services Committee. The document lays out 33 pages of potential problems the government is most concerned about in the coming year, a few of which fall into the category of existential risks:

  1. The Internet of Things (IoT). Though this doesn’t technically pose an existential risk, it does have the potential to impact quality of life and some of the freedoms we typically take for granted. The report states: “In the future, intelligence services might use the IoT for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials.”
  2. Artificial Intelligence. Clapper’s concerns are broad in this field. He argues: “Implications of broader AI deployment include increased vulnerability to cyberattack, difficulty in ascertaining attribution, facilitation of advances in foreign weapon and intelligence systems, the risk of accidents and related liability issues, and unemployment. […] The increased reliance on AI for autonomous decision making is creating new vulnerabilities to cyberattacks and influence operations. […] AI systems are susceptible to a range of disruptive and deceptive tactics that might be difficult to anticipate or quickly understand. Efforts to mislead or compromise automated systems might create or enable further opportunities to disrupt or damage critical infrastructure or national security networks.”
  3. Nuclear. Under the category of Weapons of Mass Destruction (WMD), Clapper dedicated the most space to concerns about North Korea’s nuclear weapons. However, he also highlighted concerns about China’s work to modernize its nuclear weapons, and he argues that Russia violated the INF Treaty when it developed a ground-launched cruise missile.
  4. Genome Editing. Interestingly, gene editing was also listed in the WMD category. As Clapper explains, “Research in genome editing conducted by countries with different regulatory or ethical standards than those of Western countries probably increases the risk of the creation of potentially harmful biological agents or products.” Though he doesn’t explicitly refer to the CRISPR-Cas9 system, he does worry that the low cost and ease of use of new technologies will enable “deliberate or unintentional misuse” that could “lead to far reaching economic and national security implications.”

The report, though long, is an easy read, and it’s always worthwhile to understand what issues are motivating the government’s actions.


Given our new series by Matt Scherer on the legal complications of anticipated AI and autonomous weapons developments, the big news should have been this week’s many headlines claiming the federal government now considers AI drivers to be real drivers. Scherer, however, argues this is bad journalism. He provides his own interpretation of the NHTSA letter in his recent blog post, “No, the NHTSA did not declare that AIs are legal drivers.”


While the headlines of the last few days may have veered toward x-risk, this week also marks the start of the 30th annual Association for the Advancement of Artificial Intelligence (AAAI) Conference. For almost a week, AI researchers will convene in Phoenix to discuss their developments and breakthroughs, and on Saturday, FLI grantees will present some of their research at the AI Ethics and Society Workshop. This is expected to be an event full of hope and excitement about the future!


This content was first published at futureoflife.org on February 13, 2016.

