Assorted Sunday Links #1

Published:
February 21, 2015
Author:
a guest blogger

Contents

1. Robert de Neufville of the Global Catastrophic Risk Institute summarizes news from January in the world of Global Catastrophic Risks.

2. The Union of Concerned Scientists posts their nuclear threat-themed Cartoon of the Month.

3. The World Economic Forum releases their comprehensive Global Risks 2015 report.

4. Physics Today reports that ‘The US could save $70 billion over the next 10 years by taking “common sense” measures to trim its nuclear forces, yet still deploy the maximum number of warheads permitted under the New START Treaty, according to a new report by the Arms Control Association. Those steps include cutting the number of proposed new ballistic missile submarines to eight from 12, delaying plans to build new nuclear-capable bombers, scaling back the upgrade of a nuclear bomb, and forgoing development of a new intercontinental ballistic missile system.’

This content was first published at futureoflife.org on February 21, 2015.

About the Future of Life Institute

The Future of Life Institute (FLI) is the world’s oldest and largest AI think tank, with a team of 35+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefiting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
