Why 2016 Was Actually a Year of Hope

Just about everyone found something to dislike about 2016, from wars to politics to celebrity deaths. But hidden within this year’s news feeds were some genuinely exciting stories. And some of them can even give us hope for the future.

Artificial Intelligence

Though concerns about the future of AI still loom, 2016 was a great reminder that, when harnessed for good, AI can help humanity thrive.

AI and Health

Some of the most promising, and potentially most immediate, breakthroughs and announcements were related to health. Google’s DeepMind announced a new division that would focus on helping doctors improve patient care. Harvard Business Review considered what an AI-enabled hospital might look like, one that would improve the hospital experience for the patient, the doctor, and even the patient’s visitors and loved ones. A breakthrough from MIT researchers could see AI used to design new drug compounds more quickly and effectively, with applications across a range of health needs.

More specifically, Microsoft wants to cure cancer, and the company has been working with research labs and doctors around the country to use AI to improve cancer research and treatment. But Microsoft isn’t the only company that hopes to cure cancer. DeepMind Health also partnered with University College London’s hospitals to apply machine learning to diagnose and treat head and neck cancers.

AI and Society

Other researchers are turning to AI to help solve social issues. While AI has what is known as the “white guy problem” and examples of bias cropped up in many news articles, Fei-Fei Li has been working with girls in STEM at Stanford to bridge the gender gap. Stanford researchers also published research suggesting that artificial intelligence could help us use satellite data to combat global poverty.

It was also a big year for research on how to keep artificial intelligence safe as it continues to develop. Google and the Future of Humanity Institute made big headlines with their work to design a “kill switch” for AI. Google Brain also published a research agenda on various problems AI researchers should be studying now to help ensure safe AI for the future.

Even the White House got involved in AI this year, hosting four symposia on AI and releasing reports in October and December about the potential impact of AI and the necessary areas of research. The White House reports are especially focused on the possible impact of automation on the economy, but they also look at how the government can contribute to AI safety, especially in the near future.

AI in Action

And of course there was AlphaGo. In January, Google’s DeepMind published a paper announcing that it had created a program, AlphaGo, that could beat one of Europe’s top Go players. Then, in March, in front of a live audience, AlphaGo beat the reigning world champion of Go in four out of five games. These results took the AI community by surprise and suggest that artificial intelligence may be progressing more rapidly than many in the field had realized.

And AI went beyond research labs this year to be applied practically and beneficially in the real world. Perhaps most hopeful was some of the news about the ways AI has been used to address issues connected with pollution and climate change. For example, IBM has had increasing success with a program that can forecast pollution in China, giving residents advance warning of days with especially bad air. Meanwhile, Google was able to reduce its power usage by using DeepMind’s AI to manage systems such as its data center cooling.

And speaking of addressing climate change…

Climate Change

With recent news from climate scientists indicating that climate change may be coming on faster and stronger than previously anticipated and with limited political action on the issue, 2016 may not have made climate activists happy. But even here, there was some hopeful news.

Among the biggest news was the ratification of the Paris Climate Agreement. But more generally, countries, communities, and businesses came together on various issues of global warming, and Voice of America offers five examples of how this was a year of incredible, global progress.

But there was also news of technological advancements that could soon help us address climate issues more effectively. Scientists at Oak Ridge National Laboratory have discovered a way to convert CO2 into ethanol. A researcher from UC Berkeley has developed a method for artificial photosynthesis, which could help us more effectively harness the energy of the sun. And a multi-disciplinary team has genetically engineered bacteria that could be used to help combat global warming.

Biotechnology

Biotechnology, with its fears of designer babies and manmade pandemics, is easily one of the most feared technologies. But rather than causing harm, the latest biotech advances could help save millions of people.

CRISPR

In the course of about two years, CRISPR-Cas9 went from a new development to what could become one of the world’s greatest advances in biology. Results of studies early in the year were promising, but as the year progressed, the news just got better. CRISPR was used to successfully remove HIV from human immune cells. A team in China used CRISPR on a patient for the first time in an attempt to treat lung cancer (treatments are still ongoing), and researchers in the US have also received approval to test a CRISPR cancer treatment in patients. And CRISPR was also used to partially restore sight to blind animals.

Gene Drive

Where CRISPR could have the most dramatic, life-saving effect is in gene drives. By using CRISPR to modify the genes of an invasive species, we could potentially eliminate the unwelcome plant or animal, reviving the local ecology and saving native species that may be on the brink of extinction. But perhaps most impressive is the hope that gene drive technology could be used to end mosquito- and tick-borne diseases, such as malaria, dengue, and Lyme disease. Eliminating these diseases could easily save over a million lives every year.

Other Biotech News

The year saw other biotech advances as well. Researchers at MIT addressed a major problem in synthetic biology in which engineered genetic circuits interfere with each other. Another team at MIT engineered an antimicrobial peptide that can eliminate many types of bacteria, including some of the antibiotic-resistant “superbugs.” And various groups are also using CRISPR to create new ways to fight antibiotic-resistant bacteria.

Nuclear Weapons

If ever there was a topic that does little to inspire hope, it’s nuclear weapons. Yet even here we saw some positive signs this year. The Cambridge City Council voted to divest its $1 billion pension fund from any companies connected with nuclear weapons, which earned it an official commendation from the U.S. Conference of Mayors. In fact, divestment may prove a useful tool for the general public to express displeasure with nuclear policy, and growing awareness of the nuclear weapons situation could help stigmatize the new nuclear arms race.

In February, Londoners held the largest anti-nuclear rally Britain had seen in decades, and the following month MinutePhysics posted a video about nuclear weapons that’s been seen by nearly 1.3 million people. In May, scientific and religious leaders came together to call for steps to reduce nuclear risks. And all of that pales in comparison to the attention the U.S. elections brought to the risks of nuclear weapons.

As awareness of nuclear risks grows, so do our chances of instigating the change necessary to reduce those risks.

The United Nations Takes on Weapons

But if awareness alone isn’t enough, then recent actions by the United Nations may instead be a source of hope. As October came to a close, the United Nations voted to begin negotiations on a treaty that would ban nuclear weapons. While this might not have an immediate impact on nuclear weapons arsenals, the stigmatization caused by such a ban could increase pressure on countries and companies driving the new nuclear arms race.

The U.N. also announced recently that it would officially begin looking into the possibility of a ban on lethal autonomous weapons, a cause that’s been championed by Elon Musk, Steve Wozniak, Stephen Hawking and thousands of AI researchers and roboticists in an open letter.

Looking Ahead

And why limit our hope and ambition to merely one planet? This year, a group of influential scientists led by Yuri Milner announced Breakthrough Starshot, a plan to send a fleet of tiny light-propelled probes to Alpha Centauri, our nearest star system. Elon Musk later announced his plans to colonize Mars. And an MIT scientist wants to make all of these trips possible for humans by using CRISPR to reengineer our own genes to keep us safe in space.

Yet for all of these exciting events and breakthroughs, perhaps what’s most inspiring and hopeful is that this represents only a tiny sampling of all of the amazing stories that made the news this year. If trends like these keep up, there’s plenty to look forward to in 2017.

Podcast: FLI 2016 – A Year In Review

For FLI, 2016 was a great year, full of our own successes but also of great achievements from so many of the organizations we work with. Max, Meia, Anthony, Victoria, Richard, Lucas, David, and Ariel discuss what they were most excited to see in 2016 and what they’re looking forward to in 2017.

AGUIRRE: I’m Anthony Aguirre. I am a professor of physics at UC Santa Cruz, and I’m one of the founders of the Future of Life Institute.

STANLEY: I’m David Stanley, and I’m currently working with FLI as a Project Coordinator/Volunteer Coordinator.

PERRY: My name is Lucas Perry, and I’m a Project Coordinator with the Future of Life Institute.

TEGMARK: I’m Max Tegmark, and I have the fortune to be the President of the Future of Life Institute.

CHITA-TEGMARK: I’m Meia Chita-Tegmark, and I am a co-founder of the Future of Life Institute.

MALLAH: Hi, I’m Richard Mallah. I’m the Director of AI Projects at the Future of Life Institute.

KRAKOVNA: Hi everyone, I am Victoria Krakovna, and I am one of the co-founders of FLI. I’ve recently taken up a position at Google DeepMind working on AI safety.

CONN: And I’m Ariel Conn, the Director of Media and Communications for FLI. 2016 has certainly had its ups and downs, and so at FLI, we count ourselves especially lucky to have had such a successful year. We’ve continued to progress with the field of AI safety research, we’ve made incredible headway with our nuclear weapons efforts, and we’ve worked closely with many amazing groups and individuals. On that last note, much of what we’ve been most excited about throughout 2016 is the great work these other groups in our fields have also accomplished.

Over the last couple of weeks, I’ve sat down with our founders and core team to recap their highlights from 2016 and also to learn what they’re all most looking forward to as we move into 2017.

To start things off, Max gave a summary of the work that FLI does and why 2016 was such a success.

TEGMARK: What I was most excited by in 2016 was the overall sense that people are taking seriously this idea – that we really need to win this race between the growing power of our technology and the wisdom with which we manage it. Every single way in which 2016 is better than the Stone Age is because of technology, and I’m optimistic that we can create a fantastic future with tech as long as we win this race. But in the past, the way we’ve kept one step ahead is always by learning from mistakes. We invented fire, messed up a bunch of times, and then invented the fire extinguisher. We at the Future of Life Institute feel that that strategy of learning from mistakes is a terrible idea for more powerful tech, like nuclear weapons, artificial intelligence, and things that can really alter the climate of our globe.

Now, in 2016 we saw multiple examples of people trying to plan ahead and to avoid problems with technology instead of just stumbling into them. In April, we had world leaders getting together and signing the Paris Climate Accords. In November, the United Nations General Assembly voted to start negotiations about nuclear weapons next year. The question is whether they should actually ultimately be phased out; whether the nations that don’t have nukes should work towards stigmatizing building more of them – with the idea that 14,000 is way more than anyone needs for deterrence. And – just the other day – the United Nations also decided to start negotiations on the possibility of banning lethal autonomous weapons, which is another arms race that could be very, very destabilizing. And if we keep this positive momentum, I think there’s really good hope that all of these technologies will end up having mainly beneficial uses.

Today, we think of our biologist friends as mainly responsible for the fact that we live longer and healthier lives, and not as those guys who make the bioweapons. We think of chemists as providing us with better materials and new ways of making medicines, not as the people who built chemical weapons and are all responsible for global warming. We think of AI scientists as – I hope, when we look back on them in the future – as people who helped make the world better, rather than the ones who just brought on the AI arms race. And it’s very encouraging to me that as much as people in general – but also the scientists in all these fields – are really stepping up and saying, “Hey, we’re not just going to invent this technology, and then let it be misused. We’re going to take responsibility for making sure that the technology is used beneficially.”

CONN: And beneficial AI is what FLI is primarily known for. So what did the other members have to say about AI safety in 2016? We’ll hear from Anthony first.

AGUIRRE: I would say that what has been great to see over the last year or so is the AI safety and beneficiality research field really growing into an actual research field. When we ran our first conference a couple of years ago, they were these tiny communities who had been thinking about the impact of artificial intelligence in the future and in the long-term future. They weren’t really talking to each other; they weren’t really doing much actual research – there wasn’t funding for it. So, to see in the last few years that transform into something where it takes a massive effort to keep track of all the stuff that’s being done in this space now. All the papers that are coming out, the research groups – you sort of used to be able to just find them all, easily identified. Now, there’s this huge worldwide effort and long lists, and it’s difficult to keep track of. And that’s an awesome problem to have.

As someone who’s not in the field, but sort of watching the dynamics of the research community, that’s what’s been so great to see. A research community that wasn’t there before really has started, and I think in the past year we’re seeing the actual results of that research start to come in. You know, it’s still early days. But it’s starting to come in, and we’re starting to see papers that have been basically created using these research talents and the funding that’s come through the Future of Life Institute. It’s been super gratifying. And seeing that it’s a fairly large amount of money – but fairly small compared to the total amount of research funding in artificial intelligence or other fields – but because it was so funding-starved and talent-starved before, it’s just made an enormous impact. And that’s been nice to see.

CONN: Not surprisingly, Richard was equally excited to see AI safety becoming a field of ever-increasing interest for many AI groups.

MALLAH: I’m most excited by the continued mainstreaming of AI safety research. There are more and more publications coming out by places like DeepMind and Google Brain that have really lent additional credibility to the space, as well as a continued uptake of more and more professors, and postdocs, and grad students from a wide variety of universities entering this space. And, of course, OpenAI has come out with a number of useful papers and resources.

I’m also excited that governments have really realized that this is an important issue. So, while the White House reports have come out recently focusing more on near-term AI safety research, they did note that longer-term concerns like superintelligence are not necessarily unreasonable for later this century. And that they do support – right now – funding safety work that can scale toward the future, which is really exciting. We really need more funding coming into the community for that type of research. Likewise, other governments – like the U.K. and Japan, Germany – have all made very positive statements about AI safety in one form or another. And other governments around the world.

CONN: In addition to seeing so many other groups get involved in AI safety, Victoria was also pleased to see FLI taking part in so many large AI conferences.

KRAKOVNA: I think I’ve been pretty excited to see us involved in these AI safety workshops at major conferences. So on the one hand, our conference in Puerto Rico that we organized ourselves was very influential and helped to kick-start making AI safety more mainstream in the AI community. On the other hand, it felt really good in 2016 to complement that with having events that are actually part of major conferences that were co-organized by a lot of mainstream AI researchers. I think that really was an integral part of the mainstreaming of the field. For example, I was really excited about the Reliable Machine Learning workshop at ICML that we helped to make happen. I think that was something that was quite positively received at the conference, and there was a lot of good AI safety material there.

CONN: And of course, Victoria was also pretty excited about some of the papers that were published this year connected to AI safety, many of which received at least partial funding from FLI.

KRAKOVNA: There were several excellent papers in AI safety this year, addressing core problems in safety for machine learning systems. For example, there was a paper from Stuart Russell’s lab published at NIPS, on cooperative IRL. This is about teaching AI what humans want – how to train an RL algorithm to learn the right reward function that reflects what humans want it to do. DeepMind and FHI published a paper at UAI on safely interruptible agents, that formalizes what it means for an RL agent not to have incentives to avoid shutdown. MIRI made an impressive breakthrough with their paper on logical inductors. I’m super excited about all these great papers coming out, and that our grant program contributed to these results.

CONN: For Meia, the excitement about AI safety went beyond just the technical aspects of artificial intelligence.

CHITA-TEGMARK: I am very excited about the dialogue that FLI has catalyzed – and also engaged in – throughout 2016, and especially regarding the impact of technology on society. My training is in psychology; I’m a psychologist. So I’m very interested in the human aspect of technology development. I’m very excited about questions like, how are new technologies changing us? How ready are we to embrace new technologies? Or how our psychological biases may be clouding our judgement about what we’re creating and the technologies that we’re putting out there. Are these technologies beneficial for our psychological well-being, or are they not?

So it has been extremely interesting for me to see that these questions are being asked more and more, especially by artificial intelligence developers and also researchers. I think it’s so exciting to be creating technologies that really force us to grapple with some of the most fundamental aspects, I would say, of our own psychological makeup. For example, our ethical values, our sense of purpose, our well-being, maybe our biases and shortsightedness and shortcomings as biological human beings. So I’m definitely very excited about how the conversation regarding technology – and especially artificial intelligence – has evolved over the last year. I like the way it has expanded to capture this human element, which I find so important. But I’m also so happy to feel that FLI has been an important contributor to this conversation.

CONN: Meanwhile, as Max described earlier, FLI has also gotten much more involved in decreasing the risk of nuclear weapons, and Lucas helped spearhead one of our greatest accomplishments of the year.

PERRY: One of the things that I was most excited about was our success with our divestment campaign. After a few months, we had great success in our own local Boston area with helping the City of Cambridge to divest its $1 billion portfolio from nuclear weapon producing companies. And we see this as a really big and important victory within our campaign to help institutions, persons, and universities to divest from nuclear weapons producing companies.

CONN: And in order to truly be effective we need to reach an international audience, which is something Dave has been happy to see grow this year.

STANLEY: I’m mainly excited about – at least, in my work – the increasing involvement and response we’ve had from the international community in terms of reaching out about these issues. I think it’s pretty important that we engage the international community more, and not just academics. Because these issues – things like nuclear weapons and the increasing capabilities of artificial intelligence – really will affect everybody. And they seem to be really underrepresented in mainstream media coverage as well.

So far, we’ve had pretty good responses just in terms of volunteers from many different countries around the world being interested in getting involved to help raise awareness in their respective communities, either through helping develop apps for us, or translation, or promoting just through social media these ideas in their little communities.

CONN: Many FLI members also participated in both local and global events and projects, like the following we’re about to hear from Victoria, Richard, Lucas, and Meia.

KRAKOVNA: The EAGX Oxford Conference was a fairly large conference. It was very well organized, and we had a panel there with Demis Hassabis, Nate Soares from MIRI, Murray Shanahan from Imperial, Toby Ord from FHI, and myself. I feel like overall, that conference did a good job of, for example, connecting the local EA community with the people at DeepMind, who are really thinking about AI safety concerns like Demis and also Sean Legassick, who also gave a talk about the ethics and impacts side of things. So I feel like that conference overall did a good job of connecting people who are thinking about these sorts of issues, which I think is always a great thing.  

MALLAH: I was involved in this endeavor with IEEE regarding autonomy and ethics in autonomous systems, sort of representing FLI’s positions on things like autonomous weapons and long-term AI safety. One thing that came out this year – just a few days ago, actually, due to this work from IEEE – is that the UN actually took the report pretty seriously, and it may have influenced their decision to take up the issue of autonomous weapons formally next year. That’s kind of heartening.

PERRY: A few different things that I really enjoyed doing were giving a few different talks at Duke and Boston College, and a local effective altruism conference. I’m also really excited about all the progress we’re making on our nuclear divestment application. So this is an application that will allow anyone to search their mutual fund and see whether or not their mutual funds have direct or indirect holdings in nuclear weapons-producing companies.

CHITA-TEGMARK:  So, a wonderful moment for me was at the conference organized by Yann LeCun in New York at NYU, when Daniel Kahneman, one of my thinker-heroes, asked a very important question that really left the whole audience in silence. He asked, “Does this make you happy? Would AI make you happy? Would the development of a human-level artificial intelligence make you happy?” I think that was one of the defining moments, and I was very happy to participate in this conference.

Later on, David Chalmers, another one of my thinker-heroes – this time, not the psychologist but the philosopher – organized another conference, again at NYU, trying to bring philosophers into this very important conversation about the development of artificial intelligence. And again, I felt there too, that FLI was able to contribute and bring in this perspective of the social sciences on this issue.

CONN: Now, with 2016 coming to an end, it’s time to turn our sights to 2017, and FLI is excited for this new year to be even more productive and beneficial.

TEGMARK: We at the Future of Life Institute are planning to focus primarily on artificial intelligence, and on reducing the risk of accidental nuclear war in various ways. We’re kicking off by having an international conference on artificial intelligence, and then we want to continue throughout the year providing really high-quality and easily accessible information on all these key topics, to help inform on what happens with climate change, with nuclear weapons, with lethal autonomous weapons, and so on.

And looking ahead here, I think it’s important right now – especially since a lot of people are very stressed out about the political situation in the world, about terrorism, and so on – to not ignore the positive trends and the glimmers of hope we can see as well.

CONN: As optimistic as FLI members are about 2017, we’re all also especially hopeful and curious to see what will happen with continued AI safety research.

AGUIRRE: I would say I’m looking forward to seeing in the next year more of the research that comes out, and really sort of delving into it myself, and understanding how the field of artificial intelligence and artificial intelligence safety is developing. And I’m very interested in this from the forecast and prediction standpoint.

I’m interested in trying to draw some of the AI community into really understanding how artificial intelligence is unfolding – in the short term and the medium term – as a way to understand, how long do we have? Is it, you know, if it’s really infinity, then let’s not worry about that so much, and spend a little bit more on nuclear weapons and global warming and biotech, because those are definitely happening. If human-level AI were 8 years away… honestly, I think we should be freaking out right now. And most people don’t believe that, I think most people are in the middle it seems, of thirty years or fifty years or something, which feels kind of comfortable. Although it’s not that long, really, on the big scheme of things. But I think it’s quite important to know now, which is it? How fast are these things, how long do we really have to think about all of the issues that FLI has been thinking about in AI? How long do we have before most jobs in industry and manufacturing are replaceable by a robot being slotted in for a human? That may be 5 years, it may be fifteen… It’s probably not fifty years at all. And having a good forecast on those good short-term questions I think also tells us what sort of things we have to be thinking about now.

And I’m interested in seeing how this massive AI safety community that’s started develops. It’s amazing to see centers kind of popping up like mushrooms after a rain all over and thinking about artificial intelligence safety. This partnership on AI between Google and Facebook and a number of other large companies getting started. So to see how those different individual centers will develop and how they interact with each other. Is there an overall consensus on where things should go? Or is it a bunch of different organizations doing their own thing? Where will governments come in on all of this? I think it will be interesting times. So I look forward to seeing what happens, and I will reserve judgement in terms of my optimism.

KRAKOVNA: I’m really looking forward to AI safety becoming even more mainstream, and even more of the really good researchers in AI giving it serious thought. Something that happened in the past year that I was really excited about, that I think is also pointing in this direction, is the research agenda that came out of Google Brain called “Concrete Problems in AI Safety.” And I think I’m looking forward to more things like that happening, where AI safety becomes sufficiently mainstream that people who are working in AI just feel inspired to do things like that and just think from their own perspectives: what are the important problems to solve in AI safety? And work on them.

I’m a believer in the portfolio approach with regards to AI safety research, where I think we need a lot of different research teams approaching the problems from different angles and making different assumptions, and hopefully some of them will make the right assumption. I think we are really moving in the direction in terms of more people working on these problems, and coming up with different ideas. And I look forward to seeing more of that in 2017. I think FLI can also help continue to make this happen.

MALLAH: So, we’re in the process of fostering additional collaboration among people in the AI safety space. And we will have more announcements about this early next year. We’re also working on resources to help people better visualize and better understand the space of AI safety work, and the opportunities there and the work that has been done. Because it’s actually quite a lot.

I’m also pretty excited about fostering continued theoretical work and practical work in making AI more robust and beneficial. The work in value alignment, for instance, is not something we see supported in mainstream AI research. And this is something that is pretty crucial to the way that advanced AIs will need to function. It won’t be very explicit instructions to them; they’ll have to be making decisions based on what they think is right. And what is right? It’s something that… or even structuring the way to think about what is right requires some more research.

STANLEY: We’ve had pretty good success at FLI in the past few years helping to legitimize the field of AI safety. And I think it’s going to be important because AI is playing a large role in industry and there’s a lot of companies working on this, and not just in the US. So I think increasing international awareness about AI safety is going to be really important.

CHITA-TEGMARK: I believe that the AI community has raised some very important questions in 2016 regarding the impact of AI on society. I feel like 2017 should be the year to make progress on these questions, and actually research them and have some answers to them. For this, I think we need more social scientists – along with people from other disciplines – to join this effort of really systematically investigating what would be the optimal impact of AI on people. I hope that in 2017 we will have more research initiatives, that we will attempt to systematically study other burning questions regarding the impact of AI on society. Some examples are: how can we ensure the psychological well-being of people while AI creates lots of displacement in the job market, as many people predict? How do we optimize engagement with technology, and withdrawal from it also? Will some people be left behind, like the elderly or the economically disadvantaged? How will this affect them, and how will this affect society at large?

What about withdrawal from technology? What about satisfying our need for privacy? Will we be able to do that, or is the price of having more and more customized technologies and more and more personalization of the technologies we engage with… will that mean that we will have no privacy anymore, or that our expectations of privacy will be very seriously violated? I think these are some very important questions that I would love to get some answers to. And my wish, and also my resolution, for 2017 is to see more progress on these questions, and to hopefully also be part of this work and answering them.

PERRY: In 2017 I’m very interested in pursuing the landscape of different policy and principle recommendations from different groups regarding artificial intelligence. I’m also looking forward to expanding our nuclear divestment campaign by trying to introduce divestment to new universities, institutions, communities, and cities.

CONN: In fact, some experts believe nuclear weapons pose a greater threat now than at any time during our history.

TEGMARK: I personally feel that the greatest threat to the world in 2017 is one that the newspapers almost never write about. It’s not terrorist attacks, for example. It’s the small but horrible risk that the U.S. and Russia for some stupid reason get into an accidental nuclear war against each other. We have 14,000 nuclear weapons, and this war has almost happened many, many times. So, actually what’s quite remarkable and really gives a glimmer of hope is that – however people may feel about Putin and Trump – the fact is they are both signaling strongly that they are eager to get along better. And if that actually pans out and they manage to make some serious progress in nuclear arms reduction, that would make 2017 the best year for nuclear weapons we’ve had in a long, long time, reversing this trend of ever greater risks with ever more lethal weapons.

CONN: Some FLI members are also looking beyond nuclear weapons and artificial intelligence, as I learned when I asked Dave about other goals he hopes to accomplish with FLI this year.

STANLEY: Definitely having the volunteer team – particularly the international volunteers – continue to grow, and then scale things up. Right now, we have a fairly committed core of people who are helping out, and we think that they can start recruiting more people to help out in their little communities, and really making this stuff accessible. Not just to academics, but to everybody. And that’s also reflected in the types of people we have working for us as volunteers. They’re not just academics. We have programmers, linguists, people having just high school degrees all the way up to Ph.D.’s, so I think it’s pretty good that this varied group of people can get involved and contribute, and also reach out to other people they can relate to.

CONN: In addition to getting more people involved, Meia also pointed out that one of the best ways we can help ensure a positive future is to continue to offer people more informative content.

CHITA-TEGMARK: Another thing that I’m very excited about regarding our work here at the Future of Life Institute is this mission of empowering people with information. I think information is very powerful and can change the way people approach things: they can change their beliefs, their attitudes, and their behaviors as well. And by creating ways in which information can be readily distributed to the people, and with which they can engage very easily, I hope that we can create changes. For example, we’ve had a series of different apps regarding nuclear weapons that I think have contributed a lot to people’s knowledge and have brought this issue to the forefront of their thinking.

CONN: Yet as important as it is to highlight the existential risks we must address to keep humanity safe, perhaps it’s equally important to draw attention to the incredible hope we have for the future if we can solve these problems. Which is something both Richard and Lucas brought up for 2017.

MALLAH: I’m excited about trying to foster more positive visions of the future, so focusing on existential hope aspects of the future. Which are kind of the flip side of existential risks. So we’re looking at various ways of getting people to be creative about understanding some of the possibilities, and how to differentiate the paths between the risks and the benefits.

PERRY: Yeah, I’m also interested in creating and generating a lot more content that has to do with existential hope. Given the current global political climate, it’s all the more important to focus on how we can make the world better.

CONN: And on that note, I want to mention one of the most amazing things I discovered this past year. It had nothing to do with technology, and everything to do with people. Since starting at FLI, I’ve met countless individuals who are dedicating their lives to trying to make the world a better place. We may have a lot of problems to solve, but with so many groups focusing solely on solving them, I’m far more hopeful for the future. There are truly too many individuals that I’ve met this year to name them all, so instead, I’d like to provide a rather long list of groups and organizations I’ve had the pleasure to work with this year. A link to each group can be found at futureoflife.org/2016, and I encourage you to visit them all to learn more about the wonderful work they’re doing. In no particular order, they are:

Machine Intelligence Research Institute

Future of Humanity Institute

Global Catastrophic Risk Institute

Center for the Study of Existential Risk

Ploughshares Fund

Bulletin of the Atomic Scientists

Open Philanthropy Project

Union of Concerned Scientists

The William Perry Project

ReThink Media

Don’t Bank on the Bomb

Federation of American Scientists

Massachusetts Peace Action

IEEE (Institute of Electrical and Electronics Engineers)

Center for Human-Compatible Artificial Intelligence

Center for Effective Altruism

Center for Applied Rationality

Foresight Institute

Leverhulme Centre for the Future of Intelligence

Global Priorities Project

Association for the Advancement of Artificial Intelligence

International Joint Conference on Artificial Intelligence

Partnership on AI

The White House Office of Science and Technology Policy

The Future Society at Harvard Kennedy School

 

I couldn’t be more excited to see what 2017 holds in store for us, and all of us at FLI look forward to doing all we can to help create a safe and beneficial future for everyone. But to end on an even more optimistic note, I turn back to Max.

TEGMARK: Finally, I’d like – because I spend a lot of my time thinking about our universe – to remind everybody that we shouldn’t just be focused on the next election cycle. We have not decades, but billions of years of potentially awesome future for life, on Earth and far beyond. And it’s so important to not let ourselves get so distracted by our everyday little frustrations that we lose sight of these incredible opportunities that we all stand to gain from if we can get along, and focus, and collaborate, and use technology for good.

Nuclear Winter with Alan Robock and Brian Toon

The UN voted last week to begin negotiations on a global nuclear weapons ban, but for now, nuclear weapons still jeopardize the existence of almost all people on earth.

I recently sat down with Meteorologist Alan Robock from Rutgers University and physicist Brian Toon from the University of Colorado to discuss what is potentially the most devastating consequence of nuclear war: nuclear winter.

Toon and Robock have studied and modeled nuclear winter off and on for over 30 years, and they joined forces ten years ago to use newer climate models to look at the climate effects of a small nuclear war.

The following interview has been heavily edited, but you can listen to it in its entirety here or read the complete transcript here.

Ariel: How is it that you two started working together?

Toon: This was initiated by a reporter. At the time, Pakistan and India were having a conflict over Kashmir and threatening each other with nuclear weapons. A reporter wanted to know what effect this might have on the rest of the planet. I calculated the amount of smoke and found, “Wow that was a lot of smoke!”

Alan had a great volcano model, so at the American Geophysical Union meeting that year, I tried to convince him to work on this problem. Alan was pretty skeptical.

Robock: I don’t remember being skeptical. I remember being very interested. I said, “How much smoke would there be?” Brian told me 5,000,000 tons of smoke, and I said, “That sounds like a lot!”

We put it into a NASA climate model and found it would be the largest climate change in recorded human history. The basic physics is very simple. If you block out the Sun, it gets cold and dark at the Earth’s surface.

We hypothesized that if each country used half of their nuclear arsenal, that would be 50 weapons on each side. We assumed the simplest bomb, which is the size dropped on Hiroshima and Nagasaki — a 15 kiloton bomb.

The answer is the global average temperature would go down by about 1.5 degrees Celsius. In the middle of continents, temperature drops would be larger and last for a decade or more.

We took models that calculate agricultural productivity and calculated how wheat, corn, soybeans, and rice production would change. In the 5 years after this war, using less than 1% of the global arsenal on the other side of the world, global food production would go down by 20-40 percent for 5 years, and for the next 5 years, 10-20 percent.

Ariel: Could you address criticisms of whether or not the smoke would loft that high or spread globally?

Toon: The only people that have been critical are Alan and I. The Departments of Energy and Defense, which should be investigating this problem, have done absolutely nothing. No one has done studies of fire propagation in big cities — no fire department is going to go put out a nuclear fire.

As far as the rising smoke, we’ve had people investigate that and they all find the same things: it goes into the upper atmosphere and then self-lofts. But, these should be investigated by a range of scientists with a range of experiences.

Robock: What are the properties of the smoke? We assume it would be small, single, black particles. That needs to be investigated. What would happen to the particles as they sit in the stratosphere? Would they react with other particles? Would they degrade? Would they grow? There are additional questions and unknowns.

Toon: Alan made lists of the important issues. And we have gone to every agency that we can think of, and said, “Don’t you think someone should study this?” Basically, everyone we tried so far has said, “Well, that’s not my job.”

Ariel: Do you think there’s a chance, then, that as we acquire more information, even smaller nuclear wars could pose similar risks? Or is 100 nuclear weapons the minimum?

Robock: First, it’s hard to imagine how once a nuclear war starts, it could be limited. Communications are destroyed, people panic — how would people even be able to rationally have a nuclear war and stop?

Second, we don’t know. When you get down to small numbers, it depends on what city, what time of year, the weather that day. And we don’t want to emphasize India and Pakistan – any two nuclear countries could do this.

Toon: The most common thing that happens when we give a talk is someone will stand up and say, “Oh, but a war would only involve one nuclear weapon.” But in the only nuclear war we’ve had, the nuclear power, the United States, used every weapon that it had on civilian targets.

If you have 1000 weapons and you’re afraid your adversary is going to attack you with their 1000 weapons, you’re not likely to just bomb them with one weapon.

Robock: Let me make one other point. If the United States attacked Russia on a first strike and Russia did nothing, the climate change resulting from that could kill almost everybody in the United States. We’d all starve to death because of the climate response. People used to think of this as mutually assured destruction, but really it’s being a suicide bomber: it’s self-assured destruction.

Ariel: What scares you most regarding nuclear weapons?

Toon: Politicians’ ignorance of the implications of using nuclear weapons. Russia sees our advances to keep Eastern European countries free — they see that as an attempt to move military forces near Russia where [NATO] could quickly attack them. There’s a lack of communication, a lack of understanding of [the] threat and how people see different things in different ways. So Russians feel threatened when we don’t even mean to threaten them.

Robock: What scares me is an accident. There have been a number of cases where we came very close to having nuclear war. Crazy people or mistakes could result in a nuclear war. Some teenaged hacker could get into the systems. We’ve been lucky to have gone 71 years without a second nuclear war. The only way to prevent it is to get rid of the nuclear weapons.

Toon: We have all these countries with 100 weapons. All those countries can attack anybody on the Earth and destroy most of the country. This is ridiculous, to develop a world where everybody can destroy anybody else on the planet. That’s what we’re moving toward.

Ariel: Is there anything else you think the public needs to understand about nuclear weapons or nuclear winter?

Robock: I would think about all of the countries that don’t have nuclear weapons. How did they make that decision? What can we learn from them?

The world agreed to a ban on chemical weapons, biological weapons, cluster munitions, land mines — but there’s no ban on the worst weapon of mass destruction, nuclear weapons. The UN General Assembly voted to negotiate a treaty next year to ban nuclear weapons, which will be a first step towards reducing the arsenals and disarmament. But people have to get involved and demand it.

Toon: We’re not paying enough attention to nuclear weapons. The United States has invested hundreds of billions of dollars in building better nuclear weapons that we’re never going to use. Why don’t we invest that in schools or in public health or in infrastructure? Why invest it in worthless things we can’t use?

The Historic UN Vote On Banning Nuclear Weapons

By Joe Cirincione

History was made at the United Nations today. For the first time in its 71 years, the global body voted to begin negotiations on a treaty to ban nuclear weapons.

Eight nations with nuclear arms (the United States, Russia, China, France, the United Kingdom, India, Pakistan, and Israel) opposed or abstained from the resolution, while North Korea voted yes. However, with a vote of 123 for, 38 against, and 16 abstaining, the First Committee decided “to convene in 2017 a United Nations conference to negotiate a legally binding instrument to prohibit nuclear weapons, leading towards their total elimination.”

The resolution effort, led by Mexico, Austria, Brazil, Ireland, Nigeria, and South Africa, was joined by scores of others.

“There comes a time when choices have to be made and this is one of those times,” said Helena Nolan, Ireland’s director of Disarmament and Non-Proliferation. “Given the clear risks associated with the continued existence of nuclear weapons, this is now a choice between responsibility and irresponsibility. Governance requires accountability and governance requires leadership.”

The Obama Administration was in fierce opposition. It lobbied all nations, particularly its allies, to vote no. “How can a state that relies on nuclear weapons for its security possibly join a negotiation meant to stigmatize and eliminate them?” argued Ambassador Robert Wood, the U.S. special representative to the UN Conference on Disarmament in Geneva. “The ban treaty runs the risk of undermining regional security.”

The U.S. opposition is a profound mistake. Ambassador Wood is a career foreign service officer and a good man who has worked hard for our country. But this position is indefensible.

Every president since Harry Truman has sought the elimination of nuclear weapons. Ronald Reagan famously said in his 1984 State of the Union:

“A nuclear war cannot be won and must never be fought. The only value in our two nations possessing nuclear weapons is to make sure they will never be used. But then would it not be better to do away with them entirely?”

In case there was any doubt as to his intentions, he affirmed in his second inaugural address that, “We seek the total elimination one day of nuclear weapons from the face of the Earth.”

President Barack Obama himself stigmatized these weapons, most recently in his speech in Hiroshima this May:

“The memory of the morning of Aug. 6, 1945, must never fade. That memory allows us to fight complacency. It fuels our moral imagination. It allows us to change,” he said. “We may not be able to eliminate man’s capacity to do evil, so nations and the alliances that we form must possess the means to defend ourselves. But among those nations like my own that hold nuclear stockpiles, we must have the courage to escape the logic of fear and pursue a world without them.”

The idea of a treaty to ban nuclear weapons is inspired by similar, successful treaties to ban biological weapons, chemical weapons, and landmines. All started with grave doubts. Many in the United States opposed these treaties. But when President Richard Nixon began the process to ban biological weapons and President George H.W. Bush began talks to ban chemical weapons, other nations rallied to their leadership. These agreements have not yet entirely eliminated these deadly arsenals (indeed, the United States is still not a party to the landmine treaty) but they stigmatized them, hugely increased the taboo against their use or possession, and convinced the majority of countries to destroy their stockpiles.

I am engaged in real, honest debates among nuclear security experts on the pros and cons of this ban treaty. Does it really matter if a hundred-plus countries sign a treaty to ban nuclear weapons but none of the countries with nuclear weapons join? Will this be a serious distraction from the hard work of stopping new, dangerous weapons systems, cutting nuclear budgets, or ratifying the nuclear test ban treaty?

The ban treaty idea did not originate in the United States, nor was it championed by many U.S. groups, nor is it within U.S. power to control the process. Indeed, this last seems to be one of the major reasons the administration opposes the talks.

But this movement is gaining strength. Two years ago, I covered the last of the three conferences held on the humanitarian impact of nuclear weapons for Defense One. Whatever experts and officials thought about the goals of the effort, I said, “the Vienna conference signals the maturing of a new, significant current in the nuclear policy debate. Government policy makers would be wise to take this new factor into account.”

What began as sincere concerns about the horrendous humanitarian consequences of using nuclear weapons has now become a diplomatic process driving towards a new global accord. It is fueled less by ideology than by fear.

The movement reflects widespread fears that the world is moving closer to a nuclear catastrophe — and that the nuclear-armed powers are not serious about reducing these risks or their arsenals. If anything, these states are increasing the danger by pouring hundreds of billions of dollars into new Cold War nuclear weapons programs.

The fears in the United States that, if elected, Donald Trump would have unfettered control of thousands of nuclear weapons have rippled out from the domestic political debate to exacerbate these concerns. Rising US-Russian tensions, new NATO military deployments on the Russian border, a Russian aircraft carrier cruising through the Straits of Gibraltar, the shock of the Trump candidacy, and the realization — exposed by Trump’s loose talk of using nuclear weapons — that any US leader can unleash a nuclear war with one command, without debate, deliberation or restraint, have combined to convince many nations that dramatic action is needed before it is too late.

As journalist Bill Press said as we discussed these developments on his show, “He scared the hell out of them.”

There is still time for the United States to shift gears. We should not squander the opportunity to join a process already in motion and to help guide it to a productive outcome. It is a Washington trope that you cannot defeat something with nothing. Right now, the US has nothing positive to offer. The disarmament process is dead and this lack of progress undermines global support for the Non-Proliferation Treaty and broader efforts to stop the spread of nuclear weapons.

The new presidential administration must make a determined effort to mount new initiatives that reduce these weapons, reduce these risks. It should also support the ban treaty process as a powerful way to build global support for a long-standing American national security goal. We must, as President John F. Kennedy said, eliminate these weapons before they eliminate us.

This article was originally posted on the Huffington Post.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we post op-eds that we believe will help spur discussion within our community. Op-eds do not necessarily represent FLI’s opinions or views.

Obama’s Nuclear Legacy

The following article and infographic were originally posted on Futurism.

The most destructive device that humanity has ever created is the nuclear bomb. It’s a technology capable of unparalleled devastation; it’s a technology that the United Nations classifies as “the most dangerous weapon on Earth.”

One bomb can destroy a whole city in seconds, and in so doing, end the lives of millions of people (depending on where it is dropped). If that’s not enough, it can throw the natural environment into chaos. We know this because we’ve used them before.

The first device of this kind was unleashed at approximately 8:15 am on August 6th, 1945, when a US B-29 bomber dropped an atomic bomb on the Japanese city of Hiroshima. It killed around 80,000 people instantly. Over the coming years, many more would succumb to radiation sickness. All in all, it is estimated that over 200,000 people died as a result of the nuclear blasts in Japan.

How far have we come since then? How many bombs do we have at our disposal? Here’s a look at our legacy.

Trillion Dollar Nukes

Would you spend $1 trillion tax dollars on nuclear weapons?

How much are nuclear weapons really worth? Is upgrading the US nuclear arsenal worth $1 trillion – in the hopes of never using it – when that money could be used to improve lives, create jobs, decrease taxes, or pay off debts? How far can $1 trillion go if it’s not all spent on nukes?

The application below helps answer those questions. Click on the icons on the left to ‘shop’ for items to add to your cart. See something you want to add? Just click on the title, and it will automatically be placed in your cart. To view your items or make changes, click on the shopping cart. From there, you can increase or decrease the amount of money allotted to each. If you want to maintain deterrence, but don’t support the whole upgrade, then just don’t spend all of the money – whatever is left over can be what you think nuclear upgrades are worth. When you’re done, click on the shopping cart, and share on social media to let your voice be heard!

[The interactive application is embedded in the original post.]
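
For readers who would rather experiment with the same budgeting logic offline, here is a minimal sketch in Python. The item names and unit costs are hypothetical placeholders, not data from the actual application; it simply illustrates the cart idea described above, where whatever you don’t spend is left over for nuclear upgrades.

```python
# Hypothetical sketch of the "shopping cart" logic: allocate pieces of the
# $1 trillion nuclear-upgrade budget to other priorities and see what is
# left over for nuclear modernization. Catalog values are illustrative only.

TOTAL_BUDGET = 1_000_000_000_000  # $1 trillion

# Placeholder catalog of alternatives and their assumed unit costs.
CATALOG = {
    "elementary school": 10_000_000,
    "rural hospital": 50_000_000,
    "mile of highway repair": 2_000_000,
}

def remaining_for_nukes(cart):
    """Return the dollars left for nuclear upgrades after funding the cart."""
    spent = sum(CATALOG[item] * quantity for item, quantity in cart.items())
    if spent > TOTAL_BUDGET:
        raise ValueError("Cart exceeds the $1 trillion budget")
    return TOTAL_BUDGET - spent

# Example: fund 20,000 schools and 5,000 hospitals; keep the rest for deterrence.
cart = {"elementary school": 20_000, "rural hospital": 5_000}
print(f"Left over for nuclear upgrades: ${remaining_for_nukes(cart):,}")
```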

Former Defense Secretary William Perry Launches MOOC on Nuclear Risks

“Today, the danger of some sort of a nuclear catastrophe is greater than it was during the Cold War and most people are blissfully unaware of this danger.” – William J. Perry, 2015

The following description of Dr. Perry’s new MOOC is courtesy of the William J. Perry Project.

Nuclear weapons, far from being historical curiosities, are existential dangers today. But what can you do about this? The first step is to educate yourself on the subject. Now it’s easy to do that in the first free, online course devoted to educating the public about the history and dangers of nuclear weapons. This 10-week course, created by former Secretary of Defense William J. Perry and 10 other highly distinguished educators and public servants, is hosted by Stanford University and starts October 4, 2016; sign up now here.

This course has a broad range, from physics to history to politics and diplomacy. You will have the opportunity to obtain a Statement of Accomplishment by passing the appropriate quizzes, but there are no prerequisites other than curiosity and a passion for learning.  Our faculty is an unprecedented group of internationally recognized academic experts, scientists, journalists, political activists, former ambassadors, and former cabinet members from the United States and Russia. Throughout the course you will have opportunities to engage with these faculty members, as well as guest experts and your fellow students from around the world, in weekly online discussions and forums.

In Weeks 1 and 2 you will learn about the creation of the first atomic bomb and the nuclear physics behind these weapons, taught by Dr. Joseph Martz, a physicist at Los Alamos National Laboratory, and Dr. Siegfried Hecker, former Los Alamos director and a Stanford professor. Drs. Perry, Martz and Hecker describe the early years of the Atomic Age, starting from the first nuclear explosion in New Mexico and the atomic bombing of Japan, followed by proliferation of these weapons to the Soviet Union and the beginning of the terrifying nuclear arms race underpinning the Cold War. You will also learn about ICBMs, deterrence and the nuclear triad, nuclear testing, nuclear safety (and the lack of it), the extent and dangers of nuclear proliferation, the connections between nuclear power and nuclear weapons, and the continuing fears about “loose nukes” and unsecured fissile material.

In Weeks 3 and 4 of Living at the Nuclear Brink, Dr. Perry outlines the enormous challenges the United States and its allies faced during the early frightening years of what came to be known as the Cold War. Then Dr. David Holloway, an international expert on the development of the Soviet nuclear program, will lead you on a tour of the Cold War, from its beginnings with Soviet nuclear tests and the Berlin Crisis, the Korean War, the Berlin Wall, and the Cuban Missile Crisis in 1962, probably the closest the world has come to nuclear war. Dr. Holloway will then cover the dangerous years of the late 1970s and early 1980s when détente between the Soviet Union and the West broke down; both sides amassed huge arsenals of nuclear weapons with increasingly sophisticated delivery methods including multiple warheads, and trust was strained with the introduction of short-range ballistic missiles in Europe. Finally, Dr. Holloway and Dr. Perry will describe the fascinating story of how this spiraling international tension was quelled, in part by the new thinking of Gorbachev, and how the Cold War ended with surprising speed and with minimal bloodshed.

In Week 5, you will hear from acclaimed national security journalist Philip Taubman about the remarkable efforts of scientists and engineers in the United States to develop technical methods for filling the gap in knowledge about the nuclear capabilities of the Soviet Union, including spy planes like the U-2 and satellite systems like Corona. In Week 6, you will hear from a recognized expert on nuclear policy, Dr. Scott Sagan of Stanford. Dr. Sagan will explore the theories behind nuclear deterrence and stability; you will learn how this theoretical stability is threatened by proclivities for preventive wars, commitment traps and accidents. You will hear hair-raising stories of accidents, miscalculations and bad intelligence during the Cuban Missile Crisis that brought the world much closer to a nuclear catastrophe than most people realized.

Weeks 7 and 8 are devoted to exploring the nuclear dangers of today. Dr. Martha Crenshaw, an internationally recognized expert on terrorism, will discuss this topic, and examine the terrifying possibility of nuclear terrorism. You will see a novel graphic-art video from the William J. Perry Project depicting Dr. Perry’s nightmare scenario of a nuclear bomb exploded in Washington, D.C. Week 8 is devoted to current problems of nuclear proliferation. Dr. Hecker gives a first-hand account of the nuclear program in the dangerously unpredictable regime of North Korea, and goes over the fluid situation in Iran. The most dangerous region may be South Asia, where bitter enemies Pakistan and India face off with nuclear weapons. The challenges and possibilities in this confrontation are explored in depth by Dr. Sagan, Dr. Crenshaw, Dr. Hecker, and Dr. Perry; Dr. Andrei Kokoshin, former Russian Deputy Minister of Defense in the 1990s, offers a Russian perspective.

In the final two weeks of Living at the Nuclear Brink, we will explore ways to address the urgent problems of nuclear weapons. Dr. Perry describes the struggles by U.S. administrations to contain these dangers, and highlights some of the success stories, notably the Nunn-Lugar program that led to the dismantling of thousands of nuclear weapons in the former Soviet Union and the United States. James Goodby, who had a decades-long career in the U.S. foreign service, covers the long and often frustrating history of attempts to limit and control nuclear weapons through treaties and international agreements. Former Secretary of State George Shultz describes the momentous Reykjavik Summit between Presidents Reagan and Gorbachev, in which he participated, and gives his take on the prospects for global security. Finally, you will hear an impassioned plea for active engagement on the nuclear issue by Joseph Cirincione, author and President of the Ploughshares Fund.

Please join us in this exciting and novel online course; we welcome your participation!

For more, watch Gov. Jerry Brown discuss the importance of learning about nuclear weapons, and watch former Secretary of Defense William Perry introduce this MOOC.

Podcast: What Is Our Current Nuclear Risk?

A conversation with Lucas Perry about nuclear risk

Participants:

  • Ariel Conn— Ariel oversees communications and digital media at FLI, and as such, she works closely with members of the nuclear community to help present information about the costs and risks of nuclear weapons.
  • Lucas Perry—Lucas has been actively working with the Mayor and City Council of Cambridge, MA to help them divest from nuclear weapons companies, and he works closely with groups like Don’t Bank on the Bomb to bring more nuclear divestment options to the U.S.

Summary

In this podcast interview, Lucas and Ariel discuss the concepts of nuclear deterrence, hair trigger alert, the potential consequences of nuclear war, and how individuals can do their part to lower the risks of nuclear catastrophe. (You can find more links to information about these issues at the bottom of the page.)

Transcript

Ariel:  I’m Ariel Conn with the Future of Life Institute, and I’m here with Lucas Perry, also a member of FLI, to talk about the increasing risks of nuclear weapons, and what we can do to decrease those risks.

With the end of the Cold War, and the development of the two new START treaties, we’ve dramatically decreased the number of nuclear weapons around the world. Yet even though there are fewer weapons, they still represent a real and growing threat. In the last few months, FLI has gotten increasingly involved in efforts to decrease the risks of nuclear weapons.

One of the first things people worry about when it comes to decreasing the number of nuclear weapons or altering our nuclear posture is whether or not we can still maintain effective deterrence.

Lucas, can you explain how deterrence works?

Lucas: Sure. Deterrence is the idea that if we have our own nuclear weapons primed and ready to be fired, other nuclear states that might want to harm us through nuclear strikes will be deterred from firing on us, knowing that we would retaliate with similar, or even greater, nuclear force.

Ariel:  OK, and along the same lines, can you explain what hair trigger alert is?

Lucas: Hair trigger alert is a Cold War-era strategy that has nuclear weapons armed and ready for launch within minutes. It ensures mutual and total annihilation, and thus acts as a means of deterrence. But the problem here is that it also increases the likelihood of accidental nuclear war.

Ariel:  Can you explain how an accidental nuclear war could happen? And, also, has it almost happened before?

Lucas: Having a large fraction of our nuclear weapons on hair trigger alert creates the potential for accidental nuclear war through the fallibility of the persons and instruments involved with the launching of nuclear weapons, in conjunction with the very small amount of time actually needed to fire the nuclear missiles.

We humans are known to be prone to making mistakes on a daily basis, and we even make the same mistakes multiple times. Computers, radars, and all of the other instruments and technology that go into the launching and detecting of nuclear strikes are intrinsically fallible as well, as they are prone to breaking and committing errors.

So there is the potential for us to fire missiles when an instrument gives us false alarm or a person—say, the President—under the pressure of needing to make a decision within only a few minutes, decides to fire missiles due to some misinterpretation of a situation. This susceptibility to error is actually so great that groups such as the Union of Concerned Scientists have been able to identify at least 21 nuclear close calls where nuclear war was almost started by mistake.

Ariel:  How long does the President actually have to decide whether or not to launch a retaliatory attack?

Lucas: The President actually only has about 12 minutes to decide whether or not to fire our missiles in retaliation. After our radars have detected the incoming missiles, and after this information has been conveyed to the President, there has already been some non-negligible amount of time—perhaps 5 to 15 minutes—where nuclear missiles might already be inbound. So he only has another few minutes—say, about 10 or 12 minutes—to decide whether or not to fire ours in retaliation. But this is also highly contingent upon where the missiles are coming from and how early we detected their launch.

Ariel:  OK, and then do you have any examples off the top of your head of times where we’ve had close calls that almost led to an unnecessary nuclear war?

Lucas: Out of the twenty-or-so nuclear close calls that have been identified by the Union of Concerned Scientists, among other organizations, a few stand out to me. For example, in 1980, the Soviet Union launched four submarine-based missiles from near the Kuril Islands as part of a training exercise, which led to a triggering of American early-warning sensors.

And even in 1995, Russian early-warning radar detected a missile launch off the coast of Norway with flight characteristics very similar to those of US submarine missiles. This led to all Russian nuclear forces going into full alert, and even the President at the time got his nuclear football ready and was prepared for full nuclear retaliation. But they ended up realizing that this was just a Norwegian scientific rocket.

These examples really help to illustrate how hair trigger alert is so dangerous. Persons and instruments are inevitably going to make mistakes, and this is only made worse when nuclear weapons are primed and ready to be launched within only minutes.

Ariel:  Going back to deterrence: Do we actually need our nuclear weapons to be on hair trigger alert in order to have effective deterrence?

Lucas: Not necessarily. The current idea is that we keep our intercontinental ballistic missiles (ICBMs), which are located in silos, on hair trigger alert so that these nuclear weapons can be launched before the silos are destroyed by an enemy strike. But warheads can be deployed without being on hair trigger alert, on nuclear submarines and bombers, without jeopardizing national security. If nuclear weapons were to be fired at the United States with the intention of destroying our nuclear missile silos, then we could authorize the launch of our submarine- and bomber-based missiles over the time span of hours and even days. These missiles wouldn’t be able to be intercepted, and would thus offer a means of retaliation, and thus deterrence, without the added danger of being on hair trigger alert.

Ariel:  How many nuclear weapons does the Department of Defense suggest we need to maintain effective deterrence?

Lucas: Studies have shown that only about 300 to 1,000 nuclear weapons are necessary for deterrence. For example, about 450 of these could be located on submarines and bombers spread throughout the world, with another 450 in reserve and in silos at home.

Ariel:  So how many nuclear weapons are there in the US and around the world?

Lucas: There are currently about 15,700 nuclear weapons on this planet. Russia and the US are the main holders of these, with Russia having about 7,500 and the US having about 7,200. Other important nuclear states to note are China, Israel, the UK, North Korea, France, India, and Pakistan.

Ariel:  OK, so basically we have a lot more nuclear weapons than we actually need.

Lucas: Right. If only about 300 to 1,000 are needed for deterrence, then the number of nuclear weapons on this planet could be dramatically lower than it is currently. The amount that we have right now is just blatant overkill. It’s a waste of resources and it increases the risk of accidental nuclear war, making both the countries that have them and the countries that don’t have them more at risk.

Ariel:  I want to consider this idea of the countries that don’t have them being more at risk. I’m assuming you’re talking about nuclear winter. Can you explain what nuclear winter is?

Lucas: Nuclear winter is an indirect effect of nuclear war. When nuclear weapons go off they create large firestorms from all of the infrastructure, debris, and trees that are set on fire surrounding the point of detonation. These massive firestorms release enormous amounts of soot and smoke that rise into the atmosphere and can block out the sun for months and even years at a time. This drastically reduces the amount of sunlight that is able to reach the Earth, and it thus causes a significant decrease in average global temperatures.

Ariel:  How many nuclear weapons would actually have to go off in order for us to see a significant drop in temperature?

Lucas: About 100 Hiroshima-sized nuclear weapons would decrease average global temperatures by about 1.25 degrees Celsius. When these 100 bombs go off, they would release about 5 million tons of smoke lofted high into the stratosphere. And now, this change of 1.25 degrees Celsius of average global temperatures might seem very tiny, but studies actually show that this will lead to a shortening of growing seasons by up to 30 days and a 10% reduction in average global precipitation. Twenty million people would die directly from the effects of this, but then hundreds of millions of people would die in the following months from a lack of food due to the decrease in average global temperatures and a lack of precipitation.

Ariel:  And that’s hundreds of millions of people around the world, right? Not just in the regions where the war took place?

Lucas: Certainly. The soot and smoke from the firestorms would spread out across the entire planet and affect the amount of precipitation and sunlight that everyone receives. The effects of nuclear war are not contained to the countries involved in the nuclear strikes; rather, potentially the very worst effects of nuclear war are global changes that would affect us all.

Ariel:  OK, so that was for a war between India and Pakistan, which would be small, and it would be using smaller nuclear weapons than what the US and Russia have. So if just an accident were to happen that triggered both the US and Russia to launch their nuclear weapons that are on hair trigger alert, what would the impacts of that be?

Lucas: Well, the United States has about a thousand weapons on hair trigger alert. I’m not exactly sure how many there are in Russia, but we can assume that it’s probably a similar amount. So if a nuclear war of about 2,000 weapons were to be exchanged between the United States and Russia, it would cause 510 million tons of smoke to rise into the stratosphere, leading to a 4 degree Celsius drop in average global temperatures. And compared to an India-Pakistan conflict, this would lead to catastrophically more casualties, both from a lack of food and from the direct effects of the bombs.

Ariel:  And over what sort of time scale is that expected to happen?

Lucas: The effects of nuclear winter, and perhaps even what might one day be nuclear summer, would be lasting over the time span of not just months, but years, even decades.

Ariel:  What’s nuclear summer?

Lucas: So nuclear summer is a more theoretical effect of nuclear war. With nuclear winter you have tons of soot and ash and smoke in the sky blotting out the sun, but additionally, an enormous amount of CO2 has been released from the burning of all the infrastructure, forests, and ground ignited by the nuclear blasts. After decades, once all of this soot and ash and smoke begins to settle back down onto the Earth’s surface, there will still be this enormous remaining amount of CO2.

So nuclear summer is a hypothetical indirect effect of nuclear war, after nuclear winter, after the soot has fallen down, where there would be a huge spike in average global temperatures due to the enormous amount of CO2 left over from the firestorms.

Ariel: And so how likely is all of this to happen? Is there actually a chance that these types of wars could occur? Or is this mostly something that people are worrying about unnecessarily?

Lucas: The risk of a nuclear war is non-zero. It’s very difficult to quantify exactly what the risks are, but we can say that we have seen at least 21 nuclear close calls where nuclear war was almost started by mistake. And these 21 close calls are actually just the ones that we know about. How many more nuclear close calls have there been that we simply don’t know about, or that governments have been able to keep secret? And consider that as tensions rise between the United States and Russia, as the risk of terrorism and cyber attack continues to grow, and as the conflict between India and Pakistan is continually exacerbated, the threat of nuclear war is actually increasing. It’s not going down.

Ariel:  So there is a risk, and we know that we have more nuclear weapons than we actually need for deterrence. Even if we want to keep enough weapons for deterrence, we don’t need as many as we have. I’m guessing that the government is not going to do anything about this, so what can people do to try to have an impact themselves?

Lucas: A method of engaging with this nuclear issue that has a potentially high efficacy is divesting. We have power as voters, consumers, and producers, but perhaps even more importantly, we have power over what we invest in. We have the power to choose to invest in companies that are socially responsible or ones which are not. So through divestment, we can take money away from nuclear weapons producers. But not only that, we can also work to stigmatize nuclear weapons production and our current nuclear situation through our divestment efforts.

Ariel:  But my understanding is that most of our nuclear weapons are funded by the government, so how would a divestment campaign actually be impactful, given that the money for nuclear weapons wouldn’t necessarily disappear?

Lucas: The most important part of divestment in this area of nuclear weapons is actually the stigmatization. When you see massive amounts of people divesting from something, it creates a lot of light and heat on the subject. It influences the public consciousness and helps to bring back to light this issue of nuclear weapons. And once you have stigmatized something to a critical point, it effectively renders its target politically and socially untenable. Divestment also stimulates new education and research on the topic, while also getting persons invested in the issue.

Ariel:  And so have there been effective campaigns that used divestment in the past?

Lucas: There have been a lot of different campaigns in the past that have used divestment as an effective means of creating important change in the world. A few examples of these are divestment from tobacco, South African apartheid, child labor, and fossil fuels. In all of these instances, persons were divesting from institutions involved in these socially irresponsible acts. Through doing so, they created much stigmatization of these issues, they created capital flight from them, and also created a lot of negative media attention that helped to bring light to these issues and show people the truth of what was going on.

Ariel:  I know FLI was initially inspired by a lot of the work that Don’t Bank on the Bomb has done. Can you talk a bit about some of the work they’ve done and what their success has been?

Lucas: The Don’t Bank on the Bomb campaign has been able to identify direct and indirect investments in nuclear weapons producers, made by large institutions in both Europe and America. Through this they have worked to engage with many banks in Europe to help them to not include these direct or indirect investments in their portfolios and mutual funds, thus helping them to construct socially responsible funds. A few examples of these successes are A&S Bank, ASR, and the Cooperative Bank.

Ariel:  So you’ve been very active with FLI in trying to launch a divestment campaign in the US. I was hoping you could talk a little about the work you’ve done so far and the success you’ve had.

Lucas: Inspired by a lot of the work that’s been done through the Don’t Bank on the Bomb campaign, and in conjunction with resources provided by them, we were able to engage with the city of Cambridge and help them divest $1 billion from nuclear weapons-producing companies. As we continue our divestment campaign, we’re really passionate about making the information needed for divestment transparent and open. Currently we’re working on a web app that will allow you to search your mutual fund and see whether or not it has direct or indirect investments in nuclear weapons producers. Through doing so, we hope to help not only cities, municipalities, and institutions divest, but also individuals like you and me.

Ariel:  Lucas, this has been great. Thank you so much for sharing information about the work you’ve been doing so far. If anyone has any questions about how they can divest from nuclear weapons, please email Lucas at lucas@futureoflife.org. You can also check out our new web app at futureoflife.org/invest.

[end of recorded material]

Learn more about nuclear weapons in the 21st Century:

What is hair-trigger alert?

How many nuclear weapons are there and who has them?

What are the consequences of nuclear war?

What would the world look like after a U.S. and Russia nuclear war?

How many nukes would it take to make the Earth uninhabitable?

What are the specific effects of nuclear winter?

What can I do to mitigate the risk of nuclear war?

Do we really need so many nuclear weapons on hair-trigger alert?

What sort of new nuclear policy could we adopt?

How can we restructure strategic U.S. nuclear forces?

Nuclear Weapons and the Myth of the “Re-Alerting Race”

The following article was originally posted on the Union of Concerned Scientists’ blog, The Equation.

One of the frustrations of trying to change policy is that frequently repeated myths can short-circuit careful thinking about current policies and keep policy makers from recognizing better alternatives.

That is particularly frustrating—and dangerous—when the topic is nuclear weapons.

Under current policies, accidental or mistaken nuclear war is more likely than it should be. Given the consequences, that’s a big deal.

We’ve posted previously about the dangers of the US policy of keeping nuclear missiles on hair-trigger alert so that they can be launched quickly in response to warning of attack. There is a surprisingly long list of past incidents in which human and technical errors have led to false warning of attack in both the US and the Soviet Union/Russia—increasing the risk of an accidental nuclear war.

Missile launch officers. (Source: Dept. of Defense)

This risk is particularly high in times of tension—and especially during a crisis—since in that case the people in charge are much more likely to interpret false or ambiguous warning as being real.

The main problem here is silo-based missiles (ICBMs), since they are at known locations an adversary could target. The argument goes that launch-on-warning allows the ICBMs to be launched before an incoming attack could destroy them, and that this deters an attack from occurring in the first place.

But deterring an attack does not depend on our land-based missiles. Most of the US nuclear force is at sea, hidden under the ocean in submarines, invulnerable to attack. And since the sub-based missiles can’t be attacked, they are not under the same pressure to launch quickly.

It’s for this reason that the sensible thing to do is to take ICBMs off hair-trigger alert and eliminate options for launching on warning of attack, which would eliminate the possibility of mistaken launches due to false or ambiguous warning. Security experts and high-level military officials agree.

(It’s worth noting that the US does not have a launch-on-warning doctrine, meaning that there is no requirement to launch on warning. But it continues to maintain launch-on-warning as an option, and to do that it needs to keep its ICBMs on hair-trigger alert.)

The myth of the “re-alerting race”

The main reason administration officials give for keeping missiles on alert is the “re-alerting race” and crisis instability. The argument is that if the United States takes its missiles off hair-trigger alert and a crisis starts to brew, it would want to put them back on alert so they would not be vulnerable to an attack. And the act of putting them back on alert—“re-alerting”—could exacerbate the crisis and lead Russia to assume the United States was readying to launch an attack. If Russia had de-alerted its missiles, it would then re-alert them, further exacerbating the crisis. Both countries could have an incentive to act quickly, leading to instability.

This argument gets repeated so often that people assume it’s simply true.

However, the fallacy of this argument is that there is no good reason for the US to re-alert its ICBMs in a crisis. They are not needed for deterrence since, as noted above, deterrence is provided by the submarine force. Moreover, historical incidents have shown that having missiles on alert during a crisis increases the risk of a mistaken launch due to false or ambiguous warning. So having ICBMs on alert in a crisis increases the risk without providing a benefit.

The administration should not just take ICBMs off hair-trigger alert. It should also eliminate the option for launching nuclear weapons on warning.

Eliminating launch-on-warning options would mean you do not re-alert the ICBMs in a crisis. With no re-alerting, there is no re-alerting race.

President Obama should act

Obama in Prague, 2009 (Source: Dept of State)

Maybe administration officials have not thought about this as carefully as they should—although, hopefully, a key policy change that would reduce the risk of accidental nuclear war is not being rejected because of sloppy thinking.

Maybe the real reason is simply inertia in the system. The president’s 2009 speech in Prague showed he is willing to think outside the box on these issues to reduce the risk of nuclear catastrophe. So maybe it’s his advisors who are not willing to take such a step.

In that case, he should listen to the words of Gen. Eugene Habiger, former Commander in Chief of U.S. Strategic Command—the man in charge of US nuclear weapons. Earlier this year, he said:

We need to bring the alert status down of our ICBMs. And we’ve been dealing with that for many, many decades. … It’s one of those things where the services are not gonna do anything until the Big Kahuna says, “Take your missiles off alert,” and then by golly within hours the missiles and subs will be off alert.

The Big Kahuna is President Obama, who holds that office until January 20, 2017. Hopefully he will get beyond the myth that has frozen sensible action on this issue, and take the sensible step of ending launch-on-warning.

Success for Cluster Munitions Divestment

“Great news!” said Don’t Bank on the Bomb’s Susi Snyder in a recent blog post, “American company Textron has announced it will end its involvement with cluster munitions.”

This decision marks a major success for those who have pushed for a cluster munition divestment in an effort to stigmatize the weapons and the companies that create them. As Snyder explained later in her article:

“PAX and campaigners active in the Stop Explosive Investments campaign have engaged tirelessly with many investors over the years to urge them to cease their financial support of Textron. This article’s analysis suggests that  pressure from the financial sector has had an effect:

‘A Couple of Hidden Positives: On the surface, yesterday’s announcement seems like a non-event, but we come away with two observations that we think investors shouldn’t overlook. First off, we note that SFW served as a product that limited the “ownability” of TXT shares among foreign investment funds, due largely to interpretations of where TXT stood vis-a-vis international weapons treaties. Arguably, the discontinuation of this product line could expand the addressable investor base for TXT shares by a material amount (i.e. most of Europe), in an industrial vertical (A&D) where investable choices are slim but performance has been strong over the years.’”

Stop Explosive Investments wrote a more detailed post about Textron’s announcement:

“US company Textron announced it will end its involvement with cluster munitions. It produced the Sensor Fuzed Weapon (SFW), which is banned under the 2008 Convention on Cluster Munitions (CCM). This good news comes a few days before the Sixth Meeting of States Parties to the Convention on Cluster Munitions in Geneva next week.

“Over the years, CMC-member PAX has identified Textron as a cluster munition producer in the “Worldwide Investments in Cluster Munitions; a shared responsibility” report. The 2016 update revealed that worldwide, 49 financial institutions had financial ties to Textron, with a total of US$12,370.83 million invested.

“‘Campaigners active in the Stop Explosive Investments campaign have engaged tirelessly with many investors over the years to urge them to cease their financial support of Textron’,  says Megan Burke, director of the Cluster Munition Coalition. ‘The company’s decision to end their cluster munition production is a great success for all of us working for a world free of cluster munitions.’

“Research by Human Rights Watch and Amnesty International showed that Textron’s Sensor Fuzed Weapons were used in Yemen by the Saudi-led coalition. On 27 May 2016, the United States government blocked the transfer of these Sensor Fuzed Weapons to Saudi Arabia because of concern at the use of cluster munitions in or near civilian areas. Now, Textron has decided to end the production of these weapons altogether. The company cites a decline in orders and ‘the current political climate’ as motivation, an indication that the CCM is the global norm and that the stigma associated with cluster bombs is ever-growing.

“Pressure from the financial sector has likely also impacted this decision. As a financial analyst explains in this article: ‘[…] interpretations of where Textron stood vis-a-vis international weapons treaties’ meant many (European) investors had excluded the company from their investment universe. Suzanne Oosterwijk from PAX: ‘Such exclusions send a clear message to companies that they are not acceptable business partners as long as they are involved in the production of cluster munitions.’

“Since the launch of the Stop Explosive Investments campaign dozens of financial institutions have installed policies to disinvest from cluster munition producers, and 10 states have legislation to prohibit such investments.

“On Tuesday 6 September during the Sixth Meeting of States Parties, the CMC and PAX will hold a side event on disinvestment from cluster munitions and will urge more countries to ban investments in cluster munitions producers.”

The success seen from cluster munitions divestment provides further evidence that divestment is an effective means of influencing company decisions. This is an encouraging announcement for those hoping to decrease the world’s nuclear weapons via divestment.

 

Podcast: Could an Earthquake Destroy Humanity?

Earthquakes as Existential Risks

Earthquakes are not typically considered existential or even global catastrophic risks, and for good reason: they’re localized events. While they may be devastating to the local community, rarely do they impact the whole world. But is there some way an earthquake could become an existential or catastrophic risk? Could a single earthquake put all of humanity at risk? In our increasingly connected world, could an earthquake sufficiently exacerbate a biotech, nuclear or economic hazard, triggering a cascading set of circumstances that could lead to the downfall of modern society?

Seth Baum of the Global Catastrophic Risk Institute and Ariel Conn of FLI consider extreme earthquake scenarios to figure out if there’s any way such a risk is remotely plausible. This podcast was produced in a similar vein to Myth Busters and xkcd’s What If series.

We only consider a few scenarios in this podcast, but we’d love to hear from other people. Do you have ideas for an extreme situation that could transform a locally devastating earthquake into a global calamity?

This episode features insight from seismologist Martin Chapman of Virginia Tech.

Op-ed: When NATO Countries Were U.S. Nuclear Targets

Sixty years ago, the U.S. had over 60 nuclear weapons aimed at Poland, ready to launch. At least one of those targeted Warsaw, where, on July 8-9, allied leaders will meet for the biennial NATO summit meeting.

In fact, recently declassified documents reveal that the U.S. once had its nuclear sights set on over 270 targets scattered across various NATO countries. Most people assume that the U.S. no longer poses a nuclear threat to its own NATO allies, but that assumption may be wrong.

In 2012, Alex Wellerstein created an interactive program called NukeMap to help people visualize how deadly a nuclear weapon would be if detonated in any country of the world. He recently went a step further and ran models to see how far nuclear fallout might drift from its original target.

It turns out, if the U.S. – either unilaterally or with NATO – were to launch a nuclear attack against Russia, countries such as Finland, Estonia, Latvia, Belarus, Ukraine, and even Poland would be at severe risk of nuclear fallout. Similarly, attacks against China or North Korea would harm people in South Korea, Myanmar and Thailand.

Even a single nuclear weapon, detonated too close to a border, on a day that the wind is blowing in the wrong direction, would be devastating for innocent people living in nearby allied countries.

While older targeting data is declassified, today’s nuclear targets have shifted. And the public is kept in the dark about how many countries may be at risk of becoming collateral damage in the event of a nuclear attack anywhere in their region of the globe.

Most people believe that no leader would intentionally fire a nuke at another country. And perhaps no sane leader would intentionally do so – although that’s not something to count on as political tensions increase – but there’s a very good chance that one of the nuclear powers will accidentally launch a nuke in response to inaccurate data.

The accidental launch of a nuclear weapon is something that has almost happened many times in the past, and it only takes one nuclear weapon to kill hundreds of thousands of people. Yet almost 30 years after the Cold War ended, 15,000 nuclear weapons remain, with more than 90% of them split between the U.S. and Russia.

Meanwhile, relations are deteriorating between Russia, China, and the US/NATO. This doesn’t just increase the risk of intentional nuclear war; it increases the likelihood that a country will misinterpret bad satellite or radar data and launch a retaliatory strike in response to a false alarm.

Many nuclear and military experts, including Former Secretary of Defense William Perry, warn that the threat of a nuclear attack is greater now than it was during the Cold War.

Major international developments have occurred in the two years since the last NATO meeting. In a recent op-ed in Newsweek, NATO Secretary General Jens Stoltenberg reviewed many of the problems that the alliance must address:

“There is no denying that the world has become more dangerous in recent years. Moscow’s actions in Ukraine have shaken the European security order. Turmoil in the Middle East and North Africa has unleashed a host of challenges, not least the largest refugee and migrant crisis since the Second World War. We face security challenges of a magnitude and complexity much greater than only a few years ago. Add to that the uncertainty surrounding “Brexit”—the consequences of which are unclear—and it is easy to be concerned about the future.”

These are serious problems indeed, but 15,000 nuclear weapons in the hands of just a couple of leaders only adds to global instability. If NATO is serious about increasing security, then we must significantly decrease the number of nuclear weapons – and the number of nuclear targets – around the world.

Deterrence is an important defensive posture, and this is not a call for NATO to encourage countries to eliminate all nuclear weapons. Instead, it is a reminder that we must learn from the past. Those who are enemies today could be friends in a safer, more stable future, but that hope is lost if a nuclear war ever occurs.

The Problem with Brexit: 21st Century Challenges Require International Cooperation

Retreating from international institutions and cooperation will handicap humanity as we tackle our greatest problems.

The UK’s referendum in favor of leaving the EU and the rise of nationalist ideologies in the US and Europe are worrying on multiple fronts. Nationalism espoused by the likes of Donald Trump (U.S.), Nigel Farage (U.K.), Marine Le Pen (France), and Heinz-Christian Strache (Austria) may lead to a resurgence of some of the worst problems of the first half of the 20th century. These leaders are calling for policies that would constrain trade and growth, encourage domestic xenophobia, and increase rivalries and suspicion between countries.

Even more worrying, however, is the bigger picture. In the 21st century, our greatest challenges will require global solutions. Retreating from international institutions and cooperation will handicap humanity’s ability to address our most pressing upcoming challenges.

The Nuclear Age

Many of the challenges of the 20th century – issues of public health, urbanization, and economic and educational opportunity – were national problems that could be dealt with at the national level. July 16th, 1945 marked a significant turning point. On that day, American scientists tested the first nuclear weapon in the New Mexican desert. For the first time in history, individual human beings had within their power a technology capable of destroying all of humanity.

Thus, nuclear weapons became the first truly global problem. Weapons with such a destructive force were of interest to every nation and person on the planet. Only international cooperation could produce a solution.

Despite a dangerous arms race between the US and the Soviet Union — including a history of close calls — humanity survived 70 years without a catastrophic global nuclear war. This was in large part due to international institutions and agreements that discouraged wars and further proliferation.

But what if we replayed the Cold War without the U.N. mediating disputes between nuclear adversaries? And without the bitter taste of the Second World War fresh in the minds of all who participated? Would we still have the same benign outcome?

We cannot say what such a revisionist history would look like, but the chances of a catastrophic outcome would surely be higher.

21st Century Challenges

The 21st century will only bring more challenges that are global in scope, requiring more international solutions. Climate change by definition requires a global solution since carbon emissions will lead to global warming regardless of which countries emit them.

In addition, continued development of new powerful technologies — such as artificial intelligence, biotechnologies, and nanotechnologies — will put increasingly large power in the hands of the people who develop and control them. These technologies have the potential to improve the human condition and solve some of our biggest problems. Yet they also have the potential to cause tremendous damage if misused.

Whether through accident, miscalculation, or madness, misuse of these powerful technologies could pose a catastrophic or even existential risk. If a Cold-War-style arms race for new technologies occurs, it is only a matter of time before a close call becomes a direct hit.

Working Together

As President Obama said in his speech at Hiroshima, “Technological progress without an equivalent progress in human institutions can doom us.”

Over the next century, technological progress can greatly improve the human experience. To ensure a positive future, humanity must find the wisdom to handle the increasingly powerful technologies that it is likely to produce and to address the global challenges that are likely to arise.

Experts have blamed the resurgence of nationalism on anxieties over globalization, multiculturalism, and terrorism. Whatever anxieties there may be, we live in a global world where our greatest challenges are increasingly global, and we need global solutions. If we resist international cooperation, we will battle these challenges with one arm, perhaps both, tied behind our back.

Humanity must learn to work together to tackle the global challenges we face. Now is the time to strengthen international institutions, not retreat from them.

U.S. Conference of Mayors Supports Cambridge Nuclear Divestment

The U.S. Conference of Mayors (USCM) unanimously adopted a resolution at their annual meeting this week in support of nuclear reduction. The resolution called for the next U.S. President to:

  • “pursue diplomacy with other nuclear-armed states,”
  • “participate in negotiations for the elimination of nuclear weapons,” and
  • “cut nuclear weapons spending  and redirect funds to meet the needs of cities.”

In addition, the USCM resolution also praised Cambridge Mayor Denise Simmons and the city council members for their actions to divest from nuclear weapons:

“The USCM commends Mayor Denise Simmons and the Cambridge City Council for demonstrating bold leadership at the municipal level by unanimously deciding on April 2, 2016, to divest their one-billion-dollar city pension fund from all companies involved in production of nuclear weapons systems and in entities investing in such companies.”

In an email to FLI, Mayor Simmons expressed her gratitude to the USCM, saying,

“I am honored to receive such commendation from the USCM, and I hope this is a sign that nuclear divestment is just getting started in the United States. Divestment is an important tool that politicians and citizens alike can use to send a powerful message that we want a world safe from nuclear weapons.”

The resolution warns that relations between the U.S. and other nuclear-armed countries are increasingly tenuous. It states, “the nuclear-armed countries are edging ever closer to direct military confrontation in conflict zones around the world.”

Moreover, while the Obama administration has overseen a significant reduction of the nuclear stockpile, nuclear countries still hold over 15,000 nuclear weapons, with the U.S. possessing nearly half. Furthermore, the President’s budget plans call for $1 trillion to be spent on new nuclear weapons over the next three decades.

These new weapons will include the B61-12, which has increased accuracy and a range of optional warhead sizes. The smallest warhead the B61-12 will carry has roughly one-fiftieth the yield of the bomb dropped on Hiroshima. With smaller explosions and increased accuracy, many experts worry that we may be more likely to use the new nukes.

The USCM would rather see the U.S. government invest more of that $1 trillion back into its cities and communities.

 

What is the USCM?

The USCM represents cities with populations greater than 30,000, for a total of over 1,400 cities. Resolutions that they adopt at their annual meeting become official policy for the whole group.

Only 313 American cities are members of the international group, Mayors for Peace, but for 11 years now, the USCM has adopted nuclear resolutions that support Mayors for Peace.

Mayors for Peace was established by Hiroshima Mayor Takeshi Araki in 1982 to decrease the risks of nuclear weapons. To sign on, a mayor must support the elimination of nuclear weapons. In 2013, Mayors for Peace established their 2020 Vision Campaign, which seeks to eliminate nuclear weapons by 2020. And as of June 1, 2016, the group counted over 7,000 member cities from over 160 countries. They hope to have 10,000 member cities by 2020.

The USCM’s official press release about this nuclear resolution also added:

“This year, for the first time, New York City Mayor Bill de Blasio and Washington, DC Mayor Muriel Bowser added their names as co-sponsors of the Mayors for Peace resolution.”

Read the official resolution here, along with a complete list of the 23 mayors who sponsored it.

 

Watch as Mayor Simmons announces the Cambridge decision to divest from nuclear weapons at the MIT nuclear conference:

 

Existential Risks Are More Likely to Kill You Than Terrorism

People tend to worry about the wrong things.

According to a 2015 Gallup Poll, 51% of Americans are “very worried” or “somewhat worried” that a family member will be killed by terrorists. Another Gallup Poll found that 11% of Americans are afraid of “thunder and lightning.” Yet the average person is at least four times more likely to die from a lightning bolt than a terrorist attack.

Similarly, statistics show that people are more likely to be killed by a meteorite than a lightning strike (here’s how). Yet I suspect that most people are less afraid of meteorites than lightning. In these examples and so many others, we tend to fear improbable events while often dismissing more significant threats.

One finds a similar reversal of priorities when it comes to the worst-case scenarios for our species: existential risks. These are catastrophes that would either annihilate humanity or permanently compromise our quality of life. While risks of this sort are often described as “high-consequence, improbable events,” a careful look at the numbers by leading experts in the field reveals that they are far more likely than most of the risks people worry about on a daily basis.

Let’s use the probability of dying in a car accident as a point of reference. Dying in a car accident is more probable than any of the risks mentioned above. According to the 2016 Global Challenges Foundation report, “The annual chance of dying in a car accident in the United States is 1 in 9,395.” This means that if the average person lived 80 years, the odds of dying in a car crash will be 1 in 120. (In percentages, that’s 0.01% per year, or 0.8% over a lifetime.)

Compare this to the probability of human extinction stipulated by the influential “Stern Review on the Economics of Climate Change,” namely 0.1% per year.* A human extinction event could be caused by an asteroid impact, supervolcanic eruption, nuclear war, a global pandemic, or a superintelligence takeover. Although this figure appears small, over time it can grow quite significant. For example, it means that the likelihood of human extinction over the course of a century is 9.5%. It follows that your chances of dying in a human extinction event are nearly 10 times higher than dying in a car accident.
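To make the arithmetic explicit: a fixed annual probability compounds over a time span as one minus the chance of the event never happening in any year. The short Python sketch below is my own illustration of that calculation (the function and variable names are invented, not taken from the article or the reports it cites); it reproduces the roughly 1-in-120 lifetime car-crash odds and the 9.5% per-century extinction figure from the annual rates quoted above.

def cumulative_risk(annual_probability, years):
    """Chance that an event with a fixed annual probability occurs at least once in 'years' years."""
    return 1 - (1 - annual_probability) ** years

# Annual chance of dying in a US car accident (2016 Global Challenges Foundation report)
car_crash_annual = 1 / 9395
# Annual human-extinction probability assumed for comparison in the Stern Review
extinction_annual = 0.001

lifetime_car_crash = cumulative_risk(car_crash_annual, 80)    # about 0.0085, i.e. roughly 1 in 120
century_extinction = cumulative_risk(extinction_annual, 100)  # about 0.095, i.e. roughly 9.5%

print(f"Lifetime (80-year) car-crash risk: {lifetime_car_crash:.2%}")
print(f"Century-long extinction risk: {century_extinction:.2%}")

Run over an 80-year lifetime instead of a full century, the same 0.1% annual assumption gives roughly a 7.7% chance, which is where the “nearly 10 times higher” comparison with car accidents comes from.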

But how seriously should we take the 9.5% figure? Is it a plausible estimate of human extinction? The Stern Review is explicit that the number isn’t based on empirical considerations; it’s merely a useful assumption. The scholars who have considered the evidence, though, generally offer probability estimates higher than 9.5%. For example, a 2008 survey taken during a Future of Humanity Institute conference put the likelihood of extinction this century at 19%. The philosopher and futurist Nick Bostrom argues that it “would be misguided” to assign a probability of less than 25% to an existential catastrophe before 2100, adding that “the best estimate may be considerably higher.” And in his book Our Final Hour, Sir Martin Rees claims that civilization has a fifty-fifty chance of making it through the present century.

My own view more or less aligns with Rees’, given that future technologies are likely to introduce entirely new existential risks. A discussion of existential risks five decades from now could be dominated by scenarios that are unknowable to contemporary humans, just like nuclear weapons, engineered pandemics, and the possibility of “grey goo” were unknowable to people in the fourteenth century. This suggests that Rees may be underestimating the risk, since his figure is based on an analysis of currently known technologies.

If these estimates are believed, then the average person is roughly 19, 25, or even 50 times more likely to encounter an existential catastrophe than to perish in a car accident, depending on which estimate one accepts.

These figures vary so much in part because estimating the risks associated with advanced technologies requires subjective judgments about how future technologies will develop. But this doesn’t mean that such judgments must be arbitrary or haphazard: they can still be based on technological trends and patterns of human behavior. In addition, other risks like asteroid impacts and supervolcanic eruptions can be estimated by examining the relevant historical data. For example, we know that an impactor capable of killing “more than 1.5 billion people” occurs every 100,000 years or so, and supereruptions happen about once every 50,000 years.
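The same kind of back-of-the-envelope conversion turns those recurrence intervals into approximate per-century odds. The sketch below is again my own illustration (the function name is invented): it treats each hazard as arriving at a constant average rate, which is a simplifying assumption rather than anything claimed in the article.

import math

def chance_in_window(recurrence_interval_years, window_years=100):
    """Approximate probability of at least one occurrence in the window, assuming a constant average event rate."""
    return 1 - math.exp(-window_years / recurrence_interval_years)

print(f"Impactor (~1 per 100,000 years), chance per century: {chance_in_window(100_000):.2%}")
print(f"Supereruption (~1 per 50,000 years), chance per century: {chance_in_window(50_000):.2%}")

These natural hazards come out at only a fraction of a percent per century, far below the technology-driven estimates discussed above, which is consistent with the view that emerging technologies, rather than nature, dominate today’s existential risk.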

Nonetheless, it’s noteworthy that all of the above estimates agree that people should be more worried about existential risks than any other risk mentioned.

Yet how many people are familiar with the concept of an existential risk? How often do politicians discuss large-scale threats to human survival in their speeches? Some political leaders — including one of the candidates currently running for president — don’t even believe that climate change is real. And there are far more scholarly articles published about dung beetles and Star Trek than existential risks. This is a very worrisome state of affairs. Not only are the consequences of an existential catastrophe irreversible — that is, they would affect everyone living at the time plus all future humans who might otherwise have come into existence — but the probability of one happening is far higher than most people suspect.

Given the maxim that people should always proportion their fears to the best available evidence, the rational person should worry about the above risks in the following order (from least to most risky): terrorism, lightning strikes, meteorites, car crashes, and existential catastrophes. The psychological fact is that our intuitions often fail to track the dangers around us. So, if we want to ensure a safe passage of humanity through the coming decades, we need to worry less about the Islamic State and al-Qaeda, and focus more on the threat of an existential catastrophe.

*Editor’s note: To clarify, the 0.1% from the Stern Review is used here purely for comparison to the numbers calculated in this article. The number was an assumption made at Stern and has no empirical backing. You can read more about this here.

Top Scientists Call for Obama to Take Nuclear Missiles off Hair-Trigger Alert

The following post was written by Lisbeth Gronlund, co-director of the Global Security Program for the Union of Concerned Scientists.

More than 90 prominent US scientists, including 20 Nobel laureates and 90 National Academy of Sciences members, sent a letter to President Obama yesterday urging him to take US land-based nuclear missiles off hair-trigger alert and remove launch-on-warning options from US warplans.

As we’ve discussed previously on this blog and elsewhere, keeping these weapons on hair-trigger alert so they can be launched within minutes creates the risk of a mistaken launch in response to false warning of an incoming attack.

This practice dates to the Cold War, when US and Soviet military strategists feared a surprise first-strike nuclear attack that could destroy land-based missiles. By keeping missiles on hair-trigger alert, they could be launched before they could be destroyed on the ground. But as the letter notes, removing land-based missiles from hair-trigger alert “would still leave many hundreds of submarine-based warheads on alert—many more than necessary to maintain a reliable and credible deterrent.”

“Land-based nuclear missiles on high alert present the greatest risk of mistaken launch,” the letter states. “National leaders would have only a short amount of time—perhaps 10 minutes—to assess a warning and make a launch decision before these missiles could be destroyed by an incoming attack.”

Minuteman III launch officers (Source: US Air Force)

Past false alarms

Over the past few decades there have been numerous U.S. and Russian false alarms—due to technical failures, human errors and misinterpretations of data—that could have prompted a nuclear launch. The scientists’ letter points out that today’s heightened tension between the United States and Russia increases that risk.

The scientists’ letter reminds President Obama that he called for taking nuclear-armed missiles off hair-trigger alert after being elected president. During his 2008 presidential campaign, he also noted, “[K]eeping nuclear weapons ready to launch on a moment’s notice is a dangerous relic of the Cold War. Such policies increase the risk of catastrophic accidents or miscalculation.”

Other senior political and military officials have also called for an end to hair-trigger alert.

The scientists’ letter comes at an opportune time, since the White House is considering what steps the president could take in his remaining time in office to reduce the threat posed by nuclear weapons.

The Collective Intelligence of Women Could Save the World

Neil deGrasse Tyson was once asked about his thoughts on the cosmos. In a slow, gloomy voice, he intoned, “The universe is a deadly place. At every opportunity, it’s trying to kill us. And so is Earth. From sinkholes to tornadoes, hurricanes, volcanoes, tsunamis.” Tyson humorously described a very real problem: the universe is a vast obstacle course of catastrophic dangers. Asteroid impacts, supervolcanic eruptions, and global pandemics represent existential risks that could annihilate our species or irreversibly catapult us back into the Stone Age.

But nature is the least of our worries. Today’s greatest existential risks stem from advanced technologies like nuclear weapons, biotechnology, synthetic biology, nanotechnology, and even artificial superintelligence. These tools could trigger a disaster of unprecedented proportions. Exacerbating this situation are “threat multipliers” — issues like climate change and biodiversity loss, which, while devastating in their own right, can also lead to an escalation of terrorism, pandemics, famines, and potentially even the use of WTDs (weapons of total destruction).

The good news is that none of these existential threats are inevitable. Humanity can overcome every single known danger. But accomplishing this will require the smartest groups working together for the common good of human survival.

So, how do we ensure that we have the smartest groups working to solve the problem?

Get women involved.

A 2010 study, published in Science, made two unexpected discoveries. First, it established that groups can exhibit a collective intelligence (or c factor). Most of us are familiar with general human intelligence, which describes a person’s intelligence level across a broad spectrum of cognitive tasks. It turns out groups also have a similar “collective” intelligence that determines how successfully they can navigate these cognitive tasks. This is an important finding because “research, management, and many other kinds of tasks are increasingly accomplished by groups — working both face-to-face and virtually.” To optimize group performance, we need to understand what makes a group more intelligent.

This leads to the second unexpected discovery. Intuitively, one might think that groups with really smart members will themselves be really smart. This is not the case. The researchers found no strong correlation between the average intelligence of members and the collective intelligence of the group. Similarly, one might suspect that the group’s IQ will increase if a member of the group has a particularly high IQ. Surely a group with Noam Chomsky will perform better than one in which he’s replaced by Joe Schmo. But again, the study found no strong correlation between the smartest person in the group and the group’s collective smarts.

Instead, the study found three factors linked to group intelligence. The first pertains to the “social sensitivity” of group members, measured by the “Reading the Mind in the Eyes” test. This term refers to one’s ability to infer the emotional states of others by picking up on certain nonverbal cues. The second concerns the number of speaking turns taken by members of the group. “In other words,” the authors write, “groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking.”

The last factor relates to the number of female members: the more women in the group, the higher the group’s IQ. As the authors of the study explained, “c was positively and significantly correlated with the proportion of females in the group.” If you find this surprising, you’re not alone: the authors themselves didn’t anticipate it, nor were they looking for a gender effect.

Why do women make groups smarter? The authors suggest that it’s because women are, generally speaking, more socially sensitive than men, and the link between social sensitivity and collective intelligence is statistically significant.

Another possibility is that men tend to dominate conversations more than women, which can disrupt the flow of turn-taking. Multiple studies have shown that women are interrupted more often than men; that when men interrupt women, it’s often to assert dominance; and that men are more likely to monopolize professional meetings. In other words, there’s robust empirical evidence for what the writer and activist Rebecca Solnit describes as “mansplaining.”

These data have direct implications for existential riskology:

Given the unique, technogenic dangers that haunt the twenty-first century, we need the smartest groups possible to tackle the problems posed by existential risks. We need groups that include women.

Yet the existential risk community is marked by a staggering imbalance of gender participation. For example, a random sample of 40 members of the “Existential Risk” group on Facebook (of which I am an active member) included only 3 women. Similar asymmetries can be found in many of the top research institutions working on global challenges.

This dearth of female scholars constitutes an existential emergency. If the studies above are correct, then the groups working on existential risk issues are not nearly as intelligent as they could be.

The obvious next question is: How can the existential risk community rectify this potentially dangerous situation? Some answers are implicit in the data above: for example, men could make sure that women have a voice in conversations, aren’t interrupted, and aren’t pushed to the sidelines when men monopolize the discussion.

Leaders of existential risk studies should also strive to ensure that women are adequately represented at conferences, that their work is promoted to the same extent as men’s, and that the environments in which existential risk scholarship takes place are free of discrimination. Other factors that have been linked to women avoiding certain fields include the absence of visible role models, the pernicious influence of gender stereotypes, the onerous demands of childcare, a lack of encouragement, and the statistical preference of women for professions that focus on “people” rather than “things.”

No doubt there are other factors not mentioned, and other strategies that could be identified. What can those of us already ensconced in the field do to achieve greater balance? What changes can the community make to foster more diversity? How can we most effectively maximize the collective intelligence of teams working on existential risks?

As Sir Martin Rees writes in Our Final Hour, “what happens here on Earth, in this century, could conceivably make the difference between a near eternity filled with ever more complex and subtle forms of life and one filled with nothing but base matter.” Future generations may very well thank us for taking the link between collective intelligence and female participation seriously.

Note: there’s obviously a moral argument for ensuring that women have equal opportunities, get paid the same amount as men, and don’t have to endure workplace discrimination. The point of this article is to show that even if one brackets moral considerations, there are still compelling reasons for making the field more diverse. (For more, see chapter 14 of my book, which lays out a similar argument.)

How Could a Failed Computer Chip Lead to Nuclear War?

The US early warning system is on watch 24/7, looking for signs of a nuclear missile launched at the United States. As a highly complex system with links to sensors around the globe and in space, it relies heavily on computers to do its job. So, what happens if there is a glitch in the computers?

Between November 1979 and June 1980, those computers led to several false warnings of all-out nuclear attack by the Soviet Union—and a heart-stopping middle-of-the-night telephone call.

NORAD command post, c. 1982. (Source: US National Archives)

I described one of these glitches previously. That one, in 1979, was actually caused by human and systems errors: A technician put a training tape in a computer that then—inexplicably—routed the information to the main US warning centers. The Pentagon’s investigator stated that they were never able to replicate the failure mode to figure out what happened.

Just months later, one of the millions of computer chips in the early warning system went haywire, leading to incidents on May 28, June 3, and June 6, 1980.

The June 3 “attack”

By far the most serious of the computer chip problems occurred in the early morning of June 3, when the main US warning centers all received notification of a large incoming nuclear strike. The president’s National Security Advisor Zbigniew Brzezinski woke at 3 am to a phone call telling him a large nuclear attack on the United States was underway and he should prepare to call the president. He later said he had not woken up his wife, assuming they would all be dead in 30 minutes.

Like the November 1979 glitch, this one led NORAD to convene a high-level “Threat Assessment Conference,” which includes the Chair of the Joint Chiefs of Staff and is just below the level that involves the president. Taking this step sets many things in motion to increase the survivability of U.S. strategic forces and command and control systems. Air Force bomber crews at bases around the US got in their planes and started the engines, ready for take-off. Missile launch officers were notified to stand by for launch orders. The Pacific Command’s Airborne Command Post took off from Hawaii. The National Emergency Airborne Command Post at Andrews Air Force Base taxied into position for a rapid takeoff.

The warning centers, by comparing warning signals they were getting from several different sources, were able to determine within a few minutes they were seeing a false alarm—likely due to a computer glitch. The specific cause wasn’t identified until much later. At that point, a Pentagon document matter-of-factly stated that a 46-cent computer chip “simply wore out.”

Short decision times increase nuclear risks

As you’d hope, the warning system has checks built into it. So despite the glitches that caused false readings, the warning officers were able to catch the error in the short time available before the president would have to make a launch decision.

We know these checks are pretty good because there have been a surprising number of incidents like these, and so far none have led to nuclear war.

But we also know they are not foolproof.

The risk is compounded by the US policy of keeping its missiles on hair-trigger alert, poised to be launched before an incoming attack could land. Maintaining an option of launching quickly on warning of an attack leaves very little time for sorting out confusing signals and avoiding a mistaken launch.

For example, these and other unexpected incidents have led to considerable confusion on the part of the operators. What if the confusion had persisted longer? What might have happened if something else had been going on that suggested the warning was real? In his book, My Journey at the Nuclear Brink, former Secretary of Defense William Perry asks what might have happened if these glitches “had occurred during the Cuban Missile Crisis, or a Mideast war?”

There might also be unexpected coincidences. What if, for example, US sensors had detected an actual Soviet missile launch around the same time? In the early 1980s the Soviets were test launching 50 to 60 missiles per year—more than one per week. Indeed, US detection of the test of a Soviet submarine-launched missile had led to a Threat Assessment Conference just weeks before this event.

Given enough time to analyze the data, warning officers on duty would be able to sort out most false alarms. But the current system puts incredible time pressure on the decision process, giving warning officers and then more senior officials only a few minutes to assess the situation. If they decide the warning looks real, they would alert the president, who would have perhaps 10 minutes to decide whether to launch.

Keeping missiles on hair-trigger alert and requiring a decision within minutes on whether or not to launch is something like tailgating on the freeway. Leaving only a small distance between you and the car in front of you reduces the time you have to react. You may be able to get away with it for a while, but the longer you put yourself in that situation, the greater the chance that some unforeseen situation, or combination of events, will lead to disaster.

In his book, William Perry makes a passionate case for taking missiles off alert:

“These stories of false alarms have focused a searing awareness of the immense peril we face when in mere minutes our leaders must make life-and-death decisions affecting the whole planet. Arguably, short decision times for response were necessary during the Cold War, but clearly those arguments do not apply today; yet we are still operating with an outdated system fashioned for Cold War exigencies.

“It is time for the United States to make clear the goal of removing all nuclear weapons everywhere from the prompt-launch status in which nuclear-armed ballistic missiles are ready to be launched in minutes.”

Wheel of Near Misfortune

 

To see what other incidents have increased the risks posed by nuclear weapons over the years, visit our new Wheel of Near Misfortune.

Will We Use the New Nuclear Weapons?


B61-12 gravity bomb just before it penetrates the ground in a test last year. Photo courtesy National Nuclear Security Administration.

In 2015, the Pentagon successfully tested the B61-12 nuclear gravity bomb as part of a $1 trillion effort to make the nuclear arsenal more accurate and lethal. This redesigned weapon is equipped with “dial-a-yield” technology, which allows the military to adjust its destructive force before use, anywhere from 0.3 to 50 kilotons of TNT equivalent. Many government officials believe not only that rebuilding this bomb violates the Non-Proliferation Treaty, but also that it makes the U.S. more likely to use a nuclear weapon against targets.

The B61 was one of the primary thermonuclear weapons that the U.S. built during the Cold War. At the time, the U.S. and its European allies deployed American tactical (short-range) and strategic (long-range) nuclear weapons to counter the Soviet threat. Tactical weapons are smaller, shorter-range weapons, including high-caliber artillery, ground-to-ground missiles, combat support aircraft, and sea-based torpedoes, missiles, and anti-submarine weapons.

Modifications (or “mods”) of the B61 were designed to be both strategic and tactical. For example, the B61-4 is a tactical mod with a low-yield range of 0.3 to 0.5 kilotons, while the strategic B61-7 can carry yields ranging from 10 to 360 kilotons. The B61-11, the most recent of the strategic B61 mods, carries only a single yield of 400 kilotons. This weapon was designed in 1997 as a “bunker buster” — a nuclear weapon with limited earth-penetration, designed to bore meters into the ground before exploding.

The B61-12 is all of these weapons in one. The yield range of this new nuclear weapon spans that of the B61-4 up to the low end of the B61-7. And while the B61-12 won’t be as powerful as the B61-11, it will feature comparable bunker-busting capabilities with greatly increased accuracy. Boeing developed four maneuverable fins for the new gravity bomb that will work with the new electronics system to zero in on targets—even those deep underground, such as tunnels and weapons bunkers.

The image of a nuclear explosion that springs to mind most often is either of the bombs dropped on Japan or the massive, 50 megaton Tsar Bomba that the Soviets tested in the 60s. The bombs dropped on Hiroshima and Nagasaki were “only” 15 and 20 kilotons, respectively, and they killed over 250,000 people. The B61-12 is a completely different beast.

At 0.3 kilotons, the smallest yield for the B61-12 is 50 times smaller than the bomb dropped on Hiroshima, while the maximum yield of 50 kilotons is more than three times as large. This range of precise, adjustable destructive power is unlike anything we’ve seen. As the Director of the Nuclear Information Project at the Federation of American Scientists, Hans Kristensen, notes in the National Interest, “We do not have a nuclear-guided bomb in our arsenal today… It [the B61-12] is a new weapon.”

As scholar Robert C. Aldridge explains, quoted in the same National Interest article, “Making a weapon twice as accurate has the same effect on lethality as making the warhead eight times as powerful. Phrased another way, making the missile twice as precise would only require one-eighth the explosive power to maintain the same lethality.”
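
Aldridge’s rule of thumb matches the standard “counter-military potential” approximation, in which a weapon’s lethality against a hardened target scales roughly as its yield to the two-thirds power divided by the square of its accuracy (CEP, the circular error probable). The short sketch below works through that arithmetic; the formula is a textbook heuristic assumed here for illustration, not one given in the article.

# Rough "counter-military potential" heuristic: K ~ yield**(2/3) / CEP**2,
# where CEP (circular error probable) measures accuracy; smaller is better.
# This is a commonly used approximation, assumed here for illustration.
def lethality(yield_kt, cep):
    return yield_kt ** (2 / 3) / cep ** 2

base = lethality(yield_kt=100, cep=1.0)

# Halving the CEP (twice as accurate) quadruples K, the same gain as
# making the warhead eight times more powerful, since 8^(2/3) = 4.
print(lethality(100, 0.5) / base)      # 4.0
print(lethality(800, 1.0) / base)      # 4.0

# Equivalently, a twice-as-accurate weapon needs only ~1/8 the yield
# to deliver the same lethality.
print(lethality(100 / 8, 0.5) / base)  # ~1.0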

This is not your grandparent’s nuclear bomb.

Of course, these new modifications won’t happen for free. The B61-12 is the first of five new nuclear warheads the government plans to build over the next three decades, at a total estimated cost (including delivery systems) of $1 trillion. This is not only a great deal of money; the government also justifies these smaller weapons as both safer and more usable.

According to Zachary Keck from the National Interest, “This combination of accuracy and low-yield make the B61-12 the most usable nuclear bomb in America’s arsenal.” Nuclear attack simulations show that if the U.S. were to counterstrike against China’s ICBM silos using a high-yield weapon, 3-4 million people could be killed. However, with a low-yield nuclear weapon, this death toll could drop to as few as 700.* With casualties this low, using a nuclear weapon becomes thinkable for the first time since the 1940s.

The government has scheduled the production of 400 to 500 B61-12s over the next 20 years. However, production has already been postponed once, from 2017 to 2020, causing the price per bomb to double not once but twice: from $2 million to $4 million, and again to $8 million. Further delays are anticipated, and the cost is expected to rise again, to $10 million per bomb. According to Hans Kristensen and Robert Norris of the Bulletin of the Atomic Scientists, “The weapon’s overall price tag is expected to exceed $10 billion, with each B61-12 estimated to cost more than the value of its weight in gold.”

In 2009, President Obama pledged to pursue a “nuclear-free world” in Prague and was awarded the Nobel Peace Prize. Though the nuclear stockpile has been reduced, rebuilding this warhead as the first self-guided nuclear bomb makes the B61-12 a new addition to the nuclear arsenal.

According to an article in the New York Times, James N. Miller, who helped establish this plan before leaving his post as under secretary of defense for policy in 2014, sees the more accurate weapon as a step in the right direction for deterrence. “Though not everyone agrees, I think it’s the right way to proceed,” Mr. Miller said, citing the goal of “minimizing civilian casualties near foreign military targets.” General James E. Cartwright, also quoted in the Times article, agreed that these mini nuclear weapons are useful upgrades, but “what going smaller does,” he acknowledged, “is to make the weapon more thinkable.”

Ellen O. Tauscher, a former under secretary of state for arms control who was also quoted in the Times article, disagreed: “I think there’s a universal sense of frustration. Somebody has to get serious. We’re spending billions of dollars on a status quo that doesn’t make us any safer.”

 

*Editor’s note: It is unclear whether cutting casualties from millions to thousands would greatly reduce the adversary’s desire to counterattack, given historical reactions to thousands killed at Pearl Harbor or September 11.