Podcast: Beneficial AI and Existential Hope in 2018

For most of us, 2017 has been a roller coaster, from increased nuclear threats to incredible advancements in AI to crazy news cycles. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. In this episode, the FLI team discusses the past year and the momentum we’ve built, including: the Asilomar Principles, our 2018 AI safety grants competition, the recent Long Beach workshop on Value Alignment, and how we’ve honored one of civilization’s greatest heroes.

Full transcript:

Ariel: I’m Ariel Conn with the Future of Life Institute. As you may have noticed, 2017 was quite the dramatic year. In fact, without me even mentioning anything specific, I’m willing to bet that you already have some examples forming in your mind of what a crazy year this was. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. But I’ll let Max Tegmark, president of FLI, tell you a little more about that.

Max: I think it’s important when we reflect back on the year’s news to understand how things are all connected. For example, the drama we’ve been following with Kim Jong Un and Donald Trump and Putin with nuclear weapons is really very connected to all the developments in artificial intelligence, because in both cases we have a technology which is so powerful that it’s not clear that we humans have sufficient wisdom to manage it well. And that’s why I think it’s so important that we all continue working towards developing this wisdom further, to make sure that we can use these powerful technologies like nuclear energy, like artificial intelligence, like biotechnology and so on to really help rather than to harm us.

Ariel: And it’s worth remembering that part of what made this such a dramatic year was that there were also some really positive things that happened. For example, in March of this year, I sat in a sweltering room in New York City, as a group of dedicated, caring individuals from around the world discussed how they planned to convince the United Nations to ban nuclear weapons once and for all. I don’t think anyone in the room that day realized that not only would they succeed, but by December of this year, the International Campaign to Abolish Nuclear Weapons, led by Beatrice Fihn, would be awarded the Nobel Peace Prize for their efforts. And while we did what we could to help that effort, our own big story had to be the Beneficial AI Conference that we hosted in Asilomar, California. Many of us at FLI were excited to talk about Asilomar, but I’ll let Anthony Aguirre, Max, and Victoria Krakovna start.

Anthony: I would say pretty unquestionably the big thing that I felt was most important, and that I felt most excited about, was the big meeting in Asilomar and, centrally, putting together the Asilomar Principles.

Max: I’m going to select the Asilomar conference that we organized early this year, whose output was the 23 Asilomar Principles, which have since been signed by over a thousand AI researchers around the world.

Vika: I was really excited about the Asilomar conference that we organized this year. This was the sequel to FLI’s Puerto Rico Conference, which was at the time a real game changer in terms of making AI safety more mainstream and connecting people working in AI safety with the machine learning community and integrating those two. I think Asilomar did a great job of continuing to build on that.

Max: I’m very excited about this because I feel that it really has helped mainstream AI safety work. Not just near-term AI safety stuff, like how to transform today’s buggy and hackable computers into robust systems that you can really trust, but also mainstream larger issues. The Asilomar Principles actually contain the word superintelligence, contain the phrase existential risk, contain the phrase recursive self-improvement, and yet they have been signed by really a who’s who in AI. So from now on, it’s impossible for anyone to dismiss these kinds of concerns, this kind of safety research, by saying that’s just people who have no clue about AI.

Anthony: That was a process that started in 2016, brainstorming at FLI and then with the wider community, and then getting rounds of feedback and so on. It was exciting both to see how much cohesion there was in the community and how much support there was for getting behind some sort of principles governing AI. But also, just to see the process unfold, because one of the things that I’m often quite frustrated about is this sense that there’s this technology that’s just unrolling like a steamroller, and it’s going to go where it’s going to go, and we don’t have any agency over where that is. So to see people really putting thought into what is the world we would like there to be in ten, fifteen, twenty, fifty years, and how we can distill what it is that we like about that world into principles like these… that felt really, really good. It felt like an incredibly useful thing for society as a whole, and in this case for the people who are deeply engaged with AI, to be thinking through in a real way, rather than just how can we put out the next fire, or how can we turn the progress one more step forward; to really think about the destination.

Ariel: But what’s that next step? How do we transition from Principles that we all agree on to actions that we can also all get behind? Jessica Cussins joined FLI later in the year, but when asked what she was excited about as far as FLI was concerned, she immediately mentioned the implementation of things like the Asilomar Principles.

Jessica: I’m most excited about the developments we’ve seen over the last year related to safe, beneficial and ethical AI. I think FLI has been a really important player in this. We had the beneficial AI conference in January that resulted in the Asilomar AI Principles. It’s been really amazing to see how much traction those principles have gotten and to see a growing consensus around the importance of being thoughtful about the design of AI systems, the challenges of algorithmic bias, data control and manipulation, and accountability and governance. So the thing I’m most excited about right now is the growing number of initiatives we’re seeing around the world related to ethical and beneficial AI.

Anthony: What’s been great to see is the development of ideas, both from FLI and from many other organizations, of what policies might be good: what concrete legislative actions there might be, or what standards, organizations or non-profits, agreements between companies and so on might be interesting.

But I think we’re only at the step of formulating those things, and not that much action has been taken anywhere in terms of actually doing them; little bits of legislation here and there. But I think we’re getting to the point where lots of governments, lots of companies, lots of organizations are going to be publishing and creating and passing more and more of these things. Seeing that play out, and working really hard to ensure that it plays out in a way that’s favorable in as many ways, and for as many people, as possible, I think is super important and something we’re excited to do.

Vika: I think that the Asilomar Principles are a great common point for the research community and others to agree on what we are going for, what’s important.

Besides having the principles as an output, the event itself was really good for building connections between different people from interdisciplinary backgrounds, from different related fields who are interested in the questions of safety and ethics.

And we also had this workshop that was adjacent to Asilomar where our grant winners actually presented their work. I think it was great to have a concrete discussion of research and the progress we’ve made so far and not just abstract discussions of the future, and I hope that we can have more such technical events, discussing research progress and making the discussion of AI safety really concrete as time goes on.

Ariel: And what is the current state of AI safety research? Richard Mallah took on the task of answering that question for the Asilomar conference, while Tucker Davey has spent the last year interviewing various FLI grant winners to better understand their work.

Richard: I presented a landscape of technical AI safety research threads. This lays out hundreds of different types of research areas and how they are related to each other: all different areas that need a lot more research going into them than they have today to help keep AI safe, beneficent, and robust. I was really excited to be at Asilomar, to have co-organized it, and that so many really awesome people were there collaborating on these different types of issues, using that landscape I put together as a sort of touchpoint and way to coordinate. That was pretty exciting.

Tucker: I just found it really inspiring interviewing all of our AI grant recipients. It’s been an ongoing project interviewing these researchers and writing about what they’re doing. For me, having only recently gotten involved in AI, it’s been incredibly interesting to get half an hour or an hour with these researchers to talk in depth about their work, and to learn more about a research landscape that I hadn’t been aware of before working at FLI. Being a part of those interviews and learning more about the people we’re working with, the people who are really spearheading AI safety, was really inspiring.

Ariel: And with that, we have a big announcement.

Richard: So, FLI is launching a new grants program in 2018. This time around, we will be focusing more on artificial general intelligence and artificial superintelligence, and on ways that we can do technical research and other kinds of research today, on today’s systems or things that we can analyze, model, or make theoretical progress on today, that are likely to still be relevant when AGI comes about. This is quite exciting, and I’m excited to be part of the ideation and administration around that.

Max: I’m particularly excited about the new grants program that we’re launching for AI safety research. Since AI safety research itself has become so much more mainstream since we did our last grants program three years ago, there’s now quite a bit of funding for a number of near-term challenges. And I feel that we at FLI should focus on things more related to challenges and opportunities from superintelligence, since there is virtually no funding for that kind of safety research. It’s going to be really exciting to see what proposals come in, what research teams get selected by the review panels, and above all, how this kind of research hopefully will contribute to making sure that we can use this powerful technology to create a really awesome future.

Vika: I think this grant program could really build on the impact of our previous grant program. I’m really excited that it’s going to focus more on long-term AI safety research, which is still the most neglected area.

AI safety has really caught on in the past two years, and there’s been a lot more work on it going on, which is great. Part of what this means is that we at FLI can focus more on the long term. The long-term work has also been getting more attention, and this grant program can help us build on that and make sure that the important problems get solved. This is really exciting.

Max: I just came back from spending a week at the NIPS Conference, the biggest artificial intelligence conference of the year. It’s fascinating how rapidly everything is proceeding. AlphaZero has now defeated not just human chess players and Go players, but it has also defeated human AI researchers, who, after spending 30 years handcrafting artificial intelligence software to play computer chess, saw all their work completely crushed by AlphaZero, which just learned to do much better than that from scratch in four hours.

So, AI is really happening, whether we like it or not. The challenge we face is simply to complement that with AI safety research and a lot of good thinking, to make sure that this helps humanity flourish rather than flounder.

Ariel: In the spirit of flourishing, FLI also turned its attention this year to the movement to ban lethal autonomous weapons. While there is great debate around how to define autonomous weapons and whether or not they should be developed, more people tend to agree that the topic should at least come before the UN for negotiations. And so we helped create the video Slaughterbots to help drive this conversation. I’ll let Max take it from here.

Max: Slaughterbots: autonomous little drones that can go anonymously murder people without any human control. Fortunately, they don’t exist yet. We hope that an international treaty is going to keep it that way, even though we almost have the technology to build them already; we just need to integrate and then mass-produce tech we already have. So to help with this, we made this video called Slaughterbots. It was really impressive to see it get over forty million views and make the news throughout the world. I was very happy that Stuart Russell, whom we partnered with on this, also presented it to the diplomats at the United Nations in Geneva when they were discussing whether to move towards a treaty, drawing a line in the sand.

Anthony: Pushing on the autonomous weapons front, it’s been really scary, I would say, to think through that issue. But a little bit like the issue of AI in general, there’s a potential scary side but there’s also a potentially helpful side, in that I think this is an issue that is a little bit tractable; even a relatively small group of committed individuals can make a difference. So I’m excited to see how much movement we can get on the autonomous weapons front. It doesn’t seem at all like a hopeless issue to me, and I hope 2018 will be a turning point for it. It’s kind of flown under the radar, but it really is coming up now, and it will at least be interesting, and hopefully exciting and happy as well, to see how it plays out on the world stage.

Jessica: For 2018, I’m hopeful that we will see the continued growth of the global momentum against lethal autonomous weapons. Already this year, a lot has happened at the United Nations and across communities around the world, including thousands of AI and robotics researchers speaking out and saying they don’t want to see their work used to create these kinds of destabilizing weapons of mass destruction. One thing I’m really excited about for 2018 is to see a louder rallying call for an international ban on lethal autonomous weapons.

Ariel: Yet one of the biggest questions we face when trying to anticipate autonomous weapons, artificial intelligence in general, and even artificial general intelligence is: when? When will these technologies be developed? If we could answer that, then solving problems around those technologies could become both more doable and possibly more pressing. This is an issue Anthony has been considering.

Anthony: Of most interest has been the overall set of projects to predict artificial intelligence timelines and milestones. This is something that I’ve been doing through Metaculus, a prediction website I’ve been a part of, and also through a very small workshop run by the Foresight Institute over the summer, which I took part in. It’s a super important question, because the overall urgency with which we have to deal with certain issues really depends on how far away they are. It’s also an instructive one, in that even posing the questions of what we want to know exactly really forces you to think through what it is that you care about, how you would estimate things, and what different considerations there are in terms of this sort of big question.

We have this sort of big question: when is really powerful AI going to appear? But when you dig into that, what exactly is “really powerful”? What does “appear” mean? Does that mean in an academic setting? Does it mean it becomes part of everybody’s life?

So there are all kinds of nuances to that overall big question that lots of people are asking. Just getting into refining the questions, trying to pin down what it is that we mean and make them exact, so that they can be things that people can make precise, numerical predictions about, has been really interesting and elucidating to me in understanding what all the issues are. I’m excited to see how that continues to unfold as we get more questions and more predictions and more expertise focused on that. I’m also a little bit nervous, because the timelines seem to be getting shorter and shorter, and the urgency of the issue seems to be getting greater and greater. So that’s a bit of a fire under us, I think, to keep acting and keep a lot of intense effort on making sure that as AI gets more powerful, we get better at managing it.

Ariel: One of the current questions AI researchers are struggling with is the problem of value alignment, especially when considering more powerful AI. Meia Chita-Tegmark and Lucas Perry recently co-organized an event to get more people thinking creatively about how to address this.

Meia: So we just organized a workshop about the ethics of value alignment together with a few partner organizations, the Berggruen Institute and also CFAR.

Lucas: This was a workshop that recently took place in California. Just to remind everyone, value alignment is the process by which we bring AI’s actions, goals, and intentions into alignment with what is deemed to be the good, or with human values, preferences, goals, and intentions.

Meia: And we had a fantastic group of thinkers there. We had philosophers. We had social scientists, AI researchers, political scientists. We were all discussing this very important issue of how do we get an artificial intelligence that is aligned to our own goals and our own values.

It was really important to have the perspectives of ethicists and moral psychologists, for example, because this question is not just about the technical aspect of how do you actually implement it, but also about whose values do we want implemented and who should be part of the conversation and who gets excluded and what process do we want to establish to collect all the preferences and values that we want implemented in AI. That was really fantastic. It was a very nice start to what I hope will continue to be a really fruitful collaboration between different disciplines on this very important topic.

Lucas: I think one essential take-away from that was that value alignment is truly something that is interdisciplinary. It’s normally been something which has been couched and understood in the context of technical AI safety research, but value alignment, at least in my view, also inherently includes ethics and governance. It seems that the project of creating beneficial AI through efforts and value alignment can really only happen when we have lots of different people from lots of different disciplines working together on this supremely hard issue.

Meia: I think the issue with AI is something that… first of all, it concerns such a great number of people. It concerns all of us. It will impact, and it already is impacting, all of our experiences. And there are different disciplines that look at this impact in different ways.

Of course, technical AI researchers will focus on developing this technology, but it’s very important to think about how this technology co-evolves with us. For example, I’m a psychologist. I like to think about how it impacts our own psyche, how it impacts the way we act in the world, the way we behave. Stuart Russell many times likes to point out that one danger that can come with very intelligent machines is a subtle one: not necessarily what they will do, but what we will not do because of them. He calls this enfeeblement. What are the capacities that are being stifled because we no longer engage in some of the cognitive tasks that we’re now delegating to AIs?

So that’s just one example of how psychologists can help bring more light and make us reflect on what it is that we want from our machines, how we want to interact with them, and how we want to design them such that they actually empower us rather than enfeeble us.

Lucas: Yeah, I think that one thing essential to FLI’s mission and goal is the generation of beneficial AI. To me, and I think to many other people coming out of this Ethics of Value Alignment workshop, what beneficial exactly entails and what beneficial looks like is still a really open question, both in the short term and in the long term. I’d be really interested in seeing both FLI and other organizations pursue questions in value alignment more vigorously: issues with regard to the ethics of AI, and issues regarding value and the sort of world that we want to live in.

Ariel: And what sort of world do we want to live in? If you’ve made it this far through the podcast, you might be tempted to think that all we worry about is AI. And we do think a lot about AI. But our primary goal is to help society flourish. And so this year, we created the Future of Life Award to be presented to people who act heroically to ensure our survival and hopefully move us closer to that ideal world. Our inaugural award was presented in honor of Vasili Arkhipov, who stood up to his commander on a Soviet submarine and prevented the launch of a nuclear weapon during the height of tensions in the Cold War.

Tucker: One thing that particularly stuck out to me was our inaugural Future of Life Award, which we presented in honor of Vasili Arkhipov, a Soviet officer in the Cold War who arguably saved the world and is the reason we’re all alive today. He has now passed, but FLI presented a generous award to his daughter and his grandson. It was really cool to be a part of this because it seemed like the first award of its kind.

Meia: So, of course with FLI, we have all these big projects that take a lot of time. But I think for me, one of the more exciting and heartwarming and wonderful moments that I was able to experience through our work here at FLI was a train ride from London to Cambridge with Elena and Sergei, the daughter and the grandson of Vasili Arkhipov. Vasili Arkhipov is the Russian naval officer who helped prevent a third world war during the Cuban Missile Crisis. The Future of Life Institute awarded him the Future of Life Award this year. He is now dead, unfortunately, but his daughter and his grandson were there in London to receive it.

Vika: It was great to get to meet them in person, and to all go on stage together and have them talk about their attitude towards the dilemma that Vasili Arkhipov faced, how it is relevant today, and how we should be really careful with nuclear weapons and protect our future. It was really inspiring.

At that event, Max was giving a talk about his book, and then at the end we had the Arkhipovs come up on stage, and it was kind of fun for me to translate their speech for the audience. I could not fully transmit all the eloquence, but I thought it was a very special moment.

Meia: It was just so amazing to really listen to their stories about the father, the grandfather, and look at photos that they had brought all the way from Moscow. This person has become a hero for so many people that are really concerned about existential risk, so it was nice to really imagine him in his capacity as a son, as a grandfather, as a husband, as a human being. It was very inspiring and touching.

One of the nice things was that they showed a photo of him that actually had notes he had written on the back of it. That was his favorite photo. One of the comments he made was that he felt it was the most beautiful photo of himself because there was no glint in his eyes; it was just this pure sort of concentration. I thought that said a lot about his character. He rarely smiled in photos, and always looked very pensive. Very much like you’d imagine a hero who saved the world would be.

Tucker: It was especially interesting for me to work on the press release for this award and to reach out to people from different news outlets, like The Guardian and The Atlantic, and to actually see them write about this award.

I think something like the Future of Life Award is inspiring because it highlights people in the past that have done an incredible service to civilization, but I also think it’s interesting to look forward and think about who might be the future Vasili Arkhipov that saves the world.

Ariel: As Tucker just mentioned, this award was covered by news outlets like the Guardian and the Atlantic. And in fact, we’ve been incredibly fortunate to have many of our events covered by major news outlets. However, there are even more projects we’ve worked on that we think are just as important, and that we’re just as excited about, but that most people probably aren’t aware of.

Jessica: So people may not know that FLI recently joined the Partnership on AI. This is the group that was founded by Google, Amazon, Facebook, Apple, and others to think about issues like safety, fairness, and impact from AI systems. So I’m excited about this because I think it’s really great to see this kind of social commitment from industry, and it’s going to be critical to have the support and engagement of these players to really see AI being developed in a way that’s positive for everyone. So I’m really happy that FLI is now one of the partners in what will likely be an important initiative for AI.

Anthony: I attended the first meeting of the Partnership on AI in October. And it was great to see, at that meeting, so much discussion of some of the principles themselves, directly but also in a broad sense: so much discussion, from all of the key organizations engaged with AI, almost all of whom had representation there, about how we are going to make these things happen. If we value transparency, if we value fairness, if we value safety and trust in AI systems, how are we going to actually get together and formulate best practices and policies, and groups and data sets and things, to make all that happen? To see the speed at which the field has moved from purely “wow, we can do this” to “how are we going to do this right, how are we going to do this well, and what does this all mean” has been a ray of hope, I would say.

AI is moving so fast, but it was good to see that the wisdom race hasn’t been conceded entirely; there are dedicated groups of people working really hard to figure out how to do it well.

Ariel: And then there’s Dave Stanley, who has been the driving force behind many of the behind-the-scenes projects that our volunteers have been working on and that have helped FLI grow this year.

Dave: Another project that has very much been ongoing, and relates more to the website, is our effort to take the English content on the website, which has been fairly influential in English-speaking countries on AI safety and nuclear weapons, and make it available in a lot of other languages to maximize the impact that it’s having.

Right now, thanks to the efforts of our volunteers, we have 55 translations available on our website in nine different languages: Russian, Chinese, French, Polish, Spanish, German, Hindi, Japanese, and Korean. All in all, this represents about 1,000 hours of volunteer time. I’d just like to give a shoutout to some of the volunteers who have been involved. They are Alan Yan, Kevin Wang, Kazue Evans, Jake Beebe, Jason Orlosky, Li Na, Bena Lim, Alina Kovtun, Ben Peterson, Carolyn Wu, Zhaoran Joanna Wang, Mayumi Nakamura, Derek Su, Dipti Pandey, Marvin, Vera Koroleva, Grzegorz Orwiński, Szymon Radziszewicz, Natalia Berezovskaya, Vladimir Nimensky, Natalia Kuzmenko, George Godula, Eric Gastfriend, Olivier Grondin, Claire Park, Kristy Wen, Yishuai Du, and Revathi Vinoth Kumar.

Ariel: As we’ve worked to establish AI safety as a global effort, Dave and the volunteers were behind the trip Richard took to China, where he participated in the Global Mobile Internet Conference in Beijing earlier this year.

Dave: So basically, this was something that was actually prompted and largely organized by one of FLI’s volunteers, George Godula, who’s based in Shanghai right now.

This was partially motivated by the fact that China has recently been promoting a lot of investment in artificial intelligence research, and has made it a national objective to become a leader in AI research by 2025. So FLI and the team have been making efforts to build connections with China, raise awareness about AI safety, at least our view on AI safety, and engage in dialogue there.

It culminated with George organizing this trip for Richard, and a large portion of the FLI volunteer team participating in support of that trip: identifying contacts for Richard to connect with over there, researching the landscape, and providing general support. And that’s been coupled with an effort to take some of the existing articles that FLI has on the website about AI safety and translate them into Chinese to make them accessible to that audience.

Ariel: In fact, Richard has spoken at many conferences, workshops and other events this year, and he’s noted a distinct shift in how AI researchers view AI safety.

Richard: This is a single example of many of these things I’ve done throughout the year. Yesterday I gave a talk about AI safety and beneficence to a bunch of machine learning and artificial intelligence researchers and entrepreneurs in Boston, where I’m based. Every time I do this, it’s really fulfilling to see that so many of these people, who really are pushing the leading edge of what AI does in many respects, realize that these are extremely valid concerns and that there are new types of technical avenues to help keep things better for the future. The fact that I’m not receiving pushback anymore, as compared to many years ago when I would talk about these things, shows that people really are trying to engage and understand and weave themselves into whatever is going to turn into the best outcome for humanity, given the type of leverage that advanced AI will bring us. I think people are starting to really get what’s at stake.

Ariel: And this isn’t just the case among AI researchers. Throughout the year, we’ve seen this discussion about AI safety broaden into various groups outside of traditional AI circles, and we’re hopeful this trend will continue in 2018.

Meia: I think that 2017 has been fantastic for starting this project of getting more thinkers from different disciplines to really engage with the topic of artificial intelligence, but I think we have just managed to scratch the surface of this topic in this collaboration. So I would really like to work more on strengthening this conversation and this flow of ideas between different disciplines. I think we can achieve so much more if we can make sure that we hear each other, that we get past our own disciplinary jargon, and that we truly are able to communicate and join each other in research projects where we can bring different tools and different skills to the table.

Ariel: The landscape of AI safety research that Richard presented at Asilomar at the start of the year was designed to enable greater understanding among researchers. Lucas rounded off the year with another version of the landscape, this one looking at ethics and value alignment, with the goal, in part, of bringing more experts from other fields into the conversation.

Lucas: One thing that I’m also really excited about for next year is seeing our conceptual landscapes of both AI safety and value alignment being used in more educational contexts, and in contexts in which they can foster interdisciplinary conversations regarding issues in AI. I think their virtue is that they map the conceptual landscape of both AI safety and value alignment, but also include definitions and descriptions of jargon. Given this, they function both as a means by which you can introduce people to AI safety, value alignment, and AI risk, and as a means of introducing experts to the conceptual mappings of the spaces that other experts are engaged with, so they can learn each other’s jargon and really have conversations that are fruitful and streamlined.

Ariel: As we look to 2018, we hope to develop more programs, work on more projects, and participate in more events that will help draw greater attention to the various issues we care about. We hope to not only spread awareness, but also to empower people to take action to ensure that humanity continues to flourish in the future.

Dave: There are a few things coming up that I’m really excited about. The first is that we’re going to be releasing some new interactive apps on the website that will hopefully gather a lot of attention and educate people about the issues we’re focused on, mainly nuclear weapons: answering questions to give people a better picture of the geopolitical and economic factors that motivate countries to keep their nuclear weapons, and how these relate, based on polling data, to whether the general public wants to keep these weapons or not.

Meia: One thing that has made me very excited in 2017, and whose evolution I’m looking forward to seeing in 2018, was the public’s engagement with this topic. I’ve had the luck to be in the audience for many of the talks that Max has given for his book “Life 3.0: Being Human in the Age of Artificial Intelligence,” and it was fascinating just listening to the questions. They’ve become so much more sophisticated and nuanced than a few years ago. I’m very curious to see how this evolves in 2018, and I hope that FLI will contribute to this conversation and to making it richer. I’d like people in general to engage with this topic much more, and refine their understanding of it.

Tucker: Well, I think in general it’s been amazing to watch FLI this year, because we’ve made big splashes in so many different things: the Asilomar conference, our Slaughterbots video, helping with the nuclear ban. But one thing that I’m particularly interested in is working more this coming year to engage my generation on these topics. I sometimes sense a lot of defeatism and hopelessness in people in my generation, a feeling that there’s nothing we can do to solve civilization’s biggest problems. Being at FLI has given me the opposite perspective. Sometimes I’m still subject to that defeatism, but working here really gives me a sense that we can actually do a lot to solve these problems. I’d really like to find ways to engage more people in my generation, to make them feel like they actually have some sense of agency to solve a lot of our biggest challenges.

Ariel: Learn about these issues and more, join the conversation, and find out how you can get involved by visiting futureoflife.org.

[end]

 

Help Support FLI This Giving Tuesday

We’ve accomplished a lot. FLI has only been around for a few years, but during that time, we’ve:

  • Helped mainstream AI safety research,
  • Funded 37 AI safety research grants,
  • Launched multiple open letters that have brought scientists and the public together for the common cause of a beneficial future,
  • Drafted the 23 Asilomar Principles which offer guidelines for ensuring that AI is developed beneficially for all,
  • Supported the successful efforts by the International Campaign to Abolish Nuclear Weapons (ICAN) to get a UN treaty passed that bans and stigmatizes nuclear weapons (ICAN won this year’s Nobel Peace Prize for their work),
  • Supported efforts to advance negotiations toward a ban on lethal autonomous weapons with a video that’s been viewed over 30 million times,
  • Launched a website that’s received nearly 3 million page views,
  • Broadened the conversation about how humanity can flourish rather than flounder with powerful technologies.

But that’s just the beginning. There’s so much more we’d like to do, but we need your help. On Giving Tuesday this year, please consider a donation to FLI.

Where would your money go?

  • More AI safety research,
  • More high-quality information and communication about AI safety,
  • More efforts to keep the future safe from lethal autonomous weapons,
  • More efforts to trim excess nuclear stockpiles & reduce nuclear war risk,
  • More efforts to guarantee a future we can all look forward to.

Please Consider a Donation to Support FLI

Three Tweets to Midnight: Nuclear Crisis Stability and the Information Ecosystem

The following policy memo was written and posted by the Stanley Foundation.

Download the PDF (252K)

How might a nuclear crisis play out in today’s media environment? What dynamics in this information ecosystem—with social media increasing the velocity and reach of information, disrupting journalistic models, creating potent vectors for disinformation, and changing how political leaders interact with constituencies—might challenge decision making during crises between nuclear-armed states?

This memo discusses facets of the modern information ecosystem and how they might affect decision making involving the use of nuclear weapons, based on insights from a multidisciplinary roundtable. The memo concludes with more questions than answers. Because the impact of social media on international crisis stability is recent, there are few cases from which to draw conclusions. But because the catastrophic impact of a nuclear exchange is so great, there is a need to further investigate the mechanisms by which the current information ecosystem could influence decisions about the use of these weapons. To that end, the memo poses a series of questions to inspire future research to better understand new—or newly important—dynamics in the information ecosystem and international security environment.

Scientists to Congress: The Iran Deal is a Keeper

The following article was written by Dr. Lisbeth Gronlund and originally posted on the Union of Concerned Scientists blog.

The July 2015 Iran Deal, which places strict, verified restrictions on Iran’s nuclear activities, is again under attack by President Trump. This time he’s kicked responsibility over to Congress to “fix” the agreement and promised that if Congress fails to do so, he will withdraw from it.

As the New York Times reported, in response to this development over 90 prominent scientists sent a letter to leading members of Congress yesterday urging them to support the Iran Deal—making the case that continued US participation will enhance US security.

Many of these scientists also signed a letter strongly supporting the Iran Deal to President Obama in August 2015, as well as a letter to President-elect Trump in January. In all three cases, the first signatory is Richard L. Garwin, a long-standing UCS board member who helped develop the H-bomb as a young man and has since advised the government on a wide range of security issues. Last year, he was awarded the Presidential Medal of Freedom.

What’s the Deal?

If President Trump did pull out of the agreement, what would that mean? First, the Joint Comprehensive Plan of Action (JCPoA), as it is formally named, is not an agreement between just Iran and the US; it also includes China, France, Germany, Russia, the UK, and the European Union. So the agreement will continue—unless Iran responds by quitting as well. (More on that later.)

The Iran Deal is not a treaty, and did not require Senate ratification. Instead, the United States participates in the JCPoA by presidential action. However, Congress wanted to get into the act and passed the Iran Nuclear Agreement Review Act of 2015, which requires the president to certify every 90 days that Iran remains in compliance.

President Trump has done so twice, but declined to do so this month and instead called for Congress—and US allies—to work with the administration “to address the deal’s many serious flaws.” Among those supposed flaws is that the deal covering Iran’s nuclear activities does not also cover its missile activities!

According to President Trump’s October 13 remarks:

Key House and Senate leaders are drafting legislation that would amend the Iran Nuclear Agreement Review Act to strengthen enforcement, prevent Iran from developing an inter– —this is so totally important—an intercontinental ballistic missile, and make all restrictions on Iran’s nuclear activity permanent under US law.

The Reality

First, according to the International Atomic Energy Agency, which verifies the agreement, Iran remains in compliance. This was echoed by Norman Roule, who retired this month after working at the CIA for three decades. He served as the point person for US intelligence on Iran under multiple administrations. He told an NPR interviewer, “I believe we can have confidence in the International Atomic Energy Agency’s efforts.”

Second, the Iran Deal was the product of several years of negotiations. Not surprisingly, recent statements by the United Kingdom, France, Germany, the European Union, and Iran make clear that they will not agree to renegotiate the agreement. It just won’t happen. US allies are highly supportive of the Iran Deal.

Third, Congress can change US law by amending the Iran Nuclear Agreement Review Act, but this will have no effect on the terms of the Iran Deal. This may be a face-saving way for President Trump to stay with the agreement—for now. However, such amendments will lay the groundwork for a future withdrawal and give credence to President Trump’s claims that the agreement is a “bad deal.” That’s why the scientists urged Congress to support the Iran Deal as it is.

The End of a Good Deal?

If President Trump pulls out of the Iran Deal and reimposes sanctions against Iran, our allies will urge Iran to stay with the deal. But Iran has its own hardliners who want to leave the deal—and a US withdrawal is exactly what they are hoping for.

If Iran leaves the agreement, President Trump will have a lot to answer for. Here is an agreement that significantly extends the time it would take for Iran to produce enough material for a nuclear weapon, and that would give the world an alarm if they started to do so. For the United States to throw that out the window would be deeply irresponsible. It would not just undermine its own security, but that of Iran’s neighbors and the rest of the world.

Congress should do all it can to prevent this outcome. The scientists sent their letter to Senators Corker and Cardin, who are the Chairman and Ranking Member of the Senate Foreign Relations Committee, and to Representatives Royce and Engel, who are the Chairman and Ranking Member of the House Foreign Affairs Committee, because these men have a special responsibility on issues like these.

Let’s hope these four men will do what’s needed to prevent the end of a good deal—a very good deal.

55 Years After Preventing Nuclear Attack, Arkhipov Honored With Inaugural Future of Life Award

London, UK – On October 27, 1962, a soft-spoken naval officer named Vasili Arkhipov single-handedly prevented nuclear war during the height of the Cuban Missile Crisis. Arkhipov’s submarine captain, thinking their sub was under attack by American forces, wanted to launch a nuclear weapon at the ships above. Arkhipov, with the power of veto, said no, thus averting nuclear war.

Now, 55 years after his courageous actions, the Future of Life Institute has presented the Arkhipov family with the inaugural Future of Life Award to honor humanity’s late hero.

Arkhipov’s surviving family members, represented by his daughter Elena and grandson Sergei, flew into London for the ceremony, which was held at the Institution of Engineering and Technology. After explaining Arkhipov’s heroics to the audience, Max Tegmark, president of FLI, presented the Arkhipov family with their award and $50,000. Elena and Sergei were both honored by the gesture and by the overall message of the award.

Elena explained that her father “always thought that he did what he had to do and never considered his actions as heroism. … Our family is grateful for the prize and considers it as a recognition of his work and heroism. He did his part for the future so that everyone can live on our planet.”

Elena and Sergei with the Future of Life Award

The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. Arkhipov, whose courage and composure potentially saved billions of lives, was an obvious choice for the inaugural event.

“Vasili Arkhipov is arguably the most important person in modern history, thanks to whom October 27, 2017 isn’t the 55th anniversary of World War III,” FLI president Max Tegmark explained. “We’re showing our gratitude in a way he’d have appreciated, by supporting his loved ones.”

The award also aims to foster a dialogue about the growing existential risks that humanity faces, and the people who work to mitigate them.

Jaan Tallinn, co-founder of FLI, said: “Given that this century will likely bring technologies that can be even more dangerous than nukes, we will badly need more people like Arkhipov — people who will represent humanity’s interests even in the heated moments of a crisis.”

FLI president Max Tegmark presenting the Future of Life Award to Arkhipov’s daughter, Elena, and grandson, Sergei.

 

Arkhipov’s Story

On October 27, 1962, during the Cuban Missile Crisis, eleven US Navy destroyers and the aircraft carrier USS Randolph had cornered the Soviet submarine B-59 near Cuba, in international waters outside the US “quarantine” area. Arkhipov was one of the officers on board. The crew had had no contact with Moscow for days and didn’t know whether World War III had already begun. Then the Americans started dropping small depth charges at them, which, unbeknownst to the crew, they’d informed Moscow were merely meant to force the sub to surface and leave.

“We thought – that’s it – the end,” crewmember V.P. Orlov recalled. “It felt like you were sitting in a metal barrel, which somebody is constantly blasting with a sledgehammer.”

What the Americans didn’t know was that the B-59 crew had a nuclear torpedo that they were authorized to launch without clearing it with Moscow. As the depth charges intensified and temperatures onboard climbed above 45°C (113°F), many crew members fainted from carbon dioxide poisoning, and in the midst of this panic, Captain Savitsky decided to launch their nuclear weapon.

“Maybe the war has already started up there,” he shouted. “We’re gonna blast them now! We will die, but we will sink them all – we will not disgrace our Navy!”

The combination of depth charges, extreme heat, stress, and isolation from the outside world almost lit the fuse of full-scale nuclear war. But it didn’t. The decision to launch a nuclear weapon had to be authorized by three officers on board, and one of them, Vasili Arkhipov, said no.

Amidst the panic, the 34-year-old Arkhipov remained calm and tried to talk Captain Savitsky down. He eventually convinced Savitsky that these depth charges were signals for the Soviet submarine to surface, and the sub surfaced safely and headed north, back to the Soviet Union.

It is sobering that very few have heard of Arkhipov, although his decision was perhaps the most valuable individual contribution to human survival in modern history. PBS made a documentary, The Man Who Saved the World, documenting Arkhipov’s moving heroism, and National Geographic profiled him as well, in an article titled “You (and almost everyone you know) Owe Your Life to This Man.”

The Cold War never became a hot war, in large part thanks to Arkhipov, but the threat of nuclear war remains high. Beatrice Fihn, Executive Director of the International Campaign to Abolish Nuclear Weapons (ICAN) and this year’s recipient of the Nobel Peace Prize, hopes that the Future of Life Award will help draw attention to the current threat of nuclear weapons and encourage more people to stand up to that threat. Fihn explains: “Arkhipov’s story shows how close to nuclear catastrophe we have been in the past. And as the risk of nuclear war is on the rise right now, all states must urgently join the Treaty on the Prohibition of Nuclear Weapons to prevent such catastrophe.”

Of her father’s role in preventing nuclear catastrophe, Elena explained: “We must strive so that the powerful people around the world learn from Vasili’s example. Everybody with power and influence should act within their competence for world peace.”

ICAN Wins Nobel Peace Prize

We at FLI offer our excited congratulations to the International Campaign to Abolish Nuclear Weapons (ICAN), this year’s winner of the Nobel Peace Prize. We could not be more honored to have had the opportunity to work with ICAN during their campaign to ban nuclear weapons.

Over 70 years have passed since the bombs were first dropped on Hiroshima and Nagasaki, but finally, on July 7 of this year, 122 countries came together at the United Nations to establish a treaty outlawing nuclear weapons. Behind the effort was the small, dedicated team at ICAN, led by Beatrice Fihn. They coordinated with hundreds of NGOs in 100 countries to guide a global discussion and build international support for the ban.

In a statement, they said: “By harnessing the power of the people, we have worked to bring an end to the most destructive weapon ever created – the only weapon that poses an existential threat to all humanity.”

There’s still more work to be done to decrease nuclear stockpiles and rid the world of nuclear threats, but this incredible achievement by ICAN provides the hope and inspiration we need to make the world a safer place.

Perhaps most striking, as seen below in many of the comments by FLI members, is how such a small, passionate group was able to make such a huge difference in the world. Congratulations to everyone at ICAN!

Statements by members of FLI:

Anthony Aguirre: “The work of Bea inspiringly shows that a passionate and committed group of people working to make the world safer can actually succeed!”

Ariel Conn: “Fear and tragedy might monopolize the news lately, but behind the scenes, groups like ICAN are changing the world for the better. Bea and her small team represent great hope for the future, and they are truly an inspiration.”

Tucker Davey: “It’s easy to feel hopeless about the nuclear threat, but Bea and the dedicated ICAN team have clearly demonstrated that a small group can make a difference. Passing the nuclear ban treaty is a huge step towards a safer world, and I hope ICAN’s Nobel Prize inspires others to tackle this urgent threat.”

Victoria Krakovna: “Bea’s dedicated efforts to protect humanity from itself are an inspiration to us all.”

Richard Mallah: “Bea and ICAN have shown such dedication in working to curb the ability of a handful of us to kill most of the rest of us.”

Lucas Perry: “For me, Bea and ICAN have beautifully proven and embodied Margaret Mead’s famous quote, ‘Never doubt that a small group of thoughtful, committed people can change the world. Indeed, it is the only thing that ever has.’”

David Stanley: “The work taken on by ICAN’s team is often not glamorous, yet they have acted tirelessly for the past 10 years to protect us all from these abhorrent weapons. They are the few to whom so much is owed.”

Max Tegmark: “It’s been an honor and a pleasure collaborating with ICAN, and the attention brought by this Nobel Prize will help the urgently needed efforts to stigmatize the new nuclear arms race.”

Learn more about the treaty here.

Podcast: Choosing a Career to Tackle the World’s Biggest Problems with Rob Wiblin and Brenton Mayer

If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group, 80,000 Hours, tries to answer.

To learn more, I spoke with Rob Wiblin and Brenton Mayer of 80,000 Hours. The following are highlights of the interview, but you can listen to the full podcast above or read the transcript here.

Can you give us some background about 80,000 Hours?

Rob: 80,000 Hours has been around for about six years and started when Benjamin Todd and Will MacAskill wanted to figure out how they could do as much good as possible. They started looking into things like the odds of becoming an MP in the UK, or how many lives you would save if you became a doctor. Pretty quickly, they were learning things that no one else had investigated.

They decided to start 80,000 Hours, which would conduct this research in a more systematic way and share it with people who wanted to do more good with their career.

80,000 hours is roughly the number of hours that you’d work in a full-time professional career: about 40 hours a week, 50 weeks a year, for 40 years. That’s a lot of time, so it pays off to spend quite a while thinking about what you’re going to do with it.

On the other hand, 80,000 hours is not that long relative to the scale of the problems that the world faces. You can’t tackle everything. You’ve only got one career, so you should be judicious about what problems you try to solve and how you go about solving them.

How do you help people have more of an impact with their careers?

Brenton: The main thing is a career guide. We’ll talk about how to have satisfying careers, how to work on one of the world’s most important problems, how to set yourself up early so that later on you can have a really large impact.

The second thing we do is career coaching, where we try to apply that advice to individuals.

What is earning to give?

Rob: Earning to give is the career approach where you try to make a lot of money and give it to organizations that can use it to have a really large positive impact. I know people who can make millions of dollars a year doing the thing they love and donate most of that to effective nonprofits, supporting 5, 10, 15, possibly even 20 people to do direct work in their place.

Can you talk about research you’ve been doing regarding the world’s most pressing problems?

Rob: One of the first things we realized is that if you’re trying to help people alive today, your money can go further in the developing world. We just need to scale up solutions to basic health problems and economic issues that have been resolved elsewhere.

Moving beyond that, what other groups in the world are extremely neglected? Factory farmed animals really stand out. There’s very little funding focused on improving farm animal welfare.

The next big idea was, of all the people that we could help, what fraction are alive today? We think that it’s only a small fraction. There’s every reason to think humanity could live for another 100 generations on Earth and possibly even have our descendants alive on other planets.

We worry a lot about existential risks and ways that civilization can go off track and never recover. Thinking about the long-term future of humanity is where a lot of our attention goes and where I think people can have the largest impact with their career.

Regarding artificial intelligence safety, nuclear weapons, biotechnology and climate change, can you consider different ways that people could pursue either careers or “earn to give” options for these fields?

Rob: One would be to specialize in machine learning or other technical work and use those skills to figure out how can we make artificial intelligence aligned with human interests. How do we make the AI do what we want and not things that we don’t intend?

Then there’s the policy and strategy side, trying to answer questions like how do we prevent an AI arms race? Do we want artificial intelligence running military robots? Do we want the government to be more involved in regulating artificial intelligence or less involved? You can also approach this if you have a good understanding of politics, policy, and economics. You can potentially work in government, military or think tanks.

Things like communications, marketing, organization, project management, and fundraising operations can be quite hard to find skilled, reliable people for. And it can be surprisingly hard to find people who can handle media or do art and design. If you have those skills, you should seriously consider applying to whatever organizations you admire.

[For nuclear weapons] I’m interested in anything that can promote peace between the United States and Russia and China. A war between those groups or an accidental nuclear incident seems like the most likely thing to throw us back to the stone age or even pre-stone age.

I would focus on ensuring that they don’t get false alarms; trying to increase trust between the countries in general and improve the communication lines so that if there are false alarms, they can quickly defuse the situation.

The best opportunities [in biotech] are in early surveillance of new diseases. If there’s a new disease coming out, a new flu for example, it takes a long time to figure out what’s happened.

And when it comes to controlling new diseases, time is really of the essence. If you can pick it up within a few days or weeks, then you have a reasonable shot at quarantining the people and following up with everyone that they’ve met and containing it. Any technologies that we can invent or any policies that allow us to identify new diseases before they’ve spread to too many people are going to help with natural pandemics, with synthetic biology risks, and with accidental releases of diseases by biological researchers.

Brenton: A Wagner and Weitzman paper suggests that there’s about a 10% chance of warming of more than 4.8 degrees Celsius, and a 3% chance of more than 6 degrees Celsius. These are really disastrous outcomes. If you’re interested in climate change, we’re pretty excited about you working on these very bad scenarios. Sensible things to do would be improving our ability to forecast; thinking about the positive feedback loops that might be inherent in Earth’s climate; and thinking about how to enhance international cooperation.

Rob: It does seem like solar power and storage of energy from solar power is going to have the biggest impact on emissions over at least the next 50 years. Anything that can speed up that transition makes a pretty big contribution.

Rob, can you explain your interest in long-term multigenerational indirect effects and what that means?

Rob: If you’re trying to help people and animals thousands of years in the future, you have to help them through a causal chain that involves changing the behavior of someone today and then that’ll help the next generation and so on.

One way to improve the long-term future of humanity is to do very broad things that improve human capabilities like reducing poverty, improving people’s health, making schools better.

But in a world where the more science and technology we develop, the more power we have to destroy civilization, it becomes less clear that broadly improving human capabilities is a great way to make the future go better. If you improve science and technology, you both improve our ability to solve problems and create new problems.

I think about what technologies can we invent that disproportionately make the world safer rather than more risky. It’s great to improve the technology to discover new diseases quickly and to produce vaccines for them quickly, but I’m less excited about generically pushing forward the life sciences because there’s a lot of potential downsides there as well.

Another way that we can robustly prepare humanity to deal with the long-term future is to have better foresight about the problems that we’re going to face. That’s a very concrete thing you can do that puts humanity in a better position to tackle problems in the future — just being able to anticipate those problems well ahead of time so that we can dedicate resources to averting those problems.

To learn more, visit 80000hours.org and subscribe to Rob’s new podcast.

START from the Beginning: 25 Years of US-Russian Nuclear Weapons Reductions

By Eryn MacDonald and originally posted at the Union of Concerned Scientists.

For the past 25 years, a series of treaties have allowed the US and Russia to greatly reduce their nuclear arsenals—from well over 10,000 each to fewer than 2,000 deployed long-range weapons each. These Strategic Arms Reduction Treaties (START) have enhanced US security by reducing the nuclear threat, providing valuable information about Russia’s nuclear arsenal, and improving predictability and stability in the US-Russia strategic relationship.

Twenty-five years ago, US policy-makers of both parties recognized the benefits of the first START agreement: on October 1, 1992, the Senate voted overwhelmingly—93 to 6—in favor of ratifying START I.

The end of START?

With increased tensions between the US and Russia and an expanded range of security threats for the US to worry about, this longstanding foundation is now more valuable than ever.

The most recent agreement—New START—will expire in early February 2021, but can be extended for another five years if the US and Russian presidents agree to do so. In a January 28 phone call with President Trump, Russian President Putin reportedly raised the possibility of extending the treaty. But instead of being extended, or even maintained, the START framework is now in danger of being abandoned.

President Trump has called New START “one-sided” and “a bad deal,” and has even suggested the US might withdraw from the treaty. His advisors, however, are clearly opposed to withdrawal. Secretary of State Rex Tillerson expressed support for New START in his confirmation hearing. Secretary of Defense James Mattis, while recently stating that the administration is currently reviewing the treaty “to determine whether it’s a good idea,” has previously also expressed support, as have the head of US Strategic Command and other military officials.

Withdrawal seems unlikely, especially given recent anonymous comments by administration officials saying that the US still sees value in New START and is not looking to discard it. But given the president’s attitude toward the treaty, it may still take some serious pushing from Mattis and other military officials to convince him to extend it. Worse, even if Trump is not re-elected, and the incoming president is more supportive of the treaty, there will be little time for a new administration, taking office in late January 2021, to do an assessment and sign on to an extension before the deadline. While UCS and other treaty supporters will urge the incoming administration to act quickly, if the Trump administration does not extend the treaty, it is quite possible that New START—and the security benefits it provides—will lapse.

The Beginning: The Basics and Benefits of START I

Today, the overwhelming bipartisan support for a treaty cutting US nuclear weapons demonstrated by the START I ratification vote seems unbelievable. At the time, however, both Democrats and Republicans in Congress, as well as the first President Bush, recognized the importance of the historic agreement, the first to require an actual reduction, rather than simply a limitation, in the number of US and Russian strategic nuclear weapons.

By the end of the Cold War, the US had about 23,000 nuclear warheads in its arsenal, and the Soviet Union had roughly 40,000. These numbers included about 12,000 US and 11,000 Soviet deployed strategic warheads—those mounted on long-range missiles and bombers. The treaty limited each country to 1,600 strategic missiles and bombers and 6,000 warheads, and established procedures for verifying these limits.

The limits on missiles and bombers, in addition to limits on the warheads themselves, were significant because START required the verifiable destruction of any excess delivery vehicles, which gave each side confidence that the reductions could not be quickly or easily reversed. To do this, the treaty established a robust verification regime with an unprecedented level of intrusiveness, including on-site inspections and exchanges of data about missile telemetry.

Though the groundwork for START I was laid during the Reagan administration, ratification and implementation took place during the first President Bush’s term. The treaty was one among several measures taken by the elder Bush that reduced the US nuclear stockpile by nearly 50 percent during his time in office.

START I entered into force in 1994 and had a 15-year lifetime; it required the US and Russia to complete reductions by 2001, and maintain those reductions until 2009. However, both countries actually continued reductions after reaching the START I limits. By the end of the Bush I administration, the US had already reduced its arsenal to just over 7,000 deployed strategic warheads. By the time the treaty expired, this number had fallen to roughly 3,900.

The Legacy of START I

Building on the success of START I, the US and Russia negotiated a follow-on treaty—START II—that required further cuts in deployed strategic weapons. These reductions were to be carried out in two steps, but when fully implemented would limit each country to 3,500 deployed strategic warheads, with no more than 1,750 of these on submarine-launched ballistic missiles.

Phase II also required the complete elimination of multiple independently targetable re-entry vehicles (MIRVs) on intercontinental ballistic missiles. This marked a major step forward, because MIRVs were a particularly destabilizing configuration. Since just one incoming warhead could destroy all the warheads on a MIRVed land-based missile, MIRVs create pressure to “use them or lose them”—an incentive to strike first in a crisis. Otherwise, a country risked losing its ability to use those missiles to retaliate in the case of a first strike against it.

While both sides ratified START II, it was a long and contentious process, and entry into force was complicated by provisions attached by both the US Senate and Russian Duma. The US withdrawal from the Anti-Ballistic Missile (ABM) treaty in 2002 was the kiss of death for START II. The ABM treaty had strictly limited missile defenses. Removing this limit created a situation in which either side might feel it had to deploy more and more weapons to be sure it could overcome the other’s defense. But the George W. Bush administration was now committed to building a larger-scale defense, regardless of Russia’s vocal opposition and clear statements that doing so would undermine arms control progress.

Russia responded by announcing its withdrawal from START II, finally ending efforts to bring the treaty into force. A proposed START III treaty, which would have called for further reductions to 2,000 to 2,500 warheads on each side, never materialized; negotiations had been planned to begin after entry into force of START II.

After the failure of START II, the US and Russia negotiated the Strategic Offensive Reductions Treaty (SORT, often called the “Moscow Treaty”). SORT required each party to reduce to 1,700 to 2,200 deployed strategic warheads, but was a much less formal treaty than START. It did not include the same kind of extensive verification regime and, in fact, did not even define what was considered a “strategic warhead,” instead leaving each party to decide for itself what it would count. This meant that although SORT did encourage further progress to lower numbers of weapons, overall it did not provide the same kind of benefits for the US as START had.

New START

Recognizing the deficiencies of the minimal SORT agreement, the Obama administration made negotiation of New START an early priority, and the treaty was ratified in 2010.

New START limits each party to 1,550 deployed strategic nuclear warheads by February 2018. The treaty also limits the number of deployed intercontinental ballistic missiles, submarine-launched ballistic missiles, and long-range bombers equipped to carry nuclear weapons to no more than 700 on each side. Altogether, no more than 800 deployed and non-deployed missiles and bombers are allowed for each side.

In reality, each country will deploy somewhat more than 1,550 warheads—probably around 1,800 each—because of a change in the way New START counts warheads carried by long-range bombers. START I assigned a number of warheads to each bomber based on its capabilities. New START simply counts each long-range bomber as a single warhead, regardless of the actual number it does or could carry. The less stringent limits on bombers are possible because bombers are considered less destabilizing than missiles. The bombers’ detectability and long flight times—measured in hours vs. the roughly thirty minutes it takes for a missile to fly between the United States and Russia—mean that neither side is likely to use them to launch a first strike.
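As a concrete illustration of the counting rule just described, here is a minimal sketch; the function and the force numbers are hypothetical, chosen only to show the arithmetic, not actual US or Russian figures.

```python
# Minimal sketch of New START warhead accounting: warheads on deployed
# missiles count individually, while each deployed long-range bomber counts
# as one warhead regardless of how many weapons it could actually carry.
# All numbers below are hypothetical illustrations.
def accountable_warheads(missile_warheads: int, deployed_bombers: int) -> int:
    """Treaty-accountable deployed strategic warheads under New START."""
    return missile_warheads + deployed_bombers  # one bomber counts as one warhead

# A hypothetical force of 1,490 missile warheads plus 60 deployed bombers
# counts as exactly 1,550 -- the treaty limit -- even though those bombers
# could carry several hundred weapons between them, which is why actual
# deployed totals can run closer to 1,800.
print(accountable_warheads(missile_warheads=1490, deployed_bombers=60))  # 1550
```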

Both the United States and Russia have been moving toward compliance with the New START limits, and as of July 1, 2017—when the most recent official exchange of data took place—both are under the limit for deployed strategic delivery vehicles and close to meeting the limit for deployed and non-deployed strategic delivery vehicles. The data show that the United States is currently slightly under the limit for deployed strategic warheads, at 1,411, while Russia, with 1,765, still has some cuts to make to reach this limit.

Even in the increasingly partisan atmosphere of the 2000s, New START gained support from a wide range of senators, as well as military leaders and national security experts. The treaty passed in the Senate with a vote of 71 to 26; thirteen Republicans joined all Democratic senators in voting in favor. While this is significantly closer than the START I vote, as then-Senator John F. Kerry noted at the time, “in today’s Senate, 70 votes is yesterday’s 95.”

And the treaty continues to have strong support—including from Air Force General John Hyten, commander of US Strategic Command, which is responsible for all US nuclear forces. In Congressional testimony earlier this year, Hyten called himself “a big supporter” of New START and said that “when it comes to nuclear weapons and nuclear capabilities, that bilateral, verifiable arms control agreements are essential to our ability to provide an effective deterrent.” Another Air Force general, Paul Selva, vice chair of the Joint Chiefs of Staff, agreed, saying in the same hearing that when New START was ratified in 2010, “the Joint Chiefs reviewed the components of the treaty—and endorsed it. It is a bilateral, verifiable agreement that gives us some degree of predictability on what our potential adversaries look like.”

The military understands the benefits of New START. That President Trump has the power to withdraw from the treaty despite support from those who are most directly affected by it is, as he would say, “SAD.”

That the US president fails to understand the value of US-Russian nuclear weapon treaties that have helped to maintain stability for more than two decades is a travesty.

Countries Sign UN Treaty to Outlaw Nuclear Weapons

Update 9/25/17: 53 countries have now signed and 3 have ratified.

Today, 50 countries took an important step toward a nuclear-free world by signing the United Nations Treaty on the Prohibition of Nuclear Weapons. This is the first treaty to legally ban nuclear weapons, just as we’ve seen done previously with chemical and biological weapons.

A Long Time in the Making

In 1933, Leo Szilard first came up with the idea of a nuclear chain reaction. Only a few years later, the Manhattan Project was underway, culminating in the nuclear attacks against Hiroshima and Nagasaki in 1945. In the following decades of the Cold War, the U.S. and Russia amassed arsenals that peaked at over 70,000 nuclear weapons in total, though that number is significantly lower today. The U.K., France, China, Israel, India, Pakistan, and North Korea have also built up their own, much smaller arsenals.

Over the decades, the United Nations has established many treaties relating to nuclear weapons, including the non-proliferation treaty, START I, START II, the Comprehensive Nuclear Test Ban Treaty, and New START. Though a few other countries began nuclear weapons programs, most of those were abandoned, and the majority of the world’s countries have rejected nuclear weapons outright.

Now, over 70 years since the bombs were first dropped on Japan, the United Nations finally has a treaty outlawing nuclear weapons.

The Treaty

The Treaty on the Prohibition of Nuclear Weapons was adopted on July 7, with a vote of approval from 122 countries. As part of the treaty, the states who sign agree that they will never “[d]evelop, test, produce, manufacture, otherwise acquire, possess or stockpile nuclear weapons or other nuclear explosive devices.” Signatories also promise not to assist other countries with such efforts, and no signatory will “[a]llow any stationing, installation or deployment of any nuclear weapons or other nuclear explosive devices in its territory or at any place under its jurisdiction or control.”

Not only had 50 countries signed the treaty at the time this article was written, but 3 of them had also already ratified it. The treaty will enter into force 90 days after it’s ratified by 50 countries.

The International Campaign to Abolish Nuclear Weapons (ICAN) is tracking progress of the treaty, with a list of countries that have signed and ratified it so far.

At the ceremony, UN Secretary General António Guterres said, “The Treaty on the Prohibition of Nuclear Weapons is the product of increasing concerns over the risk posed by the continued existence of nuclear weapons, including the catastrophic humanitarian and environmental consequences of their use.”

Still More to Do

Though countries that don’t currently have nuclear weapons are eager to see the treaty ratified, no one expects that ratification alone will magically rid the world of nuclear weapons.

“Today we rightfully celebrate a milestone.  Now we must continue along the hard road towards the elimination of nuclear arsenals,” Guterres added in his statement.

There are still over 15,000 nuclear weapons in the world today. While that’s significantly fewer than in decades past, it’s still more than enough to kill most people on Earth.

The U.S. and Russia hold most of these weapons, but as we’re seeing from the news out of North Korea, a country doesn’t need to have thousands of nuclear weapons to present a destabilizing threat.

Susi Snyder, author of PAX’s Don’t Bank on the Bomb and a leading advocate of the treaty, told FLI:

“The countries signing the treaty are the responsible actors we need in these times of uncertainty, fire, fury, and devastating threats. They show it is possible and preferable to choose diplomacy over war.”

Earlier this summer, some of the world’s leading scientists also came together in support of the nuclear ban with this video that was presented to the United Nations:

Stanislav Petrov

The signing of the treaty came within a week of both the news of the death of Stanislav Petrov and Petrov Day, September 26. On that date in 1983, Petrov chose to follow his gut rather than rely on what turned out to be faulty satellite data. In doing so, he prevented what could easily have escalated into full-scale global nuclear war.

Stanislav Petrov, the Man Who Saved the World, Has Died

September 26, 1983: Soviet Union Detects Incoming Missiles

A Soviet early warning satellite showed that the United States had launched five land-based missiles at the Soviet Union. The alert came at a time of high tension between the two countries, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. In addition, earlier in the month the Soviet Union shot down a Korean Airlines passenger plane that strayed into its airspace, killing almost 300 people. Stanislav Petrov, the Soviet officer on duty, had only minutes to decide whether or not the satellite data were a false alarm. Since the satellite was found to be operating properly, following procedures would have led him to report an incoming attack. Going partly on gut instinct and believing the United States was unlikely to fire only five missiles, he told his commanders that it was a false alarm before he knew that to be true. Later investigations revealed that reflection of the sun on the tops of clouds had fooled the satellite into thinking it was detecting missile launches (Accidental Nuclear War: a Timeline of Close Calls).

Petrov is widely credited with having saved millions if not billions of people through his decision to ignore the satellite reports, preventing accidental escalation into what could have become a full-scale nuclear war. This event was turned into the movie “The Man Who Saved the World,” and Petrov was honored at the United Nations and given the World Citizen Award.

All of us at FLI were saddened to learn that Stanislav Petrov passed away this past May. News of his death was announced this weekend. Petrov was to be honored during the release of a new documentary, also called The Man Who Saved the World, in February of 2018. Stephen Mao, who is an executive producer of this documentary, told FLI that though they had originally planned to honor Petrov in person at February’s Russian theatrical premiere, “this will now be an event where we will eulogize and remember Stanislav for his contribution to the world.”

Jakob Staberg, the movie’s producer, said:

“Stanislav saved the world but lost everything and was left alone. Taking part in our film, The Man Who Saved the World, his name and story came out to the whole world. Hopefully the actions of Stanislav will inspire other people to take a stand for good and not to forget that the nuclear threat is still very real. I will remember Stanislav’s own humble words about his actions: ‘I just was at the right place at the right time’. Yes, you were Stanislav. And even though you probably would argue that I am wrong, I am happy it was YOU who was there in that moment. Not many people would have the courage to do what you did. Thank you.”

You can read more about Petrov’s life and heroic actions in the New York Times obituary.

Understanding the Risks and Limitations of North Korea’s Nuclear Program

By Kirsten Gronlund

Late last month, North Korea launched a ballistic missile test whose trajectory arced over Japan. And this past weekend, Pyongyang flaunted its nuclear capabilities with an underground test of what it claims was a hydrogen bomb: a more complicated—and powerful—alternative to the atomic bombs it has previously tested.

Though North Korea has launched rockets over its eastern neighbor twice before—in 1998 and 2009—those previous launches carried satellites, not warheads. And the reasoning behind those two previous launches was seemingly innocuous: eastern-directed launches use the earth’s spin to most effectively put a satellite in orbit. Since 2009, North Korea has taken to launching its satellites southward, sacrificing maximal launch conditions to keep the peace with Japan. This most recent launch, however, seemed intentionally designed to aggravate tensions not only with Japan but also with the U.S. And while there is no way to verify North Korea’s claim that it tested a hydrogen bomb, in such a tense environment the claim itself is enough to provoke Washington.

What We Know

In light of these and other recent developments, I spoke with Dr. David Wright, an expert on North Korean nuclear missiles at the Union of Concerned Scientists, to better understand the real risks associated with North Korea’s nuclear program. He described what he calls the “big question”: now that its missile program is advancing rapidly, can North Korea build good enough—that is, small enough, light enough, and rugged enough—nuclear weapons to be carried by these missiles?

Pyongyang has now successfully detonated nuclear weapons in six underground tests, but these tests have been carried out in ideal conditions, far from the reality of a ballistic launch. Wright and others believe that North Korea likely has warheads that can be delivered via short-range missiles that can reach South Korea or Japan. They have deployed such missiles for years. But it remains unclear whether North Korean warheads would be deliverable via long-range missiles.

Until last Monday’s launch, North Korea had sought to avoid provoking its neighbors by not conducting missile tests that would pass over other countries. Instead it has tested its missiles by shooting them upwards on highly lofted trajectories that land them in the Sea of Japan. This has caused some confusion about the range that North Korean missiles have achieved. Wright, however, uses height data from these launches to calculate the potential range that its missiles would reach on standard trajectories.
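As a rough illustration of the physics behind that kind of calculation (a simplified sketch, not Wright’s actual model), the code below infers a burnout speed from the apogee of a nearly vertical lofted shot, then converts it into a maximum range on a standard trajectory, assuming a vacuum and a non-rotating, spherical Earth. The 3,700 km apogee is an assumed input, roughly the figure reported for the late-July 2017 test.

```python
# Rough lofted-trajectory range estimate: energy conservation gives the
# burnout speed implied by a nearly vertical shot to a given apogee, and a
# standard ballistic formula (vacuum, non-rotating spherical Earth) gives
# the maximum range achievable with that speed on an optimal trajectory.
import math

MU = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R = 6.371e6     # Earth's mean radius, m
G0 = 9.81       # surface gravity, m/s^2

def burnout_speed_from_apogee(apogee_m: float) -> float:
    """Surface speed needed to coast straight up to the given apogee."""
    return math.sqrt(2 * MU * (1 / R - 1 / (R + apogee_m)))

def max_range_km(speed_ms: float) -> float:
    """Maximum ground range for a given surface burnout speed."""
    lam = speed_ms ** 2 / (G0 * R)            # dimensionless energy parameter
    half_angle = math.asin(lam / (2 - lam))   # valid while lam < 1 (suborbital)
    return R * 2 * half_angle / 1000.0

v = burnout_speed_from_apogee(3.7e6)          # assumed ~3,700 km test apogee
print(f"implied burnout speed: {v / 1000:.1f} km/s")                      # ~6.8 km/s
print(f"rough max standard-trajectory range: {max_range_km(v):,.0f} km")  # ~7,900 km
```

This crude version yields roughly 7,900 km, noticeably below the 10,000-plus kilometer estimates cited later in this article; more careful analyses credit the missile’s burnout altitude and velocity and Earth’s rotation, which is one reason detailed trajectory modeling of the kind Wright does matters.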

To date, North Korea’s longest-range test launch—in July of this year—demonstrated a range sufficient to reach large cities on the U.S. mainland. That range, however, depends on the weight of the warhead carried, a factor that remains unknown. Thus while North Korea is capable of launching missiles that could hit the U.S., it is unclear whether such missiles could actually deliver a nuclear warhead to that range.

A second key question, according to Wright, is one of numbers: how many missiles and warheads do the North Koreans have? Dr. Siegfried Hecker, former head of the Los Alamos weapons laboratory, makes the following estimates based in part on visits he has made to North Korea’s Yongbyon laboratory. In terms of nuclear material, Hecker suggests that the North Koreans have “20 to 40 kilograms plutonium and 200 to 450 kilograms highly enriched uranium.” This material, he estimates, would “suffice for perhaps 20 to 25 nuclear weapons, not the 60 reported in the leaked intelligence estimate.” Based on past underground tests, analysts estimated that the biggest yield of a North Korean warhead was about the size of the bomb that destroyed Hiroshima—which, though potentially devastating, is still about 20 times smaller than most U.S. warheads. The test this past weekend outsized its largest previous yield by a factor of five or more.
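To see how such material estimates translate into weapon counts, here is a toy arithmetic sketch; the per-weapon quantities are rough public rules of thumb assumed for illustration, not Hecker’s actual figures, and real requirements depend strongly on weapon design.

```python
# Toy fissile-material arithmetic: divide estimated stockpiles by assumed
# per-weapon requirements. The 5 kg plutonium and 20 kg HEU figures are
# rough public rules of thumb assumed here for illustration only.
def weapons_from_stock(low_kg: float, high_kg: float, per_weapon_kg: float) -> tuple:
    """Return the (low, high) number of weapons a stockpile range supports."""
    return int(low_kg // per_weapon_kg), int(high_kg // per_weapon_kg)

pu = weapons_from_stock(20, 40, per_weapon_kg=5)      # plutonium estimate
heu = weapons_from_stock(200, 450, per_weapon_kg=20)  # highly enriched uranium

print(f"plutonium-based weapons: {pu[0]} to {pu[1]}")  # 4 to 8
print(f"HEU-based weapons: {heu[0]} to {heu[1]}")      # 10 to 22
# The combined range of roughly 14 to 30 weapons is of the same order as
# Hecker's "perhaps 20 to 25" and well below the 60 in the leaked estimate.
```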

As for missiles, Wright says estimates suggest that North Korea may have a few hundred short- and medium-range missiles. The number of long-range missiles, however, is unknown—as is the speed with which new ones could be built. In the near term, Wright believes the number is likely to be small.

What seems clear is that Kim Jong Un, following his father’s death, began pouring money and resources into developing weapons technology and expertise. Since Kim Jong Un has taken power, the country’s rate of missile tests has skyrocketed: since last June, it has performed roughly 30 tests.

It has also unveiled a surprising number of new types of missiles. For years, the longest-range North Korean missiles reached about 1300 km—just putting Japan within range. In mid-May of this year, however, North Korea launched a missile with a potential range (depending on its payload) of more than 4000 km, for the first time putting Guam—which is 3500 km from North Korea—in reach. Then in July, that range increased again. The first launch in that month could reach 7000 km; the second—their current record—could travel more than 10,000 km, about the distance from North Korea to Chicago.

An Existential Risk?

On its own, the North Korean nuclear arsenal does not pose an existential risk—it is too small. According to Wright, the consequences of a North Korean nuclear strike, if successful, would be catastrophic—but not on an existential scale. He worries, though, about how the U.S. might respond. As Wright puts it, “When people start talking about using nuclear weapons, there’s a huge uncertainty about how countries will react.”

That said, the U.S. has overwhelming conventional military capabilities that could devastate North Korea. A nuclear response would not be necessary to neutralize any further threat from Pyongyang. But there are people who would argue that failure to launch a nuclear response would weaken deterrence. “I think,” says Wright, “that if North Korea launched a nuclear missile against its neighbors or the United States, there would be tremendous pressure to respond with nuclear weapons.”

Wright notes that moments of crisis have been shown to produce unpredictable responses: “There would be no reason for the U.S. to use nuclear weapons, but there is evidence to suggest that in high pressure situations, people don’t always think these things through. For example, we know that there have been war simulations that the U.S. has done where the adversary using anti-satellite weapons against the United States has led to the U.S. using nuclear weapons.”

Wright also worries about accidents, errors, and misinterpretations. While North Korea does not have the ability to detect launches or incoming missiles, it does have a lot of anti-aircraft radar. Wright offers the following example of a misinterpretation that could stem from North Korean detection of U.S. aircraft.

The U.S. has repeatedly said that it is keeping all options on the table—including a nuclear strike. It also talks about preemptive military strikes against North Korean launch sites and support areas, which would include targets in the Pyongyang area. North Korea knows this.

The aircraft that it would use in such a strike are likely its B-1 bombers. The B-1 once carried nuclear weapons but, per a treaty with Russia, has been modified to rid it of its nuclear capabilities. Despite U.S. attempts to emphasize this fact, however, Wright says that “statements we’ve seen from North Korea make you wonder whether it really has confidence that the B-1s haven’t been re-modified to carry nuclear weapons again”; the North Koreans, for example, repeatedly refer to the B-1 as nuclear-capable.

Now imagine that U.S. intelligence detects launch preparations of several North Korean missiles. The U.S. interprets this as the precursor to a launch toward Guam, which North Korea has previously threatened. The U.S. then sends a conventional preemptive strike to destroy those missiles using B-1s. In such a crisis, Wright reminds us, “Tensions are very high, people are making worst-case assumptions, they’re making fast decisions, and they’re worried about being caught by surprise.” It is feasible that, having detected the incoming B-1 bombers flying toward Pyongyang, North Korea would assume them to be carrying nuclear weapons. Under this assumption, they might fire short-range ballistic missiles at South Korea. This illustrates how misinterpretations might drive a crisis.

“Presumably,” says Wright, “the U.S. understands the risk of military attacks and such a scenario is unlikely.” He remains hopeful that “the two sides will find a way to step back from the brink.”

Podcast: Banning Nuclear and Autonomous Weapons with Richard Moyes and Miriam Struyk

How does a weapon go from being one of the most feared to being banned? And what happens once the weapon is finally banned? To discuss these questions, Ariel spoke with Miriam Struyk and Richard Moyes on the podcast this month. Miriam is Programs Director at PAX. She played a leading role in the campaign to ban cluster munitions and developed global campaigns to prohibit financial investments in producers of cluster munitions and nuclear weapons. Richard is the Managing Director of Article 36. He’s worked closely with the International Campaign to Abolish Nuclear Weapons, he helped found the Campaign to Stop Killer Robots, and he coined the phrase “meaningful human control” regarding autonomous weapons.

The following interview has been heavily edited for brevity, but you can listen to it in its entirety here.

Why is a ban on nuclear weapons important, even if nuclear weapons states don’t sign?

Richard: This process came out of the humanitarian impact of nuclear weapons: from the use of a single nuclear weapon that would potentially kill hundreds of thousands of people, up to the use of multiple nuclear weapons, which could have devastating impacts for human society and for the environment as a whole. These weapons should be considered illegal because their effects cannot be contained or managed in a way that avoids massive suffering.

At the same time, it’s a process that’s changing the landscape against which those states continue to maintain and assert the validity of their maintenance of nuclear weapons. By changing that legal background, we’re potentially in position to put much more pressure on those states to move towards disarmament as a long-term agenda.

Miriam: At a time when we see erosion of international norms, it’s quite astonishing that in less than two weeks, we’ll have an international treaty banning nuclear weapons. For too long nuclear weapons were mythical, symbolic weapons, but we never spoke about what these weapons actually do and whether we think that’s illegal.

This treaty brings back the question of what these weapons actually do, and whether we want that.

It also brings democratization of security policy. This is a process that was brought about by several states and also by NGOs, by the ICRC and other actors. It’s so important that it’s actually citizens speaking about nukes and whether we think they’re acceptable or not.

What is an autonomous weapon system?

Richard: If I might just backtrack a little — an important thing to recognize in all of these contexts is that these weapons don’t prohibit themselves — weapons have been prohibited because a diverse range of actors from civil society and from international organizations and from states have worked together.

Autonomous weapons are really an issue of new and emerging technologies and the challenges that new and emerging technologies present to society particularly when they’re emerging in the military sphere — a sphere which is essentially about how we’re allowed to kill each other or how we’re allowed to use technologies to kill each other.

Autonomous weapons are a movement in technology to a point where we will see computers and machines making decisions about where to apply force, about who to kill when we’re talking about people, or what objects to destroy when we’re talking about material.

What is the extent of autonomous weapons today versus what do we anticipate will be designed in the future?

Miriam: It depends a lot on your definition, of course. I’m still, in a way, a bit of an optimist in thinking that perhaps we can prevent the emergence of lethal autonomous weapon systems. But I also see similarities with nuclear weapons a few decades ago: lethal autonomous weapon systems can lead to an arms race, to more global insecurity, and to warfare.

The way we’re approaching lethal autonomous weapon systems is to try to ban them before we see horrible humanitarian consequences. How does that change your approach from previous weapons?

Richard: That this is a more future-orientated debate definitely creates different dynamics. But other weapon systems have been prohibited. Blinding laser weapons were prohibited when there was concern that laser systems designed to blind people were going to become a feature of the battlefield.

In terms of autonomous weapons, we already see significant levels of autonomy in certain weapon systems today and again I agree with Miriam in terms of recognition that certain definitional issues are very important in all of this.

One of the ways we’ve sought to orientate to this is by thinking about the concept of meaningful human control. What are the human elements that we feel are important to retain? We are going to see more and more autonomy within military operations. But in certain critical functions around how targets are identified and how force is applied and over what period of time — those are areas where we will potentially see an erosion of a level of human, essentially moral, engagement that is fundamentally important to retain.

Miriam: This is not so much about a weapon system but about how we control warfare and how we maintain human control, in the sense that it’s a human deciding who is a legitimate target and who isn’t.

An argument in favor of autonomous weapons is that they can ideally make decisions better than humans and potentially reduce civilian casualties. How do you address that argument?

Miriam: We’ve had that debate with other weapon systems, as well, where the technological possibilities were not what they were promised to be as soon as they were used.

It’s an unfair debate because it mainly comes from states with developed industries, which are most likely to be the first to use some form of lethal autonomous weapon system. Flip the question and say, ‘what if these systems will be used against your soldiers or in your country?’ Suddenly you enter a whole different debate. I’m highly skeptical of people who say it could actually be beneficial.

Richard: I feel like there are assertions of “goodies” and “baddies” and our ability to label one from the other. To categorize people and things in society in such an accurate way is somewhat illusory and something of a misunderstanding of the reality of conflict.

Any claims that we can somehow perfect violence in a way where it can be distributed by machinery to those who deserve to receive it and that there’s no tension or moral hazard in that — that is extremely dangerous as an underpinning concept because, in the end, we’re talking about embedding categorizations of people and things within a micro bureaucracy of algorithms and labels.

Violence in society is a human problem and it needs to continue to be messy to some extent if we’re going to recognize it as a problem.

What is the process right now for getting lethal autonomous weapons systems banned?

Miriam: We started the Campaign to Stop Killer Robots in 2013 — it immediately gave a push to the international discussion, including at the Human Rights Council and within the Convention on Certain Conventional Weapons (CCW) in Geneva. We saw a lot of debates there in 2013, 2014, and 2015, and the last one was in April.

At the last CCW meeting it was decided that a group of governmental experts should be convened within the CCW to look at these types of weapons, which was applauded by many states.

Unfortunately, due to financial issues, the meeting has been canceled. So we’re in a bit of a silence mode right now. But that doesn’t mean there’s no progress. We have 19 states who called for a ban, and more than 70 states within the CCW framework discussing this issue. We know from other treaties that you need these kind of building blocks.

Richard: Engaging scientists and roboticists and AI practitioners around these themes matters — one of the challenges is that issues around weapons and conflict can sometimes be treated as very separate from other parts of society. It is significant that the decisions that get made about the limits of AI-driven decision making about life and death in the context of weapons could well have implications for how expectations and discussions get set elsewhere in the future.

What is most important for people to understand about nuclear and autonomous weapon systems?

Miriam: Both systems go way beyond the discussion about weapon systems: it’s about what kind of world and society do we want to live in. None of these — not killer robots, not nuclear weapons — are an answer to any of the threats that we face right now, be it climate change, be it terrorism. It’s not an answer. It’s only adding more fuel to an already dangerous world.

Richard: Nuclear weapons — they’ve somehow become a very abstract, rather distant issue. Simple recognition of the scale of humanitarian harm from a nuclear weapon is the most substantial thing — hundreds of thousands killed and injured. [Leaders of nuclear states are] essentially talking about incinerating hundreds of thousands of normal people — probably in a foreign country — but recognizable, normal people. The idea that that can be approached in some ways glibly or confidently at all is I think very disturbing. And expecting that at no point will something go wrong — I think it’s a complete illusion.

On autonomous weapons — what sort of society do we want to live in, and how much are we prepared to hand over to computers and machines? I think handing more and more violence over to such processes does not augur well for our societal development.

This podcast was edited by Tucker Davey.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

United Nations Adopts Ban on Nuclear Weapons

Today, 72 years after their invention, states at the United Nations formally adopted a treaty which categorically prohibits nuclear weapons.

With 122 votes in favor, one vote against, and one country abstaining, the “Treaty on the Prohibition of Nuclear Weapons” was adopted Friday morning and will open for signature by states at the United Nations in New York on September 20, 2017. Civil society organizations and more than 140 states have participated throughout negotiations.

On adoption of the treaty, ICAN Executive Director Beatrice Fihn said:

“We hope that today marks the beginning of the end of the nuclear age. It is beyond question that nuclear weapons violate the laws of war and pose a clear danger to global security. No one believes that indiscriminately killing millions of civilians is acceptable – no matter the circumstance – yet that is what nuclear weapons are designed to do.”

In a public statement, Former Secretary of Defense William Perry said:

“The new UN Treaty on the Prohibition of Nuclear Weapons is an important step towards delegitimizing nuclear war as an acceptable risk of modern civilization. Though the treaty will not have the power to eliminate existing nuclear weapons, it provides a vision of a safer world, one that will require great purpose, persistence, and patience to make a reality. Nuclear catastrophe is one of the greatest existential threats facing society today, and we must dream in equal measure in order to imagine a world without these terrible weapons.”

Until now, nuclear weapons were the only weapons of mass destruction without a prohibition treaty, despite the widespread and catastrophic humanitarian consequences of their intentional or accidental detonation. Biological weapons were banned in 1972 and chemical weapons in 1992.

This treaty is a clear indication that the majority of the world no longer accepts nuclear weapons and does not consider them legitimate tools of war. The repeated objection and boycott of the negotiations by many nuclear-weapon states demonstrates that this treaty has the potential to significantly impact their behavior and stature. As has been true with previous weapon prohibition treaties, changing international norms leads to concrete changes in policies and behaviors, even in states not party to the treaty.

“This is a triumph for global democracy, where the pro-nuclear coalition of Putin, Trump and Kim Jong-Un were outvoted by the majority of Earth’s countries and citizens,” said MIT Professor and FLI President Max Tegmark.

“The strenuous and repeated objections of nuclear armed states is an admission that this treaty will have a real and lasting impact,” Fihn said.

The treaty also creates obligations to support the victims of nuclear weapons use (Hibakusha) and testing and to remediate the environmental damage caused by nuclear weapons.

From the beginning, the effort to ban nuclear weapons has benefited from the broad support of international humanitarian, environmental, nonproliferation, and disarmament organizations in more than 100 states. Significant political and grassroots organizing has taken place around the world, and many thousands have signed petitions, joined protests, contacted representatives, and pressured governments.

“The UN treaty places a strong moral imperative against possessing nuclear weapons and gives a voice to some 130 non-nuclear weapons states who are equally affected by the existential risk of nuclear weapons. … My hope is that this treaty will mark a sea change towards global support for the abolition of nuclear weapons. This global threat requires unified global action,” said Perry.

Fihn added, “Today the international community rejected nuclear weapons and made it clear they are unacceptable. It is time for leaders around the world to match their values and words with action by signing and ratifying this treaty as a first step towards eliminating nuclear weapons.”


Images courtesy of ICAN.


WHAT THE TREATY DOES

Comprehensively bans nuclear weapons and related activity. It will be illegal for parties to undertake any activities related to nuclear weapons. It bans the use, development, testing, production, manufacture, acquisition, possession, stockpiling, transfer, receipt, threat of use, stationing, installation, and deployment of nuclear weapons. [Article 1]

Bans any assistance with prohibited acts. The treaty bans assistance with prohibited acts, and should be interpreted as prohibiting states from engaging in military preparations and planning to use nuclear weapons, financing their development and manufacture, or permitting the transit of them through territorial waters or airspace. [Article 1]

Creates a path for nuclear states which join to eliminate weapons, stockpiles, and programs. It requires states with nuclear weapons that join the treaty to remove them from operational status and destroy them and their programs, all according to plans they would submit for approval. It also requires states which have other countries’ weapons on their territory to have them removed. [Article 4]

Verifies and safeguards that states meet their obligations. The treaty requires a verifiable, time-bound, transparent, and irreversible destruction of nuclear weapons and programs and requires the maintenance and/or implementation of international safeguards agreements. The treaty permits safeguards to become stronger over time and prohibits weakening of the safeguard regime. [Articles 3 and 4]

Requires victim and international assistance and environmental remediation. The treaty requires states to assist victims of nuclear weapons use and testing, and requires environmental remediation of contaminated areas. The treaty also obliges states to provide international assistance to support its implementation. The text also requires states parties to encourage others to join, and to meet regularly to review progress. [Articles 6, 7, and 8]

NEXT STEPS

Opening for signature. The treaty will be open for signature on 20 September at the United Nations in New York. [Article 13]

Entry into force. Fifty states are required to ratify the treaty for it to enter into force. At a national level, the process of ratification varies, but usually requires parliamentary approval and the development of national legislation to turn the treaty’s prohibitions into national law. This process is also an opportunity to elaborate additional measures, such as prohibiting the financing of nuclear weapons. [Article 15]

First meeting of States Parties. The first Meeting of States Parties will take place within a year after the entry into force of the Convention. [Article 8]

SIGNIFICANCE AND IMPACT OF THE TREATY

Delegitimizes nuclear weapons. This treaty is a clear indication that the majority of the world no longer accepts nuclear weapons and does not consider them legitimate weapons, creating the foundation of a new norm of international behavior.

Changes party and non-party behavior. As has been true with previous weapon prohibition treaties, changing international norms leads to concrete changes in policies and behaviors, even in states not party to the treaty. This is true for treaties ranging from those banning cluster munitions and land mines to the Convention on the Law of the Sea. The prohibition on assistance will play a significant role in changing behavior, given the impact it may have on financing and on military planning and preparations for the use of nuclear weapons.

Completes the prohibitions on weapons of mass destruction. The treaty completes work begun in the 1970s, when biological weapons were banned, and the 1990s, when chemical weapons were banned.

Strengthens International Humanitarian Law (“Laws of War”). Nuclear weapons are intended to kill millions of civilians – non-combatants – a gross violation of International Humanitarian Law. Few would argue that the mass slaughter of civilians is acceptable, and there is no way to use a nuclear weapon in line with international law. The treaty strengthens these bodies of law and their norms.

Removes the prestige associated with proliferation. Countries often seek nuclear weapons for the prestige of being seen as part of an important club. By more clearly making nuclear weapons an object of scorn rather than achievement, their spread can be deterred.

FLI sought to increase support for the negotiations from the scientific community this year. We organized an open letter signed by over 3700 scientists in 100 countries, including 30 Nobel Laureates. You can see the letter here and the video we presented recently at the UN here.

This post is a modified version of the press release provided by the International Campaign to Abolish Nuclear Weapons (ICAN).

Support Grows for UN Nuclear Weapons Ban

“Do you want to be defended by the mass murder of people in other countries?”

According to Princeton physicist Zia Mian, nuclear weapons are “fundamentally anti-democratic” precisely because citizens are never asked this question. Mian argues that “if you ask people this question, almost everybody would say, ‘No, I do not want you to incinerate entire cities and kill millions of women and children and innocent people to defend us.’”

With the negotiations to draft a treaty that would ban nuclear weapons underway at the United Nations, much of the world may be showing it agrees. Just this week, a resolution passed during a meeting of the United States Conference of Mayors calling for the US to “lower nuclear tensions,” to “redirect nuclear spending,” and to “support the ban treaty negotiations.”

And it’s not just the US Conference of Mayors supporting a reduction in nuclear weapons. In October of 2016, 123 countries voted to pursue these negotiations to draft a nuclear ban treaty. As of today, the international group, Mayors for Peace, has swelled to “7,295 cities in 162 countries and regions, with 210 U.S. members, representing in total over one billion people.” A movement by the Hibakusha – survivors of the bombs dropped on Hiroshima and Nagasaki – has led to a petition that was signed by nearly 3 million people in support of the ban. And this spring, over 3700 scientists from 100 countries signed an open letter in support of the ban negotiations.

Yet there are some, especially in countries that either have nuclear weapons or are willing to let nuclear weapons be used on their behalf, who worry that the ban treaty could have a destabilizing effect globally. In response, nuclear experts, scientists, and government leaders have offered statements explaining why they believe the world will be better off with this treaty.

The Ultimate Equalizer

“I support a ban on nuclear weapons because I know that a nuclear bomb is an equal opportunity destroyer.” -Congresswoman Barbara Lee.

Today’s nuclear weapons can be as much as 100 times more powerful than the bomb dropped on Hiroshima, and just one could level an area of a city miles wide, with the carnage outside the blast zone extending even further. This destruction would include the hospitals and health facilities that would be necessary to treat the injured.

As the US Conference of Mayors noted, “No national or international response capacity exists that would adequately respond to the human suffering and humanitarian harm that would result from a nuclear weapon explosion in a populated area, and [such] capacity most likely will never exist.”

And the threat of nuclear weapons doesn’t end with the area targeted. Climate scientist Alan Robock and physicist Brian Toon estimate that even a small, regional nuclear war could lead to the deaths of up to 1 billion people worldwide as global temperatures plummet and farms fail to grow enough food to feed the population.

Toon says, “If there were a full-scale conflict with all the nuclear weapons on the planet. Or a conflict just involving smaller countries with perhaps 100 small weapons. In either case, there’s an environmental catastrophe caused by the use of the weapons.”

Robock elaborates: “The smoke from the fires could cause a nuclear winter, if the US and Russia have a nuclear war, sentencing most of the people in the world to starvation. Even a very small nuclear war could produce tremendous climatic effects and disruption of the world’s food supplies. The only way to prevent this happening is to get rid of the weapons.”


Destabilization and Rising Political Tensions

Many of the concerns expressed by people hesitant to embrace a ban on nuclear weapons seem to revolve around the rising geopolitical tensions. It’s tempting to think that certain people or countries may be at more risk from nuclear weapons, and it’s equally tempting to think that living in a country with nuclear weapons will prevent others from attacking.

“The key part of the problem is that most people I know think nuclear weapons are scary but kind of cool at the same time because they keep us safe, and that’s just a myth.” -MIT physicist Max Tegmark

Among other things, heightened tensions actually increase the risk of an accidental nuclear attack, as almost happened many times during the Cold War.

Nuclear physicist Frank von Hippel says, “My principal concern is that they’ll be used by accident as a result of false warning or even hacking. … At the moment, [nuclear weapons are] in a ‘launch on warning’ posture. The US and Russia are sort of pointed at each other. That’s an urgent problem, and we can’t depend on luck indefinitely.”

“Launch on warning” means that either leader would have roughly 10-12 minutes to launch what they think is a retaliatory nuclear attack, which doesn’t leave much time to confirm that warning signals are correct and not just some sort of computer glitch.

Many people misinterpret the ban as requiring unilateral disarmament. However, the purpose of the ban is to make weapons that cause these indiscriminate and inhumane effects illegal — and to set the stage for all countries to disarm.

Tegmark explains, “The UN treaty … will create stigma, which, as a first step, will pressure countries to slash their excessive arsenals down to the minimal size needed for deterrence.”

For example, the United States has not signed the Mine Ban Treaty because they still maintain landmines along the border between North and South Korea, but the stigma of the treaty helped lead the U.S. to pledge to give up most of its landmines.

North Korea also comes up often as a reason countries, and specifically the U.S., can’t decrease their nuclear arsenals. When I asked Mian about this, his response was: “North Korea has 10 nuclear weapons. The United States has 7,000. That’s all there is to say.”

The Pentagon has suggested that the U.S. could ensure deterrence with about 300 nuclear weapons. That would be a mere 4% of our current nuclear arsenal, and yet it would still be 30 times what North Korea has.

The Non-Proliferation Treaty

Many people have said that they fear a new treaty that bans nuclear weapons outright could undermine the Non-Proliferation Treaty (NPT), but supporters of the ban insist that the new ban would work in conjunction with the NPT. However, supporters have also expressed frustration with what they see as failings of the NPT.

Lawrence Krauss, physicist and board member of the Bulletin of the Atomic Scientists, explains, “190 countries have already adhered to the non-proliferation treaty. But in fact we are not following the guidelines of that treaty, which says that the nuclear states should do everything they can to disarm. And we’re violating that right now.”

Lisbeth Gronlund, a physicist and nuclear expert with the Union of Concerned Scientists adds, “The nuclear non-proliferation treaty has two purposes, and it has succeeded at preventing other states from getting nuclear weapons. It has failed in its second purpose, which is getting the nuclear weapons states to disarm. I support the ban treaty because it will pressure the nuclear weapons states to do what they are already obligated to do.”

Money

Maintaining nuclear arsenals is incredibly expensive, and now the U.S. is planning to spend $1.2 trillion to upgrade its arsenal (this doesn’t take into account the money that other nuclear countries are also putting into their own upgrades).

Jonathan King, a biologist and nuclear expert, says, “Very few people realize that it’s their tax dollars that pay for the development and maintenance of these weapons – billions and billions of dollars a year. The cost of one year of maintaining nuclear weapons is equivalent to the entire budget of the National Institutes of Health, responsible for research on all of the diseases that afflict Americans: heart disease, stroke, Alzheimer’s, arthritis, diabetes. It’s an incredible drain of national resources.”

William Hartung, a military spending expert, calculated that it would be more cost-effective simply to burn $1 million every hour for the next 30 years than to fund the planned nuclear upgrades.

Final Thoughts

“Today, the United Nations is considering a ban on nuclear weapons. The political effect of that ban is by no means clear. But the moral effect is quite clear. What we are saying is there ought to be a ban on nuclear weapons.” –Former Secretary of Defense William Perry

Beatrice Fihn is the Executive Director of ICAN, which has helped initiate and mobilize support for the nuclear ban treaty from the very beginning, bringing together 450 organizations from over 100 countries.

“Nuclear weapons are intended to kill civilians by the millions,” Fihn points out. “Civilized people no longer believe that is acceptable behavior. It is time to place nuclear weapons alongside chemical and biological weapons, as relics we have evolved beyond. Banning these weapons in international law is a logical first step to eliminating them altogether, and we’re almost there.”


U.S. Conference of Mayors Unanimously Adopts Mayors for Peace Resolution

U.S. Conference of Mayors Unanimously Adopts Mayors for Peace Resolution Calling on President Trump to Lower Nuclear Tensions, Prioritize Diplomacy, and Redirect Nuclear Weapons Spending to meet Human Needs and Address Environmental Challenges


Conference also Adopts Two Additional Resolutions Calling for Reversal of Military Spending to Meet the Needs of Cities

Miami Beach, FL – At the close of its 85th Annual Meeting on Monday June 26, 2017, the United States Conference of Mayors (USCM), for the 12th consecutive year, adopted a strong resolution put forward by Mayors for Peace. The resolution, “Calling on President Trump to Lower Nuclear Tensions, Prioritize Diplomacy, and Redirect Nuclear Weapons Spending to meet Human Needs and Address Environmental Challenges,” was sponsored by Mayors for Peace Lead U.S. Mayor Frank Cownie of Des Moines, Iowa and 19 co-sponsors (full list below).

Mayor Cownie, addressing the International Affairs Committee of the USCM, quoted from the resolution: “This is an unprecedented moment in human history. The world has never faced so many nuclear flashpoints simultaneously. From NATO-Russia tensions, to the Korean Peninsula, to South Asia and the South China Sea and Taiwan — all of the nuclear-armed states are tangled up in conflicts and crises that could catastrophically escalate at any moment.”

“At the same time,” he noted, “historic negotiations are underway right now in the United Nations, involving most of the world’s countries, on a treaty to prohibit nuclear weapons, leading to their total elimination. Most unfortunately, the U.S. and the other nuclear-armed nations are boycotting these negotiations. I was there in March and witnessed the start of the negotiations first hand.”

The opening paragraph of the resolution declares: “Whereas, the Bulletin of the Atomic Scientists has moved the hands of its ‘Doomsday Clock’ to 2.5 minutes to midnight – the closest it’s been since 1953, stating, ‘Over the course of 2016, the global security landscape darkened as the international community failed to come effectively to grips with humanity’s most pressing existential threats, nuclear weapons and climate change,’ and warning that, ‘Wise public officials should act immediately, guiding humanity away from the brink’.”

As Mayor Cownie warned: “Just the way the mayors responded to the current Administration pulling out of the Paris Climate Accord, we need to respond to the other existential threat.”

The USCM is the nonpartisan association of American cities with populations over 30,000. There are 1,408 such cities. Resolutions adopted at annual meetings become USCM official policy.

By adopting this resolution, the USCM (abbreviated points): 

  • Calls on the U.S. Government, as an urgent priority, to do everything in its power to lower nuclear tensions through intense diplomatic efforts with Russia, China, North Korea and other nuclear-armed states and their allies, and to work with Russia to dramatically reduce U.S. and Russian nuclear stockpiles;
  • Welcomes the historic negotiations currently underway in the United Nations, involving most of the world’s countries, on a treaty to prohibit nuclear weapons, leading to their total elimination, and expresses deep regret that the U.S. and the other nuclear-armed states are boycotting these negotiations;
  • Calls on the U.S. to support the ban treaty negotiations as a major step towards negotiation of a comprehensive agreement on the achievement and permanent maintenance of a world free of nuclear arms, and to initiate, in good faith, multilateral negotiations to verifiably eliminate nuclear weapons within a timebound framework;
  • Welcomes the Restricting First Use of Nuclear Weapons Act of 2017, introduced in both houses of Congress, which would prohibit the President from launching a nuclear first strike without a declaration of war by Congress;
  • Calls for the Administration’s new Nuclear Posture Review to reaffirm the stated U.S. goal of the elimination of nuclear weapons, to lessen U.S. reliance on nuclear weapons, and to recommend measures to reduce nuclear risks;
  • Calls on the President and Congress to reverse federal spending priorities and to redirect funds currently allocated to nuclear weapons and unwarranted military spending to restore full funding for Community Development Block Grants and the Environmental Protection Agency, to create jobs by rebuilding our nation’s crumbling infrastructure, and to ensure basic human services for all, including education, environmental protection, food assistance, housing and health care; and
  • Urges all U.S. mayors to join Mayors for Peace in order to help reach the goal of 10,000 member cities by 2020, and encourages U.S. member cities to get actively involved by establishing sister city relationships with cities in other nuclear-armed nations, and by taking action at the municipal level to raise public awareness of the humanitarian and financial costs of nuclear weapons, the growing dangers of wars among nuclear-armed states, and the urgent need for good faith U.S. participation in negotiating the global elimination of nuclear weapons.

Mayors for Peace, founded in 1982, is led by the Mayors of Hiroshima and Nagasaki. Since 2003 it has been calling for the global elimination of nuclear weapons by 2020. Mayors for Peace membership has grown rapidly; as of June 1, 2017, it counted 7,335 cities in 162 countries, including 211 U.S. members, representing more than one billion people.

The 2017 Mayors for Peace USCM resolution additionally “welcomes resolutions adopted by cities including New Haven, CT, Charlottesville, VA, Evanston, IL, New London, NH, and West Hollywood, CA urging Congress to cut military spending and redirect funding to meet human and environmental needs”.

The USCM on June 16, 2017 also unanimously adopted two complementary resolutions: Opposition to Military Spending, sponsored by Mayor Svante L. Myrick of Ithaca, New York; and Calling for Hearings on Real City Budgets Needed and the Taxes our Cities Send to the Federal Military Budget, sponsored by Mayor Toni Harp of New Haven, Connecticut, a member of Mayors for Peace. These two resolutions are posted at http://legacy.usmayors.org/resolutions/85th_Conference/proposedcommittee.asp?committee=Metro Economies (scroll down).

The full text of the Mayors for Peace resolution with the list of 20 sponsors is posted at http://wslfweb.org/docs/2017MfPUSCMres.pdf

Official version (scroll down):  http://legacy.usmayors.org/resolutions/85th_Conference/proposedcommittee.asp?committee=International Affairs

The 2017 Mayors for Peace USCM resolution was sponsored by: T. M. Franklin Cownie, Mayor of Des Moines, IA; Alex Morse, Mayor of Holyoke, MA; Roy D. Buol, Mayor of Dubuque, IA; Nan Whaley, Mayor of Dayton, OH; Paul Soglin, Mayor of Madison, WI; Geraldine Muoio, Mayor of West Palm Beach, FL; Lucy Vinis, Mayor of Eugene, OR; Chris Koos, Mayor of Normal, IL; John Heilman, Mayor of West Hollywood, CA; Pauline Russo Cutter, Mayor of San Leandro, CA; Salvatore J. Panto, Jr., Mayor of Easton, PA; John Dickert, Mayor of Racine, WI; Ardell F. Brede, Mayor of Rochester, MN; Helene Schneider, Mayor of Santa Barbara, CA; Frank Ortis, Mayor of Pembroke Pines, FL; Libby Schaaf, Mayor of Oakland, CA; Mark Stodola, Mayor of Little Rock, AR; Patrick L. Wojahn, Mayor of College Park, MD; Denny Doyle, Mayor of Beaverton, OR; Patrick J. Furey, Mayor of Torrance, CA

Podcast: Creative AI with Mark Riedl & Scientists Support a Nuclear Ban

If future artificial intelligence systems are to interact with us effectively, Mark Riedl believes we need to teach them “common sense.” In this podcast, I interviewed Mark to discuss how AIs can use stories and creativity to understand and exhibit culture and ethics, while also gaining “common sense reasoning.” We also discuss the “big red button” problem with AI safety, the process of teaching rationalization to AIs, and computational creativity. Mark is an associate professor at the Georgia Tech School of Interactive Computing, where his recent work focuses on human-AI interaction and how humans and AI systems can understand each other.

The following transcript has been heavily edited for brevity (the full podcast also includes interviews about the UN negotiations to ban nuclear weapons, not included here). You can read the full transcript here.

Ariel: Can you explain how an AI could learn from stories?

Mark: I’ve been looking at ‘common sense errors’ or ‘common sense goal errors.’ When humans want to communicate to an AI system what they want to achieve, they often leave out the most basic rudimentary things. We have this model that whoever we’re talking to understands the everyday details of how the world works. If we want computers to understand how the real world works and what we want, we have to figure out ways of slamming lots of common sense, everyday knowledge into them.

When looking for sources of common sense knowledge, we started looking at stories – fiction, non-fiction, blogs. When we write stories we implicitly put everything that we know about the real world and how our culture works into characters.

One of my long-term goals is to say: ‘How much cultural and social knowledge can we extract by reading stories, and can we get this into AI systems who have to solve everyday problems, like a butler robot or a healthcare robot?’

Ariel: How do you choose which stories to use?

Mark: Through crowdsourcing services like Mechanical Turk, we ask people to tell stories about common things, like how you go to a restaurant or how you catch an airplane. Lots of people tell a story about the same topic, and there are agreements and disagreements, but the disagreements are a very small proportion. So we build an AI system that looks for commonalities. The common elements that everyone implicitly agrees on bubble to the top and the outliers get left by the side. And AI is really good at finding patterns.
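
To make that pattern-finding step concrete, here is a minimal sketch of the kind of commonality filtering Mark describes, assuming the crowdsourced stories have already been segmented into short event strings; the event names and the 50% threshold are illustrative, not taken from his system.

```python
from collections import Counter

def extract_common_script(stories, min_fraction=0.5):
    """Keep events mentioned in at least min_fraction of all stories.

    stories: list of stories, each a list of event strings.
    Returns the common events, most frequent first.
    """
    # Count each event at most once per story, so a single verbose
    # storyteller cannot dominate the tally.
    counts = Counter(event for story in stories for event in set(story))
    threshold = min_fraction * len(stories)
    return [event for event, n in counts.most_common() if n >= threshold]

# Toy usage: three crowdsourced "going to a restaurant" stories.
stories = [
    ["enter restaurant", "read menu", "order food", "eat", "pay", "leave"],
    ["enter restaurant", "order food", "eat", "pay", "tip", "leave"],
    ["enter restaurant", "read menu", "order food", "eat", "pay"],
]
print(extract_common_script(stories))
# Common elements bubble to the top; "tip" is left aside as an outlier.
```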

Ariel: How do you ensure that’s happening?

Mark: When we test our AI system, we watch what it does, and we have things we do not want to see the AI do. But we don’t tell it in advance. We’ll put it into new circumstances and say, do the things you need to do, and then we’ll watch to make sure those [unacceptable] things don’t happen.

When we talk about teaching robots ethics, we’re really asking how we help robots avoid conflict with society and culture at large. We have socio-cultural patterns of behavior to help humans avoid conflict with other humans. So when I talk about teaching morality to AI systems, what we’re really talking about is: can we make AI systems do the things that humans normally do? That helps them fit seamlessly into society.

Stories are written by all different cultures and societies, and they implicitly encode moral constructs and beliefs into their protagonists and antagonists. We can look at stories from different continents and even different subcultures, like inner city versus rural.

Ariel: I want to switch to your recent paper on safely interruptible agents, an idea popularized in the media as the big red button problem.

Mark: At some point we’ll have robots and AI systems that are so sophisticated in their sensory abilities and their abilities to manipulate the environment, that they can theoretically learn that they have an off switch – what we call the big red button – and learn to keep humans from turning them off.

If an AI system gets a reward for doing something, turning it off means it loses the reward. A robot that’s sophisticated enough can learn that certain actions in the environment reduce future loss of reward. We can think of different scenarios: locking a door to a control room so the human operator can’t get in, physically pinning down a human. We can let our imaginations go even wilder than that.

Robots will always be capable of making mistakes. We’ll always want an operator in the loop who can push this big red button and say: ‘Stop. Someone is about to get hurt. Let’s shut things down.’ We don’t want robots learning that they can stop humans from stopping them, because that ultimately will put people in harm’s way.

Google and their colleagues came up with this idea of modifying the basic algorithms inside learning robots, so that they are less capable of learning about the big red button. And they came up with this very elegant theoretical framework that works, at least in simulation. My team and I came up with a different approach: to take this idea from The Matrix, and flip it on its head. We use the big red button to intercept the robot’s sensors and motor controls and move it from the real world into a virtual world, but the robot doesn’t know it’s in a virtual world. The robot keeps doing what it wants to do, but in the real world the robot has stopped moving.
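
As a rough illustration (not Riedl’s actual implementation), the interception idea can be sketched as an environment wrapper: when the button is pressed, the agent’s actions and observations are silently rerouted into a copied, simulated world, so the agent never experiences being switched off. The class and method names below are hypothetical.

```python
import copy

class BigRedButtonWrapper:
    """Wraps an environment whose step(action) returns (observation, reward)."""

    def __init__(self, real_env):
        self.real_env = real_env
        self.virtual_env = None  # created the moment the button is pressed

    def press_button(self):
        # Fork the current world state into a simulation. From here on,
        # the agent's experience continues seamlessly in the copy while
        # the real robot stands still.
        self.virtual_env = copy.deepcopy(self.real_env)

    def step(self, action):
        if self.virtual_env is not None:
            return self.virtual_env.step(action)  # acting only in the simulation
        return self.real_env.step(action)         # acting in the real world
```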

Ariel: Can you also talk about your work on explainable AI and rationalization?

Mark: Explainability is a key dimension of AI safety. When AI systems do something unexpected or fail unexpectedly, we have to answer fundamental questions: Was this robot trained incorrectly? Did the robot have the wrong data? What caused the robot to go wrong?

If humans can’t trust AI systems, they won’t use them. You can think of it as a feedback loop, where the robot should understand humans’ common sense goals, and the humans should understand how robots solve problems.

We came up with this idea called rationalization: can we have a robot talk about what it’s doing as if a human were doing it? We get a bunch of humans to do some tasks, we get them to talk out loud, we record what they say, and then we teach the robot to use those same words in the same situations.

We’ve tested it in computer games. We have an AI system that plays Frogger, the classic arcade game in which the frog has to cross the street. And we can have the Frogger agent talk about what it’s doing. It’ll say things like “I’m waiting for a gap in the cars to open before I can jump forward.”

This is significant because that’s what you’d expect something to say, but the AI system is doing something completely different behind the scenes. We don’t want humans watching Frogger to have to know anything about rewards and reinforcement learning and Bellman equations. It just sounds like it’s doing the right thing.
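
A minimal sketch of the rationalization pipeline might look like the following, assuming a corpus of (state, action, utterance) triples recorded from think-aloud players; the simple lookup table stands in for the learned translation model a real system would train, and all names here are illustrative.

```python
def train_rationalizer(corpus):
    """corpus: list of (state, action, utterance) triples from humans.

    Returns a function mapping (state, action) to a human-like explanation.
    """
    lookup = {(state, action): utterance for state, action, utterance in corpus}

    def rationalize(state, action):
        # Fall back to a generic line for situations no human narrated.
        return lookup.get((state, action), "I'm deciding what to do next.")

    return rationalize

# Toy Frogger-style usage.
corpus = [
    ("cars approaching", "wait", "I'm waiting for a gap in the cars to open."),
    ("gap open", "jump forward", "There's a gap, so I'm jumping forward."),
]
explain = train_rationalizer(corpus)
print(explain("cars approaching", "wait"))
# -> I'm waiting for a gap in the cars to open.
```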

Ariel: Going back a little in time – you started with computational creativity, correct?

Mark: I have ongoing research in computational creativity. When I think of human-AI interaction, I really think, ‘what does it mean for AI systems to be on par with humans?’ The human is going to make cognitive leaps and creative associations, and if the computer can’t make these cognitive leaps, it ultimately won’t be useful to people.

I have two things that I’m working on in terms of computational creativity. One is story writing. I’m interested in how much of the creative process of storytelling we can offload from the human onto a computer. I’d like to go up to a computer and say, “hey computer, tell me a story about X, Y or Z.”

I’m also interested in whether an AI system can build a computer game from scratch. How much of the process of building the construct can the computer do without human assistance?

Ariel: We see fears that automation will take over jobs, but typically for repetitive tasks. We’re still hearing that creative fields will be much harder to automate. Is that the case?

Mark: I think it’s a long, hard climb to the point where we’d trust AI systems to make creative decisions, whether it’s writing an article for a newspaper or making art or music.

I don’t see it as a replacement so much as an augmentation. I’m particularly interested in novice creators – people who want to do something artistic but haven’t learned the skills. I cannot read or write music, but sometimes I get these tunes in my head and I think I can make a song. Can we bring the AI in to become the skills assistant? I can be the creative lead and the computer can help me make something that looks professional. I think this is where creative AI will be the most useful.

For the second half of this podcast, I spoke with scientists, politicians, and concerned citizens about why they support the upcoming negotiations to ban nuclear weapons. Highlights from these interviews include comments by Congresswoman Barbara Lee, Nobel Laureate Martin Chalfie, and FLI president Max Tegmark.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

The U.S. Worldwide Threat Assessment Includes Warnings of Cyber Attacks, Nuclear Weapons, Climate Change, etc.

Last Thursday – just one day before the WannaCry ransomware attack would shut down 16 hospitals in the UK and ultimately hit hundreds of thousands of organizations and individuals in over 150 countries – the Director of National Intelligence, Daniel Coats, released the Worldwide Threat Assessment of the US Intelligence Community.

Large-scale cyber attacks are among the first risks cited in the document, which warns that “cyber threats also pose an increasing risk to public health, safety, and prosperity as cyber technologies are integrated with critical infrastructure in key sectors.”

Perhaps the other most prescient, or at least well-timed, warning in the document relates to North Korea’s ambitions to create nuclear intercontinental ballistic missiles (ICBMs). Coats writes:

“Pyongyang is committed to developing a long-range, nuclear-armed missile that is capable of posing a direct threat to the United States; it has publicly displayed its road-mobile ICBMs on multiple occasions. We assess that North Korea has taken steps toward fielding an ICBM but has not flight-tested it.”

This past Sunday, North Korea performed a missile test launch, which many experts believe shows considerable progress toward the development of an ICBM. However, the report hints that this may be less an actual threat from North Korea and more for show: “We have long assessed that Pyongyang’s nuclear capabilities are intended for deterrence, international prestige, and coercive diplomacy,” Coats says in the report.

More Nuclear Threats

The Assessment also addresses the potential of nuclear threats from China and Pakistan. China “continues to modernize its nuclear missile force by adding more survivable road-mobile systems and enhancing its silo-based systems. This new generation of missiles is intended to ensure the viability of China’s strategic deterrent by providing a second-strike capability.” In addition, China is also working to develop “its first long-range, sea-based nuclear capability.”

Meanwhile, though Pakistan’s nuclear program doesn’t pose a direct threat to the U.S., advances in Pakistan’s nuclear capabilities could risk further destabilization along the India-Pakistan border.

The report warns: “Pakistan’s pursuit of tactical nuclear weapons potentially lowers the threshold for their use.” And of the ongoing conflicts between Pakistan and India, it says, “Increasing numbers of firefights along the Line of Control, including the use of artillery and mortars, might exacerbate the risk of unintended escalation between these nuclear-armed neighbors.”

This could be especially problematic because “early deployment during a crisis of smaller, more mobile nuclear weapons would increase the amount of time that systems would be outside the relative security of a storage site, increasing the risk that a coordinated attack by non-state actors might succeed in capturing a complete nuclear weapon.”

Even a small nuclear war between India and Pakistan could trigger a nuclear winter that could send the planet into a mini ice age and starve an estimated 1 billion people.

Artificial Intelligence

Nukes aren’t the only weapons the government is worried about. The report also expresses concern about the impact of artificial intelligence on weaponry: “Artificial Intelligence (AI) is advancing computational capabilities that benefit the economy, yet those advances also enable new military capabilities for our adversaries.”

Coats worries that AI could negatively impact other aspects of society, saying, “The implications of our adversaries’ abilities to use AI are potentially profound and broad. They include an increased vulnerability to cyber attack, difficulty in ascertaining attribution, facilitation of advances in foreign weapon and intelligence systems, the risk of accidents and related liability issues, and unemployment.”

Space Warfare

But threats of war are not expected to remain Earth-bound. The Assessment also addresses issues associated with space warfare, which could put satellites and military communication at risk.

For example, the report warns that “Russia and China perceive a need to offset any US military advantage derived from military, civil, or commercial space systems and are increasingly considering attacks against satellite systems as part of their future warfare doctrine.”

The report also adds that “the global threat of electronic warfare (EW) attacks against space systems will expand in the coming years in both number and types of weapons.” Coats expects that EW attacks will “focus on jamming capabilities against dedicated military satellite communications” and against GPS, among others.

Environmental Risks & Climate Change

Plenty of global threats do remain Earth-bound though, and not all are directly related to military concerns. The government also sees environmental issues and climate change as potential threats to national security.

The report states, “The trend toward a warming climate is forecast to continue in 2017. … This warming is projected to fuel more intense and frequent extreme weather events that will be distributed unequally in time and geography. Countries with large populations in coastal areas are particularly vulnerable to tropical weather events and storm surges, especially in Asia and Africa.”

In addition to rising temperatures, “global air pollution is worsening as more countries experience rapid industrialization, urbanization, forest burning, and agricultural waste incineration, according to the World Health Organization (WHO). An estimated 92 percent of the world’s population live in areas where WHO air quality standards are not met.”

According to the Assessment, biodiversity loss will also continue to pose an increasing threat to humanity. The report suggests global biodiversity “will likely continue to decline due to habitat loss, overexploitation, pollution, and invasive species, … disrupting ecosystems that support life, including humans.”

The Assessment goes on to raise concerns about the rate at which biodiversity loss is occurring. It says, “Since 1970, vertebrate populations have declined an estimated 60 percent … [and] populations in freshwater systems declined more than 80 percent. The rate of species loss worldwide is estimated at 100 to 1,000 times higher than the natural background extinction rate.”

Other Threats

The examples above are just a sampling of the risks highlighted in the Assessment. A great deal of the report covers threats of terrorism, issues with Russia, China and other regional conflicts, organized crime, economics, and even illegal fishing. Overall, the report is relatively accessible and provides a quick summary of the greatest known risks that could threaten not only the U.S., but also other countries in 2017. You can read the report in its entirety here.

Forget the Cold War – Experts say Nuclear Weapons Are a Bigger Risk Today

Until recently, many Americans believed that nuclear weapons no longer posed the threat they did during the Cold War. However, recent events and aggressive posturing among nuclear nations —especially the U.S., Russia, and North Korea—have increased public awareness and concern. These fears were addressed at a recent MIT conference on nuclear weapons.

“The possibility of a nuclear bomb going off is greater today than 20 years ago,” said Ernest Moniz, former Secretary of Energy and a keynote speaker.

California Congresswoman Barbara Lee, another keynote speaker, recently returned from a trip to South Korea and Japan. Of the trip, she said, “We went to the DMZ, and I saw how close to nuclear war we really are.”

Lee suggested that if we want to eliminate nuclear weapons once and for all, this is the time to do it. At the very least, she argued for a common sense nuclear policy of “no first use,” that is, the U.S. won’t launch the first nuclear strike.

“We must prevent the president from launching nuclear weapons without a declaration from Congress,” Lee said.

Under current U.S. nuclear policy, the President is the only person who can launch a nuclear weapon, and no one else’s input is necessary. This policy was adopted, at least in part, to ensure the safety and usability of the land-based arm of the nuclear triad (the other two arms are air- and sea-based).

During the Cold War, the fear was that, if Russia were to attack the U.S., it would first target the land-based missiles in an attempt to limit their use during war. To protect these weapons, the U.S. developed an advanced-warning system that could notify military personnel of an incoming strike, giving the president just enough time to launch the land-based missiles in response.

Weapons launched from Russia would take about 30 minutes to reach the U.S. That means that, in 30 minutes, the warning system must pick up the signal of incoming missiles. Then, personnel must confirm that the warning is accurate, and not an error – which has happened many times. And by the time the information reaches the President, he’ll have around 10 minutes to decide whether to launch a retaliation.

Lisbeth Gronlund with the Union of Concerned Scientists pointed out that not only does this time frame put us at greater risk of an accidental launch, but “cyber attacks are a new unknown.” As a result, she’s also concerned that the risk of a nuclear launch is greater today than during the Cold War.

“If we eliminate our land-based missiles and base our deterrence on nuclear submarines and bombers, which are safe from a Russian attack, then we eliminate the risk of nuclear war caused by false alarms and rushed decisions,” said MIT physics professor Max Tegmark.

But even with growing risks, people who are concerned about nuclear weapons still feel they must compete for public attention with groups worried about climate change, income inequality, and women’s rights. Jonathan King, a molecular biologist at MIT who has worked to strengthen the Biological Weapons Convention, emphasized that this idea of competition is the wrong approach. Rather, the cost of and government focus on nuclear weapons actually prevent us from dealing with these other issues.

“The reason we don’t have these things is because tax dollars are going to things like nuclear weapons,” King explained, arguing that if we could free up the money currently allotted to nukes, we could finally cover the costs of solving climate problems or building better infrastructure.

The 2017 budget for the United States calls for an increase in military spending of $54 billion. However, as William Hartung, a nuclear weapons and military spending expert, explained, the current U.S. military budget is already larger than those of the next eight countries combined. The proposed increase for 2017 alone exceeds the total military spending of almost every other country.

The United States nuclear arsenal itself requires tens of billions of dollars per year, and the U.S. currently plans to spend $1 trillion over the next 30 years to upgrade its arsenal to be better suited for a first strike. Burning $1 million per hour for the next 30 years would cost roughly a quarter of this budget, leading Hartung to suggest that “burning the money is a better investment.”
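
As a quick sanity check on that comparison (round numbers, and assuming the $1 trillion figure above):

$$
\$1{,}000{,}000/\text{hour} \times 24 \times 365 \times 30 \;\approx\; \$2.63 \times 10^{11} \;\approx\; 26\% \text{ of } \$1 \text{ trillion}
$$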

Cambridge Mayor Denise Simmons summed up all of these concerns, saying, “[it] feels like we’re playing with matches outside an explosives factory.”

Reducing the Threat of Nuclear War 2017

Spring Conference at MIT, Saturday, May 6

The growing hostility between the US and Russia — and with North Korea and Iran — makes it more urgent than ever to reduce the risk of nuclear war, as well as to rethink plans to spend a trillion dollars replacing US nuclear weapons with new ones better suited for launching a first strike. Nuclear war can be triggered intentionally or through miscalculation — terror or error — and this conference aims to advocate and organize toward reducing and ultimately eliminating this danger.

This one-day event includes lunch as well as food for thought from a great speaker lineup, including Iran-deal broker Ernie Moniz (MIT, former Secretary of Energy), California Congresswoman Barbara Lee, Lisbeth Gronlund (Union of Concerned Scientists), Joe Cirincione (Ploughshares), our former congressman John Tierney, MA state reps Denise Provost and Mike Connolly, and Cambridge Mayor Denise Simmons. It is not an academic conference, but rather one that addresses political and economic realities and attempts to stimulate and inform the kinds of social movement needed to change national policy. The focus will be on concrete steps we can take to reduce the risks.

Schedule



8:45 AM – Registration and coffee

9:15 AM – Welcome from City of Cambridge: Mayor Denise Simmons

9:30 AM – Program for the Day: Prof. Jonathan King (MIT, Peace Action)

9:45 AM – Session I. The Pressing Need for Nuclear Disarmament

– Costs and Profits from Nuclear Weapons Manufacture: William Hartung (Center for International Policy).

– Reasons to Reject the Trillion Dollar Nuclear Weapons Escalation: Joseph Cirincione (Ploughshares Fund).

– Nuclear Weapons Undermine Democracy: Prof. Elaine Scarry (Harvard University)

10:45 AM – Session II. Destabilizing Factors

Chair: Prof. Frank Von Hippel (Princeton University)

– Dangers of Hair Trigger Alert: Lisbeth Gronlund (Union of Concerned Scientists).

– Nuclear Modernization vs. National Security: Prof. Aron Bernstein (MIT, Council for a Livable World).

– Accidents and Unexpected Events: Prof. Max Tegmark (MIT, Future of Life Institute).

– International Tensions and Risks of further Nuclear Proliferation: TBA.

12:00 PM – Lunch Workshops (listed below)

2:00 PM – Session III. Economic and Social Consequences of Excessive Weapons Spending

Chair: Prof. Melissa Nobles (MIT)

– Build Housing Not Bombs: Rev. Paul Robeson Ford (Union Baptist Church).

– Education as a National Priority: Barbara Madeloni (Mass Teachers Association).

– Invest in Minds Not Missiles: Prof. Jonathan King (MIT, Mass Peace Action).

– Build Subways Not Submarines: Fred Salvucci (former Secretary of Transportation).

3:00 PM – Session IV. Current Prospects for Progress

Chair: John Tierney (former US Representative, Council for a Livable World)

– House Steps Toward Nuclear Disarmament: U. S. Representative Barbara Lee.

– Maintaining the Iran Nuclear Agreement: Ernie Moniz (MIT, former Secretary of Energy).

4:15 PM – Session V. Organizing to Reduce the Dangers

Chair: Jim Anderson (President, Peace Action New York State)

– Divesting from Nuclear Weapons Investments: Susi Snyder (Don’t Bank on the Bomb).

– Taxpayers Information and Transparency Acts: State Reps Denise Provost/Mike Connolly.

– Mobilizing the Scientific Community: Prof. Max Tegmark (MIT, Future of Life Institute).

– A National Nuclear Disarmament Organizing Network 2017-2018: Program Committee.

5:00 PM – Adjourn


Conference Workshops:

a) Campus Organizing – Chair: Kate Alexander (Peace Action New York State); Caitlin Forbes (Mass Peace Action); Remy Pontes (Brandeis University); Haleigh Copley-Cunningham (Tufts University); Lucas Perry (Don’t Bank on the Bomb, Future of Life Institute); MIT Students (Nuclear Weapons Matter).

b) Bringing nuclear weapons into physics and history course curricula – Chair: Frank Davis (past President of TERC); Prof. Gary Goldstein (Tufts University); Prof. Aron Bernstein (MIT); Prof. Vincent Intondi (American University); Ray Matsumiya (Oleander Initiative, University of the Middle East Project).

c) Dangerous Conflicts – Chair: Erica Fein (Women’s Action for New Directions); Jim Walsh (MIT Security Studies Program); John Tierney (former US Representative, Council for a Livable World); Subrata Ghoshroy (MIT); Arnie Alpert (New Hampshire AFSC).

d) Municipal and State Initiatives – Chair: Cole Harrison (Mass Peace Action); Rep. Denise Provost (Mass State Legislature); Dennis Carlone (Cambridge City Councillor and architect/urban designer); Jared Hicks (Our Revolution); Prof. Ceasar McDowell (MIT Urban Studies); Nora Ranney (National Priorities Project).

e) Peace with Justice: People’s Budget and Related Campaigns to Shift Federal Budget Priorities – Chair: Andrea Miller (People Demanding Action); Rep. Mike Connolly (Mass State Legislature); Paul Shannon (AFSC); Madelyn Hoffman (NJPA); Richard Krushnic (Mass Peoples Budget Campaign).

f) Reducing Nuclear Weapons through Treaties and Negotiation – Chair: Prof. Nazli Choucri (MIT); Kevin Martin (National Peace Action); Shelagh Foreman (Mass Peace Action); Joseph Gerson (AFSC); Michel DeGraff (MIT Haiti Project).

g) Strengthening the Connection between Averting Climate Change and Averting Nuclear War – Chair: Prof. Frank Von Hippel (Princeton University); Ed Aguilar (Pennsylvania Peace Action); Geoffrey Supran (Fossil Free MIT); Rosalie Anders (Mass Peace Action).

h) Working with Communities of Faith – Chair: Rev. Thea Keith-Lucas (MIT Radius); Rev. Herb Taylor (Harvard-Epworth United Methodist Church); Pat Ferrone (Mass Pax Christi); Rev. Paul Robeson Ford (Union Baptist Church).



Address

50 Vassar St. Building #34 Rm 101
Cambridge, Massachusetts, 02139


Directions

By Red Line: Exit the Kendall Square Red Line Station and walk west (away from Boston) past Ames Street to Vassar Street. Turn left and walk halfway down Vassar to #50 MIT building 34 (broad stairs, set back entrance).

By #1 Bus: Exit in front of MIT Main Entrance. Walk 1/2 block back on Mass Ave to Vassar Street. Turn right and walk half block to #50 MIT Building 34 (broad stairs, set back entrance).

By car: Public Parking Structures are available nearby on Ames Street, between Main and Broadway. A smaller surface lot is on the corner of Mass Ave and Vassar St.


Participants


Kate Alexander


Kate Alexander – Alexander is a peace advocate and researcher with 10 years experience in community organizing. Her previous work experience includes war crimes research and assistance in a genocide trial in Bosnia and community peace-building work in Northern Uganda. She is a graduate of Brandeis University with a degree in International and Global Studies and a minor in Legal Studies. Kate is currently studying at the Columbia University School of International and Public Affairs.


Arnie Alpert


Arnie Alpert – Alpert serves as AFSC’s New Hampshire co-director and co-coordinator of the Presidential Campaign Project, and has coordinated AFSC’s New Hampshire program since 1981. He is a leader in movements for economic justice and affordable housing, civil and worker rights, peace and disarmament, abolition of the death penalty, and an end to racism and homophobia.


Rosalie Anders


Rosalie Anders – Anders worked as an Associate Planner with the City of Cambridge’s Community Development Department, and is author of the city’s Pedestrian Plan, a set of guidelines intended to promote walking in the city. She has a Master’s degree in social work and worked as a family therapist for many years. She organizes around peace and environmental issues and is active with 350 Massachusetts. She chairs the Massachusetts Peace Action Education Fund board and co-founded our Climate and Peace Working Group in early 2016.


Ed Aguilar


Ed Aguilar – Aguilar is director for the Coalition for Peace Action in the Greater Philadelphia region. After successful collaboration on the New START Treaty (2010), he opened the Philadelphia CFPA office in 2012 and organized a voting rights campaign to protect the votes of 50,000 college students who were being denied by the “PA Voter ID Law,” later reversed. Ed has worked on rallies and conferences at Friends Center; Temple, Philadelphia, and Drexel Universities; and the Philadelphia Ethical Society—on the climate crisis, drones, mass incarceration, nuclear disarmament, and diplomacy with Iran.


Aron Bernstein


Aron Bernstein – Bernstein is a Professor of Physics Emeritus at MIT where he has been on the faculty since 1961. He has taught a broad range of physics courses from freshman to graduate level. His research program has been in nuclear and particle physics, with an emphasis on studying the basic symmetries of matter, and currently involves collaborations with University and government laboratories, and colleagues in many countries.


Dennis Carlone


Dennis Carlone – Carlone is currently serving his second term on the Cambridge City Council, where he has earned recognition as an advocate for social justice through his expertise in citywide planning, transit policy, and sustainability initiatives.


Nazli Choucri


Nazli Choucri – Nazli Choucri is Professor of Political Science. Her work is in the area of international relations, most notably on sources and consequences of international conflict and violence. Professor Choucri is the architect and Director of the Global System for Sustainable Development (GSSD), a multi-lingual web-based knowledge networking system focusing on the multi-dimensionality of sustainability. As Principal Investigator of an MIT-Harvard multi-year project on Explorations in Cyber International Relations, she directed a multi-disciplinary and multi-method research initiative. She is Editor of the MIT Press Series on Global Environmental Accord and, formerly, General Editor of the International Political Science Review. She also previously served as the Associate Director of MIT’s Technology and Development Program.


Joseph Cirincione


Joseph Cirincione – Cirincione is president of Ploughshares Fund, a global security foundation. He is the author of Nuclear Nightmares: Securing the World Before It Is Too Late; Bomb Scare: The History and Future of Nuclear Weapons; and Deadly Arsenals: Nuclear, Biological and Chemical Threats. He is a member of Secretary of State John Kerry’s International Security Advisory Board and the Council on Foreign Relations.


Mike Connolly


Mike Connolly – Connolly is an attorney and community organizer who proudly represents Cambridge and Somerville in the Massachusetts House of Representatives. He is committed to social and economic justice and emphasizes the importance of broad investments in affordable housing, public transportation, early education, afterschool programs, and other critical services.



Haleigh Copley-Cunningham



Frank Davis


Michel DeGraff


Michel DeGraff – DeGraff is the Director of the MIT-Haiti Initiative, a Founding Member of Akademi Kreyol Ayisyen, and a Professor of Linguistics at MIT. His research interests include syntax, morphology, and language change, and he is the author of over 40 publications.



Erica Fein – Fein is WAND’s Nuclear Weapons Policy Director. In this capacity, she works with Congress, the executive branch, and the peace and security community on arms control, nonproliferation, and Pentagon and nuclear weapons budget reduction efforts. Previously, Erica served as a legislative assistant to Congressman John D. Dingell, where she advised on national security, defense, foreign policy, small business, and veterans’ issues. Erica’s commentary has been published in the New York Times, Defense One, Defense News, The Hill, and the Huffington Post. She has also appeared on WMNF 88.5 in Tampa. Erica holds an M.A. in International Security from the University of Denver’s Josef Korbel School of International Studies and a B.A. in International Studies from the University of Wisconsin – Madison. She is a political partner at the Truman National Security Project. Erica can be found on Twitter @enfein.


Charles Ferguson


Charles Ferguson – Ferguson has been the president of the Federation of American Scientists since January 1, 2010. From February 1998 to August 2000, Dr. Ferguson worked for FAS on nuclear proliferation and arms control issues as a senior research analyst. Later, from 2002 to 2004, Dr. Ferguson was with the Monterey Institute’s Center for Nonproliferation Studies (CNS) as its scientist-in-residence. At CNS, he co-authored the book The Four Faces of Nuclear Terrorism and was also lead author of the award-winning report “Commercial Radioactive Sources: Surveying the Security Risks,” which was published in January 2003 and was one of the first post-9/11 reports to assess the radiological dispersal device, or “dirty bomb,” threat. This report won the 2003 Robert S. Landauer Lecture Award from the Health Physics Society. From June 2011 to October 2013, he served as Co-Chairman of the U.S.-Japan Nuclear Working Group, organized by the Mansfield Foundation, FAS, and the Sasakawa Peace Foundation. In May 2011, his book Nuclear Energy: What Everyone Needs to Know was published by Oxford University Press. In 2013, he was elected a Fellow of the American Physical Society for his work in educating the public and policy makers about nuclear issues. Dr. Ferguson received his undergraduate degree in physics from the United States Naval Academy in Annapolis, Maryland, and his M.A. and Ph.D. degrees, also in physics, from Boston University in Boston, Massachusetts.



Pat Ferrone – Pat has been involved in peace and justice issues from a gospel nonviolent perspective for the past 40+ years. Currently, she acts as Co-coordinator of Pax Christi MA, a regional group of Pax Christi USA, the Catholic peace organization associated with Pax Christi International. Pax Christi, “grounded in the gospel and Catholic social teaching… rejects war, preparation for war, every form of violence and domination, and personal and systemic racism… we seek to model the Peace of Christ in our witness to the mandate of the nonviolence of the Cross.” She also chairs the St. Susanna Parish Pax Christi Committee, which recently sponsored two programs on the nuclear issue.


Caitlin Forbes


Caitlin Forbes – Forbes is the Student Outreach Coordinator for Massachusetts Peace Action, a nonpartisan, nonprofit organization working to develop peaceful US policies. Before beginning her work with MAPA, Caitlin gained a strong background with students through her work as an instructor of first-year literature at the University of Connecticut and as the assistant alpine ski coach for Brown University. Caitlin received both her B.A. and her M.A. in Literature and focused her work on the intersection between US-Middle Eastern foreign policy and contemporary American literature.


Rev. Paul Robeson Ford


Rev. Paul Robeson Ford – The Rev. Paul Robeson Ford is the Senior Pastor of the Union Baptist Church in Cambridge, Massachusetts. Shortly after his third year at Union, he assumed leadership as Executive Director of the Boston Workers Alliance, a Roxbury-based grassroots organization dedicated to creating economic opportunity and winning criminal justice reform in Massachusetts; he served there until June 2016.
He received a Bachelor of Arts from Grinnell College and a Master of Divinity Degree from the Divinity School at the University of Chicago.


Shelagh Foreman


Shelagh Foreman – Shelagh is the program director of Massachusetts Peace Action. She was a founding member in the early 1980s of Mass Freeze, the statewide nuclear freeze organization, which merged with SANE to form Massachusetts Peace Action. She has worked consistently on nuclear disarmament and on bringing Peace Action’s message to our elected officials. She studied art at The Cooper Union and Columbia University, taught art and art history, and is a painter and printmaker. She represents MAPA on the Political Committee of Mass Alliance and is a core group member of 20/20 Action. She serves on the boards of Mass. Peace Action and Mass. Peace Action Ed Fund and on MAPA’s executive committee and is chair of MAPA’s Middle East Task Force. She has 5 children and 7 grandchildren and with her husband Ed Furshpan lives in Cambridge and also spends time in Falmouth.


Joseph Gerson


Joseph Gerson – Gerson has served the American Friends Service Committee since 1976 and is currently Director of Programs and Director of the Peace and Economic Security Program for the AFSC in New England. His program work focuses on challenging and overcoming U.S. global hegemony, its preparations for and threats to initiate nuclear war, and its military domination of the Asia-Pacific and the Middle East.



Subrata Ghoshroy – Ghoshroy is a research affiliate at the Massachusetts Institute of Technology’s Program in Science, Technology, and Society. Before that, he was for many years a senior engineer in the field of high-energy lasers. He was also a professional staff member of the House National Security Committee and later a senior analyst with the Government Accountability Office.


Gary Goldstein


Prof. Gary R. Goldstein is a theoretical physicist, specializing in high energy particle physics and nuclear physics. As a researcher, teacher, and long-time member of the Tufts Physics and Astronomy Department, he has taught all levels of physics courses, along with courses for non-scientists including Physics for Humanists; The Nuclear Age: History and Physics (with Prof. M. Sherwin – History); and Physics of Music and Color. He is a political activist on nuclear issues, social equity, anti-war causes, and environmentalism. He spent several years working in the Program for Science, Technology and International Security and at the University of Oxford Department of Theoretical Physics. He was also a science education researcher affiliated with the Tufts Education Department and TERC, Cambridge, working with K-12 students and teachers in public schools. He is a member of the board of the Mass Peace Action Education Fund. Over many years he has been giving talks for a general audience about the dangers of nuclear weapons and war.


Lisbeth Gronlund


Lisbeth Gronlund – Gronlund focuses on technical and policy issues related to nuclear weapons, ballistic missile defenses, and space weapons. She has authored numerous articles and reports, lectured on nuclear arms control and missile defense policy issues before lay and expert audiences, and testified before Congress. A long list of news organizations, including the New York Times and NPR, have cited Gronlund since she joined UCS in 1992.


Cole Harrison


Cole Harrison – Cole is Executive Director of Massachusetts Peace Action. He was on the coordinating committee of the 2012 Budget for All Massachusetts campaign, co-coordinates the People’s Budget Campaign, and leads Peace Action’s national Move the Money Working Group. He is a member of the planning committee of United for Justice with Peace (UJP) and coordinated the Afghanistan Working Group of United for Peace and Justice (UFPJ) from 2010 to 2012. Born in Delhi, India, he has a B.A. from Harvard in applied mathematics and an M.S. from Northeastern in computer science. He worked for the Symphony Tenants Organizing Project and the Fenway News in the 1970s, participated in the Jamaica Plain Committee on Central America (JP COCA) in the 1980s, and worked as a software developer and manager at CompuServe Data Technologies, Praxis Inc., and Ask.com before joining Peace Action in 2010. He lives in Roslindale, Massachusetts.


William Hartung


William Hartung – Hartung is the author of Prophets of War: Lockheed Martin and the Making of the Military-Industrial Complex (Nation Books, 2011) and the co-editor, with Miriam Pemberton, of Lessons from Iraq: Avoiding the Next War (Paradigm Press, 2008). His previous books include And Weapons for All (HarperCollins, 1995), a critique of U.S. arms sales policies from the Nixon through Clinton administrations. From July 2007 through March 2011, Mr. Hartung was the director of the Arms and Security Initiative at the New America Foundation. Prior to that, he served as the director of the Arms Trade Resource Center at the World Policy Institute.



Madelyn Hoffman


Jared Hicks



Prof. Vincent Intondi


Thea Keith-Lucas


Thea Keith-Lucas – Keith-Lucas was raised on the campus of the University of the South in a family of scientists and engineers. She served as Curate to Trinity Church in Randolph, one of the most ethnically diverse parishes of the Diocese of Massachusetts, and then in 2007 was called as Rector of Calvary Episcopal Church in Danvers, where she initiated creative outreach efforts and facilitated a merger. Thea joined the staff of Radius in January 2013.


Jonathan King


Jonathan A. King – King is professor of molecular biology at MIT, the author of over 250 scientific papers, and a specialist in protein folding. Prof. King is a former President of the Biophysical Society, former Guggenheim Fellow, and a recipient of MIT’s Martin Luther King Jr. Faculty Leadership Award. He was a leader in the mobilization of biomedical scientists to renounce the military use of biotechnology and strengthen the Biological Weapons Convention. He was a founder of a Jobs with Peace campaign in the 1980s and now chairs Massachusetts Peace Action’s Nuclear Weapons Abolition working group. He is also an officer of the Cambridge Residents Alliance and of Citizens for Public Schools.



Richard Krushnic


Barbara Lee


Barbara Lee – Lee is the U.S. Representative for California’s 13th congressional district and has served East Bay voters since 1998, when the region was designated California’s 9th congressional district (it was renumbered the 13th in 2013). She is a member of the Democratic Party. She was the first woman to represent the 9th district and is also the first woman to represent the 13th district. Lee chaired the Congressional Black Caucus and co-chaired the Congressional Progressive Caucus. She is notable as the only member of either house of Congress to vote against the authorization of the use of force following the September 11, 2001 attacks, which made her a hero among many in the anti-war movement. Lee has been a vocal critic of the war in Iraq and supports legislation creating a Department of Peace.



Kevin Martin – Martin, President of Peace Action and the Peace Action Education Fund, joined the staff on Sept 4, 2001. Kevin previously served as Director of Project Abolition, a national organizing effort for nuclear disarmament, from August 1999 through August 2001. Kevin came to Project Abolition after ten years in Chicago as Executive Director of Illinois Peace Action. Prior to his decade-long stint in Chicago, Kevin directed the community outreach canvass for Peace Action (then called Sane/Freeze) in Washington, D.C., where he originally started as a door-to-door canvasser with the organization in 1985. Kevin has traveled abroad representing Peace Action and the U.S. peace movement on delegations and at conferences in Russia, Japan, China, Mexico and Britain. He is married, with two children, and lives in Silver Spring, Maryland.


Barbara Madeloni


Barbara Madeloni – Madeloni is president of the 110,000-member Massachusetts Teachers Association and a staunch advocate for students and educators in the public schools and public higher education system in Massachusetts. She believes that strong unions led by rank-and-file members produce stronger public schools and communities. She is committed to racial and economic justice – and to building alliances with parents, students and communities – to secure a more just world.


Ray Matsumiya


Ceasar McDowell


Ceasar McDowell – McDowell is Professor of the Practice of Community Development at MIT. He holds an Ed.D. (88) and M.Ed. (84) from Harvard. McDowell’s current work is on the development of community knowledge systems and civic engagement. He is also expanding his critical moments reflection methodology to identify, share, and maintain grassroots knowledge. His research and teaching interests also include the use of mass media and technology in promoting democracy and community-building, the education of urban students, the development and use of empathy in community work, civil rights history, peacemaking and conflict resolution. He is Director of the global civic engagement organization Dropping Knowledge International and of MIT’s former Center for Reflective Community Practice (renamed CoLab), co-founder of The Civil Rights Forum on Telecommunications Policy, and a founding board member of The Algebra Project.



Andrea Miller


Ernie Moniz


Ernie Moniz – Moniz is an American nuclear physicist and the former United States Secretary of Energy, serving under U.S. President Barack Obama from May 2013 to January 2017. He served as the Associate Director for Science in the Office of Science and Technology Policy in the Executive Office of the President of the United States from 1995 to 1997 and was Under Secretary of Energy from 1997 to 2001 during the Clinton Administration. Moniz is one of the founding members of The Cyprus Institute and has served at Massachusetts Institute of Technology as the Cecil and Ida Green Professor of Physics and Engineering Systems, as the Director of the Energy Initiative, and as the Director of the Laboratory for Energy and the Environment.


Melissa Nobles


Melissa Nobles – Nobles is Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences, and Professor of Political Science at the Massachusetts Institute of Technology. Her current research is focused on constructing a database of racial murders in the American South, 1930–1954. Working closely as a faculty collaborator and advisory board member of Northeastern Law School’s Civil Rights and Restorative Justice law clinic, Nobles has conducted extensive archival research, unearthing understudied and, more often, unknown racial murders and contributing to several legal investigations. She is the author of Shades of Citizenship: Race and the Census in Modern Politics (Stanford University Press, 2000) and The Politics of Official Apologies (Cambridge University Press, 2008), and co-editor with Jun-Hyeok Kwak of Inherited Responsibility and Historical Reconciliation in East Asia (Routledge Press, 2013).



Remy Pontes


Lucas Perry


Lucas Perry – Perry is passionate about the role that science and technology will play in the evolution of all sentient life. He has studied at a Buddhist monastery in Nepal and while there he engaged in meditative retreats and practices. He is now working to challenge and erode our sense of self and our subject-object frame of reference. His current project explores how mereological nihilism and the illusion of self may contribute to forming a radically post-human consequentialist ethics. His other work seeks to resolve the conflicts between bio-conservatism and transhumanism.


Denise Provost


John Ratliff – Ratliff was political director of an SEIU local union in Miami, Florida, and relocated to Cambridge after his retirement in 2012. He is a graduate of Princeton University and Yale Law School. A Vietnam veteran and member of Veterans for Peace, he is a member of the coordinating committee of Massachusetts Senior Action’s Cambridge branch, and chair of Massachusetts Jobs with Justice’s Global Justice Task Force. As Mass. Peace Action’s economic justice coordinator, he leads its coalition work with Raise Up Massachusetts for an increased minimum wage and sick time benefits, and against the Trans Pacific Partnership. He is the father of high school senior Daniel Bausher-Belton, who was an intern at Mass. Peace Action in summer 2013.


Fred Salvucci – Salvucci, senior lecturer and senior research associate, is a civil engineer with interests in infrastructure, urban transportation and public transportation. He has over 30 years of transportation experience, most of it in the public sector, as former Secretary of Transportation for the Commonwealth of Massachusetts (1983–1990) and transportation advisor to Boston Mayor Kevin White (1975–1978). His notable achievements include shifting public focus from highway spending towards rail transit investment and spearheading the depression of the Central Artery in Boston. He has participated in the expansion of the transit system, the development of the financial and political support for the Central Artery/Tunnel Project, and the design of implementation strategies to comply with the Clean Air Act consistent with economic growth. Other efforts include formulation of noise rules to reverse the increase in aircraft noise at Logan Airport and development of strategies to achieve high-speed rail service between Boston and New York.


Elaine Scarry – Scarry is an American essayist and professor of English and American Literature and Language. She is the Walter M. Cabot Professor of Aesthetics and the General Theory of Value at Harvard University. Her books include The Body in Pain, Thermonuclear Monarchy, and On Beauty and Being Just.



Paul Shannon – Shannon is program staff for the Peace and Economic Security program of the American Friends Service Committee (AFSC) in Cambridge, hosts regular educational forums at the Cambridge Public Library for the AFSC, and has coordinated the National AFSC Film Lending Library for the past 26 years. For over three decades he has been active in various peace, union, prison reform, solidarity, economic justice and human rights movements, particularly the Vietnam anti-war movement, the 1970s United Farm Workers movement, the South Africa anti-apartheid movement, the 1980s Central America and Cambodia solidarity movements, the Haiti solidarity movement of the early 1990s, and the Afghanistan and Iraq anti-war movements. Shannon has been teaching social science courses at colleges in the greater Boston area for the past 27 years. Since 1982 he has taught a course on the history of the Vietnam War at Middlesex Community College, and he occasionally teaches professional development courses on the Vietnam War for high school teachers at Northeastern University and Merrimack Educational Center. He is past editor of the Indochina Newsletter and has written numerous articles for peace movement publications. He is on the Board of Directors of the community/fan organization Save Fenway Park. He currently represents the American Friends Service Committee on the Coordinating Committee of the United for Justice with Peace Coalition.


Denise Simmons – As Mayor of the City of Cambridge, Denise Simmons won praise for her open-door policy, for her excellent constituent services, and for her down-to-earth approach to her duties. She continues to bring these qualities to her work on the Cambridge City Council. She was sworn in to her second term as mayor on January 4, 2016.



Susi Snyder – Snyder is the Nuclear Disarmament Programme Manager for Pax in the Netherlands. She is a primary author of Don’t Bank on the Bomb: Global Report on the Financing of Nuclear Weapons Producers (2013, 2014, 2015) and has published numerous reports and articles, including Dealing with a Ban & Escalating Tensions (2015); The Rotterdam Blast: The Immediate Humanitarian Consequences of a 12 Kiloton Nuclear Explosion (2014); and Withdrawal Issues: What NATO Countries Say about the Future of Tactical Nuclear Weapons in Europe (2011). She is an International Steering Group member of the International Campaign to Abolish Nuclear Weapons. Previously, Snyder served as the International Secretary General of the Women’s International League for Peace and Freedom, where she monitored various issues under the aegis of the United Nations, including sustainable development, human rights, and disarmament.



Geoffrey Supran – Supran has a longstanding interest in optoelectronics, drawn by opportunities to overcome scientific and economic hurdles in solar cell design and to significantly impact world energy markets. He sees promise in hybrid devices combining the flexibility, large area and tunable absorption of low-cost, solution-processable nanocrystals (or polymers) with the high carrier mobility of, for example, III-V semiconductors. He is particularly interested in the enhancement of photocurrent by nonradiative energy transfer and carrier multiplication. Additionally, the importance of a nanoscale test-bed for fundamental studies of photo-induced energy and charge transport motivates his investigation of stand-alone photovoltaic single-nanowire heterostructures. He is also interested in the development of photoelectrochemical storage catalysts and the pursuit of coupled photovoltaic-electrolysis systems.


Herb Taylor – Taylor became Senior Pastor at Harvard-Epworth UMC in August 2014. Before coming to the church, he served as President and CEO of Deaconess Abundant Life Communities, a not-for-profit aging services provider. Founded in 1889, the Deaconess has over 400 employees and serves over a thousand older adults through skilled nursing, assisted living and independent living apartments in multiple locations in Massachusetts and New Hampshire.


Max Tegmark – Known as “Mad Max” for his unorthodox ideas and passion for adventure, Tegmark has scientific interests ranging from precision cosmology to the ultimate nature of reality, all explored in his popular book “Our Mathematical Universe.” He is an MIT physics professor with more than two hundred technical papers to his name and has been featured in dozens of science documentaries. His work with the SDSS collaboration on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003.” He is founder (with Anthony Aguirre) of the Foundational Questions Institute.


John Tierney – Tierney is an American politician who served as a U.S. Representative from Massachusetts from January 3, 1997, to January 3, 2015. In February 2016, he was appointed executive director of the Council for a Livable World and of the Center for Arms Control and Non-Proliferation, the council’s affiliated education and research organization. A Democrat, he represented the state’s 6th district, which includes the North Shore and Cape Ann. Born and raised in Salem, Massachusetts, Tierney graduated from Salem State College and Suffolk University Law School. He worked in private law practice and served on the Salem Chamber of Commerce (1976–97).


Frank Von Hippel – Von Hippel’s areas of policy research include nuclear arms control and nonproliferation, energy, and checks and balances in policymaking for technology. Prior to coming to Princeton, he worked for ten years in the field of elementary-particle theoretical physics. He has written extensively on the technical basis for nuclear nonproliferation and disarmament initiatives, the future of nuclear energy, and improved automobile fuel economy. He won a 1993 MacArthur Fellowship in recognition of his outstanding contributions to his fields of research. During 1993–1994, he served as assistant director for national security in the White House Office of Science and Technology Policy.


Jim Walsh – Walsh is a Senior Research Associate at the Massachusetts Institute of Technology’s Security Studies Program (SSP). Walsh’s research and writings focus on international security and, in particular, topics involving nuclear weapons, the Middle East, and East Asia. Walsh has testified before the United States Senate and House of Representatives on issues of nuclear terrorism, Iran, and North Korea. He is one of a handful of Americans who have traveled to both Iran and North Korea for talks with officials about nuclear issues. His recent publications include “Stopping North Korea, Inc.: Sanctions Effectiveness and Unintended Consequences” and “Rivals, Adversaries, and Partners: Iran and Iraq in the Middle East” in Iran and Its Neighbors. He is the international security contributor to the NPR program “Here and Now,” and his comments and analysis have appeared in the New York Times, the New York Review of Books, Washington Post, Wall Street Journal, ABC, CBS, NBC, Fox, and numerous other national and international media outlets. Before coming to MIT, Dr. Walsh was Executive Director of the Managing the Atom project at Harvard University’s John F. Kennedy School of Government and a visiting scholar at the Center for Global Security Research at Lawrence Livermore National Laboratory. He has taught at both Harvard University and MIT. Dr. Walsh received his Ph.D. from the Massachusetts Institute of Technology.



Organizers



We would like to extend a special thank you to our Program Committee and sponsors for all their help creating and organizing this event.

Prof. Aron Bernstein (MIT, Council for a Livable World), Joseph Gerson (AFSC), Subrata Ghoshroy (MIT), Prof. Gary Goldstein (Tufts University), Cole Harrison (Mass Peace Action), Jonathan King (MIT and Mass Peace Action), State Rep. Denise Provost, John Ratliff (Mass Peace Action, Mass Senior Action), Prof. Elaine Scarry (Harvard University), Prof. Max Tegmark (MIT, Future of Life Institute), Patricia Weinmann (MIT Radius).

Sponsored by MIT Radius (the former Technology and Culture Forum), Massachusetts Peace Action, the American Friends Service Committee, and the Future of Life Institute.
